CN103400351B - Low-light image enhancement method and system based on a KINECT depth map - Google Patents


Publication number
CN103400351B
Authority
CN
China
Prior art keywords
depth
value
map
contrast stretching
normalized value
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
CN201310325623.1A
Other languages
Chinese (zh)
Other versions
CN103400351A (en)
Inventor
胡瑞敏
胡金晖
王中元
郭春辉
侯志强
谭晓琼
杨阿慧
石艺
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Wuhan University WHU
Original Assignee
Wuhan University WHU
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Wuhan University WHU filed Critical Wuhan University WHU
Priority to CN201310325623.1A priority Critical patent/CN103400351B/en
Publication of CN103400351A publication Critical patent/CN103400351A/en
Application granted granted Critical
Publication of CN103400351B publication Critical patent/CN103400351B/en
Expired - Fee Related legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • Image Processing (AREA)

Abstract

The invention discloses a low-light image enhancement method and system based on a KINECT depth map, comprising the steps of: step 1, capturing the depth map and texture map of a low-light surveillance image with a KINECT and pre-processing the depth map; step 2, constructing a depth-information Gaussian kernel from the depth values of each pixel in the depth map obtained in step 1 and of its neighborhood pixels, introducing the depth-information Gaussian kernel as a kernel factor into a Gaussian-kernel-based denoising algorithm, and denoising the texture map; step 3, computing normalized depth-information values from the depth information of the depth map obtained in step 1 and introducing them into a contrast stretching algorithm applied to the denoised texture map. By exploiting both the color information and the depth information of the image, the invention further improves the contour sharpness, sense of depth, and visual quality of low-light images.

Description

Low-Light Image Enhancement Method and System Based on a KINECT Depth Map

Technical Field

The invention relates to a low-light surveillance image enhancement method, and in particular to a low-light image enhancement method and system based on a KINECT depth map.

Background Art

At present, video surveillance systems are widely used in urban security systems. During the day, the images captured by the cameras are of good quality; at night or under low-light conditions, however, the firmware characteristics of the CCD sensor cause the captured images to be both noisy and of narrow contrast dynamic range. If a security system is required to operate in low light, enhancing the visual quality of low-light surveillance images is therefore essential.

The key techniques in low-light surveillance image enhancement are denoising and contrast stretching, which pose a considerable challenge for images with low dynamic range and high noise. In 2007, Henrik Malm et al. [1] achieved image denoising with anisotropic structure-tensor-based filtering, together with image contrast enhancement using constrained histogram equalization. In 2010, Qing Xu et al. [2] first denoised the image with a spatio-temporal non-local means algorithm, then stretched the contrast with a tone mapping algorithm, and finally filtered in the YCbCr space; they also weighted the temporal filtering according to motion information, improving the temporal filtering performance of the 3D non-local means. In 2011, Xuan Dong et al. [3] inverted the low-light image and then enhanced the inverted image with a dehazing algorithm. Although these methods achieve some success, two problems remain: (1) when enhancing low-light images they use only 2D information such as color and brightness and ignore the depth information of the scene, so the enhanced image lacks a sense of depth; (2) the edge contours of foreground objects are still not sharp enough after enhancement.

The following references are cited in this description:

[1] H. Malm, M. Oskarsson et al., "Adaptive enhancement and noise reduction in very low light-level video," in Proc. 11th International Conference on Computer Vision, pp. 1–8, 2007.

[2] Qing Xu, Hailin Jiang, Riccardo Scopigno, and Mateu Sbert, "A new approach for very dark video denoising and enhancement," in Proc. 17th International Conference on Image Processing, pp. 1185–1188, 2010.

[3] Xuan Dong, Guan Wang, Yi (Amy) Pang et al., "Fast efficient algorithm for enhancement of low lighting video," in Proc. IEEE International Conference on Multimedia and Expo, pp. 1–6, 2011.

Summary of the Invention

To address the deficiencies of the prior art, the present invention proposes a low-light image enhancement method that exploits both the color information and the depth information of the image, further improving the contour sharpness, sense of depth, and visual quality of low-light images.

To solve the above technical problem, the present invention adopts the following technical solution.

1. A low-light image enhancement method based on a KINECT depth map, comprising the steps of:

Step 1: capturing the depth map and texture map of a low-light surveillance image with a KINECT, and pre-processing the depth map;

Step 2: constructing a depth-information Gaussian kernel from the depth values of each pixel in the depth map obtained in step 1 and of its neighborhood pixels, introducing the depth-information Gaussian kernel as a kernel factor into a Gaussian-kernel-based denoising algorithm, and denoising the texture map;

Step 3: computing normalized depth-information values from the depth information of the depth map obtained in step 1 and introducing them into a contrast stretching algorithm applied to the denoised texture map, the normalized depth-information values comprising the normalized depth value and the normalized magnitude of the depth-value gradient.

The depth-information Gaussian kernel of step 2 is $e^{-\|D(y)-D(x)\|_2^2/h_d^2}$, where y denotes a pixel in the depth map obtained in step 1; D(y) is the depth value of pixel y; x is a neighborhood pixel of y in that depth map; D(x) is the depth value of pixel x; $h_d$ is the decay factor of the Gaussian kernel, chosen according to the noise level of the texture map; and $\|\cdot\|_2$ denotes the 2-norm.

In step 3, introducing the normalized depth-information values into the contrast stretching algorithm and contrast-stretching the denoised texture map comprises performing both global contrast stretching and local contrast stretching on the denoised texture map.

Introducing the normalized depth-information values into the contrast stretching algorithm and performing global contrast stretching on the denoised texture map further comprises the sub-steps:

3-1a: computing the normalized depth value Norm(D(y)) from the depth values of the depth map obtained in step 1, and obtaining the adaptive parameter $b(y)=\lambda\cdot e^{-\mathrm{Norm}(D(y))}$, where λ is a constant chosen according to the desired degree of enhancement;

3-2a: introducing the adaptive parameter b(y) into the global contrast stretching algorithm and performing global contrast stretching on the denoised texture map.

Introducing the normalized depth-information values into the contrast stretching algorithm and performing local contrast stretching on the denoised texture map is specifically:

according to the normalized magnitude of the depth-value gradient of the depth map obtained in step 1, introducing either the normalized depth value or the normalized gradient magnitude into the local contrast stretching algorithm and performing local contrast stretching on the denoised texture map.

2. A low-light image enhancement system based on a KINECT depth map, comprising:

a pre-processing module for capturing the depth map and texture map of a low-light surveillance image with a KINECT and pre-processing the depth map;

a denoising module for constructing a depth-information Gaussian kernel from the depth values of each pixel in the depth map and of its neighborhood pixels, introducing the depth-information Gaussian kernel as a kernel factor into a Gaussian-kernel-based denoising algorithm, and denoising the texture map;

a contrast stretching module for computing normalized depth-information values from the depth information of the depth map and introducing them into a contrast stretching algorithm applied to the denoised texture map, the normalized depth-information values comprising the normalized depth value and the normalized magnitude of the depth-value gradient.

The above depth-information Gaussian kernel is $e^{-\|D(y)-D(x)\|_2^2/h_d^2}$, where y denotes a pixel in the depth map; D(y) is the depth value of pixel y; x is a neighborhood pixel of y; D(x) is the depth value of pixel x; $h_d$ is the decay factor of the Gaussian kernel, chosen according to the noise level of the texture map; and $\|\cdot\|_2$ denotes the 2-norm.

The contrast stretching module further comprises a global contrast stretching module and a local contrast stretching module, wherein:

the global contrast stretching module further comprises the sub-modules:

an adaptive parameter acquisition module for computing the normalized depth value Norm(D(y)) from the depth values of the depth map and obtaining the adaptive parameter $b(y)=\lambda\cdot e^{-\mathrm{Norm}(D(y))}$, where λ is a constant chosen according to the desired degree of enhancement;

a stretching module for introducing the adaptive parameter b(y) into the global contrast stretching algorithm and performing global contrast stretching on the denoised texture map;

and the local contrast stretching module, according to the normalized magnitude of the depth-value gradient of the depth map, introduces either the normalized depth value or the normalized gradient magnitude into the local contrast stretching algorithm and performs local contrast stretching on the denoised texture map.

Compared with the prior art, the present invention has the following advantage:

The invention uses the depth information of the scene to enhance low-light images; the enhanced images have sharper contours and a better sense of depth and visual quality.

Brief Description of the Drawings

Fig. 1 is a flow chart of the method of the present invention;

Fig. 2 shows the depth map and texture map of a low-light surveillance image captured with a KINECT, where (a) is the depth map and (b) is the texture map;

Fig. 3 shows the depth map and texture map of another low-light surveillance image captured with a KINECT, where (a) is the depth map and (b) is the texture map;

Fig. 4 compares the enhancement of Fig. 2 by the present invention and by the prior art, where (a) is the result of the present invention and (b) is the prior-art result;

Fig. 5 compares the enhancement of Fig. 3 by the present invention and by the prior art, where (a) is the result of the present invention and (b) is the prior-art result.

Detailed Description of Embodiments

A specific implementation of the present invention is described in detail below.

The low-light surveillance image enhancement method based on a KINECT depth map according to the present invention comprises the following steps.

Step 1: capture the depth map and texture map of the low-light surveillance image with a KINECT, and process the depth map to obtain a smooth depth map.

This step further comprises the sub-steps:

Step 1.1: capture the depth map and texture map of the low-light surveillance image with a KINECT and register the depth map to the texture viewpoint. In this implementation, viewpoint registration can be performed with the OpenNI toolbox from PrimeSense.

Step 1.2: apply hole filling and then median filtering to the registered depth map to obtain a smooth depth map. In this implementation, hole filling is performed with an in-painting algorithm and smoothing with a median filter.
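As an illustration, the median-filtering part of this pre-processing step can be sketched in a few lines of numpy. The viewpoint registration and in-painting hole filling are assumed to have been done already, and the 3×3 window size is an assumption, since the text does not specify one:

```python
import numpy as np

def median_filter3(depth):
    """3x3 median filter over a depth map (edges handled by reflection).

    A minimal sketch of the smoothing sub-step; registration and hole
    filling are assumed to have been applied to `depth` beforehand.
    """
    padded = np.pad(depth, 1, mode="reflect")
    # Stack the 9 shifted views covering each pixel's 3x3 neighborhood.
    stack = np.stack([padded[i:i + depth.shape[0], j:j + depth.shape[1]]
                      for i in range(3) for j in range(3)])
    return np.median(stack, axis=0)

d = np.array([[0, 0, 0],
              [0, 9, 0],   # isolated spike, e.g. a residual hole artifact
              [0, 0, 0]], dtype=float)
print(median_filter3(d)[1, 1])  # the spike is suppressed -> 0.0
```

A production pipeline would typically use a library filter (e.g. an OpenCV median blur) instead; the point here is only that the smoothing step removes isolated depth outliers while preserving the map's shape.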

Step 2: denoise the texture map with non-local means, guided by the depth information of the depth map.

In this step, the depth map is the smooth depth map obtained in step 1, and the texture map is the texture map of the low-light surveillance image captured with the KINECT.

This step further comprises the sub-steps:

Step 2.1: construct a Gaussian kernel from the depth information of the depth map:

$$e^{-\frac{\|D(y)-D(x)\|_2^2}{h_d^2}} \qquad (1)$$

In formula (1):

y denotes a pixel in the depth map;

D(y) denotes the depth value of pixel y in the depth map;

x is a neighborhood pixel of pixel y;

D(x) is the depth value of pixel x;

$h_d$ is the decay factor of the Gaussian kernel, chosen according to the noise level of the texture map: the stronger the noise, the larger the value;

$\|\cdot\|_2$ denotes the 2-norm.

Based on the depth values of each pixel in the depth map and of its neighborhood pixels, formula (1) is used to construct the depth-information Gaussian kernel for each pixel of the depth map.

Step 2.2: add the depth-information Gaussian kernel as a kernel factor to the non-local means denoising algorithm and denoise the texture map according to formula (2):

$$I'(y) = \sum_{x \in S_y} \frac{1}{Z(y)}\, e^{-\frac{\|v(P_y)-v(P_x)\|_{2,\alpha}^2}{h^2}}\, e^{-\frac{\|D(y)-D(x)\|_2^2}{h_d^2}}\, I(x) \qquad (2)$$

In formula (2):

$Z(y) = \sum_{x \in S_y} e^{-\frac{\|v(P_y)-v(P_x)\|_{2,\alpha}^2}{h^2}}\, e^{-\frac{\|D(y)-D(x)\|_2^2}{h_d^2}}$ is the normalization factor;

y is the current pixel to be denoised in the texture map, $P_y$ is the image patch in the texture map centered on y, and I'(y) is the denoised value of y;

x is a neighborhood pixel of y in the texture map, $P_x$ is the image patch centered on x, and I(x) is the pixel value of x;

$S_y$ is the neighborhood (search window) in the texture map centered on the current pixel y;

h is the decay factor for texture-map pixels and α is the standard deviation of the Gaussian kernel; both are chosen according to the noise level of the texture map, with larger values for stronger noise;

$\|\cdot\|_{2,\alpha}$ denotes the Gaussian-weighted Euclidean distance;

$v(P_x)$ and $v(P_y)$ denote the column vectors formed by the pixel values of patches $P_x$ and $P_y$, respectively.
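A minimal numpy sketch of the structure of formula (2) follows. It is grayscale only, and the plain Euclidean patch distance stands in for the Gaussian-weighted distance $\|\cdot\|_{2,\alpha}$, so it illustrates the depth-weighted averaging rather than reproducing the exact algorithm; the function and parameter names are choices made here, not from the patent:

```python
import numpy as np

def depth_guided_nlm(tex, depth, h=15.0, h_d=15.0, search=3, patch=1):
    """Non-local means with the depth kernel of Eq. (1) as an extra factor.

    Each neighbor x of pixel y contributes I(x) with weight
    exp(-||P_y - P_x||^2 / h^2) * exp(-(D(y) - D(x))^2 / h_d^2),
    normalized by their sum Z(y), mirroring Eq. (2).
    """
    H, W = tex.shape
    pad = np.pad(tex, patch, mode="reflect")
    out = np.empty_like(tex, dtype=float)
    for yy in range(H):
        for yx in range(W):
            Py = pad[yy:yy + 2 * patch + 1, yx:yx + 2 * patch + 1]
            weights, vals = [], []
            for xy in range(max(0, yy - search), min(H, yy + search + 1)):
                for xx in range(max(0, yx - search), min(W, yx + search + 1)):
                    Px = pad[xy:xy + 2 * patch + 1, xx:xx + 2 * patch + 1]
                    # Texture patch similarity (plain Euclidean here).
                    w_tex = np.exp(-np.sum((Py - Px) ** 2) / h ** 2)
                    # Depth-information Gaussian kernel of Eq. (1).
                    w_dep = np.exp(-(depth[yy, yx] - depth[xy, xx]) ** 2
                                   / h_d ** 2)
                    weights.append(w_tex * w_dep)
                    vals.append(tex[xy, xx])
            w = np.asarray(weights)
            out[yy, yx] = np.dot(w / w.sum(), vals)  # 1/Z(y) normalization
    return out

# Sanity check: any weighted average leaves a constant image unchanged.
tex = np.full((6, 6), 5.0)
depth = np.linspace(0, 35, 36).reshape(6, 6)
print(np.allclose(depth_guided_nlm(tex, depth), 5.0))  # True
```

With the embodiment's settings (h = h_d = 15, a 21×21 search window, 7×7 patches), `search` would be 10 and `patch` 3; the small defaults above only keep the sketch fast.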

In this implementation, pixels at the same position in the depth map and the texture map are denoted by the same letter.

Step 3: compute the normalized depth-information values from the depth information of the depth map, introduce them into the contrast stretching algorithm, and contrast-stretch the denoised texture map.

In the present invention, the normalized depth-information values comprise the normalized depth value and the normalized magnitude of the depth-value gradient.

In this implementation, contrast stretching of the texture map comprises global contrast stretching and local contrast stretching. In this step, the depth map is the smooth depth map obtained in step 1.

This step further comprises the sub-steps:

Step 3.1: perform global contrast stretching on the denoised texture map based on the depth information of the depth map; this stretches the dynamic range of the image while enhancing its sense of depth.

First, compute the normalized depth value Norm(D(y)) from the depth values of the depth map:

$$\mathrm{Norm}(D(y)) = \frac{D(y) - D_{\min}}{D_{\max} - D_{\min}} \qquad (3)$$

In formula (3):

y is the current pixel of the denoised texture map, and D(y) is the depth value in the depth map corresponding to y;

$D_{\min}$ and $D_{\max}$ are the minimum and maximum depth values over all pixels of the depth map, respectively.

Then obtain the adaptive parameter b(y) from the normalized depth value Norm(D(y)):

$$b(y) = \lambda \cdot e^{-\mathrm{Norm}(D(y))} \qquad (4)$$

In formula (4), λ is a constant chosen according to the desired degree of enhancement: take a smaller value for stronger enhancement, and vice versa.
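Formulas (3) and (4) together reduce to a couple of lines; a small numpy sketch, using the embodiment's λ = 0.6 and hypothetical depth values:

```python
import numpy as np

def norm_depth(D):
    """Eq. (3): min-max normalization of the depth map to [0, 1]."""
    return (D - D.min()) / (D.max() - D.min())

def adaptive_b(D, lam=0.6):
    """Eq. (4): b(y) = lam * exp(-Norm(D(y))).

    The nearest pixel (smallest depth) gets b = lam; the farthest gets
    b = lam / e, so near and far regions are stretched differently
    when b(y) is fed into Eq. (5).
    """
    return lam * np.exp(-norm_depth(D))

D = np.array([[400.0, 800.0],
              [1200.0, 2000.0]])  # hypothetical depth values (mm)
b = adaptive_b(D)
print(round(b[0, 0], 4), round(b[1, 1], 4))  # 0.6 0.2207
```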

Finally, perform global contrast stretching on the denoised texture map using the adaptive parameter b(y). This implementation uses a tone-mapping enhancement algorithm for the global contrast stretching, as follows:

$$I_{en\_g}(y) = \frac{\log(I(y)+1)}{\log(I_{\max}+1)} \cdot \frac{\log 10}{\log\!\left(2 + 8\left(\frac{I(y)}{I_{\max}}\right)^{\frac{\log b}{\log 0.5}}\right)} \cdot I_{\max} \qquad (5)$$

In formula (5):

I(y) is the pixel value of the current pixel y in the R, G, or B channel before global contrast stretching;

$I_{\max}$ is the maximum pixel value over all pixels of that channel;

$I_{en\_g}(y)$ is the pixel value of the current pixel y after global contrast stretching.

The present invention introduces the adaptive parameter into the global contrast stretching, so that the stretching adapts to the depth-map information.

In this step, global contrast stretching of the denoised texture map applies formula (5) separately to its R, G, and B channels.
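Formula (5), as reconstructed above, can be sketched per channel in numpy. The reconstruction follows the standard adaptive logarithmic tone-mapping form, which is an assumption about the garbled original; one useful sanity property of this form is that it maps 0 to 0 and $I_{\max}$ to $I_{\max}$:

```python
import numpy as np

def global_stretch(channel, b):
    """Eq. (5): depth-adaptive logarithmic global contrast stretch.

    channel: one color channel as a float array;
    b: the per-pixel adaptive parameter from Eq. (4) (or a scalar).
    """
    I_max = channel.max()
    ratio = np.log(channel + 1.0) / np.log(I_max + 1.0)
    exponent = np.log(b) / np.log(0.5)
    denom = np.log(2.0 + 8.0 * (channel / I_max) ** exponent)
    return ratio * (np.log(10.0) / denom) * I_max

I = np.array([0.0, 64.0, 255.0])
out = global_stretch(I, b=0.5)
print(out[0], out[2])  # endpoints are preserved: 0.0 255.0
```

A real implementation would apply this to each of the R, G, and B channels and clip the result back to the displayable range, since intermediate values can overshoot $I_{\max}$ for small b.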

Step 3.2: perform local contrast stretching on the denoised texture map based on the depth information of the depth map, to enhance the local sense of depth of the image.

Because depth changes sharply at the boundary between the foreground and background of the texture image, local contrast stretching is applied to strengthen the foreground-background separation. This implementation uses the gradient of the depth information of the depth map to perform the local contrast stretching of the texture map, as follows:

$$I_{en\_l}(y) = I_{en\_g}(y) + \mathrm{Norm}(\alpha(y)) \cdot \left(I_{en\_g}(y) - \bar{I}_{en\_g}(y)\right) \qquad (6)$$

In formula (6):

$I_{en\_g}(y)$ is the pixel value of the current pixel y after global contrast stretching;

$I_{en\_l}(y)$ is the pixel value of the current pixel y after local contrast stretching;

$\bar{I}_{en\_g}(y)$ is the mean pixel value over the neighborhood pixels of the current pixel y;

α(y) denotes the depth-information weight:

$$\alpha(y) = \begin{cases} \|\nabla D(y)\| & \text{if } \|\nabla D(y)\| \neq 0 \\ D(y) & \text{otherwise} \end{cases} \qquad (7)$$

where $\|\nabla D(y)\|$ denotes the magnitude of the gradient of the depth value D(y) of pixel y in the depth map.

When $\alpha(y) = \|\nabla D(y)\|$, $\mathrm{Norm}(\alpha(y)) = \frac{\|\nabla D(y)\| - \|\nabla D\|_{\min}}{\|\nabla D\|_{\max} - \|\nabla D\|_{\min}}$, where $\|\nabla D\|_{\min}$ and $\|\nabla D\|_{\max}$ are the minimum and maximum magnitudes of the depth-value gradient over all pixels of the depth map.
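A numpy sketch of formulas (6)–(7) follows. The 3×3 local-mean window and the use of `np.gradient` for $\|\nabla D(y)\|$ are assumptions (the text fixes neither), and the two normalization branches of Eq. (7) are simplified here into a single joint min-max normalization:

```python
import numpy as np

def local_stretch(I_g, depth):
    """Eqs. (6)-(7): depth-guided local contrast stretch (sketch).

    I_g: globally stretched channel; depth: the registered depth map.
    Deviations from the local mean are amplified where the depth map
    changes (object boundaries), sharpening foreground contours.
    """
    # 3x3 local mean of the globally stretched channel (reflect-padded).
    p = np.pad(I_g, 1, mode="reflect")
    local_mean = sum(p[i:i + I_g.shape[0], j:j + I_g.shape[1]]
                     for i in range(3) for j in range(3)) / 9.0

    # ||grad D(y)||: magnitude of the depth gradient.
    gy, gx = np.gradient(depth)
    gmag = np.hypot(gy, gx)

    # Eq. (7): gradient magnitude where nonzero, depth value elsewhere.
    alpha = np.where(gmag != 0, gmag, depth)
    rng = alpha.max() - alpha.min()
    norm_alpha = (alpha - alpha.min()) / rng if rng > 0 else np.zeros_like(alpha)

    # Eq. (6): boost the deviation from the local mean.
    return I_g + norm_alpha * (I_g - local_mean)

# Over a flat depth map the weight vanishes and the image is unchanged.
I_g = np.arange(16.0).reshape(4, 4)
flat = np.full((4, 4), 1000.0)
print(np.allclose(local_stretch(I_g, flat), I_g))  # True
```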

The technical solution of the present invention is further illustrated below with a specific application example.

First, capture the depth map and texture map of the low-light surveillance image with a KINECT and pre-process the depth map; then, building on the non-local means denoising algorithm and the tone-mapping enhancement algorithm, add the depth information of the depth map to achieve low-light image enhancement.

In this example, the texture-pixel decay factor h and the Gaussian-kernel decay factor $h_d$ are both set to 15; the search window and patch size of the non-local means algorithm are 21×21 and 7×7, respectively; and the Gaussian-kernel standard deviation is α = 0.15. Figs. 2 and 3 show depth maps and texture maps of low-light surveillance images captured with a KINECT, where Figs. 2(a) and 3(a) are the depth maps and Figs. 2(b) and 3(b) are the texture maps. Taking Figs. 2-3 as examples, the steps of the method are described in detail below.

Step 1: register the depth maps of Figs. 2 and 3 with the OpenNI toolbox from PrimeSense, then apply hole filling and filtering to the registered depth maps; specifically, fill the holes with an in-painting algorithm and apply a median filter to obtain smooth depth maps.

Step 2: based on the depth-map information obtained in step 1, denoise the texture map with the non-local means denoising algorithm.

Step 2.1: for each pixel y to be denoised in the texture map, within the 21×21 search window centered on y in the depth map, compute the Euclidean distance between the depth value of the center pixel y and that of each neighborhood pixel x, and generate the Gaussian kernel $e^{-\|D(y)-D(x)\|_2^2/h_d^2}$, where D(y) is the depth value of pixel y in the depth map, D(x) is the depth value of neighborhood pixel x, $h_d$ is the decay factor of the Gaussian kernel (set to 15 in this example), and $\|\cdot\|_2$ denotes the 2-norm.

Step 2.2: add the Gaussian kernel of the depth map as a kernel factor to the non-local means denoising algorithm to obtain formula (2) above, and denoise the texture map with it. Here $Z(y)$ is the normalization factor; $P_y$ is the 7×7 patch centered on the current pixel y to be denoised, and I'(y) is its denoised value; $P_x$ is the 7×7 patch centered on neighborhood pixel x, and I(x) is the pixel value of x; $\|\cdot\|_{2,\alpha}$ denotes the Gaussian-weighted Euclidean distance; $v(P_x)$ and $v(P_y)$ are the column vectors of the pixel values of patches $P_x$ and $P_y$; $S_y$ is the neighborhood centered on y whose size is the non-local means search window, 21×21; h is the texture-pixel decay factor and α the standard deviation of the Gaussian kernel, set in this example to h = 15 and α = 0.15.

Step 3: use the depth information of the depth map to perform contrast stretching on the denoised texture map.

The contrast stretching in the present invention includes global contrast stretching and local contrast stretching.

Step 3.1: use the depth information of the depth map to perform global contrast stretching on the denoised texture map.

First, compute the normalized depth value Norm(D(y)) from the depth information of the depth map.

Then, obtain the adaptive parameter b(y) = λ·e^{−Norm(D(y))} from the normalized depth value Norm(D(y)), where λ is a constant; in this embodiment, λ is set to 0.6.

Finally, apply the above formula (5) with the adaptive parameter b(y) to perform global contrast stretching on the denoised texture map.
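The computation of Norm(D(y)) and b(y) above can be sketched as follows. Formula (5) itself is not reproduced in this excerpt, so only the adaptive parameter is shown; min-max normalization of the depth map to [0, 1] is our assumption (the excerpt does not spell out the normalization), and the function name is hypothetical. λ = 0.6 follows the embodiment.

```python
import numpy as np

def adaptive_b(depth, lam=0.6):
    """Per-pixel adaptive parameter b(y) = lam * exp(-Norm(D(y))).

    Norm(D(y)) is assumed to be a min-max normalization of the depth
    map to [0, 1]; lam = 0.6 follows the embodiment.
    """
    d = depth.astype(float)
    norm = (d - d.min()) / (d.max() - d.min() + 1e-12)  # Norm(D(y)) in [0, 1]
    return lam * np.exp(-norm)                           # b(y) in [lam/e, lam]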

Step 3.2: perform local contrast stretching on the denoised texture map based on the depth information of the depth map.

Since depth varies sharply at the boundary between the image foreground and background, local contrast stretching is applied to the image to enhance the layering between foreground and background.

This implementation applies formula (6), based on the depth information of the depth map, to perform local contrast stretching on the denoised texture map.
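The quantity formula (6) relies on — the normalized modulus of the depth-value gradient, large exactly at the foreground/background boundaries — can be sketched as below. Formula (6) is not reproduced in this excerpt, so only the gradient normalization is shown; central differences and min-max normalization are our assumptions, and the function name is hypothetical.

```python
import numpy as np

def normalized_depth_gradient(depth):
    """Normalized modulus of the depth gradient, |grad D| scaled to [0, 1].

    Gradient is taken with central differences (np.gradient); min-max
    normalization is an assumption, as the excerpt does not specify it.
    """
    gy, gx = np.gradient(depth.astype(float))
    mag = np.hypot(gx, gy)  # modulus of the depth-value gradient
    return (mag - mag.min()) / (mag.max() - mag.min() + 1e-12)
```

Per the claims, the local stretch then selects, pixel by pixel, either the normalized depth value or this normalized gradient modulus according to the latter's magnitude, concentrating the stretch where depth changes fastest.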

In this embodiment, Figures 2 and 3 are enhanced; the results are shown in Figures 4(a) and 5(a), respectively. For comparison, this implementation also applies the conventional non-local means denoising algorithm and a tone-mapping enhancement algorithm to denoise and contrast-stretch Figures 2 and 3; those results are shown in Figures 4(b) and 5(b). As Figures 4-5 show, the enhanced images produced by the present invention have sharper contours and better layering and visual quality.

The specific embodiments described herein merely illustrate the spirit of the present invention. Those skilled in the art may make various modifications or additions to the described embodiments, or substitute them in similar ways, without departing from the spirit of the present invention or exceeding the scope defined by the appended claims.

Claims (2)

1. A low-light image enhancement method based on a KINECT depth map, characterized by comprising the steps of:

Step 1: acquiring the depth map and texture map of a low-light surveillance image through KINECT, and performing viewpoint registration, hole filling, and median filtering on the depth map;

Step 2: constructing a depth-information Gaussian kernel from the depth values of pixels in the depth map obtained in step 1 and of their neighborhood pixels, introducing the depth-information Gaussian kernel as a kernel factor into a Gaussian-kernel-based denoising algorithm, and denoising the texture map, where y denotes a pixel in the depth map obtained in step 1; D(y) denotes the depth value of pixel y; x is a neighborhood pixel of pixel y in the depth map obtained in step 1; D(x) is the depth value of pixel x; h_d is the decay factor of the Gaussian kernel, whose value is chosen according to the noise intensity in the texture map; and ‖·‖₂ denotes the 2-norm;

Step 3: obtaining the normalized value of the depth information from the depth information of the depth map obtained in step 1, and introducing the normalized value of the depth information into a contrast stretching algorithm to perform contrast stretching on the denoised texture map, the normalized value of the depth information including the normalized depth value and the normalized value of the modulus of the depth-value gradient; said introducing the normalized value of the depth information into the contrast stretching algorithm to perform contrast stretching on the denoised texture map includes performing global contrast stretching and local contrast stretching on the denoised texture map, wherein:

introducing the normalized value of the depth information into the contrast stretching algorithm to perform global contrast stretching on the denoised texture map further includes the sub-steps:

3-1a: obtaining the normalized depth value Norm(D(y)) from the depth values of the depth map obtained in step 1, and obtaining the adaptive parameter b(y) = λ·e^{−Norm(D(y))}, where λ is a constant whose value is chosen according to the degree of enhancement;

3-2a: introducing the adaptive parameter b(y) into the global contrast stretching algorithm to perform global contrast stretching on the denoised texture map;

introducing the normalized value of the depth information into the contrast stretching algorithm to perform local contrast stretching on the denoised texture map is specifically:

according to the magnitude of the normalized value of the modulus of the depth-value gradient of the depth map obtained in step 1, introducing either the normalized depth value or the normalized value of the modulus of the depth-value gradient into the local contrast stretching algorithm to perform local contrast stretching on the denoised texture map.

2. A low-light image enhancement system based on a KINECT depth map, characterized by comprising:

a pre-processing module, configured to acquire the depth map and texture map of a low-light surveillance image through KINECT, and to perform viewpoint registration, hole filling, and median filtering on the depth map;

a denoising module, configured to construct a depth-information Gaussian kernel from the depth values of pixels in the depth map and of their neighborhood pixels, to introduce the depth-information Gaussian kernel as a kernel factor into a Gaussian-kernel-based denoising algorithm, and to denoise the texture map, where y denotes a pixel in the depth map; D(y) denotes the depth value of pixel y; x is a neighborhood pixel of pixel y in the depth map; D(x) is the depth value of pixel x; h_d is the decay factor of the Gaussian kernel, whose value is chosen according to the noise intensity in the texture map; and ‖·‖₂ denotes the 2-norm;

a contrast stretching module, configured to obtain the normalized value of the depth information from the depth information of the depth map, and to introduce the normalized value of the depth information into a contrast stretching algorithm to perform contrast stretching on the denoised texture map, the normalized value of the depth information including the normalized depth value and the normalized value of the modulus of the depth-value gradient;

the contrast stretching module further includes a global contrast stretching module and a local contrast stretching module, wherein the global contrast stretching module further includes the submodules:

an adaptive parameter acquisition module, configured to obtain the normalized depth value Norm(D(y)) from the depth values of the depth map, and to obtain the adaptive parameter b(y) = λ·e^{−Norm(D(y))}, where λ is a constant whose value is chosen according to the degree of enhancement;

a stretching module, configured to introduce the adaptive parameter b(y) into the global contrast stretching algorithm to perform global contrast stretching on the denoised texture map;

the local contrast stretching module being configured to, according to the magnitude of the normalized value of the modulus of the depth-value gradient of the depth map, introduce either the normalized depth value or the normalized value of the modulus of the depth-value gradient into the local contrast stretching algorithm to perform local contrast stretching on the denoised texture map.
CN201310325623.1A 2013-07-30 2013-07-30 Low light based on KINECT depth map shines image enchancing method and system Expired - Fee Related CN103400351B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201310325623.1A CN103400351B (en) 2013-07-30 2013-07-30 Low light based on KINECT depth map shines image enchancing method and system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201310325623.1A CN103400351B (en) 2013-07-30 2013-07-30 Low light based on KINECT depth map shines image enchancing method and system

Publications (2)

Publication Number Publication Date
CN103400351A CN103400351A (en) 2013-11-20
CN103400351B true CN103400351B (en) 2015-12-23

Family

ID=49563963

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201310325623.1A Expired - Fee Related CN103400351B (en) 2013-07-30 2013-07-30 Low light based on KINECT depth map shines image enchancing method and system

Country Status (1)

Country Link
CN (1) CN103400351B (en)

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103713525B (en) * 2014-01-02 2016-04-27 南京航空航天大学 Adopt Spacecraft Rendezvous docking ground demonstration verification system and the method for Kinect
ES2563098B1 (en) 2015-06-15 2016-11-29 Davantis Technologies Sl IR image enhancement procedure based on scene information for video analysis
US10325351B2 (en) * 2016-03-11 2019-06-18 Qualcomm Technologies, Inc. Systems and methods for normalizing an image
CN108734670B (en) * 2017-04-20 2021-05-18 天津工业大学 Method for restoring single night weak-illumination haze image
CN111275642B (en) * 2020-01-16 2022-05-20 西安交通大学 A low-light image enhancement method based on saliency foreground content
CN113126944B (en) * 2021-05-17 2021-11-09 北京的卢深视科技有限公司 Depth map display method, display device, electronic device, and storage medium

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7366325B2 (en) * 2003-10-09 2008-04-29 Honda Motor Co., Ltd. Moving object detection using low illumination depth capable computer vision
US7234641B2 (en) * 2004-12-01 2007-06-26 Datalogic Scanning, Inc. Illumination pulsing method for a data reader
JP5397190B2 (en) * 2009-11-27 2014-01-22 ソニー株式会社 Image processing apparatus, image processing method, and program
CN102663741B (en) * 2012-03-22 2014-09-24 侯克杰 Method for carrying out visual stereo perception enhancement on color digit image and system thereof

Also Published As

Publication number Publication date
CN103400351A (en) 2013-11-20

Similar Documents

Publication Publication Date Title
CN111292258B (en) Image defogging method based on dark channel prior and bright channel prior
CN103400351B (en) Low light based on KINECT depth map shines image enchancing method and system
WO2018099136A1 (en) Method and device for denoising image with low illumination, and storage medium
Lin et al. Dehazing for image and video using guided filter
CN103955905B (en) Based on the single image to the fog method that fast wavelet transform and weighted image merge
CN109509164B (en) Multi-sensor image fusion method and system based on GDGF
CN105046658B (en) A kind of low-light (level) image processing method and device
CN102750674A (en) Video image defogging method based on self-adapting allowance
CN108765309A (en) The image defogging method of linear adaptive impovement overall situation atmosphere light based on dark
CN106157267A (en) A kind of image mist elimination absorbance optimization method based on dark channel prior
CN116823686B (en) Night infrared and visible light image fusion method based on image enhancement
Wang et al. Fusion-based underwater image enhancement by wavelet decomposition
Park et al. Single image haze removal with WLS-based edge-preserving smoothing filter
CN102231792A (en) Electronic image stabilization method based on characteristic coupling
CN105550999A (en) Video image enhancement processing method based on background reuse
CN107784642A (en) A kind of infrared video and visible light video method for self-adaption amalgamation
Yan et al. Method to Enhance Degraded Image in Dust Environment.
CN107146209A (en) A Single Image Dehazing Method Based on Gradient Domain
CN115689960A (en) A fusion method of infrared and visible light images based on adaptive illumination in nighttime scenes
CN105184761A (en) Image rain removing method based on wavelet analysis and system
CN110400274A (en) An infrared image enhancement method for vehicle-mounted infrared pedestrian detection
CN111311503A (en) A low-brightness image enhancement system at night
CN104778673B (en) A kind of improved gauss hybrid models depth image enhancement method
Chen et al. Visual depth guided image rain streaks removal via sparse coding
Hu et al. Kinect depth map based enhancement for low light surveillance image

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20151223

Termination date: 20190730