CN113160096A - Low-light image enhancement method based on retina model - Google Patents


Info

Publication number: CN113160096A
Application number: CN202110581353.5A
Authority: CN (China)
Prior art keywords: component, low, image, final, illumination
Inventors: 魏本征, 侯昊, 侯迎坤, 丁鹏
Current assignee: Shandong University of Traditional Chinese Medicine
Original assignee: Shandong University of Traditional Chinese Medicine
Other languages: Chinese (zh)
Other versions: CN113160096B (en)
Events: application filed by Shandong University of Traditional Chinese Medicine; priority to CN202110581353.5A; publication of CN113160096A; application granted; publication of CN113160096B
Legal status: Granted; Expired - Fee Related

Classifications

    • G06T 5/00: Image enhancement or restoration (G: Physics; G06: Computing; Calculating or Counting; G06T: Image data processing or generation, in general)
    • G06F 18/22: Matching criteria, e.g. proximity measures (G06F: Electric digital data processing; G06F 18/00: Pattern recognition; G06F 18/20: Analysing)
    • G06T 7/90: Determination of colour characteristics (G06T 7/00: Image analysis)
    • G06V 10/56: Extraction of image or video features relating to colour (G06V: Image or video recognition or understanding; G06V 10/40: Extraction of image or video features)
    • G06T 2207/10024: Color image (G06T 2207/00: Indexing scheme for image analysis or image enhancement; G06T 2207/10: Image acquisition modality)


Abstract

The invention is a low-light image enhancement method based on a retinal model, belonging to the technical field of image processing, comprising: step S1, obtaining groups of similar pixels; step S2, performing a Haar transform on the similar pixel groups and using the pixel-level non-local Haar transform to obtain the illumination and reflection components in each of the R, G, and B channels; step S3, finding the final reflection component; step S4, finding the enhanced illumination components; step S5, taking the minimum of the enhanced illumination components as the final illumination component; step S6, applying the final reflection component and the final illumination component to the retinal model to obtain the enhanced image. The low-light image enhancement method of the invention is fast and effective: colors in images processed by the method are not oversaturated, the information in the original image is well preserved, the problem of uneven illumination after enhancement is well solved, no spurious signals are introduced, and image edge information is retained with very high fidelity.

Description

A low-light image enhancement method based on a retinal model

Technical Field

The invention belongs to the technical field of image processing, and particularly relates to a low-light image enhancement method based on a retinal model.

Background Art

Low-light image enhancement strengthens images captured in weak-light environments whose brightness is too low, with the aim of obtaining images with better lighting. Photos taken at night or in enclosed environments often have brightness so low that their content cannot be effectively recognized, so low-light image enhancement has long been one of the most active research directions in computer vision.

Most current low-light image enhancement methods are based purely on the retinal model, such as MSR, MSRCR, SIRE, and RRM. Such methods often suffer from weak enhancement, color distortion, and artifacts, and existing low-light enhancement methods are time-consuming when processing images.

Summary of the Invention

The purpose of the present invention is to overcome at least one disadvantage of the prior art by providing a low-light image enhancement method based on a retinal model, which effectively avoids both the overexposure caused by enhancing very bright regions and the distortion caused by enhancing very dark regions, thereby producing a better low-light enhancement result.

The invention discloses a low-light image enhancement method based on a retinal model, comprising:

Step S1: obtain groups of similar pixels;

Step S2: perform a Haar transform on the similar pixel groups, and use the low-frequency and high-frequency coefficients of the pixel-level non-local Haar transform to obtain the illumination and reflection components in each of the R, G, and B channels;

Step S3: take the minimum of the reflection components of the R, G, and B channels as the final reflection component;

Step S4: find the maximum of the illumination components of the R, G, and B channels and enhance it using a combination of exponential and logarithmic transforms to obtain the enhanced illumination components;

Step S5: take the minimum of the enhanced illumination components as the final illumination component;

Step S6: apply the final reflection component and the final illumination component to the retinal model to obtain the enhanced image.

Preferably, step S1 specifically comprises:

Step S1a: perform block matching and row matching separately in the R, G, and B channels of the RGB color space. With a fixed sliding step, select a reference image block Br of size √N1 × √N1, and perform block matching within a neighborhood of a given size centered on the upper-left corner coordinate of Br to obtain the N2-1 image blocks most similar to Br, thereby obtaining N2 similar image blocks including Br itself;

Step S1b: stretch each image block of size √N1 × √N1 into a column vector, denoted Vl (l = 1, …, N2), and concatenate all Vl column by column into a matrix Mb with N1 rows and N2 columns;

Step S1c: select one row Rr of the matrix Mb as a reference row, and compute the Euclidean distance between Rr and every other row to find the N3-1 rows most similar to it; together with Rr, these rows form a similar pixel matrix Ms of size N3 × N2.

Preferably, step S1c is specifically:

Taking the i-th row of the matrix Mb as the reference row, the Euclidean distance between the i-th row and every other row j is computed as:

d(i, j) = sqrt( Σ_k ( Mb(i, k) - Mb(j, k) )² ), k = 1, …, N2

The N3-1 rows with the smallest distance to the i-th row are selected; together with the i-th row, they finally form a similar pixel matrix Ms of size N3 × N2.

Preferably, the Haar transform in step S2 specifically comprises:

Performing separable lifting Haar transforms on the similar pixel matrix Ms in both the vertical and horizontal directions, namely:

Ch = Hl * Ms * Hr

where Ch is the spectral matrix after the Haar transform, and Hl and Hr are Haar matrices.

Preferably, the method for obtaining the illumination component in step S2 specifically comprises:

Define Ch(1,1) as the low-frequency coefficient; reconstructing the image by performing the inverse Haar transform using only Ch(1,1) yields the illumination component Il.

Preferably, the method for obtaining the reflection component in step S2 specifically comprises:

Define Ch(1,1) as the low-frequency coefficient; using the N3 × N2 - 1 transform coefficients Ch - Ch(1,1), performing the inverse Haar transform and reconstructing the image yields the reflection component Ir.

Preferably, step S4 specifically comprises:

Step S4a: for the illumination component, obtain the three illumination components Il^R, Il^G, and Il^B from the R, G, and B channels respectively, and compare Il^R, Il^G, and Il^B to obtain the maximum illumination component;

Step S4b: perform the enhancement step with two different exponents γ1 and γ2,

where γ1 is computed as:

[formula for γ1, given as an image in the original]

and γ2 is computed as:

if

[condition on the maximum illumination component, given as an image in the original]

then

[formula for γ2, given as an image in the original]

otherwise

[alternative formula for γ2, given as an image in the original]

Step S4c: obtain the first enhanced illumination component Il(1):

[exponential enhancement formula, given as an image in the original]

Step S4d: obtain the second enhanced illumination component Il(2):

[logarithmic enhancement formula, given as an image in the original]

Preferably, step S5 specifically comprises:

Step S5a: obtain the final illumination component Il^final as the minimum of the two enhanced illumination components:

Il^final = min( Il(1), Il(2) )

where Il^final is the final illumination component, Il(1) is the first enhanced illumination component, and Il(2) is the second enhanced illumination component;

Step S5b: normalize the image to gray values in [0, 1];

Step S5c: compress the grayscale range of the image.

Preferably, step S6 specifically comprises:

Step S6a: apply the final enhanced illumination component and the final reflection component to the retinal model, that is,

Ie = lr ⊙ Il^final

where Ie is the final enhanced image and ⊙ denotes the element-wise product;

Step S6b: denote Ie as Y′; replace the V channel in the HSV color space with Y′ and convert back to the RGB color space to obtain the final enhanced color image.

Compared with the prior art, the beneficial effects of the present invention are as follows: the low-light image enhancement method based on a retinal model is fast and effective; colors in images processed by the method are not oversaturated; the information in the original image is well preserved; the problem of uneven illumination after enhancement is well solved; no spurious signals are introduced; and image edge information is retained with very high fidelity.

Brief Description of the Drawings

Fig. 1 is a flowchart of the retinal-model-based low-light image enhancement method of the present invention;

Fig. 2 is a first comparison of image results produced by the low-light image enhancement method of the present invention and several existing low-light enhancement methods;

Fig. 3 is a second comparison of image results produced by the low-light image enhancement method of the present invention and several existing low-light enhancement methods.

Detailed Description

The present invention is further described below with reference to the accompanying drawings. This description merely explains specific embodiments of the invention and must not be construed as limiting it in any way. The specific embodiments are as follows:

As shown in Fig. 1, the present invention proposes a low-light image enhancement method based on a retinal model, comprising the following steps:

Step S1: obtain groups of similar pixels;

Step S2: perform a Haar transform on the similar pixel groups, and use the low-frequency and high-frequency coefficients of the pixel-level non-local Haar transform to obtain the illumination and reflection components in each of the R, G, and B channels;

Step S3: take the minimum of the reflection components of the R, G, and B channels as the final reflection component;

Step S4: find the maximum of the illumination components of the R, G, and B channels and enhance it using a combination of exponential and logarithmic transforms to obtain the enhanced illumination components;

Step S5: take the minimum of the enhanced illumination components as the final illumination component;

Step S6: apply the final reflection component and the final illumination component to the retinal model to obtain the enhanced image.

Specifically:

Obtain groups of similar pixels.

Given a low-light color image I ∈ R^(h×w×c) in the RGB color space, convert I from the RGB color space to the HSV color space.

Perform block matching and row matching separately in the R, G, and B channels of the RGB color space. With a fixed sliding step, select a reference image block Br of size √N1 × √N1, and perform block matching within a neighborhood of a given size centered on the upper-left corner coordinate of Br to obtain the N2-1 image blocks most similar to Br, thereby obtaining N2 similar image blocks including Br itself. Stretch each image block of size √N1 × √N1 into a column vector, denoted Vl, and concatenate all Vl column by column into a matrix Mb with N1 rows and N2 columns.

To better exploit the self-similarity in the image, row matching is further performed on Mb.

One row Rr is selected as the reference row, and its Euclidean distance to every other row is computed to find the N3-1 rows most similar to it; together with Rr, these rows form a similar pixel matrix Ms of size N3 × N2.

Specifically, with the i-th row as the reference row, the Euclidean distance between the i-th row and every other row j is computed as:

d(i, j) = sqrt( Σ_k ( Mb(i, k) - Mb(j, k) )² ), k = 1, …, N2

The N3-1 rows with the smallest distance to the i-th row are then selected; together with the i-th row, they finally form a similar pixel matrix Ms of size N3 × N2.
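The grouping stage above (steps S1a to S1c) can be sketched as follows for one color channel. This is a minimal illustration, not the patent's implementation: the patch size, sliding step, search radius, N2, and N3 are illustrative values not fixed by the text, and only one reference block is processed to keep the sketch short.

```python
import numpy as np

def build_similar_pixel_matrix(channel, patch=4, step=4, search=12, n2=8, n3=16):
    """Sketch of steps S1a-S1c for one channel (values assumed in [0, 1]).

    patch : side length of the square image block (sqrt(N1))
    n2    : number of similar blocks kept per reference block
    n3    : number of similar rows kept per reference row
    """
    h, w = channel.shape
    # --- S1a: block matching around the first reference block Br at (0, 0) ---
    y0, x0 = 0, 0
    ref = channel[y0:y0 + patch, x0:x0 + patch]
    candidates = []
    for y in range(max(0, y0 - search), min(h - patch, y0 + search) + 1, step):
        for x in range(max(0, x0 - search), min(w - patch, x0 + search) + 1, step):
            blk = channel[y:y + patch, x:x + patch]
            candidates.append((np.sum((blk - ref) ** 2), blk))
    candidates.sort(key=lambda t: t[0])            # most similar first; Br itself has distance 0
    blocks = [blk for _, blk in candidates[:n2]]   # N2 blocks including Br

    # --- S1b: stretch each block to a column vector, concatenate into Mb (N1 x N2) ---
    Mb = np.stack([b.reshape(-1) for b in blocks], axis=1)

    # --- S1c: row matching by Euclidean distance to the reference row (row 0 here) ---
    i = 0
    d = np.sqrt(np.sum((Mb - Mb[i]) ** 2, axis=1))  # distance of every row to row i
    rows = np.argsort(d)[:n3]                       # N3 rows including row i itself
    Ms = Mb[rows]                                   # similar pixel matrix, N3 x N2
    return Ms, rows

rng = np.random.default_rng(0)
img = rng.random((32, 32))
Ms, rows = build_similar_pixel_matrix(img)
print(Ms.shape)  # (16, 8)
```

In a full implementation this would be repeated for every reference block in each of the R, G, and B channels.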

A separable Haar transform is performed on the similar pixel group Ms.

Separable lifting Haar transforms are performed on Ms in both the vertical and horizontal directions, namely:

Ch = Hl * Ms * Hr

where Ch is the spectral matrix after the Haar transform, and Hl and Hr are Haar matrices.

Owing to the properties of the separable lifting Haar transform, Ch(1,1) is a weighted average of all pixels of Ms; we define it as the low-frequency coefficient. Reconstructing the image by performing the inverse Haar transform using only Ch(1,1) yields the desired illumination component Il; conversely, performing the inverse Haar transform using the remaining N3 × N2 - 1 transform coefficients Ch - Ch(1,1) (i.e., the medium- and high-frequency coefficients) and reconstructing the image yields the desired reflection component Ir. This approach separates the illumination and reflection components of an image effectively and quickly, and is a key step in low-light image enhancement.
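The decomposition just described can be sketched as follows. The patent does not give Hl and Hr explicitly, so this sketch assumes orthonormal Haar matrices (and applies Hr as a transpose on the right, a convention choice) in place of the lifting implementation; with orthonormal matrices the split is exact, Il + Ir = Ms, and Il reduces to the average of all pixels of Ms, matching the "weighted average" description above.

```python
import numpy as np

def haar_matrix(n):
    """Orthonormal Haar matrix of size n x n (n must be a power of two)."""
    if n == 1:
        return np.array([[1.0]])
    h = haar_matrix(n // 2)
    top = np.kron(h, [1.0, 1.0])                 # averaging rows
    bot = np.kron(np.eye(n // 2), [1.0, -1.0])   # differencing rows
    return np.vstack([top, bot]) / np.sqrt(2.0)

def split_illumination_reflection(Ms):
    """Step S2: Ch = Hl @ Ms @ Hr.T; keep Ch(1,1) for illumination, the rest for reflection."""
    n3, n2 = Ms.shape
    Hl, Hr = haar_matrix(n3), haar_matrix(n2)
    Ch = Hl @ Ms @ Hr.T                 # forward separable Haar transform
    low = np.zeros_like(Ch)
    low[0, 0] = Ch[0, 0]                # low-frequency coefficient Ch(1,1)
    high = Ch - low                     # remaining N3*N2 - 1 coefficients
    Il = Hl.T @ low @ Hr                # inverse transform -> illumination component
    Ir = Hl.T @ high @ Hr               # inverse transform -> reflection component
    return Il, Ir

Ms = np.random.default_rng(1).random((16, 8))
Il, Ir = split_illumination_reflection(Ms)
print(np.allclose(Il + Ir, Ms))  # True: the two components reconstruct Ms exactly
```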

An enhancement operation is performed on the reflection component:

For the reflection component, the three reflection components Ir^R, Ir^G, and Ir^B are obtained from the R, G, and B channels respectively; they are compared to obtain the minimum reflection component, which is selected as the final reflection component lr.
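This per-channel selection (and the corresponding maximum selection for illumination in step S4a below) reduces to element-wise extrema over the three channel components; a minimal numpy sketch, with random stand-in arrays in place of the components produced by the Haar split:

```python
import numpy as np

rng = np.random.default_rng(2)
# Stand-in reflection and illumination components for the R, G, B channels
Ir_rgb = rng.random((3, 32, 32))   # Ir^R, Ir^G, Ir^B
Il_rgb = rng.random((3, 32, 32))   # Il^R, Il^G, Il^B

l_r = Ir_rgb.min(axis=0)       # step S3: final reflection = channel-wise minimum
Il_max = Il_rgb.max(axis=0)    # step S4a: brightest illumination = channel-wise maximum

print(l_r.shape, Il_max.shape)  # (32, 32) (32, 32)
```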

An enhancement operation is performed on the illumination component:

For the illumination component, the three illumination components Il^R, Il^G, and Il^B are obtained from the R, G, and B channels respectively; they are compared to obtain the maximum illumination component, i.e., the brightest illumination component. The enhancement step is performed with two different exponents γ1 and γ2, which can be computed as follows:

[formula for γ1, given as an image in the original]

γ2 falls into one of two cases:

if

[condition on the maximum illumination component, given as an image in the original]

then

[formula for γ2, given as an image in the original]

otherwise

[alternative formula for γ2, given as an image in the original]

The first enhanced illumination component Il(1) can be obtained as follows:

[exponential enhancement formula, given as an image in the original]

The second enhanced illumination component Il(2) can be obtained as follows:

[logarithmic enhancement formula, given as an image in the original]

In practice it has been found that if only an exponential transform is used to enhance the illumination component in the low-brightness parts of the image, the brightness values grow too quickly, which leads to the problem of insufficient brightness; if only a logarithmic transform is used to enhance the illumination component in the high-brightness parts, the brightness values likewise grow too quickly, which also leads to uneven brightness.

Therefore, to solve the above problems, the present invention obtains the final illumination component Il^final as follows:

Il^final = min( Il(1), Il(2) )

The image is normalized to gray values in [0, 1], and its grayscale range is then compressed. Dark regions of the original image are brightened to a large extent while bright regions change very little, which achieves the low-light enhancement effect and ensures that the enhancement result adapts to regions with different illumination.
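The exponential/logarithmic combination and the min-selection above can be sketched as follows. Note the hedging: the patent's exact formulas for γ1, γ2, Il(1), and Il(2) are given only as images in the original, so the power-law and log1p forms below are illustrative stand-ins, with γ1 and γ2 supplied by the caller.

```python
import numpy as np

def enhance_illumination(Il_max, gamma1, gamma2):
    """Sketch of steps S4b-S5: combine an exponential and a logarithmic boost.

    Il(1) and Il(2) below are assumed forms, not the patent's exact formulas
    (those appear only as images in the original); gamma1/gamma2 are inputs.
    """
    Il_1 = np.power(Il_max, gamma1)                      # assumed exponential (power-law) boost
    Il_2 = np.log1p(gamma2 * Il_max) / np.log1p(gamma2)  # assumed logarithmic boost, kept in [0, 1]
    Il_final = np.minimum(Il_1, Il_2)                    # step S5: take the minimum component
    # normalize to gray values in [0, 1] (the grayscale compression step)
    Il_final = (Il_final - Il_final.min()) / (Il_final.max() - Il_final.min() + 1e-12)
    return Il_final

Il_max = np.linspace(0.05, 0.95, 10)
out = enhance_illumination(Il_max, gamma1=0.5, gamma2=4.0)
print(out.min() >= 0 and out.max() <= 1)  # True
```

Because both stand-in transforms are monotone increasing, their minimum preserves the ordering of the input brightness while limiting how fast either transform alone can grow.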

The final illumination component and the final reflection component are applied to the retinal model, i.e.,

Ie = lr ⊙ Il^final

where Ie is the final enhanced image and ⊙ denotes the element-wise product.

Denoting Ie as Y′, replacing the V channel in the HSV color space with Y′ and converting back to the RGB color space yields the final enhanced color image.
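The recombination step above can be sketched as follows, with random stand-in arrays for lr and Il^final. The per-pixel `colorsys` conversion is used only for clarity; a vectorized rgb/hsv conversion would be used in practice.

```python
import colorsys
import numpy as np

def recombine_and_colorize(l_r, Il_final, rgb):
    """Step S6 sketch: Ie = l_r * Il_final (element-wise), then place Ie into
    the V channel of the HSV version of the original image and convert back."""
    Ie = l_r * Il_final                      # retinal model: reflection ⊙ illumination
    h, w, _ = rgb.shape
    out = np.empty_like(rgb)
    for y in range(h):
        for x in range(w):
            hh, ss, _ = colorsys.rgb_to_hsv(*rgb[y, x])      # keep hue and saturation
            out[y, x] = colorsys.hsv_to_rgb(hh, ss, float(np.clip(Ie[y, x], 0.0, 1.0)))
    return Ie, out

rng = np.random.default_rng(3)
rgb = rng.random((8, 8, 3))                  # stand-in original color image
l_r, Il_final = rng.random((8, 8)), rng.random((8, 8))
Ie, enhanced = recombine_and_colorize(l_r, Il_final, rgb)
print(enhanced.shape)  # (8, 8, 3)
```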

Enhancement experiments were carried out in MATLAB on a dataset of 200 low-light images randomly selected from the CVPR 2021 UG2+ Challenge dataset and on a 35-image low-light dataset. The algorithm of the present invention was executed to obtain enhanced result images, which were compared with the classical prior-art methods HE, MSRCR, CVC, NPE, SIRE, MF, WVM, CRM, BIMEF, LIME, JieP, and STAR; the visual results are shown in Figs. 2 and 3. The CVPR 2021 UG2+ Challenge dataset is available at: http://cvpr2021.ug2challenge.org/dataset21_t1.html

As can be seen from Figs. 2 and 3, images enhanced by the method of the present invention have colors that are not oversaturated, preserve the information in the original image well, solve the problem of uneven illumination after enhancement, introduce no spurious signals, and retain image edge information with very high fidelity.

The NIQE, LOE, TMQI, and FSIM values of images enhanced by the method of the present invention are compared with those of images enhanced by the prior-art methods in the following table:

| Method | NIQE | LOE | TMQI | FSIM |
|---|---|---|---|---|
| HE | 3.62 | 740.30 | 0.9220 | 0.7174 |
| MSRCR | 3.17 | 702.85 | 0.8506 | 0.6969 |
| CVC | 3.11 | 654.82 | 0.8715 | 0.8578 |
| NPE | 3.22 | 710.21 | 0.8891 | 0.8193 |
| SIRE | 3.01 | 637.70 | 0.8680 | 0.8991 |
| MF | 3.38 | 776.41 | 0.8997 | 0.8233 |
| WVM | 2.99 | 633.40 | 0.8674 | 0.8999 |
| CRM | 3.13 | 744.61 | 0.8964 | 0.8123 |
| BIMEF | 3.04 | 703.16 | 0.9017 | 0.8898 |
| LIME | 3.39 | 779.73 | 0.8791 | 0.7131 |
| JieP | 2.99 | 724.52 | 0.8766 | 0.8749 |
| STAR | 2.93 | 677.43 | 0.8784 | 0.9047 |
| Method of the invention | 2.76 | 546.63 | 0.8616 | 0.9250 |

Note: lower NIQE, LOE, and TMQI values indicate higher image quality, and a higher FSIM value indicates higher image quality.

As can be seen from the data in the table, for all four of the above index values, the images processed by the low-light image enhancement method of the present invention are better than the results of the prior-art low-light image enhancement methods.

By now, those skilled in the art will appreciate that although exemplary embodiments of the present invention have been shown and described in detail herein, many other variations or modifications consistent with the principles of the invention may still be determined or derived directly from the disclosure without departing from its spirit and scope. The scope of the present invention should therefore be understood and deemed to cover all such other variations or modifications.

Claims (9)

1.一种基于视网膜模型的低光图像增强方法,其特征在于,包括:1. a low-light image enhancement method based on retinal model, is characterized in that, comprises: 步骤S1:获得相似像素群组;Step S1: obtaining similar pixel groups; 步骤S2:在相似像素群组上执行哈尔变换,利用像素级的非局部哈尔变换的低频系数和高频系数分别在R、G、B三个通道分别获得光照分量和反射分量;Step S2: performing Haar transform on similar pixel groups, using the low-frequency coefficients and high-frequency coefficients of pixel-level non-local Haar transforms to obtain illumination components and reflection components in three channels of R, G, and B, respectively; 步骤S3:找到R、G、B三个通道反射分量的最小分量作为最终反射分量;Step S3: find the minimum component of the reflection components of the three channels R, G, and B as the final reflection component; 步骤S4:找到R、G、B三个通道光照分量的最大分量,运用指数和对数相结合的方法进行增强,得出增强光照分量;Step S4: Find the maximum component of the three channel illumination components of R, G, and B, and use the combination of exponential and logarithmic methods to enhance to obtain enhanced illumination components; 步骤S5:将增强光照分量的最小分量作为最终光照分量;Step S5: take the minimum component of the enhanced illumination component as the final illumination component; 步骤S6:将所述最终反射分量与所述最终光照分量应用于视网膜模型,获得增强的图像。Step S6: Apply the final reflection component and the final illumination component to a retinal model to obtain an enhanced image. 2.根据权利要求1所述的低光图像增强方法,其特征在于,所述步骤S1具体包括:2. The low-light image enhancement method according to claim 1, wherein the step S1 specifically comprises: 步骤S1a:在RGB颜色空间中的R、G、B三个通道中分别执行块匹配与行匹配,按一定滑动步长在其中选取一个尺寸为
Figure FDA0003086196820000011
参考图像块Br,在以Br左上角坐标为中心的一个给定大小的邻域内执行块匹配获得与Br最相似的N2-1个图像块,从而获得连同Br在内N2个相似图像块;
Step S1a: Perform block matching and line matching respectively in the three channels of R, G, and B in the RGB color space, and select a size among them according to a certain sliding step.
Figure FDA0003086196820000011
Referring to the image block B r , perform block matching in a neighborhood of a given size centered on the upper-left corner coordinate of Br to obtain N2-1 image blocks most similar to B r , thereby obtaining N2 similar image blocks together with B r image block;
步骤S1b:将每一个尺寸为
Figure FDA0003086196820000012
的图像块拉伸成一个列向量,标记为
Figure FDA0003086196820000013
Figure FDA0003086196820000014
将所有Vl按列拼接为一个
Figure FDA0003086196820000015
行N2列的矩阵Mb
Step S1b: Convert each dimension to
Figure FDA0003086196820000012
The image blocks are stretched into a column vector, labeled as
Figure FDA0003086196820000013
Figure FDA0003086196820000014
Concatenate all V l by column into one
Figure FDA0003086196820000015
matrix M b with row N2 columns;
步骤S1c:选取矩阵Mb其中一行Rr作为参考行,计算Rr与其他所有行的欧氏距离以找到与其最相似的N3-1行,连同Rr在内构造一个尺寸为N3×N2相似像素矩阵MsStep S1c: Select one row R r of the matrix M b as a reference row, calculate the Euclidean distance between R r and all other rows to find the most similar N 3 -1 row, and construct a size N 3 × N 2 similar pixel matrix M s .
3.根据权利要求2所述的低光图像增强方法,其特征在于,所述步骤S1c具体为:3. The low-light image enhancement method according to claim 2, wherein the step S1c is specifically: 将矩阵Mb中第i行作为参考行,计算第i行与其余所有行的欧式距离为:Taking the i-th row in the matrix M b as the reference row, the Euclidean distance between the i-th row and all other rows is calculated as:
Figure FDA0003086196820000016
Figure FDA0003086196820000016
选择与第i行距离最小的N3-1行,连同第i行最终获得尺寸为N3×N2的相似像素矩阵MsThe N 3 -1 row with the smallest distance from the ith row is selected, and together with the ith row, a similar pixel matrix M s of size N 3 ×N 2 is finally obtained.
4.根据权利要求3所述的低光图像增强方法,其特征在于,所述步骤S2中所述哈尔变换具体包括:4. The low-light image enhancement method according to claim 3, wherein the Haar transform in the step S2 specifically comprises: 对相似像素矩阵Ms分别执行纵向与横向的可分提升哈尔变换,即:The vertical and horizontal separable lifting Haar transforms are performed on the similar pixel matrix M s , namely: Ch=Hl*Ms*Hr C h =H l *M s *H r 其中,Ch为哈尔变换后的谱矩阵,Hl与Hr为哈尔矩阵。Among them, C h is the spectral matrix after Haar transformation, and H l and H r are Haar matrices. 5.根据权利要求4所述的低光图像增强方法,其特征在于,所述步骤S2中所述光照分量的获得方法具体包括:5. The low-light image enhancement method according to claim 4, wherein the method for obtaining the illumination component in the step S2 specifically comprises: 定义Ch(1,1)为低频系数,利用Ch(1,1)执行逆哈尔变换后重构图像获得光照分量IlC h (1,1) is defined as a low-frequency coefficient, and the illumination component I l is obtained by reconstructing the image after performing inverse Haar transform with C h (1,1). 6.根据权利要求4所述的低光图像增强方法,其特征在于,所述步骤S2中所述反射分量的获得方法具体包括:6. The low-light image enhancement method according to claim 4, wherein the method for obtaining the reflection component in the step S2 specifically comprises: 定义Ch(1,1)为低频系数,利用Ch-Ch(1,1)这N3×N2-1个变换系数,执行逆哈尔变换后重构图像而获得反射分量IrDefine C h (1,1) as the low-frequency coefficient, and use the N 3 ×N 2 -1 transformation coefficients of C h -C h (1,1) to perform inverse Haar transform and reconstruct the image to obtain the reflection component I r . 7.根据权利要求1所述的低光图像增强方法,其特征在于,所述步骤S4具体包括:7. The low-light image enhancement method according to claim 1, wherein the step S4 specifically comprises: 步骤S4a:对于光照分量,通过R、G、B三个通道分别获得3个光照分量
(Figures FDA0003086196820000021, FDA0003086196820000022, and FDA0003086196820000023); the quantities in Figures FDA0003086196820000024 and FDA0003086196820000025 are then compared to obtain the maximum component of the illumination;
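Claims 4 through 6 split each similar-pixel matrix into an illumination part (the single low-frequency Haar coefficient) and a reflection part (all remaining coefficients). A minimal sketch follows; the patent's H_l and H_r act on N_3 × N_2 blocks, whereas the 2 × 2 orthonormal Haar matrix here is illustrative only:

```python
import numpy as np

# Orthonormal 2x2 Haar matrix; its transpose is its inverse.
H = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2.0)

def haar_split(Ms: np.ndarray):
    """Split Ms into illumination and reflection via the Haar spectrum (sketch)."""
    Ch = H @ Ms @ H.T          # forward separable Haar transform
    low = np.zeros_like(Ch)
    low[0, 0] = Ch[0, 0]       # keep only the low-frequency coefficient Ch(1,1)
    Il = H.T @ low @ H         # inverse transform of Ch(1,1) -> illumination
    Ir = Ms - Il               # inverse of the remaining coefficients -> reflection
    return Il, Ir
```

Because the transform is linear, inverting the remaining coefficients C_h − C_h(1,1) equals M_s minus the illumination, which is how `Ir` is computed above.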
Step S4b: perform the enhancement step with two different exponents γ1 and γ2, where γ1 is calculated as:
Figure FDA0003086196820000026
and γ2 is calculated piecewise: if the condition in Figure FDA0003086196820000027 holds, then γ2 is given by Figure FDA0003086196820000028; otherwise γ2 is given by Figure FDA0003086196820000031;
Step S4c: obtain the first enhanced illumination component (Figure FDA0003086196820000032) according to the formula in Figure FDA0003086196820000033;
Step S4d: obtain the second enhanced illumination component (Figure FDA0003086196820000034) according to the formula in Figure FDA0003086196820000035.
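Steps S4a through S4d can be sketched as below. The patent gives the adaptive formulas for γ1 and γ2 only as figures, so the exponents here are illustrative constants, not the patented values:

```python
import numpy as np

def dual_gamma_enhance(Il_r, Il_g, Il_b, gamma1=0.6, gamma2=0.4):
    """Sketch of steps S4a-S4d: take the pixel-wise maximum of the three
    per-channel illumination components, then brighten it with two
    different exponents (gamma correction on values in [0, 1])."""
    Il_max = np.maximum(np.maximum(Il_r, Il_g), Il_b)  # step S4a: maximum component
    # Exponents below 1 brighten dark pixels while compressing bright ones.
    Il_1 = Il_max ** gamma1   # first enhanced illumination component  (S4c)
    Il_2 = Il_max ** gamma2   # second enhanced illumination component (S4d)
    return Il_1, Il_2
```

With two exponents, the smaller one (here γ2) lifts dark regions more aggressively, and the two results are fused in step S5.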
8. The low-light image enhancement method according to claim 7, wherein step S5 specifically comprises:

Step S5a: obtain the final illumination component (Figure FDA0003086196820000036) according to the formula in Figure FDA0003086196820000037;
where the symbol in Figure FDA0003086196820000038 denotes the final illumination component, the symbol in Figure FDA0003086196820000039 denotes the first enhanced illumination component, and the symbol in Figure FDA00030861968200000310 denotes the second enhanced illumination component;
Step S5b: normalize the image to gray values in [0, 1];

Step S5c: compress the grayscale range of the image.
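Step S5 can be sketched as below. The fusion rule (simple averaging) and the logarithmic range compression are assumptions for illustration only, since the patent's exact fusion formula appears only as a figure:

```python
import numpy as np

def fuse_and_normalize(Il_1, Il_2):
    """Sketch of step S5: fuse the two enhanced illumination components,
    normalize to [0, 1], and compress the grayscale range."""
    Il_final = 0.5 * (Il_1 + Il_2)                 # S5a: assumed averaging fusion
    lo, hi = Il_final.min(), Il_final.max()
    Il_norm = (Il_final - lo) / (hi - lo + 1e-12)  # S5b: normalize to [0, 1]
    # S5c: log1p(x)/ln(2) maps [0, 1] onto [0, 1] while compressing the
    # upper part of the range (an assumed compression curve).
    return np.log1p(Il_norm) / np.log(2.0)
```

Any monotone map of [0, 1] onto itself would serve as the compression in this sketch; the log curve is just one common choice.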
9. The low-light image enhancement method according to claim 8, wherein step S6 specifically comprises:

Step S6a: apply the final enhanced illumination component and the final reflection component to the retina model, namely
I_e = Î_l ⊙ Î_r (with Î_l and Î_r denoting the final enhanced illumination component and the final reflection component)
where I_e is the finally enhanced image and ⊙ denotes the element-wise product;

Step S6b: denote I_e as Y′, replace the V channel of the HSV color space with Y′, and convert back to the RGB color space to obtain the final enhanced color image.
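Steps S6a and S6b can be sketched as follows. Replacing the V channel is implemented here by rescaling the RGB triplet so that max(R, G, B) equals the enhanced luminance, which leaves hue and saturation unchanged; this rescaling shortcut is our choice, not necessarily the patent's conversion route:

```python
import numpy as np

def recombine_and_restore_color(Il_final, Ir_final, rgb):
    """Sketch of step S6: element-wise product of the final illumination and
    reflection components (retina/Retinex model), then written back as the
    V channel of the input image's HSV representation."""
    Ie = Il_final * Ir_final                # S6a: element-wise product
    v = rgb.max(axis=-1, keepdims=True)     # V channel of HSV is max(R, G, B)
    # Scaling an RGB triplet by k scales V by k and preserves H and S,
    # so multiplying by Ie/v replaces the V channel with Ie.
    scale = Ie[..., None] / np.maximum(v, 1e-12)
    return np.clip(rgb * scale, 0.0, 1.0)   # S6b: enhanced color image
```

The clip keeps the output in the valid [0, 1] range when the enhanced luminance exceeds the original V.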
CN202110581353.5A 2021-05-27 2021-05-27 Low-light image enhancement method based on retina model Expired - Fee Related CN113160096B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110581353.5A CN113160096B (en) 2021-05-27 2021-05-27 Low-light image enhancement method based on retina model

Publications (2)

Publication Number Publication Date
CN113160096A true CN113160096A (en) 2021-07-23
CN113160096B CN113160096B (en) 2023-12-08

Family

ID=76877698

Country Status (1)

Country Link
CN (1) CN113160096B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114298943A (en) * 2021-12-30 2022-04-08 桂林理工大学 Low-light image enhancement method based on block matching three-dimensional transformation

Citations (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2014169579A1 (en) * 2013-04-19 2014-10-23 Huawei Technologies Co., Ltd. Color enhancement method and device
CN106780417A (en) * 2016-11-22 2017-05-31 Beijing Jiaotong University Enhancement method and system for unevenly illuminated images
CN107578383A (en) * 2017-08-29 2018-01-12 北京华易明新科技有限公司 Low-illumination image enhancement processing method
CN109493295A (en) * 2018-10-31 2019-03-19 Taishan University Non-local Haar transform image denoising method
US20190333200A1 (en) * 2017-01-17 2019-10-31 Peking University Shenzhen Graduate School Method for enhancing low-illumination image
CN111223068A (en) * 2019-11-12 2020-06-02 Xi'an University of Architecture and Technology An adaptive non-uniform low-light image enhancement method based on Retinex
CN111583123A (en) * 2019-02-17 2020-08-25 Zhengzhou University Wavelet transform-based image enhancement algorithm for fusing high-frequency and low-frequency information
CN111626945A (en) * 2020-04-23 2020-09-04 Taishan University Depth image restoration method based on pixel-level self-similarity model
CN112116536A (en) * 2020-08-24 2020-12-22 Shandong Normal University Low-illumination image enhancement method and system
CN112365425A (en) * 2020-11-24 2021-02-12 中国人民解放军陆军炮兵防空兵学院 Low-illumination image enhancement method and system
US20210118110A1 (en) * 2019-10-21 2021-04-22 Illumina, Inc. Increased Calculation Efficiency for Structured Illumination Microscopy
WO2021088481A1 (en) * 2019-11-08 2021-05-14 Nanjing University of Science and Technology High-precision dynamic real-time 360-degree omnibearing point cloud acquisition method based on fringe projection

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
黄丽雯; 王勃; 宋涛; 黄俊木: "Research on low-light color image enhancement algorithms", Journal of Chongqing University of Technology (Natural Science), no. 01 *


Similar Documents

Publication Publication Date Title
CN110232661B (en) Low-illumination color image enhancement method based on Retinex and convolutional neural network
CN110047051B (en) Non-uniform illumination color image enhancement method
CN104156921B (en) An Adaptive Image Enhancement Method for Images with Low Illumination or Uneven Brightness
CN101951523B (en) Adaptive colour image processing method and system
Gupta et al. Minimum mean brightness error contrast enhancement of color images using adaptive gamma correction with color preserving framework
CN103530847B (en) A kind of infrared image enhancing method
CN103593830B (en) A low-light video image enhancement method
CN112785532B (en) Singular value equalization image enhancement algorithm based on weighted histogram distribution gamma correction
CN111968041A (en) Self-adaptive image enhancement method
CN110428379B (en) Image gray level enhancement method and system
WO2014169579A1 (en) Color enhancement method and device
CN105243641B (en) A kind of low light image Enhancement Method based on dual-tree complex wavelet transform
CN114897753A (en) Low-illumination image enhancement method
CN107256539B (en) An Image Sharpening Method Based on Local Contrast
CN104112253A (en) Low-illumination image/video enhancement method based on self-adaptive multiple-dimensioned filtering
CN111968065A (en) Self-adaptive enhancement method for image with uneven brightness
CN102289670B (en) Image characteristic extraction method with illumination robustness
CN115170415B (en) A low-light image enhancement method, system and readable storage medium
Jeon et al. Low-light image enhancement using inverted image normalized by atmospheric light
CN118195980A (en) Dark part detail enhancement method based on gray level transformation
Omarova et al. Application of the Clahe method contrast enhancement of X-ray images
CN104463806B (en) Height adaptive method for enhancing picture contrast based on data driven technique
CN114187222A (en) Low-illumination image enhancement method and system and storage medium
CN105225205A (en) Image enchancing method, Apparatus and system
CN103839245A (en) Retinex night color image enhancement method based on statistical regularities

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20231208