CN114429426B - Low-illumination image quality improvement method based on Retinex model - Google Patents

Low-illumination image quality improvement method based on Retinex model

Info

Publication number
CN114429426B
CN114429426B (application CN202111561917.5A)
Authority
CN
China
Prior art keywords
layer image
image
illumination
pixel value
detail
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202111561917.5A
Other languages
Chinese (zh)
Other versions
CN114429426A (en)
Inventor
赵蓝飞
魏莲莲
陈志铧
李国庆
李士俊
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Harbin University of Science and Technology
Original Assignee
Harbin University of Science and Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Harbin University of Science and Technology filed Critical Harbin University of Science and Technology
Priority to CN202111561917.5A priority Critical patent/CN114429426B/en
Publication of CN114429426A publication Critical patent/CN114429426A/en
Application granted granted Critical
Publication of CN114429426B publication Critical patent/CN114429426B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Images

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 - Image enhancement or restoration
    • G06T3/04
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00 - Geometric image transformation in the plane of the image
    • G06T3/40 - Scaling the whole image or part thereof
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/10 - Image acquisition modality
    • G06T2207/10004 - Still image; Photographic image
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/20 - Special algorithmic details
    • G06T2207/20016 - Hierarchical, coarse-to-fine, multiscale or multiresolution image processing; Pyramid transform

Abstract

A low-illumination image quality improvement method based on a Retinex model, belonging to the technical field of image processing. The invention solves the problem that images produced by existing low-illumination image quality improvement algorithms are of poor quality. Firstly, the digital image is layered through a Retinex model to obtain a detail layer image and an illumination layer image; secondly, a nonlinear global brightness mapping function is designed and applied to the illumination layer image to obtain an illumination-layer enhanced image; then a nonlinear detail layer mapping function is designed and used to stretch the detail layer image to obtain a detail-layer enhanced image; finally, the detail-layer enhanced image and the illumination-layer enhanced image are multiplied pixel by pixel to synthesize the low-illumination enhanced image. The method can be applied to improve the quality of low-illumination images.

Description

Low-illumination image quality improvement method based on Retinex model
Technical Field
The invention belongs to the technical field of image processing, and particularly relates to a low-illumination image quality improving method based on a Retinex model.
Background
Because light is easily influenced by the environment along its propagation path, images shot by a digital camera often have an uneven brightness distribution and unclear details and textures in low-illumination areas, resulting in poor image quality and a poor visual effect for the human eye. How to mitigate the effect of low illumination on image quality has therefore become a hot issue in the image processing field in recent years.
Classic low-illumination image quality improvement algorithms mainly include gray-scale segmentation mapping, histogram equalization and Gamma correction. Although these classic algorithms can suppress the visual problems caused by the low-illumination effect to a certain extent, the quality of the enhanced image is affected by issues such as the choice of free parameters, over-enhancement and blurred details. As a result, after a low-illumination image is processed by existing low-illumination image quality improvement algorithms, the quality of the obtained image is still poor.
Disclosure of Invention
The invention aims to solve the problem that when the existing low-illumination image quality improvement algorithm is adopted to process a low-illumination image, the quality of the obtained image is poor, and provides a low-illumination image quality improvement method based on a Retinex model.
The technical scheme adopted by the invention for solving the technical problems is as follows:
a low-illumination image quality improvement method based on a Retinex model specifically comprises the following steps:
step one, after the acquired image is converted from an RGB channel to an HSV channel, data of a V channel, an H channel and an S channel are respectively acquired;
decomposing the V channel data into an illumination layer image and a detail layer image;
step two, carrying out global brightness mapping processing on the illumination layer image to obtain the illumination layer image after global brightness mapping;
then, carrying out pixel value domain expansion on the illumination layer image after global brightness mapping to obtain an illumination layer image after pixel value domain expansion;
stretching the detail layer image by using a detail layer image mapping function to obtain a stretched detail layer image;
step four, synthesizing the illumination layer image with the expanded pixel value domain and the stretched detail layer image into a new image; and converting the synthesized image together with the H channel and S channel data obtained in step one back to the RGB space to obtain the output image.
Further, in the first step, the V-channel data is decomposed into an illumination layer image and a detail layer image, and the specific process is as follows:
I(x,y)=c(x,y)×L(x,y) (1)
wherein, I (x, y) represents a pixel value of the V channel data at the pixel point (x, y), L (x, y) represents a pixel value of the illumination layer image at the pixel point (x, y), and c (x, y) represents a pixel value of the detail layer image at the pixel point (x, y).
Further, the illumination layer image is obtained by performing convolution calculation on the V channel data and a Gaussian kernel with the variance of 1.
Further, the specific process of the second step is as follows:
step 2.1, normalization processing is carried out on the illumination layer image to obtain the normalized illumination layer image:
L_n(x, y) = L(x, y) / max(L)  (2)
where L_n(x, y) represents the pixel value of the normalized illumination layer image at the pixel point (x, y), and max(L) represents the maximum pixel value of the illumination layer image (the maximum-value operation);
step 2.2, determining the brightness segmentation threshold T of the normalized illumination layer image by adopting the maximum inter-class variance method;
step 2.3, carrying out global brightness mapping on the normalized illumination layer image according to the brightness segmentation threshold T and a global brightness mapping function, so as to obtain the illumination layer image after global brightness mapping;
the global luminance mapping function is:
[Equation (3) is reproduced only as an image in the original document.]
where the mapped quantity in equation (3) represents the pixel value of the illumination layer image after global brightness mapping at the pixel point (x, y);
step 2.4, carrying out pixel value domain expansion on the illumination layer image after global brightness mapping:
[Equation (4) is reproduced only as an image in the original document.]
where L_d(x, y) represents the pixel value, at the pixel point (x, y), of the illumination layer image after pixel value domain expansion.
Further, the detail layer image mapping function is:
[Equation (5), the detail layer image mapping function, is reproduced only as an image in the original document.]
wherein e is the base of the natural logarithm, A, B, D are coefficients of the detail layer image mapping function, and S (c (x, y)) represents the value of the pixel point (x, y) in the stretched detail layer image.
Further, the coefficients a, B, D of the detail layer image mapping function are:
[Equation (6), giving the coefficients A, B and D, is reproduced only as an image in the original document.]
where [h_0, h_1] is the value range of the pixel values c(x, y) in the detail layer image, i.e. c(x, y) ∈ [h_0, h_1]; h_0 is the minimum and h_1 the maximum of the pixel values c(x, y) in the detail layer image.
Further, in the fourth step, the illumination layer image after the pixel value domain expansion and the stretched detail layer image are synthesized into a new image, and the specific process is as follows:
and multiplying the pixel values at the same position in the illumination layer image after the pixel value domain expansion and the stretched detail layer image, and taking the multiplication result as the pixel value at the corresponding position in the synthesized new image.
The invention has the beneficial effects that:
the invention provides a low-illumination image quality improving method based on a Retinex model, aiming at improving the overall contrast of a low-illumination image and enhancing the detail and texture characteristics of the low-illumination image. Firstly, layering a digital image through a Retinex model to obtain a detail layer image and an illumination layer image; secondly, designing a nonlinear global brightness mapping function, and mapping the illumination layer image to obtain an illumination layer enhanced image; designing a nonlinear detail layer image mapping function again, and stretching the detail layer image to obtain a detail layer enhanced image; and finally, multiplying each pixel of the detail layer enhanced image and the illumination layer enhanced image to synthesize the low-illumination enhanced image. Experimental results show that the algorithm can effectively improve the image quality of the low-illumination image.
Drawings
FIG. 1 is a flowchart of a method for improving the quality of low-illumination images based on Retinex model according to the present invention;
fig. 2 is a graph of a global luminance mapping function of a corresponding illumination layer when a luminance partition threshold T is 0.2;
fig. 3 is a graph of a global luminance mapping function of a corresponding illumination layer when a luminance partition threshold T is 0.5;
FIG. 4 is a graph of a detail layer image mapping function;
wherein the intensity value range of the detail layer of the original image is [0.1, 3];
FIG. 5(a) is a first original image;
FIG. 5(b) is an enhanced image corresponding to FIG. 5 (a);
FIG. 6(a) is the original image two;
fig. 6(b) is an enhanced image corresponding to fig. 6 (a).
Detailed Description
Embodiment 1: this embodiment is described with reference to FIG. 1. The method for improving the quality of a low-illumination image based on a Retinex model in this embodiment specifically comprises the following steps:
step one, after the acquired image is converted from an RGB channel to an HSV channel, data of a V channel, an H channel and an S channel are respectively acquired;
decomposing the V channel data into an illumination layer image and a detail layer image;
step two, carrying out global brightness mapping processing on the illumination layer image to obtain the illumination layer image after global brightness mapping;
then, carrying out pixel value domain expansion on the illumination layer image after global brightness mapping to obtain an illumination layer image after pixel value domain expansion, namely an illumination enhancement image;
stretching the detail layer image by using a detail layer image mapping function to obtain a stretched detail layer image, namely a detail enhanced image;
step four, synthesizing the illumination layer image with the expanded pixel value domain and the stretched detail layer image into a new image; and converting the synthesized image together with the H channel and S channel data obtained in step one back to the RGB space to obtain the output image.
In this embodiment, an image is first fed to the RGB-to-HSV module for color conversion; the Retinex layering model of formula (1) is then applied to the V channel data to obtain an illumination layer image and a detail layer image; the global brightness mapping of formula (3) is applied to the illumination layer image to obtain an illumination-layer enhanced image; the detail layer image is mapped with formula (5) to obtain a detail-layer enhanced image; the detail-layer enhanced image and the illumination-layer enhanced image are multiplied pixel by pixel to synthesize a new image; finally, through the HSV-to-RGB conversion module, the synthesized image and the original H channel and S channel data are converted into an output image that can be displayed directly, and the quality of the obtained output image is markedly improved.
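To make the processing flow of this embodiment concrete, the following Python sketch (not part of the patent) wires the stages together with OpenCV; the file names, function names and the placeholder enhancement bodies are assumptions, and in particular the simple gamma lift and the identity detail stretch merely stand in for the actual mapping functions (3) to (6), which the source reproduces only as images.

```python
# Illustrative sketch only (not from the patent): the enhancement bodies below are
# simple placeholders for equations (3)-(6), which the source shows only as images.
import cv2
import numpy as np

def decompose_retinex(v, sigma=1.0, eps=1e-6):
    """Formula (1): I = c * L, with L the V channel blurred by a Gaussian of variance 1."""
    illum = cv2.GaussianBlur(v, (0, 0), sigmaX=sigma)
    detail = v / (illum + eps)
    return illum, detail

def enhance_illumination(illum):
    """Placeholder for steps 2.1-2.4 (normalize, OTSU threshold, global mapping,
    value-domain expansion); a plain gamma lift is used here as a stand-in."""
    peak = float(illum.max()) + 1e-6
    return ((illum / peak) ** 0.6) * peak

def enhance_detail(detail):
    """Placeholder for the detail-layer stretch of equation (5); identity here."""
    return detail

def enhance_low_light(bgr):
    hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV).astype(np.float32)
    h, s, v = cv2.split(hsv)
    illum, detail = decompose_retinex(v)
    v_new = np.clip(enhance_detail(detail) * enhance_illumination(illum), 0, 255)
    out = cv2.merge([h, s, v_new]).astype(np.uint8)
    return cv2.cvtColor(out, cv2.COLOR_HSV2BGR)

if __name__ == "__main__":
    img = cv2.imread("low_light.bmp")                 # hypothetical input file
    cv2.imwrite("enhanced.bmp", enhance_low_light(img))
```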
Embodiment 2: this embodiment differs from Embodiment 1 in that, in step one, the V channel data is decomposed into an illumination layer image and a detail layer image by the following process:
I(x,y)=c(x,y)×L(x,y) (1)
wherein, I (x, y) represents a pixel value of the V channel data at the pixel point (x, y), L (x, y) represents a pixel value of the illumination layer image at the pixel point (x, y), and c (x, y) represents a pixel value of the detail layer image at the pixel point (x, y).
According to the Retinex model, a digital image can be decomposed into an illumination layer image and a detail layer image, the illumination layer image determines the whole brightness and darkness contrast of the image, and the detail layer image determines the details and texture of the image. The pixel point (x, y) is a coordinate in an image coordinate system, and the width direction of the image is taken as an x-axis, and the height direction of the image is taken as a y-axis.
Other steps and parameters are the same as those in the first embodiment.
Embodiment 3: this embodiment differs from Embodiment 1 or 2 in that the illumination layer image is obtained by convolving the V channel data with a Gaussian kernel of variance 1.
Other steps and parameters are the same as those in the first or second embodiment.
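A minimal sketch of this decomposition, for illustration only: the illumination layer is taken as the V channel blurred with a Gaussian kernel of variance 1 (standard deviation 1), and the detail layer follows from formula (1) as c = I / L; the small epsilon guarding the division is an implementation assumption, not part of the patent.

```python
# Illustration only: decompose the V channel into illumination and detail layers
# and verify that they multiply back to the original channel (formula (1)).
import cv2
import numpy as np

def decompose_v_channel(v, eps=1e-6):
    v = v.astype(np.float32)
    illum = cv2.GaussianBlur(v, (0, 0), sigmaX=1.0)   # variance 1, i.e. sigma = 1
    detail = v / (illum + eps)                        # eps is an assumed guard against division by zero
    return illum, detail

v = np.random.randint(0, 256, (64, 64)).astype(np.float32)
L, c = decompose_v_channel(v)
print(np.allclose(c * (L + 1e-6), v, atol=1e-3))      # True: the layers reconstruct V
```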
Embodiment 4: this embodiment differs from Embodiments 1 to 3 in that the specific process of step two is as follows:
step 2.1, normalization processing is carried out on the illumination layer image to obtain the normalized illumination layer image:
L_n(x, y) = L(x, y) / max(L)  (2)
where L_n(x, y) represents the pixel value of the normalized illumination layer image at the pixel point (x, y), and max(L) represents the maximum pixel value of the illumination layer image (the maximum-value operation);
step 2.2, determining the brightness segmentation threshold T of the normalized illumination layer image by adopting the maximum inter-class variance method (OTSU);
step 2.3, carrying out global brightness mapping on the normalized illumination layer image according to the brightness segmentation threshold T and a global brightness mapping function, so as to obtain the illumination layer image after global brightness mapping;
the global luminance mapping function is:
[Equation (3) is reproduced only as an image in the original document.]
where the mapped quantity in equation (3) represents the pixel value of the illumination layer image after global brightness mapping at the pixel point (x, y);
the overall contrast of the image can be enhanced through the global brightness mapping, and as can be seen from fig. 2 and 3, the global brightness mapping function stretches the brightness smaller than the threshold T, so as to improve the brightness of the portion of pixels, and compresses the brightness larger than the threshold T, so as to reduce the brightness of the portion of pixels.
step 2.4, carrying out pixel (brightness) value domain expansion on the illumination layer image after global brightness mapping:
[Equation (4) is reproduced only as an image in the original document.]
where L_d(x, y) represents the pixel value, at the pixel point (x, y), of the illumination layer image after pixel value domain expansion.
By stretching the illumination layer image L by the method of the present embodiment, the overall contrast of the low-illumination image can be enhanced.
Other steps and parameters are the same as those in one of the first to third embodiments.
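Since equations (3) and (4) are given only as images, the sketch below is a hypothetical stand-in for the illumination-layer processing of this embodiment: it normalizes the layer, obtains the threshold T with OTSU, applies a piecewise power mapping with the qualitative behaviour described above (stretch below T, compress above T), and expands the result back to the original value range. The exponents 0.5 and 1.5 and the expansion by the original maximum are assumptions.

```python
# Hypothetical stand-in for steps 2.1-2.4; equations (3) and (4) are only shown
# as images in the source, so the mapping below merely mimics their described behaviour.
import cv2
import numpy as np

def enhance_illumination_layer(illum):
    peak = float(illum.max()) + 1e-6
    norm = illum.astype(np.float32) / peak                       # step 2.1: normalize to [0, 1]

    # Step 2.2: brightness segmentation threshold T by OTSU (maximum inter-class variance).
    t, _ = cv2.threshold((norm * 255).astype(np.uint8), 0, 255,
                         cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    T = float(np.clip(t / 255.0, 1e-3, 1.0 - 1e-3))

    # Step 2.3: assumed piecewise global brightness mapping (exponents are illustrative):
    # values below T are stretched upwards, values above T are compressed downwards.
    mapped = np.empty_like(norm)
    low = norm <= T
    mapped[low] = T * (norm[low] / T) ** 0.5
    mapped[~low] = T + (1.0 - T) * ((norm[~low] - T) / (1.0 - T)) ** 1.5

    # Step 2.4: assumed pixel-value-domain expansion back to the original scale.
    return mapped * peak
```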
Embodiment 5: this embodiment is described with reference to FIG. 4. It differs from Embodiments 1 to 4 in that the detail layer image mapping function is:
[Equation (5), the detail layer image mapping function, is reproduced only as an image in the original document.]
wherein e is the base of the natural logarithm, A, B, D are coefficients of the detail layer image mapping function, and S (c (x, y)) represents the value of the pixel point (x, y) in the stretched detail layer image.
The nonlinear detail layer image mapping function designed in the present embodiment is to enhance the detail and texture characteristics of the detail layer image and enrich the detail and texture characteristics of the low-illumination image.
Other steps and parameters are the same as in one of the first to fourth embodiments.
Embodiment 6: this embodiment differs from Embodiments 1 to 5 in that the coefficients A, B and D of the detail layer image mapping function are:
[Equation (6), giving the coefficients A, B and D, is reproduced only as an image in the original document.]
where [h_0, h_1] is the value range of the pixel values c(x, y) in the detail layer image, i.e. c(x, y) ∈ [h_0, h_1]; h_0 is the minimum and h_1 the maximum of the pixel values c(x, y) in the detail layer image.
A, B and D are three undetermined coefficients; they determine the domain and value range of the mapping function and the enhanced detail intensity that corresponds to an original detail intensity of 1. Since the image is most blurred where the original detail layer intensity equals 1, the enhanced detail intensity there should also be 1, i.e. S(1) = 1. In addition, the detail layer enhancement must keep the value range consistent with the domain, i.e. S(h_0) = h_0 and S(h_1) = h_1. Substituting these three conditions into function (5) yields the values of the three undetermined coefficients given in equation (6).
Other steps and parameters are the same as those in one of the first to fifth embodiments.
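The exact form of S in equation (5) is reproduced only as an image, so the sketch below assumes a logistic-style curve S(c) = A / (1 + B·e^(-D·c)) purely for illustration and solves the three conditions S(h_0) = h_0, S(1) = 1 and S(h_1) = h_1 numerically for the detail range [0.1, 3] mentioned for FIG. 4; the functional form, the SciPy solver and the initial guess are all assumptions.

```python
# Illustration only: the functional form below is an assumed logistic-style stand-in
# for equation (5); the patent's actual formula is reproduced only as an image.
import numpy as np
from scipy.optimize import fsolve

def detail_map(c, A, B, D):
    return A / (1.0 + B * np.exp(-D * c))

def solve_coefficients(h0, h1):
    """Solve the three conditions S(h0) = h0, S(1) = 1, S(h1) = h1 for A, B, D."""
    def residuals(p):
        A, B, D = p
        return [detail_map(h0, A, B, D) - h0,
                detail_map(1.0, A, B, D) - 1.0,
                detail_map(h1, A, B, D) - h1]
    return fsolve(residuals, x0=[h1, 10.0, 3.0])      # rough initial guess (assumed)

A, B, D = solve_coefficients(0.1, 3.0)                # detail range of FIG. 4
fixed = np.array([0.1, 1.0, 3.0])
print(A, B, D)
print(detail_map(fixed, A, B, D))                     # the three fixed points map to themselves
```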
Embodiment 7: this embodiment differs from Embodiments 1 to 6 in that, in step four, the illumination layer image after pixel value domain expansion and the stretched detail layer image are synthesized into a new image by the following process:
and multiplying the pixel values at the same position in the illumination layer image after the pixel value domain expansion and the stretched detail layer image, and taking the multiplication result as the pixel value at the corresponding position in the synthesized new image.
That is, the pixel value of the pixel point (x, y) in the illumination layer image after the pixel value domain expansion is multiplied by the pixel value of the pixel point (x, y) in the detail layer image after stretching, and the multiplication result is used as the pixel value of the pixel point (x, y) in the synthesized new image.
Other steps and parameters are the same as those in one of the first to sixth embodiments.
Analysis of Experimental results
The simulation software used is Matlab 2018a. The hardware platform is a desktop computer with an i9-11900H CPU, 16 GB of DDR4 memory and an RTX 3060 graphics card. The input and output of the simulation program are standard images in bmp format.
The simulation results of the method of the present invention are shown in fig. 5(a) and 5(b) and in fig. 6(a) and 6 (b).
As can be seen from fig. 5(a) and 5(b) and fig. 6(a) and 6(b), because the illumination intensity is relatively low and unevenly distributed, the original images appear dark, so the details and contours of the scene cannot be distinguished by the human eye and the image quality is poor. The low-illumination image quality improvement algorithm designed by the invention raises the image brightness in low-illumination areas, and the local details of the enhanced images are clear and rich. Therefore, the algorithm can effectively improve the overall visualization effect of low-illumination images and enhance their image quality.
The above examples merely explain the computational model and procedure of the invention in detail and are not intended to limit its embodiments. Other variations and modifications can be made by those skilled in the art on the basis of the above description; it is neither possible nor necessary to enumerate all embodiments here, and all such obvious modifications and variations are intended to fall within the scope of protection of the invention.

Claims (5)

1. A low-illumination image quality improvement method based on a Retinex model is characterized by specifically comprising the following steps of:
step one, after the acquired image is converted from an RGB channel to an HSV channel, data of a V channel, an H channel and an S channel are respectively acquired;
in the first step, the V-channel data is decomposed into an illumination layer image and a detail layer image, and the specific process is as follows:
I(x,y)=c(x,y)×L(x,y) (1)
wherein, I (x, y) represents a pixel value of the V channel data at the pixel point (x, y), L (x, y) represents a pixel value of the illumination layer image at the pixel point (x, y), and c (x, y) represents a pixel value of the detail layer image at the pixel point (x, y);
decomposing the V channel data into an illumination layer image and a detail layer image;
step two, carrying out global brightness mapping processing on the illumination layer image to obtain an illumination layer image after global brightness mapping;
the specific process of the second step is as follows:
step two, normalizing the illumination layer image to obtain a normalized illumination layer image;
L_n(x, y) = L(x, y) / max(L)  (2)
where L_n(x, y) represents the pixel value of the normalized illumination layer image at the pixel point (x, y), and max(L) represents the maximum pixel value of the illumination layer image (the maximum-value operation);
secondly, determining a brightness segmentation threshold T of the illumination layer image after normalization processing by adopting a maximum inter-class variance method;
step two, according to the brightness segmentation threshold value T and a global brightness mapping function, global brightness mapping is carried out on the illumination layer image after normalization processing, and the illumination layer image after global brightness mapping is obtained;
the global luminance mapping function is:
[Equation (3) is reproduced only as an image in the original document.]
where the mapped quantity in equation (3) represents the pixel value of the illumination layer image after global brightness mapping at the pixel point (x, y);
step two, carrying out pixel value domain expansion on the illumination layer image after global brightness mapping:
[Equation (4) is reproduced only as an image in the original document.]
where L_d(x, y) represents the pixel value, at the pixel point (x, y), of the illumination layer image after pixel value domain expansion;
then, carrying out pixel value domain expansion on the illumination layer image after global brightness mapping to obtain an illumination layer image after pixel value domain expansion;
stretching the detail layer image by using a detail layer image mapping function to obtain a stretched detail layer image;
step four, synthesizing the illumination layer image with the expanded pixel value domain and the stretched detail layer image into a new image; and converting the synthesized image and the H channel and S channel data obtained in the first step into an output image.
2. The method as claimed in claim 1, wherein the illumination layer image is obtained by performing convolution calculation on V-channel data and a gaussian kernel with a variance of 1.
3. A method of improving image quality under low illumination based on Retinex model as claimed in claim 2, wherein the detail layer image mapping function is:
[Equation (5), the detail layer image mapping function, is reproduced only as an image in the original document.]
wherein e is the base of the natural logarithm, A, B, D are coefficients of the detail layer image mapping function, and S (c (x, y)) represents the value of the pixel point (x, y) in the stretched detail layer image.
4. The method of claim 3, wherein coefficients A, B, and D of the detail layer image mapping function are:
[Equation (6), giving the coefficients A, B and D, is reproduced only as an image in the original document.]
where [h_0, h_1] is the value range of the pixel values c(x, y) in the detail layer image, i.e. c(x, y) ∈ [h_0, h_1]; h_0 is the minimum and h_1 the maximum of the pixel values c(x, y) in the detail layer image.
5. The method according to claim 4, wherein in the fourth step, the illumination layer image after pixel value domain expansion and the detail layer image after stretching are synthesized into a new image by a specific process:
and multiplying the pixel values at the same position in the illumination layer image after the pixel value domain expansion and the stretched detail layer image, and taking the multiplication result as the pixel value at the corresponding position in the synthesized new image.
CN202111561917.5A 2021-12-20 2021-12-20 Low-illumination image quality improvement method based on Retinex model Active CN114429426B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111561917.5A CN114429426B (en) 2021-12-20 2021-12-20 Low-illumination image quality improvement method based on Retinex model

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111561917.5A CN114429426B (en) 2021-12-20 2021-12-20 Low-illumination image quality improvement method based on Retinex model

Publications (2)

Publication Number Publication Date
CN114429426A CN114429426A (en) 2022-05-03
CN114429426B true CN114429426B (en) 2022-08-16

Family

ID=81311806

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111561917.5A Active CN114429426B (en) 2021-12-20 2021-12-20 Low-illumination image quality improvement method based on Retinex model

Country Status (1)

Country Link
CN (1) CN114429426B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115660994B (en) * 2022-10-31 2023-06-09 哈尔滨理工大学 Image enhancement method based on regional least square estimation

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105046663A (en) * 2015-07-10 2015-11-11 西南科技大学 Human visual perception simulation-based self-adaptive low-illumination image enhancement method
CN110163815A (en) * 2019-04-22 2019-08-23 桂林电子科技大学 Low-light (level) restoring method based on multistage variation self-encoding encoder
CN110298796A (en) * 2019-05-22 2019-10-01 中山大学 Based on the enhancement method of low-illumination image for improving Retinex and Logarithmic image processing
CN111105359A (en) * 2019-07-22 2020-05-05 浙江万里学院 Tone mapping method for high dynamic range image
CN112950596A (en) * 2021-03-09 2021-06-11 宁波大学 Tone mapping omnidirectional image quality evaluation method based on multi-region and multi-layer

Family Cites Families (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7590303B2 (en) * 2005-09-29 2009-09-15 Samsung Electronics Co., Ltd. Image enhancement method using local illumination correction
CN100562067C (en) * 2007-07-26 2009-11-18 上海交通大学 The real time digital image processing and enhancing method that has noise removal function
WO2009012659A1 (en) * 2007-07-26 2009-01-29 Omron Corporation Digital image processing and enhancing system and method with function of removing noise
CN103295225B (en) * 2013-04-10 2016-09-28 苏州大学 Train bogie edge detection method under the conditions of low-light
JP5907938B2 (en) * 2013-09-26 2016-04-26 京セラドキュメントソリューションズ株式会社 Image forming apparatus and program
CN105654437B (en) * 2015-12-24 2019-04-19 广东迅通科技股份有限公司 A kind of Enhancement Method of pair of low-light (level) image
CN107045715B (en) * 2017-02-22 2019-06-07 西南科技大学 A kind of method that single width low dynamic range echograms generate high dynamic range images
CN112734650B (en) * 2019-10-14 2022-09-30 武汉科技大学 Virtual multi-exposure fusion based uneven illumination image enhancement method
CN112308793A (en) * 2020-10-21 2021-02-02 淮阴工学院 Novel method for enhancing contrast and detail of non-uniform illumination image
CN113256533B (en) * 2021-06-15 2022-08-09 北方民族大学 Self-adaptive low-illumination image enhancement method and system based on MSRCR

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105046663A (en) * 2015-07-10 2015-11-11 西南科技大学 Human visual perception simulation-based self-adaptive low-illumination image enhancement method
CN110163815A (en) * 2019-04-22 2019-08-23 桂林电子科技大学 Low-light (level) restoring method based on multistage variation self-encoding encoder
CN110298796A (en) * 2019-05-22 2019-10-01 中山大学 Based on the enhancement method of low-illumination image for improving Retinex and Logarithmic image processing
CN111105359A (en) * 2019-07-22 2020-05-05 浙江万里学院 Tone mapping method for high dynamic range image
CN112950596A (en) * 2021-03-09 2021-06-11 宁波大学 Tone mapping omnidirectional image quality evaluation method based on multi-region and multi-layer

Also Published As

Publication number Publication date
CN114429426A (en) 2022-05-03

Similar Documents

Publication Publication Date Title
Cao et al. Contrast enhancement of brightness-distorted images by improved adaptive gamma correction
Gao et al. Naturalness preserved nonuniform illumination estimation for image enhancement based on retinex
Gupta et al. Minimum mean brightness error contrast enhancement of color images using adaptive gamma correction with color preserving framework
CN109064426B (en) Method and device for suppressing glare in low-illumination image and enhancing image
Lai et al. Improved local histogram equalization with gradient-based weighting process for edge preservation
CN103578084A (en) Color image enhancement method based on bright channel filtering
Zotin Fast algorithm of image enhancement based on multi-scale retinex
CN110298792B (en) Low-illumination image enhancement and denoising method, system and computer equipment
Wang et al. Variational single nighttime image haze removal with a gray haze-line prior
Gupta et al. New contrast enhancement approach for dark images with non-uniform illumination
Shiau et al. A low-cost hardware architecture for illumination adjustment in real-time applications
CN114429426B (en) Low-illumination image quality improvement method based on Retinex model
CN108550124B (en) Illumination compensation and image enhancement method based on bionic spiral
Jeon et al. Low-light image enhancement using inverted image normalized by atmospheric light
Gu et al. A novel Retinex image enhancement approach via brightness channel prior and change of detail prior
Tung et al. ICEBIN: Image contrast enhancement based on induced norm and local patch approaches
Kim Image enhancement using patch-based principal energy analysis
CN112308793A (en) Novel method for enhancing contrast and detail of non-uniform illumination image
CN113222859B (en) Low-illumination image enhancement system and method based on logarithmic image processing model
CN113191956B (en) Backlight image enhancement method based on depth matting
CN114862706A (en) Tone mapping method for keeping gradient direction of image
Liu et al. An adaptive tone mapping algorithm based on gaussian filter
CN110807748A (en) New tone mapping image enhancement method based on high dynamic range
Lucas et al. Image enhancement for astronomical scenes
Vavilin et al. Recursive HDR image generation from differently exposed images based on local image properties

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant