CN113160096B - Low-light image enhancement method based on retina model - Google Patents

Low-light image enhancement method based on retina model

Info

Publication number
CN113160096B
CN113160096B (application CN202110581353.5A)
Authority
CN
China
Prior art keywords
component
image
final
illumination
illumination component
Prior art date
Legal status
Active
Application number
CN202110581353.5A
Other languages
Chinese (zh)
Other versions
CN113160096A (en)
Inventor
魏本征
侯昊
侯迎坤
丁鹏
Current Assignee
Shandong University of Traditional Chinese Medicine
Original Assignee
Shandong University of Traditional Chinese Medicine
Priority date
Filing date
Publication date
Application filed by Shandong University of Traditional Chinese Medicine filed Critical Shandong University of Traditional Chinese Medicine
Priority to CN202110581353.5A priority Critical patent/CN113160096B/en
Publication of CN113160096A publication Critical patent/CN113160096A/en
Application granted granted Critical
Publication of CN113160096B publication Critical patent/CN113160096B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 5/00 Image enhancement or restoration
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/22 Matching criteria, e.g. proximity measures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/90 Determination of colour characteristics
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/40 Extraction of image or video features
    • G06V 10/56 Extraction of image or video features relating to colour
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10024 Color image

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Data Mining & Analysis (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Multimedia (AREA)
  • Image Processing (AREA)

Abstract

The invention relates to a low-light image enhancement method based on a retina model, which belongs to the technical field of image processing and comprises the following steps. Step S1: obtain a group of similar pixels. Step S2: perform a Haar transform on the group of similar pixels, and obtain the illumination component and the reflection component in each of the R, G and B channels using the pixel-level non-local Haar transform. Step S3: find the final reflection component. Step S4: find the enhanced illumination components. Step S5: take the minimum of the enhanced illumination components as the final illumination component. Step S6: apply the final reflection component and the final illumination component to the retina model to obtain the enhanced image. The low-light image enhancement method is fast and effective: the colors of the processed image are not overly bright, the information in the original image is well preserved, the problem of uneven illumination after enhancement is handled well, no false signals are introduced, and the fidelity of the image edge information is very high.

Description

Low-light image enhancement method based on retina model
Technical Field
The invention belongs to the technical field of image processing, and particularly relates to a low-light image enhancement method based on a retina model.
Background
Low-light image enhancement aims to enhance images captured in low-light environments whose brightness is too low, so as to obtain images with a better illumination effect. Photographs taken at night or in enclosed environments, for example, are often too dark for their specific contents to be recognized effectively, which is why low-light image enhancement has long been one of the popular research directions in the field of computer vision.
Current low-light image enhancement methods are mainly based on retina models, such as MSR, MSRCR, SIRE and RRM. These methods suffer from defects such as an insufficiently pronounced enhancement effect, color distortion and artifact generation, and they are also relatively time-consuming when processing images.
Disclosure of Invention
The invention aims to overcome at least one defect of the prior art and to provide a low-light image enhancement method based on a retina model, which effectively solves the overexposure caused by enhancing already bright regions and the distortion caused by enhancing very dark regions, thereby obtaining a better low-light image enhancement effect.
The invention discloses a low-light image enhancement method based on a retina model, which comprises the following steps:
step S1: obtaining a similar pixel group;
step S2: performing a Haar transform on the similar pixel group, and obtaining the illumination component and the reflection component in each of the R, G and B channels from the low-frequency and high-frequency coefficients of the pixel-level non-local Haar transform;
step S3: finding the minimum component of the R, G and B reflection components as the final reflection component;
step S4: finding the maximum component of the R, G and B illumination components, and enhancing it by a combined exponential and logarithmic method to obtain the enhanced illumination components;
step S5: taking the minimum of the enhanced illumination components as the final illumination component;
step S6: and applying the final reflection component and the final illumination component to a retina model to obtain an enhanced image.
Preferably, the step S1 specifically includes:
step S1a: block matching and line matching are respectively carried out in R, G, B channels in RGB color space, and one size is selected according to a certain sliding step lengthReference picture block B r In B r Performing block matching in a neighborhood of a given size centered on the upper left corner coordinates to obtain a block matching with B r The most similar N2-1 tiles are obtained together with B r Inner N2 similar image blocks;
step S1b: will beEach of the dimensions isIs stretched into a column vector, denoted +.> All V l Spliced into a +.>Matrix M of rows N2 columns b
Step S1c: selecting matrix M b Wherein one row R r As reference row, calculate R r Euclidean distance to all other rows to find the most similar N to it 3 Line-1, together with R r Build-in a size N 3 ×N 2 Similar pixel matrix M s
Preferably, the step S1c specifically includes:
matrix M b The ith row is used as a reference row, and the Euclidean distance between the ith row and all the rest rows is calculated as follows:
selecting N with smallest distance from ith row 3 Line-1, together with line i, ultimately obtaining a size N 3 ×N 2 Is a matrix M of similar pixels s
Preferably, the Haar transform in the step S2 specifically includes:
performing the separable lifting Haar transform on the similar pixel matrix M_s in the vertical and horizontal directions respectively, namely
C_h = H_l * M_s * H_r ,
where C_h is the spectrum matrix after the Haar transform, and H_l and H_r are Haar matrices.
Preferably, the method for obtaining the illumination component in step S2 specifically includes:
defining C_h(1, 1) as the low-frequency coefficient, and reconstructing the image by the inverse Haar transform using only C_h(1, 1) to obtain the illumination component I_l.
Preferably, the method for obtaining the reflection component in step S2 specifically includes:
defining C_h(1, 1) as the low-frequency coefficient, and reconstructing the image by the inverse Haar transform using the remaining N_3 × N_2 - 1 transform coefficients of C_h (i.e. C_h with C_h(1, 1) removed) to obtain the reflection component I_r.
Preferably, the step S4 specifically includes:
step S4a: for the illumination components, 3 illumination components are obtained through R, G, B three channels respectively And->For a pair ofAnd->Comparing to obtain the maximum component of the illumination component;
step S4b: by different indices gamma 1 And gamma 2 To perform the step of enhancing the quality of the product,
wherein gamma is 1 Calculated by the following method:
wherein gamma is 2 By the following methodAnd (3) calculating:
if it is
Then
Otherwise
Step S4c: obtaining a first enhanced illumination component
Step S4cd: obtaining a second enhanced illumination component
Preferably, the step S5 specifically includes:
step S5a: obtaining final illumination component
Wherein,for the final illumination component +.>For the first enhanced illumination component, < >>For a second enhanced illumination component;
step S5b: normalizing the image to a gray value of [0,1];
step S5c: the gray scale range of the image is compressed.
Preferably, the step S6 specifically includes:
step S6a: applying the final enhanced illumination component and the final reflection component to a retina model, i.e.
Wherein I is e For the last enhanced image, as if it is a dot product operation;
step S6b: will I e Denoted Y ', replacing the V channel in the HSV color space with Y' and converting back to the RGB color space, a final enhanced color image is obtained.
Compared with the prior art, the invention has the following beneficial effects: the low-light image enhancement method based on the retina model is fast and effective; the colors of the processed image are not overly bright, the information in the original image is well preserved, the problem of uneven illumination after enhancement is handled well, no false signals are introduced, and the fidelity of the image edge information is very high.
Drawings
FIG. 1 is a flow chart of a low-light image enhancement method based on a retina model according to the present invention;
FIG. 2 is a first comparison of image effects after processing by the low-light image enhancement method of the present invention and by some existing low-light enhancement methods;
FIG. 3 is a second comparison of image effects after processing by the low-light image enhancement method of the present invention and by some existing low-light enhancement methods.
Detailed Description
The invention is further described below in connection with the accompanying drawings, which are provided solely for illustration of specific embodiments of the invention and are not to be construed as limiting the invention in any way, as follows:
As shown in FIG. 1, the present invention provides a low-light image enhancement method based on a retina model, which comprises the following steps:
step S1: obtaining a similar pixel group;
step S2: performing a Haar transform on the similar pixel group, and obtaining the illumination component and the reflection component in each of the R, G and B channels from the low-frequency and high-frequency coefficients of the pixel-level non-local Haar transform;
step S3: finding the minimum component of the R, G and B reflection components as the final reflection component;
step S4: finding the maximum component of the R, G and B illumination components, and enhancing it by a combined exponential and logarithmic method to obtain the enhanced illumination components;
step S5: taking the minimum of the enhanced illumination components as the final illumination component;
step S6: and applying the final reflection component and the final illumination component to a retina model to obtain an enhanced image.
The method specifically comprises the following steps:
a group of similar pixels is obtained.
Given a low-light color image I ∈ R^(h×w×c) in the RGB color space, I is synchronously converted from the RGB color space to the HSV color space.
Block matching and row matching are carried out separately in the R, G and B channels of the RGB color space. A reference image block B_r of a given size is selected according to a certain sliding step, and block matching is performed in a neighborhood of a given size centered on the upper-left coordinate of B_r to find the N_2 - 1 image blocks most similar to B_r, which together with B_r give N_2 similar image blocks. Each of these blocks is stretched into a column vector, denoted V_l, and all the vectors V_l are spliced into a matrix M_b with N_2 columns.
To better exploit the self-similarity in the image, row matching is further performed on M_b.
A row R_r of M_b is selected as the reference row, and its Euclidean distance to every other row is calculated to find the N_3 - 1 rows most similar to it; together with R_r they build a similar pixel matrix M_s of size N_3 × N_2.
Specifically, taking the i-th row as the reference row, the Euclidean distance between the i-th row and every other row j is calculated as
d(i, j) = || M_b(i, :) - M_b(j, :) ||_2 ;
the N_3 - 1 rows with the smallest distance to the i-th row are then selected and, together with the i-th row, finally give a similar pixel matrix M_s of size N_3 × N_2.
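For concreteness, the following is a minimal NumPy sketch of this grouping step for a single channel. The exhaustive search, the fixed block size and search radius, and the chosen values of N_2 and N_3 are illustrative assumptions and are not prescribed by the patent.

import numpy as np

def similar_pixel_matrix(channel, ref_yx, block=8, search=16, n2=16, n3=32):
    """Build the similar pixel matrix M_s for one reference block of one channel.

    channel : 2-D float array (one of the R, G, B channels)
    ref_yx  : upper-left corner (y, x) of the reference block B_r
    block, search, n2, n3 : illustrative block size, search radius and group sizes
    """
    h, w = channel.shape
    ry, rx = ref_yx
    b_ref = channel[ry:ry + block, rx:rx + block]

    # --- block matching in a neighborhood centered on B_r ---
    candidates = []
    for y in range(max(0, ry - search), min(h - block, ry + search) + 1):
        for x in range(max(0, rx - search), min(w - block, rx + search) + 1):
            b = channel[y:y + block, x:x + block]
            candidates.append((np.sum((b - b_ref) ** 2), b.reshape(-1)))
    candidates.sort(key=lambda t: t[0])
    # columns of M_b are the n2 most similar blocks (the reference block itself has distance 0)
    M_b = np.stack([v for _, v in candidates[:n2]], axis=1)   # shape: (block*block, n2)

    # --- row matching inside M_b ---
    i = 0                                                     # reference row index (illustrative choice)
    d = np.linalg.norm(M_b - M_b[i], axis=1)                  # Euclidean distance of every row to row i
    rows = np.argsort(d)[:n3]                                 # n3 closest rows, including row i itself
    return M_b[rows]                                          # M_s, shape: (n3, n2)

Calling this once per reference block position (and per color channel) yields the similar pixel groups used in the subsequent Haar-transform step.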
A separable Haar transform is then performed on the similar pixel matrix M_s.
The separable lifting Haar transform is applied to M_s in the vertical and horizontal directions respectively, namely:
C_h = H_l * M_s * H_r ,
where C_h is the spectrum matrix after the Haar transform, and H_l and H_r are Haar matrices.
Owing to the properties of the separable lifting Haar transform, C_h(1, 1) carries the low-frequency content of M_s, and we define it as the low-frequency coefficient. Using only C_h(1, 1), the image is reconstructed by the inverse Haar transform to obtain the desired illumination component I_l; conversely, using the remaining N_3 × N_2 - 1 transform coefficients of C_h (i.e. the medium- and high-frequency coefficients), the image is reconstructed by the inverse Haar transform to obtain the desired reflection component I_r. This approach separates the illumination and reflection components of the image effectively and quickly, and it is an important step of the low-light image enhancement.
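A compact NumPy sketch of this decomposition is given below. It uses ordinary orthonormal Haar matrices rather than the lifting implementation named in the patent, and it assumes that N_3 and N_2 are powers of two; both are simplifying assumptions made for illustration only.

import numpy as np

def haar_matrix(n):
    """Orthonormal Haar matrix of size n (n must be a power of two)."""
    if n == 1:
        return np.array([[1.0]])
    h = haar_matrix(n // 2)
    m = np.vstack([np.kron(h, [1.0, 1.0]),
                   np.kron(np.eye(n // 2), [1.0, -1.0])])
    return m / np.sqrt((m ** 2).sum(axis=1, keepdims=True))   # normalize rows

def haar_decompose(M_s):
    """Split M_s into an illumination part (low-frequency coefficient only)
    and a reflection part (all remaining coefficients)."""
    n3, n2 = M_s.shape                            # both assumed to be powers of two
    H_l, H_r = haar_matrix(n3), haar_matrix(n2)
    C_h = H_l @ M_s @ H_r.T                       # forward separable Haar transform
    C_low = np.zeros_like(C_h)
    C_low[0, 0] = C_h[0, 0]                       # keep only the low-frequency coefficient
    I_l = H_l.T @ C_low @ H_r                     # illumination component
    I_r = H_l.T @ (C_h - C_low) @ H_r             # reflection component (I_l + I_r == M_s)
    return I_l, I_r

With orthonormal Haar matrices, I_l is simply the mean of M_s spread over all entries, and I_l + I_r reproduces M_s exactly, which matches the role of C_h(1, 1) as the single low-frequency coefficient.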
Performing an enhancement operation on the reflection component:
for the reflection component, three reflection components are obtained from the R, G and B channels respectively; they are compared, the minimum component of the reflection components is obtained, and this minimum component is selected as the final reflection component.
Performing an enhancement operation on the illumination component:
for the illumination component, three illumination components are obtained from the R, G and B channels respectively; they are compared to obtain the maximum component of the illumination components, i.e. the brightest illumination component. The enhancement is then performed with two different exponents γ_1 and γ_2, which can be calculated as follows: γ_1 is given by a first formula, and γ_2 is divided into two cases, taking one expression when a given condition holds and another expression otherwise.
The first enhanced illumination component and the second enhanced illumination component are then obtained from the maximum illumination component by the exponential and the logarithmic enhancement.
in practice, it has been found that if only the exponential transformation is used to enhance the illumination component of the low-luminance portion of the image, the luminance value increases too fast, resulting in a problem of insufficient luminance; if only the logarithmic transformation is used to enhance the illumination component of the high-luminance part of the image, the luminance value will increase too fast, and the luminance will be uneven.
Therefore, in order to solve the above problems, the present invention obtains the final illumination component I_l^final by taking the minimum of the two enhanced illumination components.
The image is normalized to gray values in [0, 1] and its gray-scale range is then compressed, so that dark areas of the original image are brightened to a large extent while bright areas change only little; this realizes the enhancement of the low-light image and ensures that the enhancement result adapts to regions with different illumination.
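The following short NumPy sketch illustrates this illumination path (maximum channel, exponential and logarithmic curves, minimum combination, normalization). Because the patent's adaptive formulas for γ_1 and γ_2 are not reproduced in this text, the sketch substitutes a fixed placeholder exponent and a standard rescaled logarithm; these stand-ins are assumptions, not the patented formulas.

import numpy as np

def enhance_illumination(illum_rgb, gamma=0.6, eps=1e-6):
    # illum_rgb: per-channel illumination components stacked as shape (3, h, w), values in [0, 1].
    # gamma is a fixed placeholder; the patent computes gamma_1 and gamma_2 adaptively.
    L_max = illum_rgb.max(axis=0)                           # brightest illumination component (step S4a)
    L_exp = np.power(L_max, gamma)                          # exponential (power-law) enhancement
    L_log = np.log1p(L_max / eps) / np.log1p(1.0 / eps)     # logarithmic enhancement, rescaled to [0, 1]
    L_enh = np.minimum(L_exp, L_log)                        # step S5: keep the smaller of the two
    L_enh = (L_enh - L_enh.min()) / (L_enh.max() - L_enh.min() + eps)   # normalize gray values to [0, 1]
    return L_enh                                            # gray-range compression (step S5c) is omitted here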
The final illumination component and the final reflection component are applied to the retina model, i.e.
I_e = I_l^final ⊙ I_r^final ,
where I_e is the final enhanced image and ⊙ denotes the element-wise (dot) product.
Denoting I_e as Y', the V channel in the HSV color space is replaced with Y', and the image is converted back to the RGB color space to obtain the final enhanced color image.
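A minimal sketch of this recomposition follows, assuming the final components have already been brought to the [0, 1] range; the use of matplotlib's RGB/HSV conversion is a convenience choice for the sketch and is not specified by the patent.

import numpy as np
from matplotlib.colors import rgb_to_hsv, hsv_to_rgb

def compose_enhanced_image(rgb, illum_final, refl_final):
    # rgb: original image as a float array in [0, 1], shape (h, w, 3)
    # illum_final, refl_final: final illumination and reflection components, shape (h, w)
    y_prime = np.clip(illum_final * refl_final, 0.0, 1.0)   # retina model: I_e = I_l_final * I_r_final (element-wise)
    hsv = rgb_to_hsv(rgb)
    hsv[..., 2] = y_prime                                    # replace the V channel with Y'
    return hsv_to_rgb(hsv)                                   # back to RGB: final enhanced color image

Replacing only the V channel preserves the original hue and saturation, so the enhancement changes brightness without shifting colors.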
For the experiments, 200 low-light images were randomly selected from the CVPR2021 UG2+ Challenge data set to form a data set, and enhancement experiments were carried out on this data set and on a further 35 low-light images using MATLAB software. The algorithm of the invention was run to obtain enhanced images, which were compared with the classical prior-art methods HE, MSRCR, CVC, NPE, SIRE, MF, WVM, CRM, BIMEF, LIME, JieP and STAR; the image effects are shown in FIG. 2 and FIG. 3. The CVPR2021 UG2+ Challenge data set is available at http://cvpr2021.ug2challenge.org/dataset21_t1.html.
As can be seen from fig. 2 and 3, the color of the image enhanced by the method of the present invention is not too bright, the information in the original image can be well preserved, the problem of uneven illumination after enhancement can be well solved, no false signal is introduced, and the fidelity of the image edge information is very high.
The NIQE, LOE, TMQI and FSIM values of the enhanced image obtained by the method of the invention are compared with those of the enhanced image obtained by the prior art, and the values are shown in the following table:
Method NIQE LOE TMQI FSIM
HE 3.62 740.30 0.9220 0.7174
MSRCR 3.17 702.85 0.8506 0.6969
CVC 3.11 654.82 0.8715 0.8578
NPE 3.22 710.21 0.8891 0.8193
SIRE 3.01 637.70 0.8680 0.8991
MF 3.38 776.41 0.8997 0.8233
WVM 2.99 633.40 0.8674 0.8999
CRM 3.13 744.61 0.8964 0.8123
BIMEF 3.04 703.16 0.9017 0.8898
LIME 3.39 779.73 0.8791 0.7131
JieP 2.99 724.52 0.8766 0.8749
STAR 2.93 677.43 0.8784 0.9047
the method of the invention 2.76 546.63 0.8616 0.9250
It should be noted that lower NIQE, LOE and TMQI values indicate higher image quality, while a higher FSIM value indicates higher image quality.
The data in the table show that the four index values of images processed by the low-light image enhancement method of the invention are better than the results of the prior-art low-light image enhancement methods.
By now it should be appreciated by those skilled in the art that while exemplary embodiments of the invention have been shown and described in detail herein, many other variations or modifications that are consistent with the principles of the invention may be directly ascertained or derived from the teachings of the present disclosure without departing from the spirit and scope of the invention. Accordingly, the scope of the present invention should be understood and deemed to cover all such other variations or modifications.

Claims (1)

1. A low-light image enhancement method based on a retina model, comprising:
step S1: obtaining a similar pixel group;
step S2: performing a Haar transform on the similar pixel group, and obtaining the illumination component and the reflection component in each of the R, G and B channels from the low-frequency and high-frequency coefficients of the pixel-level non-local Haar transform;
step S3: taking the minimum component of the R, G and B reflection components as the final reflection component;
step S4: taking the maximum component of the R, G and B illumination components, and enhancing it by a combined exponential and logarithmic method to obtain the enhanced illumination components;
step S5: taking the minimum of the enhanced illumination components as the final illumination component;
step S6: applying the final reflected component and the final illumination component to a retina model to obtain an enhanced image;
the step S1 specifically includes:
step S1a: block matching and line matching are respectively carried out in R, G, B channels in RGB color space, and one size is selected according to a certain sliding step lengthReference picture block->In->Performing block matching in a neighborhood of a given size centered on the upper left corner to obtain a sum +.>N2-1 image blocks which are most similar, thereby obtaining a picture block which is combined with->Inner N2 similar image blocks;
step S1b: each of the dimensions is as followsIs stretched into a column vector, labeled asThe method comprises the steps of carrying out a first treatment on the surface of the All->Spliced into a +.>Matrix of rows N2 columns->
Step S1c: selecting a matrixOne line->As reference line, calculate +.>Euclidean distance to all other rows to find the most similar to it>Line, together with->An inner structure with a size of +.>Similar pixel matrix->
The step S1c specifically includes:
matrix is formedMiddle->Line is taken as reference line, calculate +.>The Euclidean distance of a row from all the remaining rows is:
selection and the firstMinimum row distance +.>Lines, together with->The line finally obtained size is +.>Is a matrix of similar pixels of (1)
the Haar transform in the step S2 specifically includes:
performing the separable lifting Haar transform on the similar pixel matrix M_s in the vertical and horizontal directions respectively, namely
C_h = H_l * M_s * H_r ,
where C_h is the spectrum matrix after the Haar transform, and H_l and H_r are Haar matrices;
the method for obtaining the illumination component in step S2 specifically includes:
defining C_h(1, 1) as the low-frequency coefficient, and reconstructing the image by the inverse Haar transform using only C_h(1, 1) to obtain the illumination component I_l;
the method for obtaining the reflection component in step S2 specifically includes:
defining C_h(1, 1) as the low-frequency coefficient, and reconstructing the image by the inverse Haar transform using the remaining N_3 × N_2 - 1 transform coefficients of C_h to obtain the reflection component I_r;
The step S4 specifically includes:
step S4a: for the illumination component, three illumination components are obtained from the R, G and B channels respectively; they are compared to obtain the maximum illumination component;
step S4b: the enhancement is performed with two different exponents γ_1 and γ_2, where γ_1 is calculated by a first formula and γ_2 is calculated case by case, taking one expression if a given condition holds and another expression otherwise;
step S4c: obtaining the first enhanced illumination component;
step S4d: obtaining the second enhanced illumination component;
The step S5 specifically includes:
step S5a: obtaining final illumination component
;
Wherein,for the final illumination component +.>For the first enhanced illumination component, < >>For a second enhanced illumination component;
step S5b: normalizing the image to a gray value of [0,1];
step S5c: compressing the gray scale range of the image;
the step S6 specifically includes:
step S6a: applying the final illumination component and the final reflection component to the retina model, i.e.
I_e = I_l^final ⊙ I_r^final ,
where I_e is the final enhanced image, ⊙ denotes the element-wise (dot) product, and I_r^final is the final reflection component;
step S6b: denoting I_e as Y', replacing the V channel in the HSV color space with Y' and converting back to the RGB color space to obtain the final enhanced color image.
CN202110581353.5A 2021-05-27 2021-05-27 Low-light image enhancement method based on retina model Active CN113160096B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110581353.5A CN113160096B (en) 2021-05-27 2021-05-27 Low-light image enhancement method based on retina model

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110581353.5A CN113160096B (en) 2021-05-27 2021-05-27 Low-light image enhancement method based on retina model

Publications (2)

Publication Number Publication Date
CN113160096A CN113160096A (en) 2021-07-23
CN113160096B true CN113160096B (en) 2023-12-08

Family

ID=76877698

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110581353.5A Active CN113160096B (en) 2021-05-27 2021-05-27 Low-light image enhancement method based on retina model

Country Status (1)

Country Link
CN (1) CN113160096B (en)

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2014169579A1 (en) * 2013-04-19 2014-10-23 华为技术有限公司 Color enhancement method and device
CN106780417A (en) * 2016-11-22 2017-05-31 北京交通大学 A kind of Enhancement Method and system of uneven illumination image
CN107578383A (en) * 2017-08-29 2018-01-12 北京华易明新科技有限公司 A kind of low-light (level) image enhancement processing method
CN109493295A (en) * 2018-10-31 2019-03-19 泰山学院 A kind of non local Haar transform image de-noising method
CN111223068A (en) * 2019-11-12 2020-06-02 西安建筑科技大学 Retinex-based self-adaptive non-uniform low-illumination image enhancement method
CN111583123A (en) * 2019-02-17 2020-08-25 郑州大学 Wavelet transform-based image enhancement algorithm for fusing high-frequency and low-frequency information
CN111626945A (en) * 2020-04-23 2020-09-04 泰山学院 Depth image restoration method based on pixel-level self-similarity model
CN112116536A (en) * 2020-08-24 2020-12-22 山东师范大学 Low-illumination image enhancement method and system
CN112365425A (en) * 2020-11-24 2021-02-12 中国人民解放军陆军炮兵防空兵学院 Low-illumination image enhancement method and system
WO2021088481A1 (en) * 2019-11-08 2021-05-14 南京理工大学 High-precision dynamic real-time 360-degree omnibearing point cloud acquisition method based on fringe projection

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106875352B (en) * 2017-01-17 2019-08-30 北京大学深圳研究生院 A kind of enhancement method of low-illumination image
CN116360086A (en) * 2019-10-21 2023-06-30 因美纳有限公司 System and method for structured illumination microscopy

Patent Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2014169579A1 (en) * 2013-04-19 2014-10-23 华为技术有限公司 Color enhancement method and device
CN106780417A (en) * 2016-11-22 2017-05-31 北京交通大学 A kind of Enhancement Method and system of uneven illumination image
CN107578383A (en) * 2017-08-29 2018-01-12 北京华易明新科技有限公司 A kind of low-light (level) image enhancement processing method
CN109493295A (en) * 2018-10-31 2019-03-19 泰山学院 A kind of non local Haar transform image de-noising method
CN111583123A (en) * 2019-02-17 2020-08-25 郑州大学 Wavelet transform-based image enhancement algorithm for fusing high-frequency and low-frequency information
WO2021088481A1 (en) * 2019-11-08 2021-05-14 南京理工大学 High-precision dynamic real-time 360-degree omnibearing point cloud acquisition method based on fringe projection
CN111223068A (en) * 2019-11-12 2020-06-02 西安建筑科技大学 Retinex-based self-adaptive non-uniform low-illumination image enhancement method
CN111626945A (en) * 2020-04-23 2020-09-04 泰山学院 Depth image restoration method based on pixel-level self-similarity model
CN112116536A (en) * 2020-08-24 2020-12-22 山东师范大学 Low-illumination image enhancement method and system
CN112365425A (en) * 2020-11-24 2021-02-12 中国人民解放军陆军炮兵防空兵学院 Low-illumination image enhancement method and system

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
低光照彩色图像增强算法研究 (Research on low-light color image enhancement algorithm); 黄丽雯; 王勃; 宋涛; 黄俊木; 重庆理工大学学报(自然科学) (Journal of Chongqing University of Technology (Natural Science)), No. 01; full text *

Also Published As

Publication number Publication date
CN113160096A (en) 2021-07-23

Similar Documents

Publication Publication Date Title
CN110232661B (en) Low-illumination color image enhancement method based on Retinex and convolutional neural network
CN100568279C (en) A kind of fast colourful image enchancing method based on the Retinex theory
CN109410127B (en) Image denoising method based on deep learning and multi-scale image enhancement
CN104156921B (en) Self-adaptive low-illuminance or non-uniform-brightness image enhancement method
Gupta et al. Minimum mean brightness error contrast enhancement of color images using adaptive gamma correction with color preserving framework
CN107730475A (en) Image enchancing method and system
CN110298792B (en) Low-illumination image enhancement and denoising method, system and computer equipment
CN109919859B (en) Outdoor scene image defogging enhancement method, computing device and storage medium thereof
CN111968062B (en) Dark channel prior specular highlight image enhancement method and device and storage medium
CN110473152B (en) Image enhancement method based on improved Retinex algorithm
CN109493291A (en) A kind of method for enhancing color image contrast ratio of adaptive gamma correction
CN114897753B (en) Low-illumination image enhancement method
CN107256539B (en) Image sharpening method based on local contrast
CN104021531A (en) Improved method for enhancing dark environment images on basis of single-scale Retinex
CN111968065A (en) Self-adaptive enhancement method for image with uneven brightness
CN117252773A (en) Image enhancement method and system based on self-adaptive color correction and guided filtering
CN114187222A (en) Low-illumination image enhancement method and system and storage medium
CN104463806B (en) Height adaptive method for enhancing picture contrast based on data driven technique
CN114463207B (en) Tone mapping method based on global dynamic range compression and local brightness estimation
CN116363011A (en) Multi-branch low-illumination image enhancement method based on frequency domain frequency division
CN113222859B (en) Low-illumination image enhancement system and method based on logarithmic image processing model
CN108550124B (en) Illumination compensation and image enhancement method based on bionic spiral
CN107358592B (en) Iterative global adaptive image enhancement method
CN113160096B (en) Low-light image enhancement method based on retina model
KR102277005B1 (en) Low-Light Image Processing Method and Device Using Unsupervised Learning

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant