CN116468636A - Low-illumination enhancement method, device, electronic equipment and readable storage medium - Google Patents


Info

Publication number: CN116468636A
Application number: CN202310427263.XA
Authority: CN (China)
Prior art keywords: image, current object, low, illumination, enhancement
Legal status: Pending
Inventors: 季渊, 李星仪
Current and Original Assignee: Wuxi Tanggu Semiconductor Co ltd
Application filed by Wuxi Tanggu Semiconductor Co ltd
Priority to CN202310427263.XA
Publication of CN116468636A

Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06T — IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 — Image enhancement or restoration
    • G06T5/20 — Image enhancement or restoration using local operators
    • G06T5/70 — Denoising; Smoothing
    • G06T5/90 — Dynamic range modification of images or parts thereof
    • G06T7/00 — Image analysis
    • G06T7/10 — Segmentation; Edge detection
    • G06T7/13 — Edge detection
    • G06T2207/00 — Indexing scheme for image analysis or image enhancement
    • G06T2207/20 — Special algorithmic details
    • G06T2207/20024 — Filtering details
    • G06T2207/20032 — Median filtering

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Processing (AREA)

Abstract

The invention provides a low-illumination enhancement method, a device, electronic equipment and a readable storage medium, relating to the technical field of image low-illumination enhancement. The method comprises: obtaining a current object image and preprocessing it to obtain an original image of the current object; processing the reflection component and the illumination component of the original image separately based on an image enhancement algorithm to determine a low-illumination-enhanced observation image of the current object; and fusing the current object image with the low-illumination-enhanced observation image to determine the target imaging of the current object. This alleviates underexposure in images shot by imaging equipment and avoids the technical problem that the stage scene cannot be truly reflected.

Description

Low-illumination enhancement method, device, electronic equipment and readable storage medium
Technical Field
The present invention relates to the field of image low-illumination enhancement technology, and in particular, to a low-illumination enhancement method, apparatus, electronic device, and readable storage medium.
Background
Near-eye display devices are one of the main portals through which people perceive the metaverse: viewers watch computer-built three-dimensional digital scenes through them to form an immersive world. Immersive entertainment is a typical application of the metaverse, and the stage is one of its common scenes. Stage performances generally use lighting equipment for concentrated illumination to highlight the environment or render atmosphere, while the areas not reached by the light are pitch dark; this strong contrast gives stage scenes a very wide brightness range.
However, the brightness dynamic range that current imaging equipment can capture in a single shot is at most about three orders of magnitude, so overexposure or underexposure easily occurs and the real stage scene cannot be faithfully reflected.
Disclosure of Invention
The invention aims to provide a low-illumination enhancement method, a device, electronic equipment and a readable storage medium, so as to alleviate underexposure of images shot by imaging equipment and avoid the technical problem that the stage scene cannot be truly reflected.
In a first aspect, an embodiment of the present invention provides a low-illuminance enhancement method, including:
acquiring a current object image and preprocessing the current object image to obtain an original image of the current object;
processing the reflection component and the illumination component of an original image of a current object respectively based on an image enhancement algorithm, and determining an observation image of the current object after low illumination enhancement;
and fusing the current object image and the observation image with enhanced low illumination to determine target imaging of the current object.
With reference to the first aspect, an embodiment of the present invention provides a first possible implementation manner of the first aspect, where the steps of obtaining a current object image and preprocessing the current object image to obtain an original image of the current object include:
converting the current object image into an HSV color space, and processing a brightness channel of the current object image;
denoising and extracting features of the processed current object image to obtain an original image of the current object.
With reference to the first aspect, an embodiment of the present invention provides a second possible implementation manner of the first aspect, where the step of performing denoising and feature extraction operations on the processed current object image to obtain an original image of the current object includes:
extracting edges of a brightness channel of a current object image according to an edge detection operator, and adding the extracted edge details to the current object image to obtain a first original image of the current object;
stretching the dynamic range of the first original image according to a Gamma operator to determine a second original image of the current object;
and denoising the second original image according to a median filtering method to obtain a third original image of the current object.
With reference to the first aspect, an embodiment of the present invention provides a third possible implementation manner of the first aspect, where the step of processing, based on an image enhancement algorithm, the reflection component and the illumination component of the original image of the current object respectively, and determining the observation image of the current object after low-illumination enhancement includes:
extracting a reflection component and an illumination component in the original image based on a Gaussian Laplace filtering algorithm;
determining an enhanced luminance channel according to the product of the illumination component and the enhanced reflection component;
and fusing the enhanced brightness channel with the hue channel and the saturation channel, converting the brightness channel into an RGB color space, and determining the observation image of the current object after low-illumination enhancement.
With reference to the first aspect, an embodiment of the present invention provides a fourth possible implementation manner of the first aspect, where the step of fusing the current object image and the observation image with enhanced low illumination to determine a target image of the current object includes:
according to the Mertens fusion method, respectively constructing a Gaussian pyramid and a Laplacian pyramid from a weight graph and a residual graph which correspond to the current object image and the low-illumination enhanced image;
fusing and inversely reconstructing the Gaussian pyramid and the Laplacian pyramid to obtain the target image of the current object; the weight map representing image quality for the low-illumination-enhanced image comprises three indexes: contrast, saturation and exposure.
With reference to the first aspect, the embodiment of the present invention provides a fifth possible implementation manner of the first aspect, where the method further includes:
and determining the number of convolution kernels corresponding to the low-illumination enhancement algorithm according to the number of indexes corresponding to the weight graph.
With reference to the first aspect, an embodiment of the present invention provides a sixth possible implementation manner of the first aspect, where the method further includes:
and transmitting the target imaging of the current object into a scanning control module, and transmitting the synchronous signal and the data signal to a display receiving end in a low-voltage differential signal mode.
In a second aspect, an embodiment of the present invention further provides a low-illuminance enhancement apparatus, the device comprising:
the acquisition module, configured to acquire a current object image and preprocess it to obtain an original image of the current object;
the first determining module is used for respectively processing the reflection component and the illumination component of the original image of the current object based on an image enhancement algorithm and determining an observation image of the current object after low illumination enhancement;
and the second determining module is used for fusing the original image of the current object with the observation image with enhanced low illumination to determine target imaging of the current object.
In a third aspect, an embodiment provides an electronic device including a memory and a processor, the memory storing a computer program executable on the processor, where the processor implements the steps of the method according to any of the foregoing embodiments when executing the computer program.
In a fourth aspect, embodiments provide a machine-readable storage medium storing machine-executable instructions that, when invoked and executed by a processor, cause the processor to implement the steps of the method of any of the preceding embodiments.
The embodiments of the invention provide a low-illumination enhancement method, a device, electronic equipment and a readable storage medium. The current object image is preprocessed (denoising and the like) to obtain an original image; the illumination and reflection components of the original image are determined by an image enhancement method and low-illumination enhancement is applied to obtain an observation image of the current object; the enhanced observation image is then fused with the current object image to eliminate over-enhanced regions, yielding the target imaging of the current object. The target imaging is free of defects such as underexposure and can satisfy applications that require truly reflecting the stage environment.
Additional features and advantages of the invention will be set forth in the description which follows, and in part will be obvious from the description, or may be learned by practice of the invention. The objectives and other advantages of the invention will be realized and attained by the structure particularly pointed out in the written description and drawings.
In order to make the above objects, features and advantages of the present invention more comprehensible, preferred embodiments accompanied with figures are described in detail below.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings that are needed in the description of the embodiments or the prior art will be briefly described, and it is obvious that the drawings in the description below are some embodiments of the present invention, and other drawings can be obtained according to the drawings without inventive effort for a person skilled in the art.
Fig. 1 is a flowchart of a low-illuminance enhancement method according to an embodiment of the present invention;
FIG. 2 is a flowchart of another low-light enhancement method according to an embodiment of the present invention;
FIG. 3 is a comparison chart of edge detection operator effects according to an embodiment of the present invention;
FIG. 4 is a Laplace convolution template according to an embodiment of the present disclosure;
FIG. 5 is a flowchart of another method for enhancing low illumination according to an embodiment of the present invention;
FIG. 6 is a schematic diagram of gray value comparison before and after image fusion according to an embodiment of the present invention;
FIG. 7 shows an image histogram contrast after image enhancement and image fusion with an artwork provided by an embodiment of the present invention;
FIG. 8 is a schematic diagram of an FPGA implementation framework of the LLEF algorithm according to an embodiment of the present invention;
FIG. 9 is a graph showing the processing result of a low-light image according to a different algorithm according to an embodiment of the present invention;
fig. 10 is a processing result of a stage scene shot by a binocular camera according to a different algorithm provided by an embodiment of the present invention;
fig. 11 is a schematic functional block diagram of a low-illuminance enhancement apparatus according to an embodiment of the present invention;
fig. 12 is a schematic diagram of a hardware architecture of an electronic device according to an embodiment of the present invention.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the embodiments of the present invention more apparent, the technical solutions of the present invention will be clearly and completely described below with reference to the accompanying drawings, and it is apparent that the described embodiments are some embodiments of the present invention, but not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
The inventors have found that shooting current stage scenes requires expanding the dynamic range of the image and enhancing brightness contrast to restore the real stage scene. Furthermore, real-time image processing is particularly important when watching live broadcasts through a near-eye display device such as a Virtual Reality (VR) device.
Based on the above, the low-illumination enhancement method, the low-illumination enhancement device, the electronic equipment and the readable storage medium provided by the embodiment of the invention can relieve the underexposure condition of the image shot by the imaging equipment and avoid the defect that the stage scene cannot be truly reflected.
For ease of understanding, the low-illuminance enhancement method disclosed in the present embodiment is first described in detail; the method may be applied to intelligent control devices such as a controller, an upper computer, or a server.
Fig. 1 is a flowchart of a low-illuminance enhancement method according to an embodiment of the present invention.
The Low-Light Enhancement Fusion (LLEF) algorithm can be understood, under the Retinex principle, as dividing an image into an illumination part and a reflection part, where the illumination part corresponds to the low-frequency signal component of the image and the reflection part corresponds to the high-frequency signal component.
As shown in fig. 1, the method includes:
step S102, obtaining a current object image and preprocessing the current object image to obtain an original image of the current object.
The acquisition device can acquire the current object image, and pre-process the current object image for the reliability of the image processing of the subsequent step to obtain an original image.
Step S104, the reflection component and the illumination component of the original image of the current object are respectively processed based on an image enhancement algorithm, and an observation image after the low illumination enhancement of the current object is determined.
The Retinex theory of image enhancement holds that the color of an object observed by the human eye is independent of illumination intensity and is instead determined by the object's ability to reflect light of different frequencies. The expression is:
S(x,y)=R(x,y)·L(x,y) (1)
where S is the observed image, R is the reflection part of the image and L is the illumination part; that is, the final imaging is the product of the reflection and illumination parts. The single-scale Retinex (SSR) algorithm estimates the illumination component with Gaussian low-pass filtering, takes the reflection component as the final enhanced result, and converts equation (1) into an arithmetic operation in the logarithmic domain. The expression is:
log[R(x,y)]=log[S(x,y)]-log[L(x,y)] (2)
The result of equation (2) is quantized to the 0-255 pixel range, giving the enhanced image.
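The SSR decomposition of equations (1)-(2) can be sketched in pure Python as follows. This is a minimal illustration, not the patent's implementation: a box blur stands in for the Gaussian low-pass illumination estimate, and the tiny 4×4 "dark" image is made up for demonstration.

```python
import math

def box_blur(img, k=1):
    """Crude illumination estimate L: mean over a (2k+1)x(2k+1) window
    (stands in for the Gaussian low-pass filter of the SSR algorithm)."""
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            vals = [img[min(max(y + dy, 0), h - 1)][min(max(x + dx, 0), w - 1)]
                    for dy in range(-k, k + 1) for dx in range(-k, k + 1)]
            out[y][x] = sum(vals) / len(vals)
    return out

def ssr(img):
    """Single-scale Retinex: log R = log S - log L, rescaled to 0-255."""
    L = box_blur(img)
    logr = [[math.log(s + 1.0) - math.log(l + 1.0)
             for s, l in zip(srow, lrow)] for srow, lrow in zip(img, L)]
    lo = min(min(r) for r in logr)
    hi = max(max(r) for r in logr)
    span = (hi - lo) or 1.0
    return [[round(255 * (v - lo) / span) for v in row] for row in logr]

dark = [[10, 12, 11, 13],
        [11, 200, 210, 12],
        [10, 205, 215, 11],
        [12, 11, 13, 10]]
enhanced = ssr(dark)
```

After quantization the reflection estimate spans the full 0-255 range, which is the stretched dynamic range the text describes.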
And step S106, fusing the current object image and the observation image with enhanced low illumination to determine target imaging of the current object.
The LLEF algorithm is divided into two parts of low-illumination enhancement and image fusion, and the whole flow is shown in fig. 2. The low illumination enhancement part adopts a modified Retinex algorithm, and the image fusion part adopts a modified pyramid fusion method.
In a preferred embodiment of practical application, the embodiment of the invention obtains the original image of the current object by performing pre-processing operations such as denoising and the like on the current object image, determines the illumination component and the reflection component corresponding to the original image according to an image enhancement method, performs low-illumination enhancement to obtain the observation image of the current object, and then fuses the enhanced observation image with the current object image to eliminate the excessively enhanced image part, thereby obtaining the target imaging corresponding to the current object, wherein the target imaging has no defects such as underexposure and the like, and can meet the application of truly reflecting the stage environment.
In some embodiments, to further ensure the reliability of the low-intensity image enhancement method, the image is pre-processed prior to the image enhancement step operation; illustratively, this step S102 specifically includes:
step 1.1), converting the current object image into HSV color space, and processing the brightness channel of the current object image.
In the HSV color space, the H channel represents hue, the S channel represents saturation, and the V channel represents brightness (value). Brightness is independent of hue and saturation, so segmentation is more robust. The LLEF algorithm converts the image into the HSV color space and processes the V channel independently, which avoids color distortion and effectively speeds up image operations.
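The "process V only" idea can be shown with Python's standard `colorsys` module. This is an illustrative per-pixel sketch (the gain value 1.5 is arbitrary, not from the patent): only the V channel is scaled, so hue and saturation, and hence the color, are preserved.

```python
import colorsys

def brighten_pixel(r, g, b, gain=1.5):
    """Convert an RGB pixel to HSV, scale only the V (brightness)
    channel, and convert back; H and S are untouched."""
    h, s, v = colorsys.rgb_to_hsv(r / 255, g / 255, b / 255)
    v = min(1.0, v * gain)          # enhance brightness channel only
    r2, g2, b2 = colorsys.hsv_to_rgb(h, s, v)
    return round(r2 * 255), round(g2 * 255), round(b2 * 255)

dim = (60, 30, 15)                  # a dark orange pixel
bright = brighten_pixel(*dim)
```

Because V scales all three RGB components proportionally, the output keeps the same hue as the input.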
Step 1.2), denoising and extracting features of the processed current object image to obtain an original image of the current object.
As a preferred embodiment, the denoising and feature extraction operation may include:
1) And extracting edges of the brightness channel of the current object image according to the edge detection operator, and adding the extracted edge details to the current object image to obtain a first original image of the current object.
The edge detection operator may be the Canny operator, which applies Gaussian filtering before extracting edges, giving it good noise tolerance and making it well suited to edge extraction from low-illumination images. Before image enhancement, edges can be extracted from the V channel with the Canny operator and the extracted edge details added back to the original image, strengthening the edges and avoiding loss of edge detail. The results without and with the Canny operator are shown in fig. 3: the ground texture in the lower right corner of (a) is diffuse and unclear, while in (b) it is clear and no halo blurring occurs.
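The extract-and-add-back step can be sketched as follows. A full Canny implementation is lengthy, so this illustration substitutes a simple 3×3 Laplacian edge map for the Canny edge map; the `strength` blending factor is a made-up parameter, not from the patent.

```python
LAP = [[0, 1, 0],
       [1, -4, 1],
       [0, 1, 0]]   # 4-neighbour Laplacian kernel

def edges(img):
    """Absolute Laplacian response; the 1-pixel border is left at 0."""
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            acc = sum(LAP[j][i] * img[y + j - 1][x + i - 1]
                      for j in range(3) for i in range(3))
            out[y][x] = abs(acc)
    return out

def add_edges(img, strength=0.5):
    """Add the extracted edge details back onto the image, clamped to 255."""
    e = edges(img)
    return [[min(255, round(p + strength * d)) for p, d in zip(prow, erow)]
            for prow, erow in zip(img, e)]

flat = [[10, 10, 200, 200] for _ in range(4)]   # a vertical step edge
sharp = add_edges(flat)
```

Pixels on the step boundary are boosted while flat regions pass through unchanged, which is the edge-strengthening effect described above.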
2) And stretching the dynamic range of the first original image according to the Gamma operator to determine a second original image of the current object.
Gamma correction is widely applied in the field of image processing: by adjusting the Gamma curve, the image is stretched nonlinearly, enhancing brightness contrast and giving the image a better dynamic range. After edge extraction is completed, Gamma correction is performed to further enhance image contrast.
The Gamma operator has the expression:
g(x,y) = u(x,y)^γ (3)
where γ is a constant controlling the stretching effect. When γ = 1, the image keeps its original dynamic range unchanged; when γ > 1, contrast in high-gray regions is enhanced; when γ < 1, the dynamic range of low-gray regions is enhanced. To raise low gray levels, the γ parameter must be chosen in the range (0, 1). Within (0, 1), the enhanced gray level decreases monotonically as the parameter increases, but the smaller the γ value, the more noise is introduced. Experiments showed that γ = 0.7 stretches the dynamic range well without introducing excessive noise.
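Since equation (3) depends only on the input gray value, it is commonly realized as a look-up table, which is also how the FPGA implementation described later handles it. A minimal sketch with the patent's γ = 0.7:

```python
GAMMA = 0.7   # value chosen experimentally in the text

# Pre-computed look-up table: u in [0, 255] -> round(255 * (u/255) ** GAMMA).
LUT = [round(255 * (u / 255) ** GAMMA) for u in range(256)]

def gamma_correct(img):
    """Apply the Gamma stretch of equation (3) via the look-up table."""
    return [[LUT[p] for p in row] for row in img]
```

With γ < 1 the curve lifts dark values (e.g. `LUT[30] > 30`) while keeping the endpoints 0 and 255 fixed, stretching the low-gray dynamic range as described.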
3) And denoising the second original image according to the median filtering method to obtain a third original image of the current object.
Low-illumination images are prone to noise during acquisition and enhancement because the overall image is dark. Median filtering preserves image edges well while removing noise, causes little blurring, and can remove some random noise. Therefore, median filtering is applied before the image enhancement algorithm to remove noise present in the image as well as noise introduced by Gamma correction.
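A minimal 3×3 median filter illustrating this denoising step (window size is an assumption; the patent does not state the kernel size):

```python
import statistics

def median3x3(img):
    """3x3 median filter; the 1-pixel border is copied through unchanged."""
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            window = [img[y + dy][x + dx]
                      for dy in (-1, 0, 1) for dx in (-1, 0, 1)]
            out[y][x] = statistics.median(window)
    return out

noisy = [[10, 10, 10],
         [10, 250, 10],   # an isolated noise spike
         [10, 10, 10]]
clean = median3x3(noisy)
```

The isolated spike is replaced by the window median (10), while a genuine edge, which occupies most of a window, would survive; this is the edge-preserving property mentioned above.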
In some embodiments, in order to reduce the loss of image details, step S104 in the above embodiments may further be implemented by the following steps, including:
step 2.1), extracting a reflection component and an illumination component in the original image based on a Gaussian Laplace filtering algorithm.
The LLEF algorithm adopts Gaussian-Laplace high-pass filtering in place of Gaussian low-pass filtering, obtaining the reflection component first. Since Gaussian filtering can estimate the illumination, according to Retinex theory a Gaussian-Laplace (Laplacian of Gaussian, LoG) filter can be used to extract the reflection component of the image directly, avoiding the loss of detail in the image-subtraction step. The reflection component is then enhanced, the illumination component is recovered from it in reverse, and the final imaging is obtained by combining the processed reflection component with the illumination component.
In digital image processing, maxima of an image's first derivative often correspond to edge regions, while the second derivative localizes edges more strongly and can determine whether an edge exists. The Laplace operator is a second-order differential linear operator that judges edge positions through this second-order differential characteristic. The Laplacian of an image I(x, y) is:
∇²I(x,y) = ∂²I(x,y)/∂x² + ∂²I(x,y)/∂y² (4)
represented in the form of a convolution template, as shown in fig. 4.
The template intuitively shows that the Laplace operator strengthens bright spots in the image; however, noise can also appear as sudden bright spots or in other forms, and the Laplace operator cannot distinguish edges from noise, sometimes enhancing the noise in an image. Gaussian filtering is therefore performed before Laplace edge detection, forming the Gaussian-Laplace operator.
The Gaussian is a commonly used filter in image processing; the two-dimensional Gaussian smoothing convolution kernel is
G_σ(x,y) = (1/(2πσ²)) · exp(−(x²+y²)/(2σ²)) (5)
where G_σ(x,y) is the Gaussian smoothing kernel and σ is the standard deviation, which characterizes how strongly surrounding pixels influence the current pixel: the larger σ, the larger the influence; the smaller σ, the smaller the influence. The Gaussian-Laplace operator takes the second derivative after Gaussian smoothing to extract edges; the two-dimensional Gaussian-Laplace expression with Gaussian standard deviation σ is
LoG(x,y) = −(1/(πσ⁴)) · [1 − (x²+y²)/(2σ²)] · exp(−(x²+y²)/(2σ²)) (6)
the LoG (x, y) is an image obtained by gaussian-laplace operator operation. In the modified Retinex proposed by the LLEF algorithm, the reflection component is obtained by filtering the image by gaussian-laplace high-pass filtering. Represented by the following formula:
R(x,y)=V(x,y)*F(x,y) (7)
where V(x, y) is the V-channel component of the preprocessed image in HSV color space; * denotes the convolution operation; F(x, y) is the Gaussian-Laplace high-pass filter; and R(x, y) is the computed reflection component.
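A sampled LoG kernel in the form of equation (6) can be built directly; this sketch uses an assumed 5×5 size and σ = 1.0 (the patent does not state these values). The kernel could then be convolved with the V channel as in equation (7).

```python
import math

def log_kernel(size=5, sigma=1.0):
    """Sampled Laplacian-of-Gaussian kernel following equation (6)."""
    c = size // 2
    kernel = []
    for y in range(size):
        row = []
        for x in range(size):
            r2 = (x - c) ** 2 + (y - c) ** 2
            v = (-(1 / (math.pi * sigma ** 4))
                 * (1 - r2 / (2 * sigma ** 2))
                 * math.exp(-r2 / (2 * sigma ** 2)))
            row.append(v)
        kernel.append(row)
    return kernel

k = log_kernel()
```

With this sign convention the kernel has a negative center surrounded by a positive ring, and its coefficients roughly sum to zero, which is what makes it a high-pass (detail-extracting) filter.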
Step 2.2), determining an enhanced brightness channel according to the product of the illumination component and the enhanced reflection component;
Building on the above steps, after the reflection component is obtained, its contrast is stretched using Contrast Limited Adaptive Histogram Equalization (CLAHE), yielding the enhanced reflection component, strengthening image contrast and making the image clearer. The expression is:
R'(x,y)=CLAHE(R(x,y)) (8)
according to Retinex theory, the illumination component of the image is obtained, and the expression is:
L(x,y)=log(I(x,y))-log(R(x,y)) (9)
the final V-channel component V' (x, y) is obtained by multiplying the enhanced reflected component by the illumination component, expressed as:
V'(x,y)=L(x,y)·R'(x,y) (10)
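The clipping idea behind the CLAHE step of equation (8) can be illustrated with a simplified, global (non-tiled) contrast-limited histogram equalization. Real CLAHE equalizes per tile and bilinearly interpolates between tiles; this sketch, with a made-up `clip` value, only shows the clip-and-redistribute mechanism.

```python
def clipped_equalize(channel, clip=40):
    """Global histogram equalization with a clip limit (CLAHE-like)."""
    flat = [p for row in channel for p in row]
    hist = [0] * 256
    for p in flat:
        hist[p] += 1
    # Clip the histogram and redistribute the excess uniformly, which
    # limits how steeply the mapping can stretch any one gray range.
    excess = sum(max(0, h - clip) for h in hist)
    hist = [min(h, clip) + excess // 256 for h in hist]
    # Build the cumulative-distribution mapping to [0, 255].
    total = sum(hist)
    cdf, acc = [], 0
    for h in hist:
        acc += h
        cdf.append(round(255 * acc / total))
    return [[cdf[p] for p in row] for row in channel]

eq = clipped_equalize([[0, 128], [200, 255]])
```

The mapping is monotone (it never reorders gray values), so structure is preserved while contrast is stretched.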
step 2.3), fusing the enhanced brightness channel with the hue channel and the saturation channel, converting the brightness channel into an RGB color space, and determining an observation image of the current object after low illumination enhancement.
Here, the enhanced V-channel component is used instead of the original V-channel component, fused with the H-channel and S-channel, and then converted into RGB color space, and the overall flowchart is shown in fig. 5.
In some embodiments, because of the lighting in a stage scene, overexposure easily occurs during camera shooting, and after low-illumination enhancement the originally lit portions may become over-enhanced. Image fusion is therefore used to fuse the enhanced image with the current object image and eliminate the over-enhanced portions. On this basis, step S106 may further be implemented by the following steps:
and 3.1) respectively constructing a Gaussian pyramid and a Laplacian pyramid according to a weight map and a residual map corresponding to the current object image and the low-illumination enhanced image by a Mertens fusion method.
And 3.2), fusing and inversely reconstructing the Gaussian pyramid and the Laplacian pyramid to obtain the target image of the current object.
The weight map for representing the image quality corresponding to the low-illumination enhanced image comprises three indexes of contrast, saturation and exposure.
Because real-time processing is required, the faster Mertens fusion method is adopted. This method constructs a Gaussian pyramid and a Laplacian pyramid from the weight maps and residual maps of the original (unpreprocessed) image and the enhanced image, then fuses the two pyramids and inverts the decomposition to obtain the final image. The image-quality weight map uses three indexes: contrast, saturation, and exposure. The LLEF algorithm adopts an improved fusion method that replaces the 5×5 convolution kernel of the original algorithm with a 3×3 kernel; this greatly reduces the number of calculations per pixel, saving 48 convolution calculations. The kernel is changed from equation (11) to equation (12).
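The build-then-invert pyramid machinery can be sketched on a 1D signal. This is a toy model under stated assumptions: a 3-tap (1, 2, 1)/4 smoothing kernel and nearest-neighbour upsampling, neither taken from the patent. Actual Mertens fusion would blend the Laplacian coefficients of the two images weighted by Gaussian pyramids of the weight maps before collapsing.

```python
K = [0.25, 0.5, 0.25]   # assumed 3-tap (1,2,1)/4 smoothing kernel

def smooth(sig):
    n = len(sig)
    return [sum(K[j] * sig[min(max(i + j - 1, 0), n - 1)] for j in range(3))
            for i in range(n)]

def downsample(sig):
    return smooth(sig)[::2]

def upsample(sig, n):
    # Nearest-neighbour expansion followed by smoothing.
    return smooth([sig[min(i // 2, len(sig) - 1)] for i in range(n)])

def build_laplacian_pyramid(sig, levels=3):
    gauss = [sig]
    for _ in range(levels - 1):
        gauss.append(downsample(gauss[-1]))
    # Each Laplacian level is the residual against the expanded next level.
    lap = [[a - b for a, b in zip(gauss[i], upsample(gauss[i + 1], len(gauss[i])))]
           for i in range(levels - 1)]
    lap.append(gauss[-1])          # coarsest Gaussian level
    return lap

def collapse(lap):
    """Invert the decomposition: expand and add residuals back, fine-ward."""
    cur = lap[-1]
    for level in reversed(lap[:-1]):
        cur = [a + b for a, b in zip(level, upsample(cur, len(level)))]
    return cur

signal = [float(v) for v in [3, 1, 4, 1, 5, 9, 2, 6]]
pyr = build_laplacian_pyramid(signal)
rec = collapse(pyr)
```

Because each Laplacian level stores the exact residual, collapsing reproduces the input signal; fusion modifies the levels between these two steps.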
the weight map in the original algorithm adopts three indexes, and as fusion is to eliminate the excessive enhancement part and contrast enhancement is carried out before fusion, only the index of exposure is selected as the weight map. The gray value is close to 1 after the light part is excessively enhanced, and the exposure degree is close to 0.5, which is a better exposure degree. Judging the exposure degree of the pixel point according to a Gaussian curve, wherein the Gaussian curve can be expressed as:
where the Gaussian standard deviation σ is set to 0.2 and i is the gray value of the pixel (normalized to [0, 1]). Gray values of the stage scene before and after fusion are compared in fig. 6; the gray values are taken from the red-boxed region of the image. The fused image eliminates the vertical stripes caused by over-enhancement in the background, the background gray values are reduced overall, and the enhancement in the actor region of the stage is retained; the whole image is clearer and more natural and closer to the brightness dynamic range of the real stage scene.
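The exposure weight of equation (13), with the σ = 0.2 given in the text, is a one-liner:

```python
import math

SIGMA = 0.2   # standard deviation given in the text

def exposure_weight(i):
    """Well-exposedness weight of equation (13): gray value i is
    normalised to [0, 1]; pixels near 0.5 get weight close to 1,
    over- or under-exposed pixels get weight close to 0."""
    return math.exp(-((i - 0.5) ** 2) / (2 * SIGMA ** 2))
```

An over-enhanced pixel with gray value near 1 receives a small weight, so the fusion falls back to the original image there, which is exactly the suppression mechanism described above.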
Fig. 7 compares the histograms of the original, enhanced, and fused images. The enhanced image has stretched gray levels, bright colors, clear detail, and enhanced contrast. The gray levels of the original image are concentrated in the low 0-100 range, while those of the enhanced image are concentrated in the higher 50-150 range: the histogram shifts right as a whole while the gray-level structure of the original image is maintained. The LoGR algorithm thus clearly enhances the brightness dynamic range without changing the gray-level structure and retains brightness detail well. However, the gray levels after enhancement are scattered: as fig. 7(d) shows, pixel counts differ greatly across gray levels, giving poor image quality. As fig. 7(f) shows, pixel counts across gray levels in the fused image change smoothly, resolving the abrupt gray-level changes, and the overall visual effect of the image is good.
On the basis of the foregoing embodiments, after the low-illumination enhanced image of the target object is obtained, the image may be applied to a near-eye display device. Illustratively, the method further comprises: transmitting the target imaging of the current object to a scanning control module, and transmitting the synchronization signal and the data signal to a display receiving end in the form of low-voltage differential signals, so that the near-eye display device displays the image.
In addition, because a general-purpose software platform can hardly meet the real-time requirement, a Field-Programmable Gate Array (FPGA) hardware platform is selected to process the image in a pipeline; with reasonable pipeline fill and drain times, the algorithm achieves real-time performance. The embodiment of the invention therefore verifies the LLEF algorithm on an FPGA. The overall block diagram, shown in fig. 8, mainly comprises a video acquisition module, an image processing module, and a scanning control module.
The video source is captured by a binocular camera, transferred to a PC (Personal Computer), and transmitted to the FPGA through a High-Definition Multimedia Interface (HDMI). First, the video data is processed by the video acquisition module and passed to the image processing module and a Double Data Rate Synchronous Dynamic Random Access Memory (DDR SDRAM, here DDR3). Second, LoGR enhancement and pyramid fusion are applied to the image to obtain the final image. Finally, the image data is transmitted to the scanning control module, and the synchronization and data signals are sent to the display receiving end via Low-Voltage Differential Signaling (LVDS).
The image processing module mainly comprises an RGB-HSV conversion module, an HSV-RGB conversion module, a preprocessing module, a LoGR algorithm module, and a pyramid fusion module. First, the data undergoes color-space conversion: the V-channel data is extracted, while the S- and H-channel data are stored in DDR3. Second, edge extraction is performed on the V-channel data by constructing a 3×3 window with a FIFO (First In First Out) buffer and D flip-flops, while Gamma correction and logarithmic conversion are performed via lookup tables. During pyramid decomposition, the layers are stored sequentially in the Frame Buffer and are read out in address order during fusion. Finally, the fused image drives the display through the scanning module to show the final video. The scanning module drives the display with digital Pulse-Width Modulation (PWM): it receives a frame of the image to be displayed, outputs the image data by addressing, and displays the image through control signals.
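The 3×3 window that the FPGA builds from a FIFO line buffer and D flip-flops corresponds, in software, to sliding a 3×3 kernel over the V channel. The sketch below assumes a standard 4-neighbour Laplacian kernel for the edge extraction, since the text names the operation but not the coefficients:

```python
import numpy as np

def sliding_3x3_laplacian(v):
    """Software analogue of the FPGA's FIFO line buffer + D flip-flops:
    form a 3x3 window around each V-channel pixel and apply an
    edge-extraction kernel (a 4-neighbour Laplacian is assumed here)."""
    k = np.array([[0., -1., 0.],
                  [-1., 4., -1.],
                  [0., -1., 0.]])
    p = np.pad(v, 1, mode="edge")          # replicate borders
    h, w = v.shape
    out = np.empty_like(v, dtype=float)
    for y in range(h):                     # the two loops mimic the
        for x in range(w):                 # pixel-clock raster scan
            out[y, x] = np.sum(p[y:y + 3, x:x + 3] * k)
    return out
```

On hardware the same window is produced one pixel per clock: two FIFO line buffers hold the previous two rows, and registers (D flip-flops) hold the three most recent pixels of each row.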
Aiming at stage scenes, and to solve the problems of color distortion and halos while balancing computational complexity and running speed, the embodiment of the invention provides the LLEF (Low-Light Enhancement Fusion) low-illumination enhancement algorithm, which comprises two parts: low-illumination enhancement and image fusion. The low-illumination enhancement part adopts an improved Retinex algorithm based on the Laplacian of Gaussian. Compared with traditional algorithms, the LLEF algorithm improves image brightness while preserving color quality, detail, and the contrast of the original image, and can be widely applied to stage scenes.
In addition, to verify the effectiveness of the LLEF algorithm, the embodiment of the invention performs comparative analysis from both subjective and objective perspectives, using 5 low-illumination image enhancement algorithms and 8 groups of images of different sizes. The comparison algorithms are: BIMEF, a bio-inspired multi-exposure fusion framework for low-light image enhancement (A Bio-Inspired Multi-Exposure Fusion Framework for Low-Light Image Enhancement); LIME, which refines an initial illumination map with a structure prior (LIME: Low-Light Image Enhancement via Illumination Map Estimation); MSRCR, multi-scale Retinex with color restoration; and LECARM, which locally adjusts exposure using a camera response model (Low-Light Image Enhancement Using the Camera Response Model). The experimental environment was an Intel(R) Core(TM) i7-1065G7 CPU @ 1.50 GHz, and the software platform was Matlab 2020b.
In terms of subjective effect, as shown in figs. 9 and 10: fig. 9 shows house and countryside images of size 5500 × 3500, and fig. 10 shows performance scenes of stages 1-4 actually photographed by the binocular camera and stitched into images of size 7500 × 2000.
As can be seen from figs. 9 and 10, the BIMEF algorithm has a good enhancement effect but weakens image contrast, and the overall color saturation is low. The LIME algorithm tends to over-enhance both brightness and color and amplifies noise, making the image appear severely distorted. The MSRCR algorithm produces images with low definition and distorted colors: the whole image appears reddish, some regions turn white, and image details are seriously lost. The LECARM algorithm is effective overall but weakens sky contrast, and noise in the bridge and wall areas is evident when comparing fig. 9(e) and (f). The LLEF algorithm offers higher definition, a clear low-illumination enhancement effect, preserved original contrast, no color shift or chromatic aberration, and a more natural overall visual effect. In the comparison images shot by the binocular camera, shown in fig. 10, the LLEF algorithm is clearly superior to the other four: the BIMEF, LIME, and MSRCR algorithms enhance poorly, over-enhancing the character areas and losing detail, with serious noise in the stage background. The LECARM algorithm enhances well but also over-enhances the stage floor, reducing the light-dark contrast of the stage, so it is unsuitable for stage scenes. The LLEF algorithm enlarges the brightness range of the person and prop areas in the stage scene while preserving image quality, keeps the background brightness range unchanged, and highlights the stage performance area.
For objective evaluation, as shown in Tables 1 and 2, two commonly used reference metrics, Peak Signal-to-Noise Ratio (PSNR) and Structural Similarity (SSIM), are adopted as the criteria for objective image-quality evaluation. PSNR indicates reconstruction quality: the larger the PSNR, the better the image quality. SSIM defines the structural information of an image as an attribute, independent of brightness and contrast, that reflects the structure of objects in the scene, and models distortion as a combination of three factors: brightness, contrast, and structure. Larger SSIM values likewise indicate better image quality.
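The two metrics can be sketched as follows; `ssim_global` is a simplified single-window SSIM (standard implementations slide an 11×11 Gaussian window and average the local scores), so its values are only indicative:

```python
import numpy as np

def psnr(ref, img, peak=255.0):
    """Peak Signal-to-Noise Ratio in dB; larger is better."""
    mse = np.mean((ref.astype(float) - img.astype(float)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

def ssim_global(x, y, peak=255.0):
    """Single-window SSIM over the whole image: the product of
    luminance, contrast, and structure comparison terms."""
    c1, c2 = (0.01 * peak) ** 2, (0.03 * peak) ** 2
    x, y = x.astype(float), y.astype(float)
    mx, my, vx, vy = x.mean(), y.mean(), x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))
```

Identical images give an infinite PSNR and an SSIM of 1; both metrics decrease as the distortion between the reference and test image grows.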
As shown in Tables 1 and 2, the LLEF algorithm is superior to BIMEF in both PSNR and SSIM, and significantly superior to the other three algorithms. The last four groups, stage images shot by the binocular camera, show that the LLEF algorithm handles stage scenes best, with PSNR and SSIM values far higher than those of the other 4 classical algorithms, making it the most suitable for low-illumination enhancement of stage scenes. Overall, the LLEF algorithm has the best enhancement effect: compared with the other 4 classical algorithms, PSNR is improved by up to 2 times (57.06% on average), and SSIM by up to 1.6 times (27.34% on average).
Table 1 Peak signal-to-noise ratio comparison of different algorithms (Tab.1 Comparison of peak signal-to-noise metrics for different algorithms)
Table 2 Structural similarity comparison of different algorithms (Tab.2 Comparison of structural similarity metrics of different algorithms)
The embodiment of the invention studies a method for expanding the brightness dynamic range of binocular-camera images. Based on the characteristics of stage shooting with a binocular camera, it proposes a low-illumination image enhancement fusion algorithm built on an improved Retinex algorithm, simulates the design on a Matlab platform, and implements it in hardware on an FPGA to achieve real-time performance. Experimental results show that the brightness dynamic range is markedly improved while image quality is preserved, and the processing speed reaches real time. The method offers a practical solution to the limited dynamic range of low-illumination scene images, such as VR live-broadcast interactive stage theaters.
As shown in fig. 11, an embodiment of the present invention provides a low-illumination enhancement apparatus. The apparatus comprises:
the acquisition module acquires a current object image and performs preprocessing on the current object image to obtain an original image of the current object;
the first determining module is used for respectively processing the reflection component and the illumination component of the original image of the current object based on an image enhancement algorithm and determining an observation image of the current object after low illumination enhancement;
and the second determining module is used for fusing the current object image and the observation image with enhanced low illumination to determine target imaging of the current object.
Aiming at the limited dynamic range of brightness when shooting stage scenes, the embodiment of the invention first applies an improved Retinex algorithm to enhance the low-illumination stage image, obtaining an overall enhanced image. The original image is then fused with the enhanced image so that over-enhanced background areas that do not need enhancement are suppressed, yielding the final image. The improved Retinex algorithm uses Laplacian-of-Gaussian high-pass filtering to obtain the reflection component and then derives the illumination component, which solves the problem of detail loss in the reflection component. Contrast and detail enhancement are then applied to the reflection component, which is multiplied by the illumination component to obtain the enhanced image. On the basis of software-platform verification, the embodiment of the invention also performs verification on an FPGA hardware platform. Experimental results show that, compared with other classical methods, the embodiment achieves a clearly better visual effect on different stage scenes, especially those with large light-dark differences: the average Peak Signal-to-Noise Ratio (PSNR) is improved by 57.06%, and the average Structural Similarity (SSIM) by 27.34%. The processed image restores the real brightness dynamic range of the stage with good color saturation, no distortion, and good image quality.
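One plausible reading of the improved Retinex step, sketched below, treats the high-pass (Laplacian-of-Gaussian-like) residual of the log image as the reflection component and the low-pass residual as illumination; the `gain` and `gamma` parameters are hypothetical and not taken from the patent:

```python
import numpy as np

def gaussian_blur(img, sigma=3.0):
    """Separable Gaussian low-pass with edge padding."""
    r = int(3 * sigma)
    x = np.arange(-r, r + 1, dtype=float)
    k = np.exp(-x ** 2 / (2 * sigma ** 2))
    k /= k.sum()
    conv = lambda line: np.convolve(np.pad(line, r, mode="edge"), k, "valid")
    tmp = np.apply_along_axis(conv, 1, img)   # blur rows
    return np.apply_along_axis(conv, 0, tmp)  # then columns

def log_retinex_enhance(v, gain=1.5, gamma=0.6):
    """`v`: V channel in (0, 1]. The high-pass residual of log(v)
    stands in for the reflection component and the low-pass part for
    illumination; reflection is boosted, illumination is compressed
    (gamma < 1 brightens dark regions), then the two are recombined."""
    logv = np.log(np.clip(v, 1e-4, 1.0))
    illum = gaussian_blur(logv)       # illumination component
    refl = logv - illum               # reflection (detail) component
    out = np.exp(gamma * illum + gain * refl)
    return np.clip(out, 0.0, 1.0)
```

Recombining in the log domain makes the "multiplied by the illumination component" step an addition, which is also why log lookup tables suffice on the FPGA.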
In some embodiments, the obtaining module is further specifically configured to convert the current object image into an HSV color space, and process a brightness channel of the current object image; denoising and extracting features of the processed current object image to obtain an original image of the current object.
In some embodiments, the obtaining module is further specifically configured to perform edge extraction on a luminance channel of a current object image according to an edge detection operator, and add the extracted edge details to the current object image to obtain a first original image of the current object; stretching the dynamic range of the first original image according to a Gamma operator to determine a second original image of the current object; and denoising the second original image according to a median filtering method to obtain a third original image of the current object.
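The three preprocessing stages of this module can be sketched as follows; the Laplacian kernel and the Gamma value are illustrative choices, since the patent names the operators but not their coefficients:

```python
import numpy as np

def preprocess_v(v, gamma=0.8):
    """Three preprocessing stages on the V channel (values in [0, 1])."""
    h, w = v.shape
    # 1) edge extraction (Laplacian operator assumed), edges added to V
    lap = np.array([[0., -1., 0.], [-1., 4., -1.], [0., -1., 0.]])
    p = np.pad(v, 1, mode="edge")
    edges = sum(lap[i, j] * p[i:i + h, j:j + w]
                for i in range(3) for j in range(3))
    first = np.clip(v + edges, 0.0, 1.0)       # first original image
    # 2) Gamma operator stretches the dynamic range (gamma < 1 brightens)
    second = first ** gamma                    # second original image
    # 3) 3x3 median filter removes impulse noise
    q = np.pad(second, 1, mode="edge")
    stack = np.stack([q[i:i + h, j:j + w]
                      for i in range(3) for j in range(3)])
    return np.median(stack, axis=0)            # third original image
```

Each stage produces the "first", "second", and "third" original images named in the embodiment; swapping in a different edge operator (e.g. Sobel) only changes step 1.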
In some embodiments, the first determining module is further specifically configured to extract a reflection component and an illumination component in the original image based on a laplacian of gaussian filtering algorithm; determining an enhanced luminance channel according to the product of the illumination component and the enhanced reflection component; and fusing the enhanced brightness channel with the hue channel and the saturation channel, converting the brightness channel into an RGB color space, and determining the observation image of the current object after low-illumination enhancement.
In some embodiments, the second determining module is further specifically configured to construct a gaussian pyramid and a laplacian pyramid from a weight map and a residual map corresponding to the current object image and the low-illumination enhanced image according to the Mertens fusion method, respectively; fusing and reversely solving the Gaussian pyramid and the Laplacian pyramid to obtain an object image of the current object; the weight map for representing the image quality corresponding to the low-illumination enhanced image comprises three indexes of contrast, saturation and exposure.
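A minimal Mertens-style pyramid fusion, assuming simple 2×2 mean pooling for the pyramid reduce and nearest-neighbour expansion (the patent does not fix these filters), could look like:

```python
import numpy as np

def down(img):
    """Pyramid reduce: 2x2 mean pooling."""
    h, w = img.shape[0] // 2 * 2, img.shape[1] // 2 * 2
    return img[:h, :w].reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def up(img, shape):
    """Pyramid expand: nearest-neighbour upsampling to `shape`."""
    out = img.repeat(2, axis=0).repeat(2, axis=1)
    return out[:shape[0], :shape[1]]

def pyramid_fuse(images, weights, levels=3):
    """Blend Laplacian pyramids of the inputs with Gaussian pyramids
    of their normalized weight maps, then collapse the result."""
    wsum = np.sum(weights, axis=0) + 1e-12
    gw = [w / wsum for w in weights]          # normalized weights
    gi = list(images)
    fused = []
    for lv in range(levels):
        next_gi = [down(im) for im in gi]
        if lv < levels - 1:                   # Laplacian detail layers
            lap = [im - up(d, im.shape) for im, d in zip(gi, next_gi)]
        else:                                 # coarsest: Gaussian residual
            lap = gi
        fused.append(sum(w * l for w, l in zip(gw, lap)))
        gi = next_gi
        gw = [down(w) for w in gw]
    out = fused[-1]                            # collapse: coarse to fine
    for l in reversed(fused[:-1]):
        out = up(out, l.shape) + l
    return out
```

Fusing the original image and the enhanced image with their respective weight maps through `pyramid_fuse` yields the target imaging; with a single input and a uniform weight, the decompose-then-collapse round trip reproduces the input, which is a useful sanity check.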
In some embodiments, the apparatus is further to: and determining the number of convolution kernels corresponding to the low-illumination enhancement algorithm according to the number of indexes corresponding to the weight graph.
In some embodiments, the apparatus is further to: and transmitting the target imaging of the current object into a scanning control module, and transmitting the synchronous signal and the data signal to a display receiving end in a low-voltage differential signal mode.
In the embodiment of the present invention, the electronic device may be, but is not limited to, a personal computer (Personal Computer, PC), a notebook computer, a monitoring device, a server, and other computer devices with analysis and processing capabilities.
As an exemplary embodiment, referring to fig. 12, an electronic device 110 includes a communication interface 111, a processor 112, a memory 113, and a bus 114, the processor 112, the communication interface 111, and the memory 113 being connected by the bus 114; the memory 113 is used for storing a computer program supporting the processor 112 to execute the method, and the processor 112 is configured to execute the program stored in the memory 113.
The machine-readable storage medium referred to herein may be any electronic, magnetic, optical, or other physical storage device that can contain or store information, such as executable instructions or data. For example, a machine-readable storage medium may be: RAM (Random Access Memory), volatile memory, non-volatile memory, flash memory, a storage drive (e.g., a hard drive), any type of storage disk (e.g., an optical disk or DVD), a similar storage medium, or a combination thereof.
The non-volatile medium may be a non-volatile memory, a flash memory, a storage drive (e.g., hard drive), any type of storage disk (e.g., optical disk, dvd, etc.), or a similar non-volatile storage medium, or a combination thereof.
It can be understood that the specific operation method of each functional module in this embodiment may refer to the detailed description of the corresponding steps in the above method embodiment, and the detailed description is not repeated here.
The computer readable storage medium provided by the embodiments of the present invention stores a computer program, where the computer program code may implement the method described in any of the foregoing embodiments when executed, and the specific implementation may refer to the method embodiment and will not be described herein.
It will be clear to those skilled in the art that, for convenience and brevity of description, specific working procedures of the above-described system and apparatus may refer to corresponding procedures in the foregoing method embodiments, which are not described herein again.
In addition, in the description of embodiments of the present invention, unless explicitly stated and limited otherwise, the terms "mounted," "connected," and "connected" are to be construed broadly, and may be, for example, fixedly connected, detachably connected, or integrally connected; can be mechanically or electrically connected; can be directly connected or indirectly connected through an intermediate medium, and can be communication between two elements. The specific meaning of the above terms in the present invention will be understood in specific cases by those of ordinary skill in the art.
In the description of the present invention, it should be noted that the directions or positional relationships indicated by the terms "center", "upper", "lower", "left", "right", "vertical", "horizontal", "inner", "outer", etc. are based on the directions or positional relationships shown in the drawings, are merely for convenience of describing the present invention and simplifying the description, and do not indicate or imply that the devices or elements referred to must have a specific orientation, be configured and operated in a specific orientation, and thus should not be construed as limiting the present invention. Furthermore, the terms "first," "second," and "third" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance.
Finally, it should be noted that the above examples are only specific embodiments of the present invention, intended to illustrate rather than limit its technical solutions, and the scope of protection of the present invention is not limited thereto. Although the present invention has been described in detail with reference to the foregoing examples, those skilled in the art should understand that any person familiar with the art may still modify the technical solutions described in the foregoing embodiments, easily conceive of changes to them, or make equivalent substitutions for some of the technical features, within the technical scope of the present disclosure; such modifications, changes, or substitutions do not depart from the spirit and scope of the technical solutions of the embodiments of the present invention, and are intended to be included in the scope of the present invention.

Claims (10)

1. A method of low illumination enhancement, the method comprising:
acquiring a current object image and preprocessing the current object image to obtain an original image of the current object;
processing the reflection component and the illumination component of an original image of a current object respectively based on an image enhancement algorithm, and determining an observation image of the current object after low illumination enhancement;
and fusing the current object image and the observation image with enhanced low illumination to determine target imaging of the current object.
2. The method of claim 1, wherein the step of acquiring a current object image and preprocessing the current object image to obtain an original image of the current object comprises:
converting the current object image into an HSV color space, and processing a brightness channel of the current object image;
denoising and extracting features of the processed current object image to obtain an original image of the current object.
3. The method of claim 2, wherein the step of performing denoising and feature extraction operations on the processed current object image to obtain an original image of the current object comprises:
extracting edges of a brightness channel of a current object image according to an edge detection operator, and adding the extracted edge details to the current object image to obtain a first original image of the current object;
stretching the dynamic range of the first original image according to a Gamma operator to determine a second original image of the current object;
and denoising the second original image according to a median filtering method to obtain a third original image of the current object.
4. The method of claim 1, wherein the step of determining the low-light enhanced observation image of the current object based on processing the reflected component and the illumination component of the original image of the current object, respectively, by an image enhancement algorithm, comprises:
extracting a reflection component and an illumination component in the original image based on a Gaussian Laplace filtering algorithm;
determining an enhanced luminance channel according to the product of the illumination component and the enhanced reflection component;
and fusing the enhanced brightness channel with the hue channel and the saturation channel, converting the brightness channel into an RGB color space, and determining the observation image of the current object after low-illumination enhancement.
5. The method of claim 1, wherein the step of fusing the current object image and the low-light enhanced observation image to determine a target image of the current object comprises:
according to the Mertens fusion method, respectively constructing a Gaussian pyramid and a Laplacian pyramid from a weight graph and a residual graph which correspond to the current object image and the low-illumination enhanced image;
fusing and reversely solving the Gaussian pyramid and the Laplacian pyramid to obtain an object image of the current object; the weight map for representing the image quality corresponding to the low-illumination enhanced image comprises three indexes of contrast, saturation and exposure.
6. The method of claim 5, wherein the method further comprises:
and determining the number of convolution kernels corresponding to the low-illumination enhancement algorithm according to the number of indexes corresponding to the weight graph.
7. The method according to claim 1, wherein the method further comprises:
and transmitting the target imaging of the current object into a scanning control module, and transmitting the synchronous signal and the data signal to a display receiving end in a low-voltage differential signal mode.
8. A low-illumination enhancement device, characterized in that the device comprises:
the acquisition module acquires a current object image and performs preprocessing on the current object image to obtain an original image of the current object;
the first determining module is used for respectively processing the reflection component and the illumination component of the original image of the current object based on an image enhancement algorithm and determining an observation image of the current object after low illumination enhancement;
and the second determining module is used for fusing the current object image and the observation image with enhanced low illumination to determine target imaging of the current object.
9. An electronic device comprising a memory, a processor and a program stored on the memory and capable of running on the processor, the processor implementing the method of any one of claims 1 to 7 when executing the program.
10. A computer-readable storage medium, characterized in that a computer program is stored in the readable storage medium, and the computer program, when executed, implements the method of any one of claims 1-7.
CN202310427263.XA 2023-04-20 2023-04-20 Low-illumination enhancement method, device, electronic equipment and readable storage medium Pending CN116468636A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310427263.XA CN116468636A (en) 2023-04-20 2023-04-20 Low-illumination enhancement method, device, electronic equipment and readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310427263.XA CN116468636A (en) 2023-04-20 2023-04-20 Low-illumination enhancement method, device, electronic equipment and readable storage medium

Publications (1)

Publication Number Publication Date
CN116468636A true CN116468636A (en) 2023-07-21

Family

ID=87176673

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310427263.XA Pending CN116468636A (en) 2023-04-20 2023-04-20 Low-illumination enhancement method, device, electronic equipment and readable storage medium

Country Status (1)

Country Link
CN (1) CN116468636A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116721039A (en) * 2023-08-08 2023-09-08 中科海拓(无锡)科技有限公司 Image preprocessing method applied to automatic optical defect detection
CN116721039B (en) * 2023-08-08 2023-11-03 中科海拓(无锡)科技有限公司 Image preprocessing method applied to automatic optical defect detection

Similar Documents

Publication Publication Date Title
Galdran Image dehazing by artificial multiple-exposure image fusion
Liang et al. Single underwater image enhancement by attenuation map guided color correction and detail preserved dehazing
US8059911B2 (en) Depth-based image enhancement
CN105850114B (en) The method of inverse tone mapping (ITM) for image
Rao et al. A Survey of Video Enhancement Techniques.
WO2016206087A1 (en) Low-illumination image processing method and device
WO2017048927A1 (en) Cameras and depth estimation of images acquired in a distorting medium
CN109325918B (en) Image processing method and device and computer storage medium
Vazquez-Corral et al. A fast image dehazing method that does not introduce color artifacts
WO2020099893A1 (en) Image enhancement system and method
Singh et al. Weighted least squares based detail enhanced exposure fusion
Wang et al. Single Underwater Image Enhancement Based on $ L_ {P} $-Norm Decomposition
CN116468636A (en) Low-illumination enhancement method, device, electronic equipment and readable storage medium
CN111476732A (en) Image fusion and denoising method and system
CN112927160B (en) Single low-light image enhancement method based on depth Retinex
Zhao et al. Multi-scene image enhancement based on multi-channel illumination estimation
JP5286215B2 (en) Outline extracting apparatus, outline extracting method, and outline extracting program
CN110136085B (en) Image noise reduction method and device
JP2012028937A (en) Video signal correction apparatus and video signal correction program
WO2023215371A1 (en) System and method for perceptually optimized image denoising and restoration
WO2021093718A1 (en) Video processing method, video repair method, apparatus and device
CN116263942A (en) Method for adjusting image contrast, storage medium and computer program product
CN114862707A (en) Multi-scale feature recovery image enhancement method and device and storage medium
JP3807266B2 (en) Image processing device
Liu et al. An adaptive tone mapping algorithm based on gaussian filter

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination