CN113744169A - Image enhancement method and device, electronic equipment and storage medium - Google Patents


Publication number
CN113744169A
CN113744169A
Authority
CN
China
Prior art keywords
image
sample
illumination
dim light
enhanced
Prior art date
Legal status
Pending
Application number
CN202111042929.7A
Other languages
Chinese (zh)
Inventor
贾若然
傅云翔
陈向阳
王光新
杨文康
杨昌东
李亚玲
曹玲玲
Current Assignee
Iflytek Information Technology Co Ltd
Original Assignee
Iflytek Information Technology Co Ltd
Application filed by Iflytek Information Technology Co Ltd
Priority to CN202111042929.7A
Publication of CN113744169A

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 5/00 Image enhancement or restoration
    • G06T 5/40 Image enhancement or restoration by the use of histogram techniques
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10141 Special mode during image acquisition
    • G06T 2207/10152 Varying illumination
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20081 Training; Learning

Abstract

The invention provides an image enhancement method and apparatus, an electronic device, and a storage medium. The method comprises: determining a dim light image to be enhanced; and determining an enhanced image of the dim light image based on a dim light enhancement model, where the dim light enhancement model is trained under a spatial consistency constraint between a sample image and its sample enhanced image and/or an image quality constraint on the sample enhanced image. The sample enhanced image is the enhanced image of the sample image produced by the dim light enhancement model during training. Because training does not require collecting enhanced images paired with the sample images, the method does not rely on paired data, which greatly reduces the difficulty of collecting training samples. Training under the spatial consistency constraint and/or the image quality constraint enables the trained dim light enhancement model to suppress unnatural effects that dim light enhancement may introduce, ensures the image quality of the output enhanced image, and improves the feasibility and reliability of dim light enhancement.

Description

Image enhancement method and device, electronic equipment and storage medium
Technical Field
The present invention relates to the field of image processing technologies, and in particular, to an image enhancement method and apparatus, an electronic device, and a storage medium.
Background
Image information is an important information carrier that facilitates information exchange. In real life, owing to limitations of natural and technical conditions, massive image data contains a large number of dim light images. Such images suffer from low brightness, high noise, and distortion, which seriously affect downstream computer vision algorithms such as object detection and instance segmentation.
Current dim light image enhancement methods include filter-based, decomposition-based, and deep learning-based methods. Although these methods can improve image quality, they also introduce problems such as local over-enhancement and color distortion, and they depend on paired training data that are difficult to collect in real scenes.
Disclosure of Invention
The invention provides an image enhancement method and apparatus, an electronic device, and a storage medium, to overcome the defects of prior-art dim light enhancement methods, which depend on paired training data and introduce various problems while enhancing image quality.
The invention provides an image enhancement method, which comprises the following steps:
determining a dim light image to be enhanced;
determining an enhanced image of the dim light image based on a dim light enhancement model, wherein the dim light enhancement model is trained based on spatial consistency between a sample image and a sample enhanced image of the sample image, and/or an image quality constraint on the sample enhanced image;
the sample enhanced image is an enhanced image of the sample image determined based on a dim light enhancement model in training.
According to an image enhancement method provided by the present invention, the determining an enhanced image of the dim light image based on a dim light enhancement model includes:
performing illumination transformation on the illumination feature map of the dim light image based on the dim light enhancement model, and combining the transformed illumination enhancement map with the dim light image to obtain the enhanced image of the dim light image;
wherein the dim light enhancement model is trained based on an illumination smoothness constraint on the illumination feature map of the sample image, together with the spatial consistency between the sample image and the sample enhanced image of the sample image and/or the image quality constraint on the sample enhanced image.
According to an image enhancement method provided by the invention, the spatial consistency between the sample image and the sample enhanced image of the sample image is determined based on the gradient value between adjacent areas in the sample image and the gradient value between adjacent areas in the sample enhanced image.
According to the image enhancement method provided by the invention, the image quality constraint of the sample enhanced image comprises at least one of an image exposure constraint, a color constancy constraint and a smoothing constraint;
the image exposure constraint is used for constraining the difference between the exposure intensity of each region in the sample enhanced image and a preset exposure intensity;
the color constancy constraint is used for constraining intensity deviation among various color channels of the sample enhanced image;
and the smooth constraint is used for constraining gradient values between adjacent pixel points in the sample enhanced image.
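These three constraints can be made concrete with small reference implementations. The sketch below is a minimal numpy rendering under stated assumptions: the 16 x 16 region size, the 0.6 well-exposed level, and all function names are illustrative choices, not values fixed by the text.

```python
import numpy as np

def exposure_loss(img, patch=16, well_exposed=0.6):
    """Image exposure constraint: mean squared gap between each region's
    average intensity and a preset exposure level (0.6 assumed here)."""
    gray = img.mean(axis=2)                       # (H, W, 3) -> (H, W)
    h, w = gray.shape
    regions = gray[:h - h % patch, :w - w % patch].reshape(
        h // patch, patch, w // patch, patch).mean(axis=(1, 3))
    return float(((regions - well_exposed) ** 2).mean())

def color_constancy_loss(img):
    """Color constancy constraint: penalise intensity deviation between
    the mean values of the R, G, B channels."""
    r, g, b = img[..., 0].mean(), img[..., 1].mean(), img[..., 2].mean()
    return float((r - g) ** 2 + (g - b) ** 2 + (b - r) ** 2)

def smoothness_loss(img):
    """Smoothing constraint: penalise gradients between adjacent pixels
    of the enhanced image."""
    dx, dy = np.diff(img, axis=1), np.diff(img, axis=0)
    return float((dx ** 2).mean() + (dy ** 2).mean())
```

A perfectly flat image at the assumed exposure level drives all three terms to zero, which is the behaviour the constraints reward.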
According to the image enhancement method provided by the invention, the illumination characteristic diagram of the dim light image is determined based on the following steps:
based on the full-resolution branch in the dim light enhancement model, performing full-resolution illumination feature extraction on the dim light image to obtain a full-resolution illumination map;
based on the low-resolution branch in the dim light enhancement model, down-sampling the image features of the dim light image, and based on the low-resolution features obtained by the down-sampling, extracting the low-resolution illumination features to obtain a low-resolution illumination map;
and performing feature fusion on the full-resolution illumination map and the low-resolution illumination map based on the fusion branch in the dim light enhancement model to obtain the illumination feature map.
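The data flow of the three branches can be sketched with plain numpy stand-ins: average pooling for the down-sampling, nearest-neighbour up-sampling, and a fixed average for the fusion. The real model would learn all of these operations, so every function here is an illustrative assumption about the data flow only.

```python
import numpy as np

def downsample(x, k=2):
    """k x k average pooling: a stand-in for the low-resolution branch's
    down-sampling of the image features."""
    h, w, c = x.shape
    return x[:h - h % k, :w - w % k].reshape(
        h // k, k, w // k, k, c).mean(axis=(1, 3))

def upsample(x, k=2):
    """Nearest-neighbour up-sampling back to full resolution."""
    return x.repeat(k, axis=0).repeat(k, axis=1)

def fuse(full_res_map, low_res_map, k=2):
    """Toy fusion branch: up-sample the low-resolution illumination map
    and average it with the full-resolution one."""
    return 0.5 * (full_res_map + upsample(low_res_map, k))
```

The point of the sketch is only the shapes: the low-resolution branch works on a smaller tensor, and the fusion branch brings its output back to the full-resolution grid before combining.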
According to the image enhancement method provided by the invention, the low-resolution illumination feature extraction is carried out based on the low-resolution features obtained by down-sampling to obtain the low-resolution illumination map, and the method comprises the following steps:
and respectively carrying out global feature extraction and local feature extraction on the low-resolution features based on the low-resolution branches to obtain the low-resolution illumination map fusing the global features and the local features.
According to an image enhancement method provided by the present invention, the performing illumination transformation on the illumination feature map of the dim light image includes:
performing illumination transformation on the illumination feature map based on an illumination mapping function in the dim light enhancement model, wherein the illumination mapping function is a monotonic function whose independent variable is the illumination feature map.
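As a toy illustration of such a monotonic mapping, a Zero-DCE-style quadratic curve can serve. This particular function is an assumption for illustration only; the text requires monotonicity in the illumination feature map but does not specify the function.

```python
import numpy as np

def illumination_mapping(illum, alpha=0.8):
    """One possible monotonic illumination mapping:
    x -> x + alpha * x * (1 - x).
    For alpha in [-1, 1] this is strictly increasing on [0, 1], keeps
    the endpoints 0 and 1 fixed, and brightens mid-range values when
    alpha > 0, which is the desired behaviour for dim light input."""
    return illum + alpha * illum * (1.0 - illum)
```

Monotonicity matters because it preserves the ordering of illumination values, so the transformation brightens without inverting local contrast.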
The present invention also provides an image enhancement apparatus comprising:
an image determining unit for determining a dim image to be enhanced;
the dim light enhancement unit is used for determining an enhanced image of the dim light image based on a dim light enhancement model, wherein the dim light enhancement model is obtained based on the spatial consistency between a sample image and a sample enhanced image of the sample image and/or the image quality constraint training of the sample enhanced image; the sample enhanced image is an enhanced image of the sample image determined based on a dim light enhancement model in training.
The present invention also provides an electronic device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, wherein the processor implements the steps of the image enhancement method as described in any of the above when executing the program.
The invention also provides a non-transitory computer-readable storage medium having stored thereon a computer program which, when executed by a processor, performs the steps of the image enhancement method as described in any of the above.
According to the image enhancement method and apparatus, the electronic device, and the storage medium provided by the invention, the enhanced image of the dim light image is determined by the dim light enhancement model, where the dim light enhancement model can be obtained through unsupervised training based on the spatial consistency between the sample image and its sample enhanced image and/or the image quality constraint on the sample enhanced image. Enhanced images paired with the sample images need not be collected in advance during training, so paired data are not relied on, which greatly reduces the difficulty of collecting training samples. Training under the spatial consistency constraint and/or the image quality constraint enables the trained dim light enhancement model to suppress unnatural effects that dim light enhancement may introduce, ensures the image quality of the output enhanced image, and improves the feasibility and reliability of dim light enhancement.
Drawings
To more clearly illustrate the technical solutions of the present invention or the prior art, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present invention, and those of ordinary skill in the art can obtain other drawings from them without creative effort.
FIG. 1 is a schematic flow chart of an image enhancement method provided by the present invention;
FIG. 2 is a schematic diagram of a process for determining an illumination profile provided by the present invention;
FIG. 3 is an overall block diagram of the image enhancement method provided by the present invention;
FIG. 4 is a schematic structural diagram of an image enhancement apparatus provided by the present invention;
fig. 5 is a schematic structural diagram of an electronic device provided in the present invention.
Detailed Description
To make the objects, technical solutions, and advantages of the present invention clearer, the technical solutions of the present invention are described below clearly and completely with reference to the accompanying drawings. Obviously, the described embodiments are some, but not all, of the embodiments of the present invention. All other embodiments obtained by those of ordinary skill in the art based on the embodiments herein without creative effort shall fall within the protection scope of the present invention.
Image information is an important information carrier that facilitates information exchange. However, in real life, massive image data contains a large number of dim light images owing to limitations of natural and technical conditions. Such images suffer from low brightness, poor visibility, high noise, and distortion; they can hardly meet people's requirements on image quality, and they seriously affect downstream computer vision algorithms such as object detection and instance segmentation.
At present, the main methods for enhancing dim light images are histogram equalization and the Retinex algorithm. Histogram equalization is widely applied because the algorithm is simple and fast, but the processed image loses gray levels, and the method handles strongly polarized intensity distributions poorly. The Retinex algorithm is a color constancy algorithm: it simulates the nonlinear characteristics of the human visual system, estimates the illumination component of an image with a Gaussian filter, and subtracts the illumination component from the original image signal in the logarithmic domain to obtain a reflection component that reflects the real image information, thereby improving the illumination condition of the image and sharpening image details. However, because the Retinex algorithm rests on the assumption that illumination changes gradually, halo artifacts easily appear at abrupt gray-level changes, and estimating the illumination component with a Gaussian filter tends to blur image edges.
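The gray-level loss of histogram equalization mentioned above is easy to observe in code. Below is a minimal numpy implementation of the classic algorithm for 8-bit grayscale input (a standard textbook formulation, given here only to ground the discussion):

```python
import numpy as np

def histogram_equalize(gray):
    """Classic histogram equalization of an 8-bit grayscale image.
    Builds the cumulative histogram and maps it to the full [0, 255]
    range; note that the mapping can only merge gray levels, never
    create new ones, which is the gray-level loss the text describes."""
    hist = np.bincount(gray.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0][0]                      # first non-empty bin
    lut = np.clip(np.round(
        (cdf - cdf_min) / (gray.size - cdf_min) * 255), 0, 255
    ).astype(np.uint8)
    return lut[gray]
```

Applying it to a dark, low-contrast image stretches the intensities to the full range, but the number of distinct gray levels can only stay the same or shrink.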
To solve these problems, many dim light image enhancement techniques have been proposed, such as filtering-based, decomposition-based, and deep learning-based methods. Although these methods can improve image quality, they introduce various unnatural effects while doing so, such as local over-enhancement, color distortion, and severe noise, and they rely on paired data that are very difficult to collect in most real scenes. Therefore, to make dim light images widely usable in computer vision tasks, it is necessary to reduce the dependence on training data when enhancing dim light images, and to perform operations such as noise filtering and color restoration on the image after brightness enhancement.
In view of the above situation, the present invention provides an image enhancement method, which is intended to implement unsupervised training, and fig. 1 is a schematic flow chart of the image enhancement method provided by the present invention, as shown in fig. 1, the method includes:
step 110, determining a dim light image to be enhanced;
specifically, before image enhancement is performed, it is necessary to first determine a dim image to be image-enhanced, i.e., a dim image to be enhanced. The dim light image to be enhanced can be obtained by image acquisition of a dim light area through image acquisition equipment, wherein the dim light area is an area with weak light, and the image acquisition equipment can be a camera or a camera and also can be intelligent equipment such as a smart phone, a tablet and a computer provided with the camera.
It should be noted that there may be one or more dim light images to be enhanced; when there are multiple dim light images to be enhanced, image enhancement needs to be performed on each of them.
Step 120, determining an enhanced image of the dim light image based on a dim light enhancement model, wherein the dim light enhancement model is trained based on spatial consistency between the sample image and the sample enhanced image of the sample image, and/or an image quality constraint on the sample enhanced image; the sample enhanced image is an enhanced image of the sample image determined based on the dim light enhancement model in training.
Specifically, after the dim light image to be enhanced is determined in step 110, it can be image-enhanced by the dim light enhancement model to obtain its enhanced image.
Considering that current image enhancement methods depend heavily on paired training data, which are difficult to obtain, the embodiment of the invention trains the dim light enhancement model in an unsupervised manner; that is, no paired training data are needed, and the dim light enhancement model can be trained from sample images alone.
Specifically, during training, a sample image can be input into the dim light enhancement model being trained to obtain an enhanced image of the sample image, i.e., the sample enhanced image. On this basis, the dim light enhancement model is trained with the spatial consistency between the sample image and its sample enhanced image as a constraint, or with the image quality constraint on the sample enhanced image as a constraint, or with both together as constraints.
It should be noted that spatial consistency here denotes the consistency of the spatial distributions of the sample image and the sample enhanced image. With spatial consistency as a training constraint of the dim light enhancement model, no enhanced image paired with the sample image needs to be collected in advance: the spatial distribution characteristics of the sample image itself can be applied directly, and the parameters of the dim light enhancement model are updated iteratively against the spatial distribution characteristics of the sample enhanced image it produces, so that the spatial distributions of the sample image and the sample enhanced image become consistent. The trained dim light enhancement model can thus suppress unnatural effects that dim light enhancement may introduce. Further, the spatial distribution characteristic of an image may be the gradient between each region and its adjacent regions, for example the gradient of gray values between each region and its neighbors in the grayscale version of the image, or the gradient of intensity values between each region and its neighbors in each channel of the image. The region division of the image may be preset, or adaptively adjusted according to the size, resolution, and so on of the image.
The image quality constraint constrains the image quality of the sample enhanced image produced by the dim light enhancement model. With the image quality of the sample enhanced image as a training constraint, and likewise without collecting any enhanced image paired with the sample image in advance, the image quality of the sample enhanced image can be applied directly and the parameters of the dim light enhancement model updated iteratively until the sample enhanced image reaches a preset image quality requirement. The trained dim light enhancement model can then brighten images while ensuring that the output enhanced image is free of quality problems such as gray-level loss and edge blurring. Further, the image quality here may cover one or more of the following indexes: exposure condition, color deviation, and illumination smoothness. The image quality constraint may specifically constrain the gap between the exposure condition of the sample enhanced image and a preset well-exposed level, the degree of color deviation of the sample enhanced image, or the degree of illumination smoothness of the sample enhanced image, among others.
Before step 120 is executed, a dim light enhancement model may be obtained by training in advance according to the above constraint conditions, and the specific steps may include:
firstly, collecting a large number of sample images, inputting the sample images into a trained dim light enhancement model, and carrying out dim light enhancement on the input sample images by the dim light enhancement model to obtain an enhanced image of the sample images output by the trained dim light enhancement model, namely a sample enhanced image; it should be noted that the sample image is not limited to the dark light image, and may also be a non-dark light image, and this is not specifically limited in the embodiment of the present invention.
And then, carrying out parameter iteration on the trained dim light enhancement model based on the spatial consistency between the sample image and the sample enhancement image of the sample image and/or based on the image quality constraint of the sample enhancement image of the sample image, thereby obtaining the trained dim light enhancement model.
When the trained dim light enhancement model is used to perform dim light enhancement on a dim light image, it improves the image quality of the dim light image while avoiding the various unnatural effects described above.
According to the image enhancement method provided by the invention, the enhanced image of the dim light image is determined by the dim light enhancement model, where the dim light enhancement model can be obtained through unsupervised training based on the spatial consistency between the sample image and its sample enhanced image and/or the image quality constraint on the sample enhanced image. Enhanced images paired with the sample images need not be collected in advance during training, so paired data are not relied on, which greatly reduces the difficulty of collecting training samples. Training under the spatial consistency constraint and/or the image quality constraint enables the trained dim light enhancement model to suppress unnatural effects that dim light enhancement may introduce, ensures the image quality of the output enhanced image, and improves the feasibility and reliability of dim light enhancement.
Based on the above embodiment, in step 120, determining the enhanced image of the dim light image based on the dim light enhancement model includes:
performing illumination transformation on the illumination feature map of the dim light image based on the dim light enhancement model, and combining the transformed illumination enhancement map with the dim light image to obtain the enhanced image of the dim light image;
wherein the dim light enhancement model is trained based on an illumination smoothness constraint on the illumination feature map of the sample image, together with the spatial consistency between the sample image and the sample enhanced image of the sample image and/or the image quality constraint on the sample enhanced image.
Specifically, in the process of performing dim light enhancement on the dim light image in step 120, the dim light enhancement model may convert the image enhancement problem of the dim light image into an illumination optimization problem. The dim light enhancement model determines the illumination feature map of the dim light image; on this basis, to improve the illumination information in that map, the model performs dim light enhancement on the illumination feature map and then combines the enhanced map with the dim light image to obtain the enhanced image of the dim light image.
Enhancing the illumination feature map of the dim light image specifically means performing an illumination transformation on it to raise the illumination intensity, yielding the illumination enhancement map. The illumination feature map subjected to the illumination transformation may cover the illumination feature maps of each color channel of the dim light image; correspondingly, the resulting illumination enhancement map also covers each color channel, and the enhanced per-channel illumination maps can be applied to the dim light image to obtain its enhanced image.
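As a minimal illustration of this combining step, under a Retinex-style decomposition I = R * L (an assumption made here for the sketch; the text only states that the transformed illumination enhancement map is combined with the dim light image), re-lighting amounts to scaling each pixel by the ratio of the enhanced to the original illumination:

```python
import numpy as np

def combine(dark, illum, illum_enhanced, eps=1e-6):
    """Combine the enhanced illumination map with the dim light image.
    Under I = R * L, the reflectance R = I / L is kept and re-lit with
    the enhanced illumination L': I' = I * L' / L, per colour channel.
    eps guards against division by zero in fully dark regions."""
    return np.clip(dark * illum_enhanced / (illum + eps), 0.0, 1.0)
```

Doubling the illumination map, for example, doubles the pixel intensities while leaving the reflectance (the scene content) untouched.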
In the process of dim light enhancement, the dim light enhancement model first needs to determine the illumination feature map of the dim light image. To ensure that the illumination feature map can be extracted accurately and reliably, a constraint on illumination feature map extraction is added when training the dim light enhancement model.
In a naturally lit scene, the illumination information of an image should vary smoothly; in other words, illumination is spatially correlated, the illumination of adjacent regions does not change greatly, and the illumination intensity should therefore change smoothly between adjacent pixels. Accordingly, the constraint added for illumination feature map extraction during training can be an illumination smoothness constraint. This constraint restricts how sharply the illumination intensity changes across the pixels of the illumination feature map. With illumination smoothness as a training constraint, the smoothness of the illumination feature map produced by the model can be applied directly and the model parameters updated iteratively, so that the illumination intensity of the extracted map changes as smoothly as possible, ensuring that the trained dim light enhancement model extracts illumination feature maps accurately and reliably.
In the embodiment of the invention, the image enhancement problem of the dim light image is converted into an illumination optimization problem, and the illumination feature map of the dim light image is enhanced by the dim light enhancement model. Compared with a traditional model that learns a regression mapping from the original dim light image to a clear image from training data, the dim light enhancement model here learns the illumination transformation between the illumination feature map of a dim light image and that of a clear image; it can better improve the illumination information of the dim light image and also enhance its detail information, thereby improving the overall image quality of the dim light image.
Based on the above embodiment, illumination smoothness may be expressed as the gradient of the illumination coefficient of each pixel in the horizontal direction and/or in the vertical direction.
Considering that the illumination feature map undergoing illumination transformation may cover each color channel of the dim light image, for example the R, G, and B channels, the illumination smoothness constraint can take the smoothness of each channel's illumination feature map into account, for instance by averaging over the channels. The illumination smoothness constraint can thus be expressed by the following formula:
$$\mathcal{L}_{smooth}=\frac{1}{N}\sum_{n=1}^{N}\sum_{c\in\xi}\left(\left|\nabla_{x}\mathcal{A}_{n}^{c}\right|+\left|\nabla_{y}\mathcal{A}_{n}^{c}\right|\right)^{2},\qquad \xi=\{R,G,B\}$$

where $\mathcal{L}_{smooth}$ is the loss function of the illumination smoothness constraint on the illumination feature map, $N$ is the total number of pixels in the illumination feature map, $c$ indexes the channels, $\mathcal{A}_{n}^{c}$ is the $n$th pixel on channel $c$, $\nabla_{x}$ is the gradient of the illumination coefficient in the horizontal direction, $\nabla_{y}$ is the gradient of the illumination coefficient in the vertical direction, and $\xi$ denotes the three channels R, G, and B.
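This illumination smoothness constraint transcribes directly into code. The sketch below uses forward differences for the gradients and repeats the last row and column so the gradient arrays keep the input shape; both are implementation choices, not prescribed by the text.

```python
import numpy as np

def illumination_smoothness_loss(A):
    """Illumination smoothness loss for a per-channel illumination map
    A of shape (H, W, C): the mean over all pixels and channels of
    (|grad_x| + |grad_y|)^2."""
    gx = np.abs(np.diff(A, axis=1, append=A[:, -1:, :]))  # horizontal gradient
    gy = np.abs(np.diff(A, axis=0, append=A[-1:, :, :]))  # vertical gradient
    return float(((gx + gy) ** 2).mean())
```

A spatially constant illumination map incurs zero loss, while any ramp or texture in the map is penalized, which pushes the extracted illumination toward smooth variation.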
Based on the above-described embodiment, the spatial consistency between the sample image and the sample enhanced image of the sample image is determined based on the gradient values between the adjacent regions in the sample image and the gradient values between the adjacent regions in the sample enhanced image.
Specifically, in the course of training the dim light enhancement model, it is necessary to determine the spatial consistency between the sample image and the sample enhancement image of the sample image, and accordingly perform update iteration on the parameters of the dim light enhancement model.
The spatial consistency between the sample image and its sample enhanced image can specifically be represented as the difference between the gradient values between adjacent regions in the sample image and those in the sample enhanced image. The gradient values between adjacent regions in the sample image characterize the spatial distribution of the sample image, and the gradient values between adjacent regions in the sample enhanced image characterize the spatial distribution of the sample enhanced image; the difference between the two can thus be regarded as the difference in their spatial distribution characteristics. The larger the difference, the weaker the spatial consistency; the smaller the difference, the better the spatial consistency.
Here, the division of regions in the sample image may be the same as or different from the division of regions in the sample enhanced image of the sample image. For any image, for example a sample image or a sample enhanced image, the image may be divided into a plurality of regions, and for each region the gradient value between its average intensity and the average intensities of its adjacent regions may be computed, thereby obtaining the gradient values between adjacent regions.
Based on the above embodiment, the spatial consistency loss promotes consistency between the sample image and the sample enhanced image in image space by keeping the gradient values of adjacent regions consistent. Specifically, two grayscale images are first obtained by averaging the three color channels of the sample image and of the sample enhanced image respectively; each grayscale image is then divided into 4 × 4 regions, the intensity difference between each central region i and its adjacent regions j is calculated, and these differences are averaged.
For example, the spatial consistency can be expressed by the following equation:

L_spa = (1/K) Σ_{i=1}^{K} Σ_{j∈Ω(i)} ( |Y_i − Y_j| − |I_i − I_j| )²

wherein L_spa represents the loss function of the spatial consistency constraint, K represents the number of local regions, Ω(i) represents the four regions adjacent to region i (up, down, left, right), Y_i represents the average intensity value of region i in the sample enhanced image, Y_j represents the average intensity value of the regions adjacent to region i in the sample enhanced image, I_i represents the average intensity value of region i in the sample image, and I_j represents the average intensity value of the regions adjacent to region i in the sample image.
Here, the size of the local region is preset, and may be set to 4 × 4, 6 × 6, or the like, for example.
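The region-based gradient comparison described above can be sketched as follows. This is an illustrative NumPy sketch under simplifying assumptions (non-overlapping regions are used, and each pair of adjacent regions is visited once per direction); the function names are hypothetical.

```python
import numpy as np

def region_means(img, size=4):
    """Convert an (H, W, 3) image to grayscale by averaging the colour
    channels, then average over non-overlapping size x size regions."""
    gray = img.mean(axis=2)
    h, w = gray.shape
    return gray[:h - h % size, :w - w % size].reshape(
        h // size, size, w // size, size).mean(axis=(1, 3))

def spatial_consistency_loss(sample, enhanced, size=4):
    """Penalise differences between the inter-region gradients of the
    sample image and those of its enhanced version."""
    I = region_means(sample, size)
    Y = region_means(enhanced, size)
    # Gradient magnitudes between horizontally / vertically adjacent regions.
    dIh, dIv = np.abs(np.diff(I, axis=1)), np.abs(np.diff(I, axis=0))
    dYh, dYv = np.abs(np.diff(Y, axis=1)), np.abs(np.diff(Y, axis=0))
    return np.mean((dYh - dIh) ** 2) + np.mean((dYv - dIv) ** 2)

rng = np.random.default_rng(0)
img = rng.random((16, 16, 3)) * 0.5
# A constant brightness offset preserves all inter-region gradients,
# so the loss stays numerically at zero; rescaling the image does not.
assert spatial_consistency_loss(img, img + 0.3) < 1e-9
assert spatial_consistency_loss(img, img * 0.2) > 0.0
```

The two assertions illustrate the intent of the constraint: enhancement may shift brightness, but it should not flatten or exaggerate the relative structure between neighbouring regions.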
Based on the above embodiment, the image quality constraint of the sample enhanced image includes at least one of an image exposure constraint, a color constancy constraint, and a smoothing constraint;
the image exposure constraint is used for constraining the difference between the exposure intensity of each region in the sample enhanced image and the preset exposure intensity;
the color constancy constraint is used for constraining the intensity deviation among the color channels of the sample enhanced image;
and the smooth constraint is used for constraining the gradient value between each adjacent pixel point in the sample enhanced image.
Specifically, in the process of training the dim light enhancement model, it is necessary to determine the image quality constraint of the sample enhanced image and to update the parameters of the dim light enhancement model accordingly. The image quality constraint may be any one, any two, or all three of the image exposure constraint, the color constancy constraint and the smoothing constraint, which is not specifically limited in the embodiment of the present invention.
The image exposure constraint is used for constraining the difference between the exposure intensity of each region in the sample enhanced image and the preset exposure intensity. The sample enhanced image here can be divided into a plurality of regions in advance, and the difference between the exposure intensity of each region and the preset exposure intensity can be the sum of the differences between each region and the preset exposure intensity, or the mean value of those differences, and so on. The larger the difference between the exposure intensity of each region and the preset exposure intensity, the worse the exposure condition of the sample enhanced image; the smaller the difference, the better the exposure condition. The image exposure constraint therefore enables the enhanced image output by the dim light enhancement model to have a good exposure condition. Here, the preset exposure intensity may be set in advance according to actual conditions and may take a default well-exposed value, for example a gray level E of 0.6 on a normalized [0, 1] intensity scale.
For example, the image exposure constraint may be obtained by measuring the difference between the exposure intensity of each local region and the well-exposed intensity: the sample enhanced image is converted into a grayscale map, the grayscale map is divided into 16 × 16 regions, and the average intensity value within each region is calculated.
The image exposure constraint can be expressed by the following formula:

L_exp = (1/M) Σ_{k=1}^{M} | Y_k − E |

wherein L_exp represents the image exposure constraint of the sample enhanced image, M represents the number of local regions, Y_k represents the average intensity value of region k in the sample enhanced image, and E represents the preset well-exposed intensity. The size of each local region is 16 × 16, and the 16 × 16 regions may or may not overlap, which is not specifically limited in the embodiment of the present invention.
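As an illustration, the exposure constraint can be sketched as below. The function name is hypothetical, and E = 0.6 is an assumed default with intensities normalized to [0, 1]; non-overlapping regions are used for simplicity.

```python
import numpy as np

def exposure_loss(enhanced, E=0.6, size=16):
    """Average |region intensity - E| over non-overlapping size x size regions.

    E is the preset well-exposed intensity (assumed 0.6 on a [0, 1] scale).
    """
    gray = enhanced.mean(axis=2)                 # grayscale by channel mean
    h, w = gray.shape
    regions = gray[:h - h % size, :w - w % size].reshape(
        h // size, size, w // size, size).mean(axis=(1, 3))
    return float(np.abs(regions - E).mean())

# An image whose regions already average to E incurs (numerically) zero loss;
# a dark image is penalised in proportion to its distance from E.
assert exposure_loss(np.full((32, 32, 3), 0.6)) < 1e-9
assert abs(exposure_loss(np.full((32, 32, 3), 0.1)) - 0.5) < 1e-6
```

Because the penalty is symmetric around E, the same term discourages both under-exposed and over-exposed regions in the enhanced output.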
The color constancy constraint is used for constraining the intensity deviation among the color channels of the sample enhanced image, correcting the potential deviation among the color channels and establishing the relation among them so as to avoid color distortion of the image. The greater the intensity deviation among the color channels, the more severe the color distortion of the sample enhanced image; conversely, the smaller the intensity deviation, the smaller the degree of color distortion. The color constancy constraint can therefore prevent the enhanced image output by the dim light enhancement model from suffering color distortion.
For example, the color constancy constraint can be expressed by the following equation:

L_col = Σ_{(p,q)∈ε} ( J_p − J_q )², ε = {(R,G), (R,B), (G,B)}

wherein L_col represents the color constancy constraint of the sample enhanced image, J_p represents the average intensity of channel p in the sample enhanced image, J_q represents the average intensity of channel q in the sample enhanced image, and (p, q) represents a pair of channels in the sample enhanced image. The formula follows the gray-world color constancy assumption: the global average of each of the R, G and B channels is close to the global average of the whole color image, so when the colors of the image are undistorted the pairwise differences between the channel averages are small. This conclusion is applied as the color constancy constraint in the training process of the dim light enhancement model.
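The gray-world style computation can be sketched as follows (an illustrative sketch; the function name is hypothetical):

```python
import numpy as np
from itertools import combinations

def color_constancy_loss(enhanced):
    """Sum of squared pairwise differences between the global channel means.

    Under the gray-world assumption the R, G and B means of an undistorted
    image are close, so this value stays small.
    """
    means = enhanced.reshape(-1, 3).mean(axis=0)   # global mean per channel
    return sum((means[p] - means[q]) ** 2
               for p, q in combinations(range(3), 2))

# A gray image has identical channel means and therefore zero loss;
# a strong color cast is penalised.
gray_img = np.full((8, 8, 3), 0.4)
cast_img = gray_img.copy()
cast_img[:, :, 0] += 0.3          # push the red channel up
assert color_constancy_loss(gray_img) == 0.0
assert color_constancy_loss(cast_img) > 0.0
```

Note that the loss only compares global channel means, so it discourages an overall color cast without dictating the color of any individual pixel.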
The smoothing constraint is used for constraining the gradient values between adjacent pixel points in the sample enhanced image, keeping the image colors of adjacent pixels in smooth transition, and keeping the average horizontal and vertical gradients of pixel values at adjacent positions in each channel at a small level. The larger the gradient values between adjacent pixel points, the more abrupt and unnatural the color transitions of the sample enhanced image; the smaller the gradient values, the smoother and more natural the color transitions. The smoothing constraint thus ensures that the color transitions of the enhanced image output by the dim light enhancement model are natural.
The smoothing constraint can be expressed by the following formula:

L_tv = (1/N) Σ_{n=1}^{N} Σ_{c∈ξ} ( |∇x Y_n^c| + |∇y Y_n^c| )²

wherein L_tv represents the smoothing constraint of the sample enhanced image, N represents the total number of pixel points in the sample enhanced image, c represents the channel index, Y_n^c represents the value of the n-th pixel point on channel c, ∇x Y_n^c represents the gradient of the pixel value in the horizontal direction, ∇y Y_n^c represents the gradient of the pixel value in the vertical direction, and ξ represents the set of the three channels R, G and B.
Based on the above embodiment, fig. 2 is a schematic diagram of a determination process of the illumination characteristic diagram provided by the present invention, and as shown in fig. 2, the illumination characteristic diagram of the dim image is determined based on the following steps:
step 210, performing full-resolution illumination feature extraction on the dim light image based on the full-resolution branch in the dim light enhancement model to obtain a full-resolution illumination pattern;
step 220, based on the low resolution branch in the dim light enhancement model, down-sampling the image features of the dim light image, and based on the low resolution features obtained by the down-sampling, performing low resolution illumination feature extraction to obtain a low resolution illumination map;
and step 230, performing feature fusion on the full-resolution illumination map and the low-resolution illumination map based on the fusion branch in the dim light enhancement model to obtain an illumination feature map.
Specifically, in step 120, before performing illumination conversion on the illumination feature map of the dim image according to the dim enhancement model, the illumination feature map of the dim image needs to be determined, and the determining process of the illumination feature map of the dim image includes the following steps:
step 210, full-resolution illumination feature extraction may first be performed on the dim light image by the full-resolution branch in the dim light enhancement model. Specifically, the dim light image may be input to the full-resolution branch, which performs a series of convolution operations on the input dim light image, extracts its full-resolution illumination features, and outputs the full-resolution illumination map.
In step 220, the image features of the dim light image can be down-sampled according to the low-resolution branch in the dim light enhancement model to obtain the low-resolution features of the dim light image; and low-resolution illumination feature extraction is carried out on the low-resolution features obtained by down-sampling to obtain a low-resolution illumination image of the dim light image.
The determination process of the low-resolution illumination map may specifically be: firstly, inputting a dim light image to a low-resolution branch in a dim light enhancement model, and performing image feature extraction on the dim light image by an encoder in the low-resolution branch to obtain the image feature of the dim light image; then, down-sampling the image characteristics of the dim light image to obtain the low-resolution characteristics of the dim light image; after that, the low resolution features obtained by down-sampling are subjected to low resolution illumination feature extraction by the feature extractor in the low resolution branch, so as to obtain a low resolution illumination map of the dim image. The feature extractor herein includes a local feature extractor and a global feature extractor.
It should be noted that the full-resolution illumination feature extraction and the low-resolution illumination feature extraction on the dim light image may be performed in either order.
After the full-resolution illumination pattern and the low-resolution illumination pattern are obtained through steps 210 and 220, respectively, in step 230, feature fusion may be performed on the full-resolution illumination pattern and the low-resolution illumination pattern through a fusion branch in the dim light enhancement model, specifically, the full-resolution illumination pattern and the low-resolution illumination pattern may be input to the fusion branch in the dim light enhancement model, and the fusion branch in the dim light enhancement model performs feature fusion on the input full-resolution illumination pattern and the input low-resolution illumination pattern.
According to the method provided by the embodiment of the invention, the low-resolution illumination map and the full-resolution illumination map of the dim light image are extracted by the low-resolution branch and the full-resolution branch in the dim light enhancement model respectively, and feature fusion is performed on them by the fusion branch to obtain the illumination feature map. The illumination feature map thus obtained integrates illumination features at multiple scales, which benefits the subsequent illumination transformation and dim light enhancement performed on it to obtain the enhanced image.
Based on the above embodiment, in step 230, based on the fusion branch in the dim light enhancement model, the feature fusion is performed on the full-resolution illumination map and the low-resolution illumination map to obtain an illumination feature map, which further includes:
the low resolution illumination map is upsampled.
At present, upsampling is usually performed with a Gaussian filter or a bilateral filter. A Gaussian filter applies the same Gaussian kernel at every position of the image; the drawback of this processing is that, while it smooths the image effectively, it also blurs edge information, so the edge regions of the image are not well preserved, and the enhanced image obtained after dim light enhancement of an image processed in this way appears blurred.
In view of this situation, in the embodiment of the present invention, a Bilateral grid upsampling (Bilateral grid upsampling) is used when upsampling the low-resolution illumination map, and a core idea of the Bilateral grid upsampling is as follows: any filter can be regarded as a linear transformation in a local small area, and a high-resolution image can be obtained by slicing from a low-resolution image by using a bilateral grid.
It should be noted that, when bilateral grid upsampling is applied to the low-resolution illumination map in the embodiment of the present invention, it is the transform coefficients, that is, the illumination information in the low-resolution illumination map, that are upsampled, rather than the pixels of the entire low-resolution illumination map. Upsampling the transform coefficients allows the detail information in the low-resolution illumination map to be retained to the greatest extent, so that the loss of image quality is reduced.
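For intuition, the spatial part of such coefficient upsampling can be sketched with plain bilinear interpolation over a low-resolution coefficient map. This is a deliberate simplification, not the bilateral grid itself: a real bilateral grid additionally indexes the grid along an intensity axis derived from a full-resolution guidance image, slicing coefficients per pixel rather than interpolating purely spatially. The function name is hypothetical.

```python
import numpy as np

def upsample_coefficients(coeff, H, W):
    """Bilinear spatial interpolation of a low-resolution coefficient map.

    coeff: (h, w) transform coefficients from the low-resolution branch.
    Returns an (H, W) coefficient map for use at full resolution.
    """
    h, w = coeff.shape
    ys = np.linspace(0.0, h - 1.0, H)
    xs = np.linspace(0.0, w - 1.0, W)
    y0 = np.clip(ys.astype(int), 0, h - 2)   # top-left cell corner per row
    x0 = np.clip(xs.astype(int), 0, w - 2)   # top-left cell corner per column
    fy = (ys - y0)[:, None]                  # fractional offsets inside a cell
    fx = (xs - x0)[None, :]
    c00 = coeff[y0][:, x0]
    c01 = coeff[y0][:, x0 + 1]
    c10 = coeff[y0 + 1][:, x0]
    c11 = coeff[y0 + 1][:, x0 + 1]
    return (c00 * (1 - fy) * (1 - fx) + c01 * (1 - fy) * fx
            + c10 * fy * (1 - fx) + c11 * fy * fx)

# A constant coefficient map upsamples to the same constant,
# and corner values are reproduced exactly.
low = np.arange(16.0).reshape(4, 4)
up = upsample_coefficients(low, 16, 16)
assert np.allclose(upsample_coefficients(np.full((4, 4), 0.3), 16, 16), 0.3)
assert up[0, 0] == low[0, 0] and up[-1, -1] == low[-1, -1]
```

Interpolating coefficients and then applying them at full resolution, instead of interpolating pixels, is what lets the full-resolution input supply the fine detail while the low-resolution branch supplies only the illumination estimate.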
Based on the above embodiment, in step 220, performing low-resolution illumination feature extraction based on the low-resolution features obtained by down-sampling to obtain a low-resolution illumination map, including:
and respectively carrying out global feature extraction and local feature extraction on the low-resolution features based on the low-resolution branches to obtain a low-resolution illumination map fusing the global features and the local features.
Considering that a single dim light image may contain some areas with strong illumination and others with weak illumination, in the embodiment of the present invention the low-resolution branch is divided into two parts, a local feature extraction part and a global feature extraction part. The local feature extraction part performs low-resolution illumination feature extraction on the low-resolution features with a local feature extractor, and the global feature extraction part does so with a global feature extractor, yielding a local low-resolution illumination map output by the local feature extractor and a global low-resolution illumination map output by the global feature extractor.
The local feature extractor is used for extracting local features of the low-resolution features, the global feature extractor is used for extracting global features of the low-resolution features, namely the global feature extractor focuses on the illumination condition of the whole dim image, and the local feature extractor focuses on the illumination condition of each local part in the dim image.
When the local feature extractor extracts local features from the low-resolution features, it first performs patch-wise convolutions and then merges the convolution results of the patches, thereby analyzing the local illumination information of the low-resolution features of the dim light image; the result is finally combined with the global features extracted by the global feature extractor to obtain the low-resolution illumination map.
Then, the local low-resolution illumination map and the global low-resolution illumination map are fused; the fusion may simply be feature concatenation of the two maps, yielding the low-resolution illumination map that fuses the local and global features.
Based on the above embodiment, in step 120, performing luminance transformation on the illumination characteristic map of the dim image includes:
and performing illumination transformation on the illumination characteristic diagram based on an illumination mapping function in the dim light enhancement model, wherein the illumination mapping function is a monotonic function with an independent variable as the illumination characteristic diagram.
Specifically, before illumination transformation is performed on the illumination feature map of the dim light image according to the dim light enhancement model, an illumination mapping function for the illumination transformation needs to be determined. The illumination mapping function is a curve that directly maps an illumination feature map carrying lower illumination information to an illumination enhancement map carrying higher illumination information; its parameters adapt to, and depend only on, the illumination feature map.
It should be noted that, when determining the illumination mapping function, the following points need to be considered:
1. in order to avoid information loss caused by over-enhancement, the pixel values of the enhanced image obtained by illumination transformation through the illumination mapping function should be normalized to [0, 1];
2. in order to preserve the difference (contrast) between adjacent pixels, the illumination mapping function must be monotonic;
3. the illumination mapping function should be as simple as possible so that it remains differentiable during gradient back-propagation;
4. when the dim light enhancement is carried out through the illumination mapping function, a larger gain is required for a smaller numerical value; conversely, for larger values, there should be less gain to avoid overexposure of the resulting enhanced image.
In view of the above requirements, the illumination mapping function proposed in the embodiment of the present invention can be represented by the following formula:
H(x)=x+x*(1-x)*a
wherein x represents the value of the corresponding pixel point in the illumination feature map, and a represents the transformation coefficient. It should be noted that the transformation coefficient is a learnable matrix with the same width and height as the illumination feature map, and the coefficient may differ from pixel to pixel. The transformation coefficients may be trained in advance and stored in the dim light enhancement model.
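The four requirements listed earlier can be checked numerically for this curve. The sketch below uses a single scalar coefficient a for illustration, whereas in the model a is a per-pixel learnable matrix; the function name is hypothetical.

```python
import numpy as np

def illumination_mapping(x, a):
    """H(x) = x + a * x * (1 - x): a simple quadratic curve, monotonic for |a| <= 1.

    x: illumination feature values in [0, 1]; a: transform coefficient(s),
    a scalar here, a per-pixel matrix in the model.
    """
    return x + a * x * (1 - x)

x = np.linspace(0.0, 1.0, 101)
y = illumination_mapping(x, 0.8)

assert np.all(np.diff(y) > 0)             # monotonic: contrast ordering kept
assert y.min() >= 0.0 and y.max() <= 1.0  # output stays normalised to [0, 1]
assert np.all(y[1:-1] > x[1:-1])          # positive a brightens mid-range values
# Relative gain 1 + a*(1 - x) shrinks as x grows: dark pixels gain more.
gain = y[1:] / x[1:]
assert np.all(np.diff(gain) < 0)
```

For |a| ≤ 1 the derivative H′(x) = 1 + a(1 − 2x) stays non-negative on [0, 1], which is what the monotonicity assertion verifies sample-wise, and the relative gain check reflects requirement 4: smaller values receive larger gain, so overexposure of already-bright pixels is avoided.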
After the illumination mapping function is determined, illumination transformation is performed on the illumination feature map according to the illumination mapping function in the dim light enhancement model to obtain the enhanced illumination information of the three channels, and the enhanced illumination information of the three channels is applied to the dim light image to obtain the enhanced image of the dim light image.
Based on the above embodiments, fig. 3 is a general framework diagram of the image enhancement method provided by the present invention, and as shown in fig. 3, the dim light enhancement model includes a full resolution branch and a low resolution branch. The low-resolution branch is used for learning the overall information of the dim light image, and the use of the low-resolution branch is beneficial to increasing the receptive field of the dim light enhancement model and improving the speed of the model for carrying out dim light enhancement.
Considering that a single dim light image may contain some areas with strong illumination and others with weak illumination, in the embodiment of the present invention the low-resolution branch is divided into a local feature extraction part and a global feature extraction part: the local feature extraction part extracts local features from the low-resolution features with a local feature extractor, and the global feature extraction part extracts global features with a global feature extractor. The global feature extractor focuses on the illumination condition of the whole dim light image, while the local feature extractor focuses on the illumination condition of each local area in the dim light image.

When the local feature extractor extracts local features from the low-resolution features, it first performs patch-wise convolutions and then merges the convolution results of the patches, thereby analyzing the local illumination information of the low-resolution features of the dim light image; the result is finally combined with the global features extracted by the global feature extractor to obtain the low-resolution illumination map.
Referring to fig. 3, a dim light image is input to a low resolution branch in the dim light enhancement model, and an encoder in the low resolution branch performs image feature extraction on the dim light image to obtain image features of the dim light image; then, down-sampling the image characteristics of the dim light image to obtain the low-resolution characteristics of the dim light image; and then, local feature extraction and global feature extraction are respectively carried out on the low-resolution features through a local feature extractor and a global feature extractor in the low-resolution branch, and the results output after the local feature extractor and the global feature extractor are subjected to feature extraction are fused to obtain a low-resolution illumination map fused with the global features and the local features.
And inputting the dim light image into a full-resolution branch in the dim light enhancement model, performing a series of convolution operations on the dim light image by the full-resolution branch, extracting the full-resolution illumination characteristics of the dim light image, and obtaining a full-resolution illumination pattern output by the full-resolution branch.
Then, the full-resolution illumination map and the low-resolution illumination map are input to a fusion branch in the dim light enhancement model, and the fusion branch in the dim light enhancement model performs feature fusion on the input full-resolution illumination map and the input low-resolution illumination map.
After the illumination feature map is determined, illumination transformation can be performed on it according to the illumination mapping function in the dim light enhancement model to obtain the illumination enhancement map of the dim light image; the illumination enhancement map resulting from the illumination transformation may then be combined with the dim light image to obtain the enhanced image of the dim light image.
The following describes the image enhancement device provided by the present invention, and the image enhancement device described below and the image enhancement method described above can be referred to correspondingly.
Fig. 4 is a schematic structural diagram of an image enhancement apparatus provided by the present invention, as shown in fig. 4, the apparatus includes:
an image determining unit 410 for determining a dim image to be enhanced;
a dim light enhancement unit 420, configured to determine an enhanced image of the dim light image based on a dim light enhancement model, where the dim light enhancement model is obtained based on spatial consistency between a sample image and a sample enhanced image of the sample image, and/or based on image quality constraint training of the sample enhanced image; the sample enhanced image is an enhanced image of the sample image determined based on a dim light enhancement model in training.
The image enhancement device determines the enhanced image of the dim light image according to the dim light enhancement model. The dim light enhancement model can be obtained by unsupervised training based on the spatial consistency between the sample image and the sample enhanced image of the sample image and/or the image quality constraint of the sample enhanced image; enhanced images paired with the sample images do not need to be collected in advance, no paired data is relied on, and the difficulty of collecting training samples is greatly reduced. Model training based on the spatial consistency constraint and/or the image quality constraint enables the trained dim light enhancement model to suppress unnatural effects that may be introduced during dim light enhancement, ensures the image quality of the output enhanced image, and improves the feasibility and reliability of dim light enhancement.
Based on the above embodiment, the dim light enhancement unit 420 is configured to:
based on the dim light enhancement model, carrying out illumination conversion on the illumination characteristic graph of the dim light image, and combining the converted illumination enhancement graph with the dim light image to obtain an enhanced image of the dim light image;
the dim light enhancement model is obtained by training based on illumination smooth constraint of an illumination characteristic diagram of the sample image, spatial consistency between the sample image and a sample enhanced image of the sample image and/or image quality constraint of the sample enhanced image.
Based on the above embodiment, the spatial consistency between the sample image and the sample enhanced image of the sample image is determined based on the gradient values between adjacent regions in the sample image and the gradient values between adjacent regions in the sample enhanced image.
Based on the above embodiment, the image quality constraint of the sample enhanced image comprises at least one of an image exposure constraint, a color constancy constraint, and a smoothing constraint;
the image exposure constraint is used for constraining the difference between the exposure intensity of each region in the sample enhanced image and a preset exposure intensity;
the color constancy constraint is used for constraining intensity deviation among various color channels of the sample enhanced image;
and the smooth constraint is used for constraining gradient values between adjacent pixel points in the sample enhanced image.
Based on the above embodiment, the apparatus further includes a feature map determining unit, configured to:
based on the full-resolution branch in the dim light enhancement model, performing full-resolution illumination feature extraction on the dim light image to obtain a full-resolution illumination map;
based on the low-resolution branch in the dim light enhancement model, down-sampling the image features of the dim light image, and based on the low-resolution features obtained by the down-sampling, extracting the low-resolution illumination features to obtain a low-resolution illumination map;
and performing feature fusion on the full-resolution illumination map and the low-resolution illumination map based on the fusion branch in the dim light enhancement model to obtain the illumination feature map.
Based on the above embodiment, the feature map determination unit is configured to:
and respectively carrying out global feature extraction and local feature extraction on the low-resolution features based on the low-resolution branches to obtain the low-resolution illumination map fusing the global features and the local features.
Based on the above embodiment, the dim light enhancement unit 420 is configured to:
and performing illumination transformation on the illumination characteristic diagram based on an illumination mapping function in the dim light enhancement model, wherein the illumination mapping function is a monotonic function with an independent variable being the illumination characteristic diagram.
Fig. 5 illustrates a physical structure diagram of an electronic device, which may include, as shown in fig. 5: a processor (processor)510, a communication Interface (Communications Interface)520, a memory (memory)530 and a communication bus 540, wherein the processor 510, the communication Interface 520 and the memory 530 communicate with each other via the communication bus 540. Processor 510 may invoke logic instructions in memory 530 to perform an image enhancement method comprising: determining a dim light image to be enhanced; determining an enhanced image of the dim image based on a dim enhancement model, wherein the dim enhancement model is obtained based on spatial consistency between a sample image and a sample enhanced image of the sample image, and/or image quality constraint training of the sample enhanced image; the sample enhanced image is an enhanced image of the sample image determined based on a dim light enhancement model in training.
Furthermore, the logic instructions in the memory 530 may be implemented in the form of software functional units and stored in a computer readable storage medium when the software functional units are sold or used as independent products. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present invention. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and other various media capable of storing program codes.
In another aspect, the present invention also provides a computer program product comprising a computer program stored on a non-transitory computer-readable storage medium, the computer program comprising program instructions which, when executed by a computer, enable the computer to perform the image enhancement method provided by the above methods, the method comprising: determining a dim light image to be enhanced; determining an enhanced image of the dim image based on a dim enhancement model, wherein the dim enhancement model is obtained based on spatial consistency between a sample image and a sample enhanced image of the sample image, and/or image quality constraint training of the sample enhanced image; the sample enhanced image is an enhanced image of the sample image determined based on a dim light enhancement model in training.
In yet another aspect, the present invention also provides a non-transitory computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the image enhancement method provided above, the method comprising: determining a dim light image to be enhanced; determining an enhanced image of the dim light image based on a dim light enhancement model, wherein the dim light enhancement model is trained based on spatial consistency between a sample image and a sample enhanced image of the sample image, and/or an image quality constraint on the sample enhanced image; the sample enhanced image is an enhanced image of the sample image determined by the dim light enhancement model during training.
The above-described apparatus embodiments are merely illustrative: units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units; they may be located in one place or distributed across a plurality of network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of the present embodiment. One of ordinary skill in the art can understand and implement this without inventive effort.
Through the above description of the embodiments, those skilled in the art will clearly understand that each embodiment can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware. With this understanding in mind, the above-described technical solutions may be embodied in the form of a software product, which can be stored in a computer-readable storage medium such as ROM/RAM, magnetic disk, optical disk, etc., and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device, etc.) to execute the methods described in the embodiments or some parts of the embodiments.
Finally, it should be noted that: the above examples are only intended to illustrate the technical solution of the present invention, but not to limit it; although the present invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; and such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions of the embodiments of the present invention.

Claims (10)

1. An image enhancement method, comprising:
determining a dim light image to be enhanced;
determining an enhanced image of the dim light image based on a dim light enhancement model, wherein the dim light enhancement model is trained based on spatial consistency between a sample image and a sample enhanced image of the sample image, and/or an image quality constraint on the sample enhanced image;
the sample enhanced image is an enhanced image of the sample image determined by the dim light enhancement model during training.
2. The image enhancement method of claim 1, wherein determining the enhanced image of the dim light image based on a dim light enhancement model comprises:
performing an illumination transformation on the illumination feature map of the dim light image based on the dim light enhancement model, and combining the transformed illumination enhancement map with the dim light image to obtain the enhanced image of the dim light image;
the dim light enhancement model is trained based on an illumination smoothness constraint on the illumination feature map of the sample image, and on spatial consistency between the sample image and the sample enhanced image of the sample image and/or an image quality constraint on the sample enhanced image.
3. The image enhancement method according to claim 1 or 2, wherein the spatial consistency between the sample image and the sample enhanced image of the sample image is determined based on a gradient value between adjacent regions in the sample image and a gradient value between adjacent regions in the sample enhanced image.
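The spatial consistency of claim 3 can be sketched as a loss that compares gradients between adjacent regions before and after enhancement. The region size (4 × 4 average pooling) and the squared-difference penalty are illustrative assumptions; the claim only requires that adjacent-region gradients in the two images be compared.

```python
import numpy as np

def pool_regions(gray: np.ndarray, k: int) -> np.ndarray:
    """Average-pool a 2-D grayscale image into non-overlapping k x k regions."""
    h, w = gray.shape
    gray = gray[: h - h % k, : w - w % k]  # crop so the grid divides evenly
    return gray.reshape(h // k, k, w // k, k).mean(axis=(1, 3))

def spatial_consistency_loss(sample: np.ndarray, enhanced: np.ndarray, k: int = 4) -> float:
    """Penalise changes in local contrast: gradients between adjacent
    regions should match between the sample image and its enhanced version."""
    s, e = pool_regions(sample, k), pool_regions(enhanced, k)
    ds_h, de_h = np.diff(s, axis=1), np.diff(e, axis=1)  # horizontal neighbours
    ds_v, de_v = np.diff(s, axis=0), np.diff(e, axis=0)  # vertical neighbours
    return float(((ds_h - de_h) ** 2).mean() + ((ds_v - de_v) ** 2).mean())
```

Note that a uniform brightness shift changes no adjacent-region gradients and incurs zero loss, which is exactly the property that lets the model brighten the image while preserving local structure.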
4. The image enhancement method of claim 1 or 2, wherein the image quality constraint on the sample enhanced image comprises at least one of an image exposure constraint, a color constancy constraint, and a smoothness constraint;
the image exposure constraint is used for constraining the difference between the exposure intensity of each region in the sample enhanced image and a preset exposure intensity;
the color constancy constraint is used for constraining intensity deviation among various color channels of the sample enhanced image;
and the smoothness constraint is used for constraining gradient values between adjacent pixels in the sample enhanced image.
5. The image enhancement method of claim 2, wherein the illumination feature map of the dim light image is determined based on the following steps:
based on the full-resolution branch in the dim light enhancement model, performing full-resolution illumination feature extraction on the dim light image to obtain a full-resolution illumination map;
based on the low-resolution branch in the dim light enhancement model, down-sampling the image features of the dim light image, and based on the low-resolution features obtained by the down-sampling, extracting the low-resolution illumination features to obtain a low-resolution illumination map;
and performing feature fusion on the full-resolution illumination map and the low-resolution illumination map based on the fusion branch in the dim light enhancement model to obtain the illumination feature map.
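The data flow of the three branches in claim 5 can be sketched in miniature. The learned convolutions of the full-resolution, low-resolution, and fusion branches are replaced here with trivial stand-ins (a grayscale cue, average-pool downsampling, nearest-neighbour upsampling, and averaging); only the branch structure, not the operators, is taken from the claim.

```python
import numpy as np

def estimate_illumination(image: np.ndarray, scale: int = 4) -> np.ndarray:
    """Toy two-branch illumination estimate fused at full resolution.

    Real branches would be learned feature extractors; this sketch only
    mirrors the full-res / down-sampled low-res / fusion data flow.
    """
    gray = image.mean(axis=2)                 # full-resolution branch input
    full = gray                               # stand-in for full-res illumination map
    h, w = gray.shape
    low = gray[: h - h % scale, : w - w % scale]
    low = low.reshape(h // scale, scale, w // scale, scale).mean(axis=(1, 3))
    # nearest-neighbour upsample back to full resolution before fusion
    low_up = np.repeat(np.repeat(low, scale, axis=0), scale, axis=1)
    low_up = np.pad(low_up, ((0, h - low_up.shape[0]), (0, w - low_up.shape[1])), mode="edge")
    return 0.5 * (full + low_up)              # stand-in for the fusion branch
```

The point of the low-resolution branch is that global illumination statistics are cheap to compute at reduced resolution, while the full-resolution branch preserves edges for the fusion step.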
6. The image enhancement method according to claim 5, wherein said performing low-resolution illumination feature extraction based on the low-resolution features obtained from the down-sampling to obtain a low-resolution illumination map comprises:
and performing global feature extraction and local feature extraction, respectively, on the low-resolution features based on the low-resolution branch, to obtain the low-resolution illumination map fusing the global features and the local features.
7. The image enhancement method of claim 2, wherein the performing the illumination transformation on the illumination feature map of the dim light image comprises:
performing the illumination transformation on the illumination feature map based on an illumination mapping function in the dim light enhancement model, wherein the illumination mapping function is a monotonic function whose independent variable is the illumination feature map.
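One simple monotonic illumination mapping function satisfying claim 7 is a gamma curve; the exponent 0.5 is an illustrative choice (it lifts low illumination values), since the claim requires only monotonicity in the illumination feature map.

```python
import numpy as np

def illumination_mapping(illum: np.ndarray, gamma: float = 0.5) -> np.ndarray:
    """Gamma curve as a monotonic mapping of the illumination feature map.

    Values are clipped to (0, 1] before exponentiation so the mapping is
    well-defined; gamma < 1 brightens dark regions more than bright ones.
    """
    return np.clip(illum, 1e-6, 1.0) ** gamma
```

Monotonicity matters because it preserves the ordering of illumination values: a region darker than its neighbour in the feature map stays darker after the transformation, so enhancement cannot invert local contrast.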
8. An image enhancement apparatus, comprising:
an image determining unit for determining a dim image to be enhanced;
the dim light enhancement unit is used for determining an enhanced image of the dim light image based on a dim light enhancement model, wherein the dim light enhancement model is trained based on spatial consistency between a sample image and a sample enhanced image of the sample image and/or an image quality constraint on the sample enhanced image; the sample enhanced image is an enhanced image of the sample image determined by the dim light enhancement model during training.
9. An electronic device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, characterized in that the steps of the image enhancement method according to any of claims 1 to 7 are implemented when the program is executed by the processor.
10. A non-transitory computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the steps of the image enhancement method according to any one of claims 1 to 7.

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111042929.7A CN113744169A (en) 2021-09-07 2021-09-07 Image enhancement method and device, electronic equipment and storage medium

Publications (1)

Publication Number Publication Date
CN113744169A true CN113744169A (en) 2021-12-03

Family

ID=78736446

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111042929.7A Pending CN113744169A (en) 2021-09-07 2021-09-07 Image enhancement method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN113744169A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2023130650A1 (en) * 2022-01-04 2023-07-13 苏州浪潮智能科技有限公司 Image restoration method and apparatus, electronic device, and storage medium

Citations (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105474261A (en) * 2013-05-23 2016-04-06 生物梅里埃公司 Method, system and computer program product for improving the quality of an image
CN109461157A (en) * 2018-10-19 2019-03-12 苏州大学 Image, semantic dividing method based on multi-stage characteristics fusion and Gauss conditions random field
CN109919869A (en) * 2019-02-28 2019-06-21 腾讯科技(深圳)有限公司 A kind of image enchancing method, device and storage medium
CN110807740A (en) * 2019-09-17 2020-02-18 北京大学 Image enhancement method and system for window image of monitoring scene
CN111583161A (en) * 2020-06-17 2020-08-25 上海眼控科技股份有限公司 Blurred image enhancement method, computer device and storage medium
CN112287779A (en) * 2020-10-19 2021-01-29 华南农业大学 Low-illuminance image natural illuminance reinforcing method and application
CN112614077A (en) * 2020-12-30 2021-04-06 北京航空航天大学杭州创新研究院 Unsupervised low-illumination image enhancement method based on generation countermeasure network
CN112703509A (en) * 2018-08-07 2021-04-23 布赖凯科技股份有限公司 Artificial intelligence techniques for image enhancement
CN112950497A (en) * 2021-02-22 2021-06-11 上海商汤智能科技有限公司 Image processing method, image processing device, electronic equipment and storage medium
CN113014927A (en) * 2021-03-02 2021-06-22 三星(中国)半导体有限公司 Image compression method and image compression device
CN113076685A (en) * 2021-03-04 2021-07-06 华为技术有限公司 Training method of image reconstruction model, image reconstruction method and device thereof
CN113111919A (en) * 2021-03-18 2021-07-13 浙江工业大学 Hyperspectral image classification method based on depth high resolution
CN113159019A (en) * 2021-03-08 2021-07-23 北京理工大学 Dark light video enhancement method based on optical flow transformation
CN113269818A (en) * 2021-06-09 2021-08-17 河北工业大学 Seismic data texture feature reconstruction method based on deep learning
CN113284054A (en) * 2020-02-19 2021-08-20 华为技术有限公司 Image enhancement method and image enhancement device
CN113313657A (en) * 2021-07-29 2021-08-27 北京航空航天大学杭州创新研究院 Unsupervised learning method and system for low-illumination image enhancement


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination