CN109345485B - Image enhancement method and device, electronic equipment and storage medium - Google Patents


Info

Publication number
CN109345485B
CN109345485B (application CN201811233579.0A)
Authority
CN
China
Prior art keywords
image
pixel point
pixel
luminance
target
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201811233579.0A
Other languages
Chinese (zh)
Other versions
CN109345485A (en)
Inventor
张雷
谷继力
郑文
Current Assignee
Beijing Dajia Internet Information Technology Co Ltd
Original Assignee
Beijing Dajia Internet Information Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Dajia Internet Information Technology Co Ltd filed Critical Beijing Dajia Internet Information Technology Co Ltd
Priority to CN201811233579.0A
Publication of CN109345485A
Application granted
Publication of CN109345485B
Legal status: Active

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 5/00 Image enhancement or restoration
    • G06T 5/90 Dynamic range modification of images or parts thereof
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20081 Training; Learning
    • G06T 2207/20084 Artificial neural networks [ANN]

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Image Processing (AREA)

Abstract

The present disclosure relates to an image enhancement method, an apparatus, an electronic device and a storage medium. The method comprises: acquiring a target image to be enhanced, and performing downsampling processing on the target image to obtain a downsampled image; inputting the downsampled image into a deep learning network trained in advance to obtain image enhancement data corresponding to the downsampled image; determining, for each pixel point in the target image, a matching point in the downsampled image; determining, for each pixel point in the target image, a target enhancement parameter corresponding to the pixel point based on the enhancement data corresponding to the matching point of the pixel point in the image enhancement data; and, for each pixel point in the target image, adjusting the pixel value of the pixel point based on the target enhancement parameter corresponding to the pixel point to obtain an enhanced image corresponding to the target image. The present disclosure can reduce the complexity of image enhancement processing while maintaining the image enhancement effect obtained based on deep learning.

Description

Image enhancement method and device, electronic equipment and storage medium
Technical Field
The present disclosure relates to the field of image processing technologies, and in particular, to an image enhancement method and apparatus, an electronic device, and a storage medium.
Background
A personal consumer-grade device usually cannot directly capture an image with a high dynamic range, and it is also difficult for it to acquire consecutive multi-frame images with different exposures for multi-exposure fusion into a high-dynamic-range image. In this case, single-frame image enhancement techniques become particularly important: a single-frame image can be enhanced by adjusting parameters such as brightness and contrast using only the current image information, yielding a high-dynamic-range enhancement effect similar to multi-exposure fusion.
A commonly used single-frame enhancement approach is image enhancement based on deep learning. In the related deep-learning-based image enhancement methods, the mapping from original images to enhanced images is learned from a pre-collected data set, the parameters of the deep learning network are stored, and the image to be enhanced is adaptively enhanced to obtain a good image enhancement effect.
However, since operations such as convolution are performed directly on the high-resolution original image and a high-resolution enhanced image is output directly, the complexity of the deep-learning-based image enhancement process in the related art is high.
Disclosure of Invention
To overcome the problems in the related art, the present disclosure provides an image enhancement method, apparatus, electronic device, and storage medium to reduce the complexity of image enhancement processing while maintaining an image enhancement effect obtained based on deep learning.
According to a first aspect of embodiments of the present disclosure, there is provided an image enhancement method, including:
acquiring a target image to be enhanced, and performing down-sampling processing on the target image to obtain a down-sampled image;
inputting the downsampled image into a deep learning network trained in advance to obtain image enhancement data corresponding to the downsampled image, wherein the deep learning network is obtained by training on sample images and sample enhanced images corresponding to the sample images, and the image enhancement data is data representing the degree of enhancement of the enhanced downsampled image relative to the downsampled image;
determining a matching point of each pixel point in the target image in the down-sampling image;
for each pixel point in the target image, determining a target enhancement parameter corresponding to the pixel point based on the enhancement data corresponding to the matching point of the pixel point in the image enhancement data;
and, for each pixel point in the target image, adjusting the pixel value of the pixel point based on the target enhancement parameter corresponding to the pixel point to obtain an enhanced image corresponding to the target image.
Optionally, the determining a matching point of each pixel point in the target image in the downsampled image includes:
for each pixel point in the target image, determining the corresponding pixel point of that pixel point in the downsampled image, searching, within a search area of size M × N centered on the corresponding pixel point, for the pixel point whose pixel value has the minimum absolute difference from the pixel value of the pixel point in the target image, and taking the found pixel point as the matching point of the pixel point in the downsampled image;
wherein, for the pixel point with coordinates (u, v) in the target image, the coordinates of its corresponding pixel point in the downsampled image are (u/x, v/x), and x represents the downsampling multiple.
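The matching-point search described above can be sketched as follows. This is an illustrative sketch only: the window size M × N, the border handling, and the tie-breaking rule are not fixed by the text and are chosen here for demonstration.

```python
import numpy as np

def find_matching_point(target, down, u, v, x, M=3, N=3):
    """Find the matching point in the downsampled image for target pixel (u, v).

    The corresponding point is (u/x, v/x); within an M x N window centered on
    it, the pixel whose value has the smallest absolute difference from the
    target pixel's value is taken as the matching point."""
    cu, cv = u // x, v // x                      # corresponding point in the downsampled image
    h, w = down.shape[:2]
    best, best_diff = (cu, cv), float("inf")
    for du in range(-(M // 2), M // 2 + 1):
        for dv in range(-(N // 2), N // 2 + 1):
            su, sv = cu + du, cv + dv
            if 0 <= su < h and 0 <= sv < w:      # stay inside the downsampled image
                diff = abs(float(down[su, sv]) - float(target[u, v]))
                if diff < best_diff:
                    best, best_diff = (su, sv), diff
    return best
```

For a 4×4 target downsampled by x = 2, the corresponding point of target pixel (3, 3) is (1, 1), and the search then picks whichever neighbor in the 3×3 window is closest in value.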
Optionally, the image enhancement data comprises: mapping parameters by which each pixel point in the downsampled image is mapped to its corresponding pixel point, where the corresponding pixel point of any pixel point is the pixel point at the same position in the enhanced downsampled image;
the determining, for each pixel point in the target image, a target enhancement parameter corresponding to the pixel point based on the enhancement data corresponding to the matching point of the pixel point in the image enhancement data includes:
for each pixel point in the target image, determining, from the mapping parameters, the target parameter corresponding to the matching point of the pixel point, and taking the determined target parameter as the target enhancement parameter corresponding to the pixel point.
Optionally, the image enhancement data comprises: the enhanced image corresponding to the downsampled image;
the determining, for each pixel point in the target image, a target enhancement parameter corresponding to the pixel point based on the enhancement data corresponding to the matching point of the pixel point in the image enhancement data includes:
for each pixel point in the target image, determining the target point at the same position as the matching point of the pixel point in the enhanced downsampled image, calculating the ratio of the pixel value of the target point to the pixel value of the matching point of the pixel point, and determining the ratio as the target enhancement parameter corresponding to the pixel point.
Optionally, the adjusting, for each pixel point in the target image, the pixel value of the pixel point based on the target enhancement parameter corresponding to the pixel point includes:
for each pixel point in the target image, adjusting the pixel value of the pixel point, based on the target enhancement parameter corresponding to the pixel point, through the following formula:
A * I_p = O_p
wherein A represents the target enhancement parameter corresponding to the pixel point, I_p represents the pixel value of the pixel point before adjustment, and O_p represents the pixel value of the pixel point after adjustment.
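The adjustment above is a per-pixel multiplication and can be vectorized over the whole image. A minimal sketch, where clipping back to the 8-bit range is an added assumption not stated in the text:

```python
import numpy as np

def apply_enhancement(target, A):
    """Adjust each pixel by its per-pixel target enhancement parameter A
    (same shape as target): O_p = A * I_p.  Results are clipped to the
    8-bit range before converting back (an implementation choice)."""
    out = target.astype(np.float32) * A
    return np.clip(out, 0, 255).astype(np.uint8)
```

For example, a parameter of 1.5 maps a pixel value of 100 to 150, while a parameter of 2.0 on a value of 200 saturates at 255.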
Optionally, the method further comprises:
determining a first RGB image of the target image in an RGB color mode and a second RGB image of the enhanced image in the RGB color mode;
generating a first luminance image corresponding to the first RGB image, where the luminance value of any pixel point in the first luminance image is the maximum of the RGB channel values of the first pixel point corresponding to that pixel point, the first pixel point being the pixel point at the same position in the first RGB image;
generating a second luminance image corresponding to the second RGB image, where the luminance value of any pixel point in the second luminance image is the maximum of the RGB channel values of the second pixel point corresponding to that pixel point, the second pixel point being the pixel point at the same position in the second RGB image;
calculating a gain parameter of the second luminance image relative to the first luminance image;
and performing brightness enhancement processing on the second RGB image based on the calculated gain parameter to obtain a brightness enhanced image.
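The luminance-image construction and gain-based brightness enhancement described above might be sketched as follows. This is an illustrative sketch: the zero-division guard eps is an added assumption, and the gain follows the plain ratio reading of the text (second luminance over first).

```python
import numpy as np

def luminance(rgb):
    """Luminance image: per-pixel maximum over the R, G, B channels."""
    return rgb.max(axis=-1).astype(np.float32)

def brightness_enhance(first_rgb, second_rgb, eps=1e-6):
    """Gain R_p = V_op / V_ip of the second (enhanced) luminance image
    relative to the first, applied to every channel of the second RGB
    image: Y'_{p,c} = Y_{p,c} * R_p."""
    v_i = luminance(first_rgb)                   # V_ip: first luminance image
    v_o = luminance(second_rgb)                  # V_op: second luminance image
    gain = v_o / np.maximum(v_i, eps)            # R_p, guarded against division by zero
    out = second_rgb.astype(np.float32) * gain[..., None]
    return np.clip(out, 0, 255).astype(np.uint8)
```

With a first-image pixel (10, 20, 40) and a second-image pixel (20, 40, 80), the luminances are 40 and 80, the gain is 2, and the enhanced pixel becomes (40, 80, 160).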
Optionally, the calculating the gain parameter of the second luminance image relative to the first luminance image includes:
calculating a gain parameter of the second luminance image relative to the first luminance image:
R_p = V_op / V_ip
wherein R_p represents the gain parameter of a pixel point in the second luminance image relative to the corresponding pixel point in the first luminance image, V_op represents the luminance value of the pixel point in the second luminance image, and V_ip represents the luminance value of the corresponding pixel point in the first luminance image.
Optionally, the performing, based on the calculated gain parameter, luminance enhancement processing on the second RGB image to obtain a luminance enhanced image includes:
based on the calculated gain parameter, performing brightness enhancement processing on the second RGB image through the following formula to obtain a brightness enhanced image:
Y'_{p,c} = Y_{p,c} * R_p
wherein Y'_{p,c} represents the channel value of channel c of a pixel point in the luminance enhanced image, Y_{p,c} represents the channel value of channel c of the corresponding pixel point in the second RGB image, c takes the values 1, 2, 3, where 1 represents the R channel, 2 the G channel and 3 the B channel, and R_p represents the gain parameter of the pixel point in the second luminance image relative to the corresponding pixel point in the first luminance image; the coordinates of the pixel point in the luminance enhanced image, the corresponding pixel point in the second RGB image, the pixel point in the second luminance image and the corresponding pixel point in the first luminance image are all the same.
Optionally, the calculating the gain parameter of the second luminance image relative to the first luminance image includes:
performing exposure correction processing on the second luminance image to obtain a third luminance image;
and calculating a gain parameter of the third luminance image relative to the first luminance image.
Optionally, the performing exposure correction processing on the second luminance image to obtain a third luminance image includes:
performing exposure correction processing on the second luminance image through the following formula to obtain a third luminance image:
V'_op = f(V_op; V_min, V_max, th_l, th_h)    [the exact formula appears only as an image in the source]
wherein V'_op represents the pixel value of a pixel point in the third luminance image, V_op represents the pixel value of the corresponding pixel point in the second luminance image, V_min represents the minimum pixel value in the second luminance image, V_max represents the maximum pixel value in the second luminance image, th_l represents a first preset threshold, th_h represents a second preset threshold, and th_l and th_h satisfy: 0 < th_l < th_h < 1.
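Because the exact exposure-correction formula is present only as an embedded image in the source, the following is a generic stand-in, not the patent's actual formula: it min-max normalizes the second luminance image and clips the result to [th_l, th_h], using only the quantities the text names (V_min, V_max, th_l, th_h with 0 < th_l < th_h < 1).

```python
import numpy as np

def exposure_correct(v, th_l=0.05, th_h=0.95):
    """Generic stand-in for the exposure-correction step (hypothetical):
    min-max normalize the luminance and clip to [th_l, th_h].  The default
    threshold values are illustrative, not from the patent."""
    v = v.astype(np.float32)
    v_min, v_max = v.min(), v.max()              # V_min, V_max of the second luminance image
    norm = (v - v_min) / max(v_max - v_min, 1e-6)  # normalize to [0, 1]
    return np.clip(norm, th_l, th_h)             # clamp to the preset thresholds
```

Whatever its exact form, the correction maps the second luminance image into a bounded range before the gain against the first luminance image is computed.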
Optionally, after the step of performing luminance enhancement processing on the second RGB image based on the calculated gain parameter to obtain a luminance enhanced image, the method further includes:
and carrying out color enhancement processing on the brightness enhanced image to obtain a color enhanced image.
Optionally, the performing color enhancement processing on the brightness enhanced image to obtain a color enhanced image includes:
carrying out color enhancement processing on the brightness enhanced image through the following formula to obtain a color enhanced image:
Y''_{p,c} = g(Y'_{p,c}, Y_{p,c}; th_1, th_2, th_3)    [the exact formula appears only as an image in the source]
wherein Y''_{p,c} represents the channel value of channel c of a pixel point in the color enhanced image, Y'_{p,c} represents the channel value of channel c of the corresponding pixel point in the luminance enhanced image, Y_{p,c} represents the channel value of channel c of the corresponding pixel point in the second RGB image, c takes the values 1, 2, 3, and th_1, th_2 and th_3 represent the third preset thresholds corresponding to the R, G and B channels, respectively.
According to a second aspect of the embodiments of the present disclosure, there is provided an image enhancement apparatus including:
an acquisition module configured to acquire a target image to be enhanced and perform downsampling processing on the target image to obtain a downsampled image;
a processing module configured to input the downsampled image into a deep learning network trained in advance to obtain image enhancement data corresponding to the downsampled image, wherein the deep learning network is obtained by training on sample images and sample enhanced images corresponding to the sample images, and the image enhancement data is data representing the degree of enhancement of the enhanced downsampled image relative to the downsampled image;
a first determining module configured to determine a matching point of each pixel point in the target image in the down-sampled image;
a second determining module configured to determine, for each pixel point in the target image, a target enhancement parameter corresponding to the pixel point based on enhancement data corresponding to a matching point of the pixel point in the image enhancement data;
and the adjusting module is configured to adjust the pixel value of each pixel point in the target image based on the target enhancement parameter corresponding to the pixel point to obtain an enhanced image corresponding to the target image.
Optionally, the first determining module is specifically configured to determine, for each pixel point in the target image, the corresponding pixel point of that pixel point in the downsampled image, search, within a search area of size M × N centered on the corresponding pixel point, for the pixel point whose pixel value has the minimum absolute difference from the pixel value of the pixel point in the target image, and take the found pixel point as the matching point of the pixel point in the downsampled image;
wherein, for the pixel point with coordinates (u, v) in the target image, the coordinates of its corresponding pixel point in the downsampled image are (u/x, v/x), and x represents the downsampling multiple.
Optionally, the image enhancement data comprises: mapping parameters by which each pixel point in the downsampled image is mapped to its corresponding pixel point, where the corresponding pixel point of any pixel point is the pixel point at the same position in the enhanced downsampled image;
the second determining module is specifically configured to determine, for each pixel point in the target image, a target parameter corresponding to a matching point of the pixel point from the mapping parameters, and use the determined target parameter as a target enhancement parameter corresponding to the pixel point.
Optionally, the image enhancement data comprises: the enhanced image corresponding to the downsampled image;
the second determining module is specifically configured to determine, for each pixel point in the target image, a target point in the image after the enhancement of the downsampled image, where the position of the target point is the same as the matching point of the pixel point, calculate a ratio between a pixel value of the target point and a pixel value of the matching point of the pixel point, and determine the ratio as a target enhancement parameter corresponding to the pixel point.
Optionally, the adjusting module is specifically configured to, for each pixel point in the target image, adjust the pixel value of the pixel point, based on the target enhancement parameter corresponding to the pixel point, through the following formula:
A * I_p = O_p
wherein A represents the target enhancement parameter corresponding to the pixel point, I_p represents the pixel value of the pixel point before adjustment, and O_p represents the pixel value of the pixel point after adjustment.
Optionally, the apparatus further comprises:
a third determination module configured to determine a first RGB image of the target image in RGB color mode and a second RGB image of the enhanced image in RGB color mode;
a first generating module configured to generate a first luminance image corresponding to the first RGB image, where the luminance value of any pixel point in the first luminance image is the maximum of the RGB channel values of the first pixel point corresponding to that pixel point, the first pixel point being the pixel point at the same position in the first RGB image;
a second generating module configured to generate a second luminance image corresponding to the second RGB image, where the luminance value of any pixel point in the second luminance image is the maximum of the RGB channel values of the second pixel point corresponding to that pixel point, the second pixel point being the pixel point at the same position in the second RGB image;
a calculation module configured to calculate a gain parameter of the second luminance image relative to the first luminance image;
and the first enhancement module is configured to perform brightness enhancement processing on the second RGB image based on the calculated gain parameter to obtain a brightness enhanced image.
Optionally, the calculating module is specifically configured to calculate a gain parameter of the second luminance image relative to the first luminance image:
R_p = V_op / V_ip
wherein R_p represents the gain parameter of a pixel point in the second luminance image relative to the corresponding pixel point in the first luminance image, V_op represents the luminance value of the pixel point in the second luminance image, and V_ip represents the luminance value of the corresponding pixel point in the first luminance image.
Optionally, the first enhancement module is specifically configured to perform, based on the calculated gain parameter, luminance enhancement processing on the second RGB image by using the following formula to obtain a luminance enhanced image:
Y'_{p,c} = Y_{p,c} * R_p
wherein Y'_{p,c} represents the channel value of channel c of a pixel point in the luminance enhanced image, Y_{p,c} represents the channel value of channel c of the corresponding pixel point in the second RGB image, c takes the values 1, 2, 3, where 1 represents the R channel, 2 the G channel and 3 the B channel, and R_p represents the gain parameter of the pixel point in the second luminance image relative to the corresponding pixel point in the first luminance image; the coordinates of the pixel point in the luminance enhanced image, the corresponding pixel point in the second RGB image, the pixel point in the second luminance image and the corresponding pixel point in the first luminance image are all the same.
Optionally, the calculation module comprises: a correction unit and a calculation unit;
the correction unit is configured to perform exposure correction processing on the second luminance image to obtain a third luminance image;
the calculation unit is configured to calculate a gain parameter of the third luminance image with respect to the first luminance image.
Optionally, the correction unit is specifically configured to perform exposure correction processing on the second luminance image through the following formula to obtain a third luminance image:
V'_op = f(V_op; V_min, V_max, th_l, th_h)    [the exact formula appears only as an image in the source]
wherein V'_op represents the pixel value of a pixel point in the third luminance image, V_op represents the pixel value of the corresponding pixel point in the second luminance image, V_min represents the minimum pixel value in the second luminance image, V_max represents the maximum pixel value in the second luminance image, th_l represents a first preset threshold, th_h represents a second preset threshold, and th_l and th_h satisfy: 0 < th_l < th_h < 1.
Optionally, the apparatus further comprises:
and the second enhancement module is configured to perform color enhancement processing on the brightness enhanced image to obtain a color enhanced image.
Optionally, the second enhancing module is specifically configured to perform color enhancement processing on the brightness enhanced image by using the following formula to obtain a color enhanced image:
Y''_{p,c} = g(Y'_{p,c}, Y_{p,c}; th_1, th_2, th_3)    [the exact formula appears only as an image in the source]
wherein Y''_{p,c} represents the channel value of channel c of a pixel point in the color enhanced image, Y'_{p,c} represents the channel value of channel c of the corresponding pixel point in the luminance enhanced image, Y_{p,c} represents the channel value of channel c of the corresponding pixel point in the second RGB image, c takes the values 1, 2, 3, and th_1, th_2 and th_3 represent the third preset thresholds corresponding to the R, G and B channels, respectively.
According to a third aspect of the embodiments of the present disclosure, there is provided an electronic apparatus including:
a processor;
a memory for storing processor-executable instructions;
wherein the processor is configured to implement the image enhancement method according to the first aspect when executing the instructions stored in the memory.
According to a fourth aspect of embodiments of the present disclosure, there is provided a non-transitory computer-readable storage medium, wherein instructions, when executed by a processor of an electronic device, enable the electronic device to perform the image enhancement method as described above in the first aspect.
According to a fifth aspect of embodiments of the present disclosure, there is provided a computer program product, wherein the instructions of the computer program product, when executed by a processor of an electronic device, enable the electronic device to perform the image enhancement method as described above in the first aspect.
The technical solutions provided by the embodiments of the present disclosure can have the following beneficial effects: the complexity of image enhancement processing is reduced by downsampling the high-resolution target image and inputting the resulting low-resolution downsampled image into the deep learning network. After the downsampled image is input into the deep learning network, each pixel point of the target image is mapped based on the obtained image enhancement data corresponding to the downsampled image and the pixel-point matching result between the target image and the downsampled image, yielding the enhanced image corresponding to the target image. The enhanced image thus obtained retains the high resolution of the target image while maintaining the good image enhancement effect obtained through deep learning.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the invention and together with the description, serve to explain the principles of the invention.
FIG. 1 is a flow diagram illustrating a method of image enhancement according to an exemplary embodiment;
FIG. 2 is a flow chart illustrating a method of image enhancement according to another exemplary embodiment;
FIG. 3 is a flow chart illustrating a method of image enhancement according to another exemplary embodiment;
FIG. 4 is a block diagram illustrating an image enhancement apparatus according to an exemplary embodiment;
FIG. 5 is a block diagram illustrating an electronic device in accordance with an exemplary embodiment;
FIG. 6 is a block diagram illustrating an apparatus for image enhancement in accordance with an exemplary embodiment;
FIG. 7 is a block diagram illustrating another apparatus for image enhancement according to an example embodiment.
Detailed Description
Reference will now be made in detail to the exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, like numbers in different drawings represent the same or similar elements unless otherwise indicated. The embodiments described in the following exemplary embodiments do not represent all embodiments consistent with the present invention. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the invention, as detailed in the appended claims.
A High Dynamic Range (HDR) image can provide more dynamic range and image detail than a Low Dynamic Range (LDR) image. From LDR images with different exposure times, a final HDR image can be synthesized using the LDR image with the best detail at each exposure time, and the synthesized HDR image better reflects the visual appearance of people and objects in a real environment.
Consumer-grade devices usually cannot directly capture high-dynamic-range images, and it is also difficult for them to capture consecutive frames with different exposures for multi-exposure fusion into a high-dynamic-range image. Therefore, single-frame image enhancement techniques are important. Single-frame image enhancement can achieve an effect similar to multi-exposure fusion by adjusting parameters such as the brightness and contrast of the current image.
A commonly used single-frame enhancement approach is image enhancement based on deep learning. However, in the related deep-learning-based image enhancement methods, since the convolution operations are performed directly on the high-resolution original image and the high-resolution enhanced image is output directly, the complexity of the deep-learning-based image enhancement process is high.
In order to solve the problems of the prior art, embodiments of the present disclosure provide an image enhancement method and apparatus, an electronic device, and a storage medium.
Next, an image enhancement method provided by an embodiment of the present disclosure is first described.
Fig. 1 is a flow chart illustrating an image enhancement method according to an exemplary embodiment, and as shown in fig. 1, an image enhancement method may include the steps of:
in step S11, a target image to be enhanced is acquired, and downsampling processing is performed on the target image, resulting in a downsampled image.
The execution subject of the image enhancement method shown in this embodiment may be an electronic device. In a specific application, the electronic device may be a terminal device or a server, for example a smartphone, a tablet computer or a desktop computer. When the electronic device needs to perform image enhancement processing on an image, that image can be taken as the target image to be enhanced.
The target image may be a high-resolution image. Further, the target image may be a single-channel image or a multi-channel image. The color space of a multi-channel image may be RGB (Red, Green, Blue), YUV (luminance and chrominance), or another color space, to which the present disclosure is not limited.
The electronic device can capture the target image through a built-in or external camera, or communicate with another device to receive a target image sent by that device. The manner in which the electronic device acquires the target image is not limited in this disclosure.
After the electronic device acquires the target image, the electronic device may perform downsampling on the target image to obtain a downsampled image. The obtained down-sampled image may be further input to a deep learning network, and the down-sampled image may be subjected to image enhancement processing by the deep learning network.
In this embodiment, the target image may be subjected to x-fold down-sampling processing, so that the resolution, the width, and the height of the obtained down-sampled image are all 1/x times of corresponding parameters of the target image.
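The x-fold downsampling might look like this. This is a simple block-averaging sketch; the patent does not mandate a particular downsampling method, and strided or interpolated resizing would work as well.

```python
import numpy as np

def downsample(img, x):
    """x-fold downsampling by block averaging: the output width and height
    are 1/x of the target image's (block averaging is an illustrative
    choice, not mandated by the text)."""
    h, w = img.shape[:2]
    h, w = h - h % x, w - w % x                  # crop so dimensions divide evenly by x
    blocks = img[:h, :w].reshape(h // x, x, w // x, x, *img.shape[2:])
    return blocks.mean(axis=(1, 3)).astype(img.dtype)
```

For a 4×4 image and x = 2, the result is a 2×2 image where each output pixel is the mean of one 2×2 block.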
In step S12, the downsampled image is input to a deep learning network trained in advance, and image enhancement data corresponding to the downsampled image is obtained.
In order to perform image enhancement processing on the downsampled image, in this embodiment a deep learning network may be trained in advance on sample images and the sample enhanced images corresponding to them, yielding a trained deep learning network. The sample images and their corresponding sample enhanced images are the training samples; they may be obtained from an existing training library. A sample enhanced image may be obtained by multi-exposure fusion or by another single-frame image enhancement method, which this disclosure does not limit. Moreover, in this embodiment, for ease of training, the resolution of the sample images and corresponding sample enhanced images may be the same as the resolution of the downsampled image.
In this embodiment, the structure of the deep learning network may be any existing deep learning network model. During training, the number of training samples can be determined according to actual needs. Meanwhile, a reasonable loss function or objective function and a corresponding target value can be set to determine whether the deep learning network has been sufficiently trained.
After the deep learning network is trained, its network parameters are fixed. Because the deep learning network is trained on sample images and the corresponding sample enhanced images, inputting the down-sampled image into the trained network yields the image enhancement data corresponding to the down-sampled image. The image enhancement data represents the degree of enhancement of the image obtained by enhancing the down-sampled image, relative to the down-sampled image itself. The image enhancement data may take various forms. For example, it may include, for each pixel point in the down-sampled image, a mapping parameter for mapping that pixel point to its corresponding pixel point, where the corresponding pixel point of any pixel point is the pixel point at the same position in the image obtained by enhancing the down-sampled image. As another example, the image enhancement data may be the enhanced image of the down-sampled image itself. Further, the output of the deep learning network is of the same type as the image enhancement data: when the network outputs mapping parameters, the image enhancement data consists of mapping parameters; when the network outputs an enhanced image, the image enhancement data is the enhanced image of the down-sampled image.
In step S13, a matching point of each pixel point in the target image in the downsampled image is determined.
Since the resolution of the downsampled image is lower than the resolution of the target image, the image enhancement data corresponding to the downsampled image does not correspond to the target image. In order to obtain an enhanced image corresponding to a target image, in this embodiment, a matching point of each pixel point in the target image in a downsampled image may be determined first, then a target enhancement parameter corresponding to the pixel point is determined, and finally, the pixel value of the pixel point is adjusted by using the target enhancement parameter.
In one implementation, determining a matching point of each pixel point in the target image in the downsampled image may include:
for each pixel point in the target image, determining a corresponding pixel point of that pixel point in the down-sampled image; then, in a search area of size M×N centered on the corresponding pixel point, searching for the pixel point whose pixel value has the minimum absolute difference from the pixel value of the pixel point in the target image; and taking the found pixel point as the matching point of that pixel point in the down-sampled image.
For a pixel point with coordinates (u, v) in the target image, the coordinates of its corresponding pixel point in the down-sampled image are (u/x, v/x), where x is the down-sampling multiple.
In the above implementation, if u/x or v/x is not an integer, the value is rounded. For example, if the coordinates of a pixel point in the target image are (9, 9) and x is 2, the coordinates of the corresponding pixel point are (4, 4).
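The coordinate mapping with rounding can be sketched as follows; truncating division is assumed as the rounding rule because it reproduces the (9, 9) → (4, 4) example above:

```python
def corresponding_pixel(u: int, v: int, x: int) -> tuple:
    """Map target-image coordinates (u, v) to the coordinates of the
    corresponding pixel point in the x-fold down-sampled image,
    truncating any fractional part."""
    return (u // x, v // x)
```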
The range of the search area can be set according to actual needs, for example M×N = 3×3; M and N are generally set to odd values. When part of a search area exceeds the boundary of the down-sampled image, that is, when the search area contains virtual pixel points that are not in the down-sampled image, only the pixel points of the down-sampled image contained in the search area are used as search objects. For example, if the coordinates of the corresponding pixel point are (1, 4) and the size of the search area is 3×3, the search area may contain only 5 pixel points of the down-sampled image, with coordinates (1, 3), (1, 4), (1, 5), (2, 3), and (2, 4), and the search range is those 5 pixel points.
For a pixel point with coordinates (u, v) in the target image, taking the pixel point in the search area whose pixel value has the minimum absolute difference as the matching point of pixel point (u, v) in the down-sampled image means that the found matching point is the pixel point in the down-sampled image whose value is closest to that of pixel point (u, v).
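A minimal sketch of the matching-point search, assuming single-channel NumPy images and the value-difference criterion described above (the function name and out-of-bounds handling are illustrative):

```python
import numpy as np

def find_matching_point(ds_image, center, target_value, m=3, n=3):
    """Search the m x n window around `center` in the down-sampled image
    for the pixel whose value is closest (minimum absolute difference)
    to `target_value`, the value of the pixel point in the target image.
    Window positions outside the image boundary are simply skipped."""
    h, w = ds_image.shape
    cu, cv = center
    best, best_diff = None, None
    for du in range(-(m // 2), m // 2 + 1):
        for dv in range(-(n // 2), n // 2 + 1):
            u, v = cu + du, cv + dv
            if 0 <= u < h and 0 <= v < w:
                diff = abs(int(ds_image[u, v]) - int(target_value))
                if best_diff is None or diff < best_diff:
                    best, best_diff = (u, v), diff
    return best
```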
In step S14, for each pixel point in the target image, a target enhancement parameter corresponding to the pixel point is determined based on enhancement data corresponding to a matching point of the pixel point in the image enhancement data.
For each pixel point in the target image, after determining a matching point of the pixel point in the downsampled image, determining a target enhancement parameter corresponding to the pixel point based on enhancement data corresponding to the matching point, and further adjusting a pixel value of the pixel point based on the target enhancement parameter.
Specifically, the image enhancement data are in different forms, and the manner of determining the target enhancement parameter corresponding to each pixel point in the target image is also different. Two modes are given below for explanation.
Optionally, in the first mode, the image enhancement data may include, for each pixel point in the down-sampled image, a mapping parameter for mapping that pixel point to its corresponding pixel point, where the corresponding pixel point of any pixel point is the pixel point at the same position in the image obtained by enhancing the down-sampled image. Mapping a pixel point to its corresponding pixel point specifically means adjusting the pixel value of the pixel point to the pixel value of the corresponding pixel point.
Correspondingly, for each pixel point in the target image, determining a target enhancement parameter corresponding to the pixel point based on enhancement data corresponding to a matching point of the pixel point in the image enhancement data may include:
for each pixel point in the target image, determining, from among the mapping parameters, the target parameter corresponding to the matching point of that pixel point, and taking the determined target parameter as the target enhancement parameter corresponding to that pixel point.
In the first mode, when the target image is a single-channel image, the mapping parameter for mapping each pixel point in the down-sampled image to its corresponding pixel point may be a real number. When the target image is a three-channel image, the mapping parameter may be a 3×4 matrix containing the parameters for mapping the three channel values of each pixel point in the down-sampled image to the corresponding channel values of its corresponding pixel point.
(The matrix itself appears only as an image in the original and is not reproduced here.)
In the matrix, m1 is the mapping parameter that maps the first channel value of each pixel point in the down-sampled image to the first channel value of the corresponding pixel point, m2 is the mapping parameter that maps the second channel value to the second channel value of the corresponding pixel point, and m3 is the mapping parameter that maps the third channel value to the third channel value of the corresponding pixel point.
Optionally, in a second mode, the image enhancement data includes the image obtained by enhancing the down-sampled image.
Correspondingly, for each pixel point in the target image, determining a target enhancement parameter corresponding to the pixel point based on enhancement data corresponding to a matching point of the pixel point in the image enhancement data may include:
for each pixel point in the target image, determining, in the image obtained by enhancing the down-sampled image, the target point at the same position as the matching point of that pixel point; calculating the ratio of the pixel value of the target point to the pixel value of the matching point; and determining the ratio as the target enhancement parameter corresponding to that pixel point.
In the second mode, when the target image is a single-channel image, the ratio of the pixel value of the target point to the pixel value of the matching point may be a real number; when the target image is a three-channel image, the ratio comprises the ratios of the three channel values of the target point to the corresponding channel values of the matching point.
Compared with the down-sampled image, the image obtained by enhancing it exhibits the mapping relationship of the enhancement. Therefore, based on this mapping relationship, the target enhancement parameter corresponding to each pixel point in the target image can be determined. Specifically, the mapping relationship may be the ratio of the pixel value of the target point of each pixel point in the enhanced image to the pixel value of that pixel point in the down-sampled image.
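A sketch of the second mode's ratio computation, assuming single-channel NumPy images and a non-zero pixel value at the matching point (names are illustrative):

```python
import numpy as np

def ratio_enhancement_parameter(enhanced_ds, ds_image, match):
    """Second mode: the target enhancement parameter for a target-image
    pixel is the ratio of the enhanced pixel value to the original value
    at its matching point in the down-sampled image.
    Assumes the matching point's value is non-zero."""
    u, v = match
    return float(enhanced_ds[u, v]) / float(ds_image[u, v])
```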
In step S15, for each pixel point in the target image, the pixel value of the pixel point is adjusted based on the target enhancement parameter corresponding to the pixel point, so as to obtain an enhanced image corresponding to the target image.
For each pixel point in the target image, after the target enhancement parameter corresponding to the pixel point is determined, the pixel value of the pixel point can be adjusted based on the target enhancement parameter. This adjustment process is a process of performing image enhancement processing on the target image.
Optionally, in an implementation manner, adjusting, for each pixel point in the target image, a pixel value of the pixel point based on the target enhancement parameter corresponding to the pixel point may include:
based on the target enhancement parameter corresponding to the pixel point, the pixel value of the pixel point is adjusted through the following formula:
A·Ip = Op
where A represents the target enhancement parameter corresponding to the pixel point, Ip represents the pixel value of the pixel point before adjustment, and Op represents the pixel value of the pixel point after adjustment.
Specifically, when the forms of the target image differ, the form of the target enhancement parameter also differs. For example, when the target image is a single-channel image, the target enhancement parameter may be a real number; when the target image is a three-channel image, the target enhancement parameter may be a 3×4 matrix containing the target enhancement parameters of the three channels corresponding to the pixel point.
(The matrix appears only as an image in the original and is not reproduced here.)
In the matrix, t1 represents the target enhancement parameter of the first channel corresponding to the pixel point, t2 represents the target enhancement parameter of the second channel, and t3 represents the target enhancement parameter of the third channel.
By adjusting the pixel value of each pixel point in the target image through the above formula, image enhancement processing of the target image is realized. The obtained enhanced image is therefore the image resulting from performing image enhancement processing on the target image.
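The per-pixel adjustment Op = A·Ip can be sketched for a single-channel image as follows; clipping the result to the 8-bit range is an added safeguard here, not part of the source formula:

```python
import numpy as np

def apply_enhancement(target, params):
    """Adjust every pixel by O_p = A * I_p, where `params` holds the
    target enhancement parameter A found for each pixel point.
    Results are clipped to the valid 8-bit range."""
    out = target.astype(np.float64) * params
    return np.clip(out, 0, 255).astype(np.uint8)
```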
The technical scheme provided by the embodiment of the disclosure can have the following beneficial effects: the complexity of image enhancement processing is reduced by performing downsampling processing on a target image with high resolution and inputting the obtained downsampled image with low resolution into a deep learning network. And after the down-sampling image is input into the deep learning network, mapping each pixel point of the target image based on the obtained image enhancement data corresponding to the down-sampling image and the pixel point matching result of the target image and the down-sampling image to obtain the enhanced image corresponding to the target image. The enhanced image thus obtained has the high resolution of the target image, and the better image enhancement effect obtained based on the deep learning is maintained.
Based on steps S11-S15, image enhancement of the target image is realized, and the enhancement effect relies on the trained deep learning network. In practical application, however, the training set is limited while actual scenes vary endlessly, so the network cannot be targeted at every image of every scene that may appear and cannot always produce a good enhancement effect. In general, because the training set is limited, the enhanced image may suffer from color cast, overexposure, and underexposure. In this case, further image enhancement processing may be performed on the basis of the enhanced image obtained in step S15.
In the embodiment shown in fig. 2, the enhanced image is further image enhanced specifically for the color cast problem described above.
Fig. 2 is a flowchart illustrating an image enhancement method according to another exemplary embodiment, and as shown in fig. 2, an image enhancement method may include the steps of:
in step S21, a target image to be enhanced is acquired, and downsampling processing is performed on the target image, resulting in a downsampled image.
In step S22, the downsampled image is input to a deep learning network trained in advance, and image enhancement data corresponding to the downsampled image is obtained.
In step S23, a matching point of each pixel point in the target image in the downsampled image is determined.
In step S24, for each pixel point in the target image, a target enhancement parameter corresponding to the pixel point is determined based on enhancement data corresponding to a matching point of the pixel point in the image enhancement data.
In step S25, for each pixel point in the target image, the pixel value of the pixel point is adjusted based on the target enhancement parameter corresponding to the pixel point, so as to obtain an enhanced image corresponding to the target image.
The above steps S21-S25 may be identical to the steps S11-S15, and are not described herein.
In step S26, a first RGB image of the target image in RGB color mode and a second RGB image of the enhanced image in RGB color mode are determined.
In this embodiment, the target image is a three-channel image. If the color mode of the target image is the RGB color mode, the target image is directly taken as the first RGB image; if the color mode of the target image is YUV or another color mode, the color mode of the target image is converted into the RGB color mode to obtain the first RGB image. Conversion methods are well known in the art and are not detailed here.
Similarly, for the enhanced image, the second RGB image of the enhanced image in the RGB color mode may be determined by referring to the manner of determining the first RGB image of the target image in the RGB color mode.
In step S27, a first luminance image corresponding to the first RGB image is generated. The luminance value of any pixel point in the first luminance image is the maximum of the RGB channel values of the first pixel point corresponding to that pixel point, where the first pixel point is the pixel point at the same position in the first RGB image.
The generated first luminance image may be used in conjunction with a second luminance image generated in the following steps to determine a gain parameter of the second luminance image relative to the first luminance image.
In step S28, a second luminance image corresponding to the second RGB image is generated. The luminance value of any pixel point in the second luminance image is the maximum of the RGB channel values of the second pixel point corresponding to that pixel point, where the second pixel point is the pixel point at the same position in the second RGB image.
The generated second luminance image may be used, together with the first luminance image generated in the above step, to determine the gain parameter of the second luminance image relative to the first luminance image.
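The luminance images of steps S27 and S28 (per-pixel maximum over the RGB channels) can be computed as shown below; the NumPy representation is an assumption for illustration:

```python
import numpy as np

def luminance_image(rgb):
    """Per-pixel luminance = maximum of the R, G and B channel values,
    as steps S27/S28 define it. `rgb` has shape (H, W, 3)."""
    return rgb.max(axis=2)
```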
In step S29, a gain parameter of the second luminance image with respect to the first luminance image is calculated.
After the first luminance image and the second luminance image are generated, a gain parameter of the second luminance image relative to the first luminance image may be determined based on the first luminance image and the second luminance image.
Optionally, in an implementation, calculating a gain parameter of the second luminance image relative to the first luminance image may include:
calculating a gain parameter of the second luminance image relative to the first luminance image:
Rp = Vop / Vip
where Rp represents the gain parameter of a pixel point in the second luminance image relative to the corresponding pixel point in the first luminance image, Vop represents the luminance value of the pixel point in the second luminance image, and Vip represents the luminance value of the corresponding pixel point in the first luminance image.
In the above implementation, Vop and Vip are normalized luminance values. According to the formula, the gain parameter reflects the luminance change of a pixel point in the second luminance image relative to the corresponding pixel point in the first luminance image. For a black pixel point with luminance value 0 in the first luminance image, the gain parameter may be set to 0; that is, the pixel point in the second luminance image corresponding to a black pixel point in the first luminance image remains black. The second RGB image may then be subjected to luminance enhancement processing based on this change.
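A sketch of the gain computation, including the stated convention that pixel points that are black in the first luminance image receive gain 0 (the function name is illustrative):

```python
import numpy as np

def gain_parameters(second_lum, first_lum):
    """R_p = V_op / V_ip on normalized luminance values; pixels that are
    black in the first luminance image (V_ip = 0) get gain 0."""
    v_op = second_lum.astype(np.float64)
    v_ip = first_lum.astype(np.float64)
    gain = np.zeros_like(v_op)
    nonzero = v_ip > 0
    gain[nonzero] = v_op[nonzero] / v_ip[nonzero]
    return gain
```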
In step S210, a luminance enhancement process is performed on the second RGB image based on the calculated gain parameter, resulting in a luminance enhanced image.
After the gain parameter of the second luminance image relative to the first luminance image is calculated, luminance enhancement processing may be performed on the second RGB image based on the gain parameter.
Optionally, in an implementation manner, performing a luminance enhancement process on the second RGB image based on the calculated gain parameter to obtain a luminance enhanced image may include:
based on the calculated gain parameter, performing brightness enhancement processing on the second RGB image through the following formula to obtain a brightness enhanced image:
Y'p,c = Yp,c · Rp
where Y'p,c represents the channel value of channel c of a pixel point in the luminance enhanced image, Yp,c represents the channel value of channel c of the corresponding pixel point in the second RGB image, c takes the values 1, 2, 3 (1 for the R channel, 2 for the G channel, 3 for the B channel), and Rp represents the gain parameter of a pixel point in the second luminance image relative to the corresponding pixel point in the first luminance image. The coordinates of the pixel point in the luminance enhanced image, the corresponding pixel point in the second RGB image, the pixel point in the second luminance image, and the corresponding pixel point in the first luminance image are all the same.
In the above implementation, the second RGB image and the luminance enhanced image are both three-channel images, and their color mode is the RGB color mode.
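The per-channel scaling Y'p,c = Yp,c · Rp can be sketched as follows; clipping to the 8-bit range is an added safeguard, not part of the source formula:

```python
import numpy as np

def enhance_luminance(second_rgb, gain):
    """Scale each of the three channel values of every pixel point by
    that pixel's gain parameter R_p. `second_rgb` has shape (H, W, 3)
    and `gain` has shape (H, W); results are clipped to 8-bit range."""
    out = second_rgb.astype(np.float64) * gain[..., None]
    return np.clip(out, 0, 255).astype(np.uint8)
```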
The technical scheme provided by the embodiment of the disclosure can have the following beneficial effects: the complexity of image enhancement processing is reduced by performing downsampling processing on a target image with high resolution and inputting the obtained downsampled image with low resolution into a deep learning network. And after the down-sampling image is input into the deep learning network, mapping each pixel point of the target image based on the obtained image enhancement data corresponding to the down-sampling image and the pixel point matching result of the target image and the down-sampling image to obtain the enhanced image corresponding to the target image. The enhanced image thus obtained has the high resolution of the target image, and the better image enhancement effect obtained based on the deep learning is maintained. Further, based on the RGB images of the target image and the enhanced image in the RGB mode, corresponding luminance images are generated, gain parameters are determined, finally luminance enhancement processing is carried out on the RGB images of the enhanced image in the RGB mode based on the gain parameters, and due to the fact that luminance adjustment is carried out on channel values of three channels of RGB, the problem of color cast of the enhanced image caused by the fact that a training set is limited when deep learning based image enhancement is carried out is solved.
In practical applications, when image enhancement is performed based on deep learning, overexposure or underexposure may occur because the training set is limited. To address this problem, in the embodiment shown in fig. 2, the calculation of the gain parameter of the second luminance image relative to the first luminance image in step S29 may optionally include:
carrying out exposure correction processing on the second brightness image to obtain a third brightness image;
and calculating a gain parameter of the third luminance image relative to the first luminance image.
Specifically, after exposure correction processing is performed on the second luminance image, the obtained third luminance image contains no overexposed or underexposed pixel points, so the gain parameter of the third luminance image relative to the first luminance image can be calculated using the third luminance image.
Optionally, in an implementation manner, performing exposure correction processing on the second luminance image to obtain a third luminance image may include:
and carrying out exposure correction processing on the second brightness image by the following formula to obtain a third brightness image:
(The correction formula appears only as an image in the original and is not reproduced here; it defines a piecewise mapping of Vop controlled by the thresholds thl and thh and the extreme values Vmin and Vmax.)
where V'op represents the pixel value of a pixel point in the third luminance image, Vop represents the pixel value of the corresponding pixel point in the second luminance image, Vmin represents the minimum pixel value in the second luminance image, Vmax represents the maximum pixel value in the second luminance image, thl is a first preset threshold, thh is a second preset threshold, and thl and thh satisfy: 0 < thl < thh < 1.
In the formula, when Vop < thl, the corresponding pixel point in the second luminance image is underexposed, and Vop is adjusted so that its value increases; when Vop > thh, the corresponding pixel point is overexposed, and Vop is adjusted so that its value decreases; when thl ≤ Vop ≤ thh, the corresponding pixel point has no underexposure or overexposure problem, and Vop need not be adjusted.
The first preset threshold and the second preset threshold may be determined according to actual conditions; generally, the first preset threshold is a value greater than 0 and close to 0, and the second preset threshold is a value less than 1 and close to 1.
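The original gives the correction formula only as an image, so the following is merely an illustrative stand-in with the same qualitative behavior described above (raise values below thl, lower values above thh, leave the middle range untouched); the gamma-style form, threshold values, and gamma constant are all assumptions, not the patent's actual formula:

```python
import numpy as np

def exposure_correct(lum, th_l=0.05, th_h=0.95, gamma=0.8):
    """Illustrative piecewise correction on normalized luminance values.
    gamma < 1 boosts the under-exposed range [0, th_l); applied to the
    inverted over-exposed range (th_h, 1], it darkens those values.
    Values in [th_l, th_h] are left unchanged."""
    v = lum.astype(np.float64)
    out = v.copy()
    low = v < th_l
    out[low] = th_l * (v[low] / th_l) ** gamma
    high = v > th_h
    out[high] = 1.0 - (1.0 - th_h) * ((1.0 - v[high]) / (1.0 - th_h)) ** gamma
    return out
```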
The gain parameter of the third luminance image relative to the first luminance image may be calculated in the same way as the gain parameter of the second luminance image relative to the first luminance image in step S29.
In the above embodiment, when the target image is a multi-channel image, the obtained luminance enhanced image is an image in which the channel value of each channel has been adjusted to the same degree. To further improve the image enhancement effect, the present disclosure provides an image enhancement method that can adjust each channel separately. Because the adjustment is performed per channel, and each channel corresponds to a different color, this amounts to performing color enhancement processing on the luminance enhanced image.
Fig. 3 is a flowchart illustrating an image enhancement method according to another exemplary embodiment, and as shown in fig. 3, an image enhancement method may include the following steps.
In step S31, a target image to be enhanced is acquired, and downsampling processing is performed on the target image, resulting in a downsampled image.
In step S32, the downsampled image is input to a deep learning network trained in advance, and image enhancement data corresponding to the downsampled image is obtained.
In step S33, a matching point of each pixel point in the target image in the downsampled image is determined.
In step S34, for each pixel point in the target image, a target enhancement parameter corresponding to the pixel point is determined based on enhancement data corresponding to a matching point of the pixel point in the image enhancement data.
In step S35, for each pixel point in the target image, the pixel value of the pixel point is adjusted based on the target enhancement parameter corresponding to the pixel point, so as to obtain an enhanced image corresponding to the target image.
In step S36, a first RGB image of the target image in RGB color mode and a second RGB image of the enhanced image in RGB color mode are determined.
In step S37, a first luminance image corresponding to the first RGB image is generated, and the luminance value of any pixel in the first luminance image is: and the first pixel point is the pixel point in the first RGB image, which has the same position as the pixel point, in the maximum value in the RGB channel value of the first pixel point corresponding to the pixel point.
In step S38, a second luminance image corresponding to the second RGB image is generated, and the luminance value of any pixel in the second luminance image is: and the second pixel point is the pixel point in the second RGB image, which has the same position as the pixel point, in the maximum value in the RGB channel value of the second pixel point corresponding to the pixel point.
In step S39, a gain parameter of the second luminance image relative to the first luminance image is calculated.
In step S310, a luminance enhancement process is performed on the second RGB image based on the calculated gain parameter, resulting in a luminance enhanced image.
The above steps S31-S310 may be identical to the steps S21-S210, and are not described herein.
In step S311, the color enhancement processing is performed on the luminance enhanced image, resulting in a color enhanced image.
Optionally, in an implementation manner, performing color enhancement processing on the brightness enhanced image to obtain a color enhanced image may include:
carrying out color enhancement processing on the brightness enhanced image by the following formula to obtain a color enhanced image:
(The color enhancement formula appears only as an image in the original and is not reproduced here; it adjusts each channel value using the per-channel thresholds defined below.)
where Y''p,c represents the channel value of channel c of a pixel point in the color enhanced image, Y'p,c represents the channel value of channel c of the corresponding pixel point in the luminance enhanced image, Yp,c represents the channel value of channel c of the corresponding pixel point in the second RGB image, c takes the values 1, 2, 3, and th1, th2 and th3 represent the third preset thresholds corresponding to the R, G and B channels respectively.
In the above implementation, different third preset thresholds may be set for the R, G and B channels. Specifically, if c = 1, Y''p,c, Y'p,c and Yp,c are the R-channel values of a pixel point in the color enhanced image, of the corresponding pixel point in the luminance enhanced image, and of the corresponding pixel point in the second RGB image, respectively; if c = 2, they are the corresponding G-channel values; if c = 3, they are the corresponding B-channel values.
In the above formula, the third preset threshold represents the degree to which the brightness adjustment of the luminance enhanced image relative to each channel of the second RGB image is accepted.
By adjusting the channel values of the three RGB channels of all pixel points in the luminance enhanced image respectively, the color enhanced image after color enhancement processing is obtained.
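The original gives the color-enhancement formula only as an image. As a purely hypothetical stand-in consistent with the description of the per-channel threshold th_c as an acceptance degree, a per-channel blend of the original and brightness-adjusted channel values could look like:

```python
import numpy as np

def color_enhance(lum_enhanced, second_rgb, th=(0.9, 0.8, 0.7)):
    """Hypothetical per-channel blend: Y'' = Y + th_c * (Y' - Y).
    th_c near 1 accepts most of the brightness adjustment for channel c.
    The blend form and threshold values are assumptions, not the
    patent's actual formula."""
    y = second_rgb.astype(np.float64)
    y_prime = lum_enhanced.astype(np.float64)
    th_arr = np.asarray(th, dtype=np.float64)
    out = y + th_arr * (y_prime - y)
    return np.clip(out, 0, 255).astype(np.uint8)
```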
The technical scheme provided by the embodiment of the disclosure can have the following beneficial effects: the complexity of image enhancement processing is reduced by performing downsampling processing on a target image with high resolution and inputting the obtained downsampled image with low resolution into a deep learning network. And after the down-sampling image is input into the deep learning network, mapping each pixel point of the target image based on the obtained image enhancement data corresponding to the down-sampling image and the pixel point matching result of the target image and the down-sampling image to obtain the enhanced image corresponding to the target image. The enhanced image thus obtained has the high resolution of the target image, and the better image enhancement effect obtained based on the deep learning is maintained. Further, based on the RGB images of the target image and the enhanced image in the RGB mode, corresponding luminance images are generated, gain parameters are determined, finally luminance enhancement processing is carried out on the RGB images of the enhanced image in the RGB mode based on the gain parameters, and due to the fact that luminance adjustment is carried out on channel values of three channels of RGB, the problem of color cast of the enhanced image caused by the fact that a training set is limited when deep learning based image enhancement is carried out is solved. Furthermore, the color enhancement processing is carried out on the brightness enhanced image, and the effect of the image enhancement processing is improved.
Fig. 4 is a block diagram illustrating an image enhancement apparatus according to an exemplary embodiment. Referring to fig. 4, the apparatus includes: an obtaining module 401, a processing module 402, a first determining module 403, a second determining module 404, and an adjusting module 405.
an obtaining module 401, configured to obtain a target image to be enhanced, and perform downsampling processing on the target image to obtain a downsampled image;
a processing module 402, configured to input the downsampled image into a pre-trained deep learning network to obtain image enhancement data corresponding to the downsampled image; the deep learning network is trained on sample images and the sample enhanced images corresponding to them, and the image enhancement data represents the degree of enhancement of the enhanced downsampled image relative to the downsampled image;
a first determining module 403 configured to determine a matching point of each pixel point in the target image in the down-sampled image;
a second determining module 404, configured to determine, for each pixel point in the target image, a target enhancement parameter corresponding to the pixel point based on enhancement data corresponding to a matching point of the pixel point in the image enhancement data;
an adjusting module 405, configured to adjust, for each pixel point in the target image, a pixel value of the pixel point based on a target enhancement parameter corresponding to the pixel point, so as to obtain an enhanced image corresponding to the target image.
The technical solution provided by the embodiments of the disclosure can have the following beneficial effects. The complexity of image enhancement is reduced by downsampling a high-resolution target image and feeding the resulting low-resolution downsampled image into a deep learning network. After the downsampled image is input into the deep learning network, each pixel point of the target image is mapped based on the obtained image enhancement data corresponding to the downsampled image and the pixel-point matching result between the target image and the downsampled image, yielding the enhanced image corresponding to the target image. The enhanced image thus obtained has the high resolution of the target image while maintaining the good enhancement effect achieved through deep learning.
Optionally, the first determining module 403 is specifically configured to, for each pixel point in the target image, determine a corresponding pixel point of the pixel point in the downsampled image, search, in a search area that is centered on the corresponding pixel point and has a size of M×N, for the pixel point whose pixel value has the minimum absolute difference from the pixel value of the pixel point in the target image, and use the found pixel point as the matching point of the pixel point in the downsampled image;
and for the pixel point with the coordinate of (u, v) in the target image, the coordinate of the corresponding pixel point of the pixel point in the downsampled image is (u/x, v/x), and x represents the downsampling multiple.
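As an illustration only (function and variable names are not from the patent), the matching-point search described above can be sketched as follows, assuming single-channel images stored as NumPy arrays and an M×N window clipped at the image border:

```python
import numpy as np

def find_matching_point(target, down, u, v, x, m=3, n=3):
    """For the target-image pixel (u, v), start at its corresponding
    downsampled pixel (u // x, v // x) and search an m x n window,
    clipped at the border, for the pixel whose value has the smallest
    absolute difference from the target pixel's value."""
    h, w = down.shape
    cu, cv = min(u // x, h - 1), min(v // x, w - 1)
    best, best_diff = (cu, cv), float("inf")
    for i in range(max(0, cu - m // 2), min(h, cu + m // 2 + 1)):
        for j in range(max(0, cv - n // 2), min(w, cv + n // 2 + 1)):
            diff = abs(float(down[i, j]) - float(target[u, v]))
            if diff < best_diff:
                best, best_diff = (i, j), diff
    return best
```

Because the search is confined to a small window around (u/x, v/x), the cost per target pixel stays constant regardless of image size.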
Optionally, the image enhancement data comprises: mapping parameters by which each pixel point in the downsampled image is mapped to its corresponding pixel point, where the corresponding pixel point of any pixel point is the pixel point at the same position in the enhanced image of the downsampled image;
the second determining module 404 is specifically configured to determine, for each pixel point in the target image, a target parameter corresponding to a matching point of the pixel point from the mapping parameters, and use the determined target parameter as a target enhancement parameter corresponding to the pixel point.
Optionally, the image enhancement data comprises: the enhanced image corresponding to the downsampled image;
the second determining module 404 is specifically configured to, for each pixel point in the target image, determine, in the enhanced image of the downsampled image, a target point at the same position as the matching point of the pixel point, calculate the ratio of the pixel value of the target point to the pixel value of the matching point of the pixel point, and determine the ratio as the target enhancement parameter corresponding to the pixel point.
Optionally, the adjusting module 405 is specifically configured to, for each pixel point in the target image, adjust a pixel value of the pixel point based on a target enhancement parameter corresponding to the pixel point by using the following formula:
A * I_p = O_p
wherein A represents the target enhancement parameter corresponding to the pixel point, I_p represents the pixel value of the pixel point before adjustment, and O_p represents the pixel value of the pixel point after adjustment.
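A minimal sketch, combining the ratio-based target enhancement parameter with the adjustment formula O_p = A * I_p; all names are illustrative, and single-channel float arrays plus a precomputed matching-point table are assumed:

```python
import numpy as np

def enhance_target(target, down, down_enhanced, matches):
    """Map each target pixel: the target enhancement parameter A is the
    ratio of the enhanced to the original downsampled value at the
    matching point, and the adjusted value is O_p = A * I_p."""
    out = np.empty_like(target, dtype=np.float64)
    h, w = target.shape
    for u in range(h):
        for v in range(w):
            mu, mv = matches[u][v]  # precomputed matching point of (u, v)
            a = down_enhanced[mu, mv] / max(float(down[mu, mv]), 1e-6)
            out[u, v] = a * target[u, v]
    return out
```

The small epsilon guard in the denominator is an added safeguard against zero-valued pixels, not part of the patent's formula.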
Optionally, the apparatus further comprises:
a third determination module configured to determine a first RGB image of the target image in RGB color mode and a second RGB image of the enhanced image in RGB color mode;
a first generating module, configured to generate a first luminance image corresponding to the first RGB image, where the luminance value of any pixel point in the first luminance image is the maximum value among the RGB channel values of the first pixel point corresponding to that pixel point, the first pixel point being the pixel point in the first RGB image at the same position as that pixel point;
a second generating module, configured to generate a second luminance image corresponding to the second RGB image, where the luminance value of any pixel point in the second luminance image is the maximum value among the RGB channel values of the second pixel point corresponding to that pixel point, the second pixel point being the pixel point in the second RGB image at the same position as that pixel point;
a calculation module configured to calculate a gain parameter of the second luminance image relative to the first luminance image;
and the first enhancement module is configured to perform brightness enhancement processing on the second RGB image based on the calculated gain parameter to obtain a brightness enhanced image.
Optionally, the calculating module is specifically configured to calculate a gain parameter of the second luminance image relative to the first luminance image:
R_p = V_op / V_ip
wherein R_p represents a gain parameter of a pixel point in the second luminance image relative to the corresponding pixel point in the first luminance image, V_op represents the luminance value of the pixel point in the second luminance image, and V_ip represents the luminance value of the corresponding pixel point in the first luminance image.
Optionally, the first enhancement module is specifically configured to perform, based on the calculated gain parameter, luminance enhancement processing on the second RGB image by using the following formula to obtain a luminance enhanced image:
Y'_p,c = Y_p,c * R_p
wherein Y'_p,c represents the channel value of channel c of a pixel point in the luminance enhanced image, Y_p,c represents the channel value of channel c of the corresponding pixel point in the second RGB image, c takes the values 1, 2 and 3, where 1 represents the R channel, 2 represents the G channel and 3 represents the B channel, and R_p represents the gain parameter of a pixel point in the second luminance image relative to the corresponding pixel point in the first luminance image; the pixel point in the luminance enhanced image, the corresponding pixel point in the second RGB image, the pixel point in the second luminance image and the corresponding pixel point in the first luminance image all have the same coordinates.
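The luminance-gain steps above can be sketched as follows; the helper name and the use of float NumPy arrays are assumptions, and a small epsilon guards against division by zero:

```python
import numpy as np

def luminance_enhance(first_rgb, second_rgb, eps=1e-6):
    """Luminance images are the per-pixel maximum over the R, G, B
    channels; the gain R_p is their ratio; every channel of the
    enhanced (second) RGB image is scaled by the same gain."""
    v_i = first_rgb.max(axis=2)          # first luminance image
    v_o = second_rgb.max(axis=2)         # second luminance image
    r_p = v_o / np.maximum(v_i, eps)     # gain parameter R_p = V_op / V_ip
    return second_rgb * r_p[:, :, None]  # Y'_p,c = Y_p,c * R_p
```

Scaling all three channels by the same per-pixel gain preserves the ratio between R, G and B, which is why this step avoids introducing a color cast.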
Optionally, the calculation module comprises: a correction unit and a calculation unit;
the correction unit is configured to perform exposure correction processing on the second brightness image to obtain a third brightness image;
the calculation unit is configured to calculate a gain parameter of the third luminance image with respect to the first luminance image.
Optionally, the correction unit is specifically configured to perform exposure correction processing on the second luminance image by using the following formula to obtain a third luminance image:
[The exposure correction formula appears only as an image in the source; it maps V_op to V'_op using V_min, V_max, th_l and th_h.]
wherein V'_op represents the pixel value of a pixel point in the third luminance image, V_op represents the pixel value of the corresponding pixel point in the second luminance image, V_min represents the minimum pixel value in the second luminance image, V_max represents the maximum pixel value in the second luminance image, th_l represents a first preset threshold, th_h represents a second preset threshold, and th_l and th_h satisfy: 0 < th_l < th_h < 1.
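The exposure-correction formula itself appears only as an image in the source, so the exact expression is not recoverable; the sketch below is one hypothetical realization consistent with the listed symbols, linearly rescaling the second luminance image from [V_min, V_max] into [th_l, th_h]:

```python
import numpy as np

def exposure_correct(v_o, th_l=0.05, th_h=0.95):
    """HYPOTHETICAL exposure correction (the patent's formula is an
    image-only figure): linearly rescale the second luminance image
    from [V_min, V_max] into [th_l, th_h], with 0 < th_l < th_h < 1."""
    v_min, v_max = float(v_o.min()), float(v_o.max())
    scale = (th_h - th_l) / max(v_max - v_min, 1e-6)
    return th_l + (v_o - v_min) * scale
```

The default threshold values here are placeholders; the patent only constrains them to lie strictly between 0 and 1.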
Optionally, the apparatus further comprises:
and the second enhancement module is configured to perform color enhancement processing on the brightness enhanced image to obtain a color enhanced image.
Optionally, the second enhancing module is specifically configured to perform color enhancement processing on the brightness enhanced image by using the following formula to obtain a color enhanced image:
[The color enhancement formula appears only as an image in the source; it maps Y'_p,c to Y''_p,c using Y_p,c and the per-channel thresholds th_1, th_2 and th_3.]
wherein Y''_p,c represents the channel value of channel c of a pixel point in the color enhanced image, Y'_p,c represents the channel value of channel c of the corresponding pixel point in the luminance enhanced image, Y_p,c represents the channel value of channel c of the corresponding pixel point in the second RGB image, c takes the values 1, 2 and 3, and th_1, th_2 and th_3 represent the third preset thresholds corresponding to the R, G and B channels, respectively.
With regard to the apparatus in the above-described embodiment, the specific manner in which each module performs the operation has been described in detail in the embodiment related to the method, and will not be elaborated here.
In addition, corresponding to the image enhancement method provided by the foregoing embodiment, an embodiment of the present application further provides an electronic device, as shown in fig. 5, the electronic device may include:
a processor 510;
a memory 520 for storing processor-executable instructions;
wherein the processor 510 is configured to: when the executable instructions stored in the memory 520 are executed, the steps of the image enhancement method provided by the embodiment of the present application are implemented.
It is understood that the electronic device may be a server or a terminal device, and in particular applications, the terminal device may be a mobile phone, a computer, a digital broadcast terminal, a messaging device, a game console, a tablet device, a medical device, a fitness device, a personal digital assistant, etc.
Fig. 6 is a block diagram illustrating an apparatus 600 for image enhancement according to an example embodiment. For example, the apparatus 600 may be a mobile phone, a computer, a digital broadcast terminal, a messaging device, a game console, a tablet device, a medical device, an exercise device, a personal digital assistant, and the like.
Referring to fig. 6, apparatus 600 may include one or more of the following components: a processing component 602, a memory 604, a power component 606, a multimedia component 608, an audio component 610, an input/output (I/O) interface 612, a sensor component 614, and a communication component 616.
The processing component 602 generally controls overall operation of the device 600, such as operations associated with display, telephone calls, data communications, camera operations, and recording operations. The processing component 602 may include one or more processors 620 to execute instructions to perform all or a portion of the steps of the methods described above. Further, the processing component 602 can include one or more modules that facilitate interaction between the processing component 602 and other components. For example, the processing component 602 can include a multimedia module to facilitate interaction between the multimedia component 608 and the processing component 602.
The memory 604 is configured to store various types of data to support operation at the device 600. Examples of such data include instructions for any application or method operating on device 600, contact data, phonebook data, messages, pictures, videos, and so forth. The memory 604 may be implemented by any type or combination of volatile or non-volatile memory devices such as Static Random Access Memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic or optical disks.
Power supply component 606 provides power to the various components of device 600. The power components 606 may include a power management system, one or more power supplies, and other components associated with generating, managing, and distributing power for the apparatus 600.
The multimedia component 608 includes a screen that provides an output interface between the device 600 and a user. In some embodiments, the screen may include a Liquid Crystal Display (LCD) and a Touch Panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive an input signal from a user. The touch panel includes one or more touch sensors to sense touch, slide, and gestures on the touch panel. The touch sensor may not only sense the boundary of a touch or slide action, but also detect the duration and pressure associated with the touch or slide operation. In some embodiments, the multimedia component 608 includes a front facing camera and/or a rear facing camera. The front camera and/or the rear camera may receive external multimedia data when the device 600 is in an operating mode, such as a shooting mode or a video mode. Each front camera and rear camera may be a fixed optical lens system or have a focal length and optical zoom capability.
The audio component 610 is configured to output and/or input audio signals. For example, audio component 610 includes a Microphone (MIC) configured to receive external audio signals when apparatus 600 is in an operational mode, such as a call mode, a recording mode, and a voice recognition mode. The received audio signal may further be stored in the memory 604 or transmitted via the communication component 616. In some embodiments, audio component 610 further includes a speaker for outputting audio signals.
The I/O interface 612 provides an interface between the processing component 602 and peripheral interface modules, which may be keyboards, click wheels, buttons, etc. These buttons may include, but are not limited to: a home button, a volume button, a start button, and a lock button.
The sensor component 614 includes one or more sensors for providing status assessments of various aspects of the apparatus 600. For example, the sensor component 614 may detect an open/closed state of the device 600 and the relative positioning of components, such as the display and keypad of the apparatus 600. The sensor component 614 may also detect a change in position of the apparatus 600 or a component of the apparatus 600, the presence or absence of user contact with the apparatus 600, the orientation or acceleration/deceleration of the apparatus 600, and a change in temperature of the apparatus 600. The sensor component 614 may include a proximity sensor configured to detect the presence of nearby objects without any physical contact. The sensor component 614 may also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications. In some embodiments, the sensor component 614 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
The communication component 616 is configured to facilitate communications between the apparatus 600 and other devices in a wired or wireless manner. The apparatus 600 may access a wireless network based on a communication standard, such as WiFi, an operator network (such as 2G, 3G, 4G, or 5G), or a combination thereof. In an exemplary embodiment, the communication component 616 receives broadcast signals or broadcast related information from an external broadcast management system via a broadcast channel. In an exemplary embodiment, the communication component 616 further includes a Near Field Communication (NFC) module to facilitate short-range communications. For example, the NFC module may be implemented based on Radio Frequency Identification (RFID) technology, infrared data association (IrDA) technology, Ultra Wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
In an exemplary embodiment, the apparatus 600 may be implemented by one or more Application Specific Integrated Circuits (ASICs), Digital Signal Processors (DSPs), Digital Signal Processing Devices (DSPDs), Programmable Logic Devices (PLDs), Field Programmable Gate Arrays (FPGAs), controllers, micro-controllers, microprocessors or other electronic components for performing the above-described methods.
In an exemplary embodiment, a non-transitory computer readable storage medium comprising instructions, such as the memory 604 comprising instructions, executable by the processor 620 of the apparatus 600 to perform the above-described method is also provided. For example, the non-transitory computer readable storage medium may be a ROM, a Random Access Memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, and the like.
Fig. 7 is a block diagram illustrating an apparatus 700 for image enhancement according to an example embodiment. For example, the apparatus 700 may be provided as a server. Referring to fig. 7, apparatus 700 includes a processing component 722 that further includes one or more processors and memory resources, represented by memory 732, for storing instructions, such as applications, that are executable by processing component 722. The application programs stored in memory 732 may include one or more modules that each correspond to a set of instructions. Further, the processing component 722 is configured to execute instructions to perform the image enhancement methods described above.
The apparatus 700 may also include a power component 726 configured to perform power management of the apparatus 700, a wired or wireless network interface 750 configured to connect the apparatus 700 to a network, and an input/output (I/O) interface 758. The apparatus 700 may operate based on an operating system stored in memory 732, such as Windows Server™, Mac OS X™, Unix™, Linux™, FreeBSD™, or the like.
In addition, a non-transitory computer-readable storage medium is provided, and when executed by a processor of an electronic device, instructions in the storage medium enable the electronic device to perform the steps of an image enhancement method provided by an embodiment of the present application.
In addition, the present application also provides a computer program product, and when the instructions in the computer program product are executed by a processor of the electronic device, the electronic device is enabled to execute the steps of the image enhancement method described above.
Other embodiments of the invention will be apparent to those skilled in the art from consideration of the specification and practice of the invention disclosed herein. This application is intended to cover any variations, uses, or adaptations of the invention following, in general, the principles of the invention and including such departures from the present disclosure as come within known or customary practice within the art to which the invention pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the invention being indicated by the following claims.
It will be understood that the invention is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the invention is limited only by the appended claims.

Claims (26)

1. An image enhancement method, comprising:
acquiring a target image to be enhanced, and performing down-sampling processing on the target image to obtain a down-sampled image;
inputting the downsampled image into a pre-trained deep learning network to obtain image enhancement data corresponding to the downsampled image; the deep learning network is trained on sample images and the sample enhanced images corresponding to them, and the image enhancement data represents the degree of enhancement of the enhanced downsampled image relative to the downsampled image;
determining a matching point of each pixel point in the target image in the down-sampling image;
aiming at each pixel point in the target image, determining a target enhancement parameter corresponding to the pixel point based on enhancement data corresponding to a matching point of the pixel point in the image enhancement data;
and aiming at each pixel point in the target image, adjusting the pixel value of the pixel point based on the target enhancement parameter corresponding to the pixel point to obtain an enhanced image corresponding to the target image.
2. The method of claim 1, wherein determining a matching point of each pixel point in the target image in the down-sampled image comprises:
for each pixel point in the target image, determining a corresponding pixel point of the pixel point in the downsampled image, searching, in a search area that is centered on the corresponding pixel point and has a size of M×N, for the pixel point whose pixel value has the minimum absolute difference from the pixel value of the pixel point in the target image, and taking the found pixel point as the matching point of the pixel point in the downsampled image;
and for the pixel point with the coordinate of (u, v) in the target image, the coordinate of the corresponding pixel point of the pixel point in the downsampled image is (u/x, v/x), and x represents the downsampling multiple.
3. The method of claim 1, wherein the image enhancement data comprises: mapping parameters by which each pixel point in the downsampled image is mapped to its corresponding pixel point, where the corresponding pixel point of any pixel point is the pixel point at the same position in the enhanced image of the downsampled image;
the determining, for each pixel point in the target image, a target enhancement parameter corresponding to the pixel point based on enhancement data corresponding to a matching point of the pixel point in the image enhancement data includes:
and aiming at each pixel point in the target image, determining a target parameter corresponding to the matching point of the pixel point from each mapping parameter, and taking the determined target parameter as a target enhancement parameter corresponding to the pixel point.
4. The method of claim 1, wherein the image enhancement data comprises: the enhanced image corresponding to the downsampled image;
the determining, for each pixel point in the target image, a target enhancement parameter corresponding to the pixel point based on enhancement data corresponding to a matching point of the pixel point in the image enhancement data includes:
for each pixel point in the target image, determining, in the enhanced image of the downsampled image, a target point at the same position as the matching point of the pixel point, calculating the ratio of the pixel value of the target point to the pixel value of the matching point of the pixel point, and determining the ratio as the target enhancement parameter corresponding to the pixel point.
5. The method of claim 1, wherein for each pixel point in the target image, adjusting the pixel value of the pixel point based on the target enhancement parameter corresponding to the pixel point comprises:
aiming at each pixel point in the target image, based on the target enhancement parameter corresponding to the pixel point, the pixel value of the pixel point is adjusted through the following formula:
A * I_p = O_p
wherein A represents the target enhancement parameter corresponding to the pixel point, I_p represents the pixel value of the pixel point before adjustment, and O_p represents the pixel value of the pixel point after adjustment.
6. The method according to any one of claims 1-5, further comprising:
determining a first RGB image of the target image in an RGB color mode and a second RGB image of the enhanced image in the RGB color mode;
generating a first brightness image corresponding to the first RGB image, where the brightness value of any pixel point in the first brightness image is the maximum value among the RGB channel values of the first pixel point corresponding to that pixel point, the first pixel point being the pixel point in the first RGB image at the same position as that pixel point;
generating a second brightness image corresponding to the second RGB image, where the brightness value of any pixel point in the second brightness image is the maximum value among the RGB channel values of the second pixel point corresponding to that pixel point, the second pixel point being the pixel point in the second RGB image at the same position as that pixel point;
calculating a gain parameter of the second luminance image relative to the first luminance image;
and performing brightness enhancement processing on the second RGB image based on the calculated gain parameter to obtain a brightness enhanced image.
7. The method of claim 6, wherein the calculating the gain parameter of the second luminance image relative to the first luminance image comprises:
calculating a gain parameter of the second luminance image relative to the first luminance image:
R_p = V_op / V_ip
wherein R_p represents a gain parameter of a pixel point in the second luminance image relative to the corresponding pixel point in the first luminance image, V_op represents the luminance value of the pixel point in the second luminance image, and V_ip represents the luminance value of the corresponding pixel point in the first luminance image.
8. The method according to claim 6, wherein performing a luminance enhancement process on the second RGB image based on the calculated gain parameter to obtain a luminance enhanced image comprises:
based on the calculated gain parameter, performing brightness enhancement processing on the second RGB image through the following formula to obtain a brightness enhanced image:
Y'_p,c = Y_p,c * R_p
wherein Y'_p,c represents the channel value of channel c of a pixel point in the luminance enhanced image, Y_p,c represents the channel value of channel c of the corresponding pixel point in the second RGB image, c takes the values 1, 2 and 3, where 1 represents the R channel, 2 represents the G channel and 3 represents the B channel, and R_p represents the gain parameter of a pixel point in the second luminance image relative to the corresponding pixel point in the first luminance image; the pixel point in the luminance enhanced image, the corresponding pixel point in the second RGB image, the pixel point in the second luminance image and the corresponding pixel point in the first luminance image all have the same coordinates.
9. The method of claim 6, wherein the calculating the gain parameter of the second luminance image relative to the first luminance image comprises:
carrying out exposure correction processing on the second brightness image to obtain a third brightness image;
and calculating a gain parameter of the third luminance image relative to the first luminance image.
10. The method according to claim 9, wherein performing exposure correction processing on the second luminance image to obtain a third luminance image comprises:
carrying out exposure correction processing on the second brightness image by the following formula to obtain a third brightness image:
[The exposure correction formula appears only as an image in the source; it maps V_op to V'_op using V_min, V_max, th_l and th_h.]
wherein V'_op represents the pixel value of a pixel point in the third luminance image, V_op represents the pixel value of the corresponding pixel point in the second luminance image, V_min represents the minimum pixel value in the second luminance image, V_max represents the maximum pixel value in the second luminance image, th_l represents a first preset threshold, th_h represents a second preset threshold, and th_l and th_h satisfy: 0 < th_l < th_h < 1.
11. The method of claim 6, wherein after the step of performing a luminance enhancement process on the second RGB image based on the calculated gain parameter to obtain a luminance enhanced image, the method further comprises:
and carrying out color enhancement processing on the brightness enhanced image to obtain a color enhanced image.
12. The method of claim 11, wherein said color enhancing said luminance-enhanced image to obtain a color-enhanced image comprises:
carrying out color enhancement processing on the brightness enhanced image through the following formula to obtain a color enhanced image:
[The color enhancement formula appears only as an image in the source; it maps Y'_p,c to Y''_p,c using Y_p,c and the per-channel thresholds th_1, th_2 and th_3.]
wherein Y''_p,c represents the channel value of channel c of a pixel point in the color enhanced image, Y'_p,c represents the channel value of channel c of the corresponding pixel point in the luminance enhanced image, Y_p,c represents the channel value of channel c of the corresponding pixel point in the second RGB image, c takes the values 1, 2 and 3, and th_1, th_2 and th_3 represent the third preset thresholds corresponding to the R, G and B channels, respectively.
13. An image enhancement apparatus, comprising:
the device comprises an acquisition module, a processing module and a processing module, wherein the acquisition module is configured to acquire a target image to be enhanced and perform downsampling processing on the target image to obtain a downsampled image;
the processing module is configured to input the downsampled image into a pre-trained deep learning network to obtain image enhancement data corresponding to the downsampled image; the deep learning network is trained on sample images and the sample enhanced images corresponding to them, and the image enhancement data represents the degree of enhancement of the enhanced downsampled image relative to the downsampled image;
a first determining module configured to determine a matching point of each pixel point in the target image in the down-sampled image;
a second determining module configured to determine, for each pixel point in the target image, a target enhancement parameter corresponding to the pixel point based on enhancement data corresponding to a matching point of the pixel point in the image enhancement data;
and the adjusting module is configured to adjust the pixel value of each pixel point in the target image based on the target enhancement parameter corresponding to the pixel point to obtain an enhanced image corresponding to the target image.
14. The apparatus of claim 13,
the first determining module is specifically configured to, for each pixel point in the target image, determine a corresponding pixel point of the pixel point in the downsampled image, search, in a search area that is centered on the corresponding pixel point and has a size of M×N, for the pixel point whose pixel value has the minimum absolute difference from the pixel value of the pixel point in the target image, and use the found pixel point as the matching point of the pixel point in the downsampled image;
and for the pixel point with the coordinate of (u, v) in the target image, the coordinate of the corresponding pixel point of the pixel point in the downsampled image is (u/x, v/x), and x represents the downsampling multiple.
15. The apparatus of claim 13, wherein the image enhancement data comprises: mapping parameters by which each pixel point in the downsampled image is mapped to its corresponding pixel point, where the corresponding pixel point of any pixel point is the pixel point at the same position in the enhanced image of the downsampled image;
the second determining module is specifically configured to determine, for each pixel point in the target image, a target parameter corresponding to a matching point of the pixel point from the mapping parameters, and use the determined target parameter as a target enhancement parameter corresponding to the pixel point.
16. The apparatus of claim 13, wherein the image enhancement data comprises: an enhanced image of the downsampled image;
the second determining module is specifically configured to determine, for each pixel point in the target image, a target point in the enhanced image of the downsampled image whose position is the same as that of the matching point of the pixel point, calculate the ratio of the pixel value of the target point to the pixel value of the matching point, and determine the ratio as the target enhancement parameter corresponding to the pixel point.
17. The apparatus of claim 13,
the adjusting module is specifically configured to, for each pixel point in the target image, adjust a pixel value of the pixel point based on a target enhancement parameter corresponding to the pixel point by using the following formula:
A × Ip = Op
wherein A represents the target enhancement parameter corresponding to the pixel point, Ip represents the pixel value of the pixel point before adjustment, and Op represents the pixel value of the pixel point after adjustment.
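Claims 16 and 17 together describe a ratio-based gain: the parameter A for each target pixel is the enhanced-to-original ratio at its matching point, and the adjusted value is A × Ip. A minimal sketch (illustrative names; assumes grayscale arrays and an (H, W, 2) array of matching-point coordinates):

```python
import numpy as np

def enhance_target(target, down, enhanced_down, matches):
    """Sketch of claims 16-17: for each target pixel p, the target
    enhancement parameter A is enhanced_down / down at p's matching point,
    and the adjusted pixel value is Op = A * Ip."""
    out = np.empty_like(target, dtype=np.float64)
    h, w = target.shape
    for u in range(h):
        for v in range(w):
            mu, mv = matches[u, v]                     # matching point of (u, v)
            a = enhanced_down[mu, mv] / down[mu, mv]   # target enhancement parameter A
            out[u, v] = a * target[u, v]               # Op = A * Ip
    return out
```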
18. The apparatus of any one of claims 13-17, further comprising:
a third determination module configured to determine a first RGB image of the target image in RGB color mode and a second RGB image of the enhanced image in RGB color mode;
a first generating module configured to generate a first luminance image corresponding to the first RGB image, wherein the luminance value of any pixel point in the first luminance image is the maximum of the RGB channel values of a first pixel point corresponding to that pixel point, the first pixel point being the pixel point at the same position in the first RGB image;
a second generating module configured to generate a second luminance image corresponding to the second RGB image, wherein the luminance value of any pixel point in the second luminance image is the maximum of the RGB channel values of a second pixel point corresponding to that pixel point, the second pixel point being the pixel point at the same position in the second RGB image;
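The luminance images of claim 18 take each pixel's luminance as the maximum of its R, G and B channel values, which is a one-line channel reduction (sketch; function name is illustrative):

```python
import numpy as np

def luminance_image(rgb):
    """Luminance image as described in claim 18: each pixel's luminance is
    the maximum of its R, G and B channel values. `rgb` is an H x W x 3
    array; the result is H x W."""
    return rgb.max(axis=-1)   # per-pixel max over the channel axis
```

The same function serves for both the first and second luminance images, applied to the first and second RGB images respectively.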
a calculation module configured to calculate a gain parameter of the second luminance image relative to the first luminance image;
and the first enhancement module is configured to perform brightness enhancement processing on the second RGB image based on the calculated gain parameter to obtain a brightness enhanced image.
19. The apparatus of claim 18,
the calculation module is specifically configured to calculate the gain parameter of the second luminance image relative to the first luminance image by the following formula:
Rp = Vop / Vip
wherein Rp represents the gain parameter of a pixel point in the second luminance image relative to the corresponding pixel point in the first luminance image, Vop represents the luminance value of the pixel point in the second luminance image, and Vip represents the luminance value of the corresponding pixel point in the first luminance image.
20. The apparatus of claim 18,
the first enhancement module is specifically configured to perform, based on the calculated gain parameter, luminance enhancement processing on the second RGB image by using the following formula to obtain a luminance enhanced image:
Y′p,c = Yp,c × Rp
wherein Y′p,c represents the channel value of channel c of a pixel point in the luminance enhanced image, Yp,c represents the channel value of channel c of the corresponding pixel point in the second RGB image, c takes the values 1, 2 and 3, where 1 represents the R channel, 2 the G channel and 3 the B channel, and Rp represents the gain parameter of a pixel point in the second luminance image relative to the corresponding pixel point in the first luminance image; the pixel point in the luminance enhanced image, the corresponding pixel point in the second RGB image, the pixel point in the second luminance image and the corresponding pixel point in the first luminance image all have the same coordinates.
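Claims 19 and 20 combine into a per-pixel gain applied uniformly to all three channels. A minimal sketch (illustrative names; the `eps` guard against division by zero is my addition, not part of the claims):

```python
import numpy as np

def brightness_enhance(first_lum, second_lum, second_rgb, eps=1e-6):
    """Sketch of claims 19-20: per-pixel gain Rp = Vop / Vip between the two
    luminance images, broadcast across the RGB channels of the second RGB
    image. `eps` (an assumption, not in the claims) avoids division by zero."""
    gain = second_lum / (first_lum + eps)   # Rp = Vop / Vip
    return second_rgb * gain[..., None]     # Y'_{p,c} = Y_{p,c} * Rp for each channel c
```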
21. The apparatus of claim 18, wherein the computing module comprises: a correction unit and a calculation unit;
the correction unit is configured to perform exposure correction processing on the second brightness image to obtain a third brightness image;
the calculation unit is configured to calculate a gain parameter of the third luminance image with respect to the first luminance image.
22. The apparatus of claim 21,
the correction unit is specifically configured to perform exposure correction processing on the second luminance image by the following formula to obtain a third luminance image:
[formula image FDA0001984159500000062, not reproduced: V′op is computed from Vop, Vmin, Vmax, thl and thh]
wherein V′op represents the pixel value of a pixel point in the third luminance image, Vop represents the pixel value of the corresponding pixel point in the second luminance image, Vmin represents the minimum pixel value in the second luminance image, Vmax represents the maximum pixel value in the second luminance image, thl represents a first preset threshold, thh represents a second preset threshold, and thl and thh satisfy: 0 < thl < thh < 1.
23. The apparatus of claim 18, further comprising:
and the second enhancement module is configured to perform color enhancement processing on the brightness enhanced image to obtain a color enhanced image.
24. The apparatus of claim 23,
the second enhancement module is specifically configured to perform color enhancement processing on the brightness enhanced image by using the following formula to obtain a color enhanced image:
[formula image FDA0001984159500000071, not reproduced: Y″p,c is computed from Y′p,c, Yp,c and the thresholds th1, th2, th3]
wherein Y″p,c represents the channel value of channel c of a pixel point in the color enhanced image, Y′p,c represents the channel value of channel c of the corresponding pixel point in the luminance enhanced image, Yp,c represents the channel value of channel c of the corresponding pixel point in the second RGB image, c takes the values 1, 2 and 3, and th1, th2 and th3 represent the third preset thresholds corresponding to the R, G and B channels, respectively.
25. An electronic device, comprising:
a processor;
a memory for storing processor-executable instructions;
wherein the processor is configured to implement the image enhancement method according to any one of claims 1 to 12 when executing the program stored in the memory.
26. A non-transitory computer readable storage medium having instructions therein which, when executed by a processor of an electronic device, enable the electronic device to perform the image enhancement method of any of claims 1-12.
CN201811233579.0A 2018-10-22 2018-10-22 Image enhancement method and device, electronic equipment and storage medium Active CN109345485B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811233579.0A CN109345485B (en) 2018-10-22 2018-10-22 Image enhancement method and device, electronic equipment and storage medium


Publications (2)

Publication Number Publication Date
CN109345485A CN109345485A (en) 2019-02-15
CN109345485B true CN109345485B (en) 2021-04-16

Family

ID=65311530

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811233579.0A Active CN109345485B (en) 2018-10-22 2018-10-22 Image enhancement method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN109345485B (en)

Families Citing this family (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109919869B (en) * 2019-02-28 2021-06-04 腾讯科技(深圳)有限公司 Image enhancement method and device and storage medium
CN110049242B (en) * 2019-04-18 2021-08-24 腾讯科技(深圳)有限公司 Image processing method and device
CN111986069A (en) 2019-05-22 2020-11-24 三星电子株式会社 Image processing apparatus and image processing method thereof
CN112308785B (en) * 2019-08-01 2024-05-28 武汉Tcl集团工业研究院有限公司 Image denoising method, storage medium and terminal equipment
CN113284054A (en) * 2020-02-19 2021-08-20 华为技术有限公司 Image enhancement method and image enhancement device
CN112084936B (en) * 2020-09-08 2024-05-10 济南博观智能科技有限公司 Face image preprocessing method, device, equipment and storage medium
CN112261438B (en) * 2020-10-16 2022-04-15 腾讯科技(深圳)有限公司 Video enhancement method, device, equipment and storage medium
CN112884849A (en) * 2021-02-03 2021-06-01 无锡安科迪智能技术有限公司 Panoramic image splicing and color matching method and device
CN113822809B (en) * 2021-03-10 2023-06-06 无锡安科迪智能技术有限公司 Dim light enhancement method and system thereof
CN115601274A (en) * 2021-07-07 2023-01-13 荣耀终端有限公司(Cn) Image processing method and device and electronic equipment
CN113781320A (en) * 2021-08-02 2021-12-10 中国科学院深圳先进技术研究院 Image processing method and device, terminal equipment and storage medium
CN114257741B (en) * 2021-12-15 2022-12-06 浙江大学 Vehicle-mounted HDR method with rapid response
CN117314795B (en) * 2023-11-30 2024-02-27 成都玖锦科技有限公司 SAR image enhancement method by using background data


Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8670630B1 (en) * 2010-12-09 2014-03-11 Google Inc. Fast randomized multi-scale energy minimization for image processing
CN104091310A (en) * 2014-06-24 2014-10-08 三星电子(中国)研发中心 Image defogging method and device
CN107133933A (en) * 2017-05-10 2017-09-05 广州海兆印丰信息科技有限公司 Mammography X Enhancement Method based on convolutional neural networks
CN108305236A (en) * 2018-01-16 2018-07-20 腾讯科技(深圳)有限公司 Image enhancement processing method and device
CN108648163A (en) * 2018-05-17 2018-10-12 厦门美图之家科技有限公司 A kind of Enhancement Method and computing device of facial image

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Dim infrared image enhancement based on convolutional neural network; Zunlin Fan et al.; Neurocomputing; 20180131; Vol. 272; full text *
Infrared dim and small target detection algorithm under a cloud-sky background; Fan Minge et al.; Electronic Measurement Technology; 20090630; Vol. 32, No. 6; pp. 55-64 *

Also Published As

Publication number Publication date
CN109345485A (en) 2019-02-15

Similar Documents

Publication Publication Date Title
CN109345485B (en) Image enhancement method and device, electronic equipment and storage medium
CN110675310B (en) Video processing method and device, electronic equipment and storage medium
CN111709890B (en) Training method and device for image enhancement model and storage medium
CN110708559B (en) Image processing method, device and storage medium
CN110958401B (en) Super night scene image color correction method and device and electronic equipment
CN106131441B (en) Photographing method and device and electronic equipment
CN108154465B (en) Image processing method and device
CN105528765B (en) Method and device for processing image
CN108040204B (en) Image shooting method and device based on multiple cameras and storage medium
CN111953904B (en) Shooting method, shooting device, electronic equipment and storage medium
CN109784164B (en) Foreground identification method and device, electronic equipment and storage medium
CN105791790B (en) Image processing method and device
US11222235B2 (en) Method and apparatus for training image processing model, and storage medium
CN111953903A (en) Shooting method, shooting device, electronic equipment and storage medium
CN107730443B (en) Image processing method and device and user equipment
CN114827391A (en) Camera switching method, camera switching device and storage medium
CN111741187A (en) Image processing method, device and storage medium
CN108156381B (en) Photographing method and device
CN105472228B (en) Image processing method and device and terminal
CN115552415A (en) Image processing method, image processing device, electronic equipment and storage medium
CN112950503A (en) Training sample generation method and device and truth value image generation method and device
CN107025638B (en) Image processing method and device
CN115641269A (en) Image repairing method and device and readable storage medium
CN112785537A (en) Image processing method, device and storage medium
CN112188095B (en) Photographing method, photographing device and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant