CN111918095B - Dim light enhancement method and device, mobile terminal and storage medium - Google Patents


Info

Publication number
CN111918095B
CN111918095B (application CN202010779098.0A)
Authority
CN
China
Prior art keywords
image data
original
brightness
target
illumination
Prior art date
Legal status
Active
Application number
CN202010779098.0A
Other languages
Chinese (zh)
Other versions
CN111918095A (en)
Inventor
杨敏 (Yang Min)
Current Assignee
Bigo Technology Singapore Pte Ltd
Original Assignee
Guangzhou Baiguoyuan Information Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Guangzhou Baiguoyuan Information Technology Co Ltd filed Critical Guangzhou Baiguoyuan Information Technology Co Ltd
Priority to CN202010779098.0A
Publication of CN111918095A
Application granted
Publication of CN111918095B

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/23Processing of content or additional data; Elementary server operations; Server middleware
    • H04N21/234Processing of video elementary streams, e.g. splicing of video streams, manipulating MPEG-4 scene graphs
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/20Image enhancement or restoration by the use of local operators
    • G06T5/70
    • G06T5/77
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/21Server components or server architectures
    • H04N21/218Source of audio or video content, e.g. local disk arrays
    • H04N21/2187Live feed
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N9/00Details of colour television systems
    • H04N9/64Circuits for processing colour signals
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10016Video; Image sequence
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10024Color image
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20024Filtering details
    • G06T2207/20028Bilateral filtering
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30196Human being; Person
    • G06T2207/30201Face

Abstract

The embodiment of the invention provides a dim light enhancement method, a dim light enhancement device, a mobile terminal and a storage medium. The method comprises the following steps: collecting original image data, which includes original brightness image data representing luminance; denoising the original brightness image data to obtain original illumination image data; performing image processing on the original illumination image data to generate target illumination image data; increasing the brightness value of the original illumination image data to serve as target brightness image data; if the original image data is the superposition of the original illumination image data and reflection image data, synthesizing the target illumination image data and the reflection image data into feature image data; and synthesizing the target brightness image data and the feature chrominance image data into target image data. By separating the feature chrominance image data under a Retinex model and combining it with the brightened target brightness image data, low-brightness original image data is gently brightened while true color is preserved, image quality is improved, and dim light enhancement becomes feasible on performance-limited devices such as mobile terminals.

Description

Dim light enhancement method and device, mobile terminal and storage medium
Technical Field
The embodiment of the invention relates to the technical field of computer vision, in particular to a dim light enhancement method and device, a mobile terminal and a storage medium.
Background
With the rapid development of the mobile internet and mobile terminals, a mobile terminal usually collects video data in real time when performing service operations such as live broadcast, video calls and video conferences. Such video data contains a large amount of information about objects and has become one of the ways people obtain raw information about the outside world.
When a mobile terminal collects video data in a dim environment, the video data suffers from noise, low image brightness and similar problems, so that its overall quality is low and users' subjective quality evaluation of the video data decreases. Therefore, in a dim environment, dim light enhancement is generally applied to the video data to improve its quality.
However, the real-time performance of the service operation must be guaranteed, and the mobile terminal also bears the pressure of encoding the video data, so the time and computing power available for dim light enhancement are very limited.
Disclosure of Invention
The embodiment of the invention provides a dim light enhancement method, a dim light enhancement device, a mobile terminal and a storage medium, and aims to solve the problem of how to enhance the dim light of video data under the condition of limited time and computing power.
In a first aspect, an embodiment of the present invention provides a dim light enhancement method, including:
acquiring original image data, wherein the original image data comprises original brightness image data representing brightness;
denoising the original brightness image data to obtain original illumination image data;
performing image processing on the original illumination image data to generate target illumination image data;
improving the brightness value of the original illumination image data to be used as target brightness image data;
if the original image data is the superposition of the original illumination image data and the reflection image data, synthesizing the target illumination image data and the reflection image data into characteristic image data, wherein the characteristic image data comprises characteristic chromaticity image data representing chromaticity;
and synthesizing the target brightness image data and the characteristic chrominance image data into target image data.
In a second aspect, an embodiment of the present invention further provides a dim light enhancing device, including:
an original image data acquisition module, used for collecting original image data, where the original image data includes original brightness image data representing luminance;
the de-noising processing module is used for de-noising the original brightness image data to obtain original illumination image data;
the target illumination image data generation module is used for carrying out image processing on the original illumination image data to generate target illumination image data;
the brightness improving module is used for improving the brightness value of the original illumination image data to serve as target brightness image data;
a feature image data generation module, configured to synthesize the target illumination image data and the reflection image data into feature image data if the original image data is an overlay between the original illumination image data and the reflection image data, where the feature image data includes feature chrominance image data representing chrominance;
and the target image data synthesis module is used for synthesizing the target brightness image data and the characteristic chromaticity image data into target image data.
In a third aspect, an embodiment of the present invention further provides a mobile terminal, where the mobile terminal includes:
one or more processors;
a memory for storing one or more programs,
when executed by the one or more processors, cause the one or more processors to implement the dim light enhancement method according to the first aspect.
In a fourth aspect, the embodiments of the present invention further provide a computer-readable storage medium, on which a computer program is stored, where the computer program, when executed by a processor, implements the dim light enhancement method according to the first aspect.
In this embodiment, original image data is collected, where the original image data includes original brightness image data representing luminance; denoising is performed on the original brightness image data to obtain original illumination image data; image processing is performed on the original illumination image data to generate target illumination image data; the brightness value of the original illumination image data is increased to serve as target brightness image data; if the original image data is the superposition of the original illumination image data and reflection image data, the target illumination image data and the reflection image data are synthesized into feature image data, where the feature image data includes feature chrominance image data representing chrominance; and the target brightness image data and the feature chrominance image data are synthesized into target image data. Generating the illumination image data by denoising the original brightness image data is simple to perform, suppresses the noise of the illumination image data, and reduces the amount of computation while ensuring the dim light enhancement effect. The feature chrominance image data is separated from the feature image data under the Retinex model and combined with the brightened target brightness image data to form the target image data, so that low-brightness original image data is gently brightened while true color is guaranteed, image quality is improved, the damage to image quality caused by aggressive brightening is avoided, and dim light enhancement becomes feasible on performance-limited devices such as mobile terminals.
Drawings
Fig. 1 is a flowchart of a dim light enhancement method according to an embodiment of the present invention;
fig. 2 is a schematic diagram illustrating a dark light enhancement method according to an embodiment of the present invention;
fig. 3 is a schematic diagram of a Retinex according to an embodiment of the present invention;
FIG. 4 is a flowchart of a dim light enhancement method according to a second embodiment of the present invention;
fig. 5 is a schematic diagram illustrating a structure of a dim light enhancement method according to a second embodiment of the present invention;
fig. 6 is a schematic structural diagram of YUV according to the second embodiment of the present invention;
fig. 7 is an exemplary diagram of bilinear interpolation according to a second embodiment of the present invention;
FIG. 8 is a comparison graph of brightening and denoising effects according to a second embodiment of the present invention;
fig. 9 is a schematic structural diagram of a dim light enhancement device according to a third embodiment of the present invention;
fig. 10 is a schematic structural diagram of a mobile terminal according to a fourth embodiment of the present invention.
Detailed Description
The present invention will be described in further detail with reference to the accompanying drawings and examples. It is to be understood that the specific embodiments described herein are merely illustrative of the invention and are not limiting of the invention. It should be further noted that, for the convenience of description, only some of the structures related to the present invention are shown in the drawings, not all of the structures.
Example one
Fig. 1 is a flowchart of a dim light enhancement method according to an embodiment of the present invention. This embodiment is applicable to applying Retinex for dim light enhancement in the YUV color space ("Y" represents luminance (Luminance or Luma), i.e., the gray value; "U" and "V" represent chrominance (Chrominance or Chroma), which describes the image color and saturation and specifies the color of a pixel). The method may be executed by a dim light enhancement apparatus, which may be implemented by software and/or hardware and configured in a mobile terminal, for example, a mobile phone, a tablet computer, or a smart wearable device (e.g., a smart watch or smart glasses), and specifically includes the following steps:
step 101, collecting original image data.
In practical applications, image data generated in a dim environment suffers from problems such as heavy noise and low brightness. Such image data, collected and awaiting dim light enhancement to resolve or alleviate these problems, may be called original image data.
By dim environment, it is meant an environment with little light (e.g., less than 500lux), such as outdoors at night, indoors with poor light transmission, etc.
In one case, the original image data is image data in video data generated, transmitted or played in a real-time business operation performed by the mobile terminal.
In general, the video data is subjected to dim enhancement in the mobile terminal that generates the video data, and in this case, as shown in fig. 2, in S201, the camera of the mobile terminal is turned on, in S202, the camera captures the video data, and in S203, the original image data is extracted from the video data.
The video data in S202 is original video data, i.e., video data that has not undergone other image processing. The original image data is in a first color space, namely the YUV color space, and includes original brightness image data representing luminance Y and original chrominance image data representing chrominance UV.
In addition, in addition to performing the dim enhancement on the video data in the mobile terminal that generates the video data, the dim enhancement may also be performed on the video data in the mobile terminal that plays the video data, which is not limited in this embodiment.
For example, in a live service operation, video data waiting for dim light enhancement may refer to video data used for carrying live content, where a mobile terminal logged by an anchor user generates video data, and the video data is distributed to a device logged by each viewer user through a live platform for playing.
For another example, in a service operation of a video call, the video data waiting for dim light enhancement may refer to video data for carrying call content, the mobile terminal that the user initiating the call logs in generates video data, and sends the video data to the device that each user invited to the call logs in for playing, and at this time, the video data is usually dim light enhanced at the mobile terminal that the user initiating the call logs in.
For another example, in a service operation of a video conference, the video data waiting for dim light enhancement may refer to video data for carrying conference content, a mobile terminal logged in by a speaking user generates video data, and the video data is transmitted to a device logged in by each user participating in the conference for playing.
Of course, in addition to the video data generated in the real-time service operation, such as live broadcast, video call, and video conference, the video data waiting for dim light enhancement may also refer to the video data generated in the non-real-time service operation, such as short video, and the like, which is not limited in this embodiment.
And 102, denoising the original brightness image data to obtain original illumination image data.
In this embodiment, as shown in fig. 2, in S204, a denoising process suited to execution while the mobile terminal performs the service operation may be selected according to the total amount of resources in the mobile terminal, the resources occupied by the service operation, and the like, so as to reduce noise in the original brightness image data Y and obtain the original illumination image data Y'(I) of the Retinex model. Performing the denoising before brightening greatly suppresses the negative effect of noise on the brightened image data and improves the contrast of the brightened image data without amplifying the noise.
In a specific implementation, the denoising processing can be performed on the original luminance image data by applying two ways as follows:
1. denoising algorithm based on deep learning
Deep learning based denoising algorithms include stacked denoising auto-encoders, neural network denoising implemented with a Multi-Layer Perceptron (MLP), and the like.
Where the network is shallow, the hardware of the mobile terminal has improved, or similar conditions hold, the video data can be denoised with a deep learning based denoising algorithm.
2. Denoising algorithm for non-deep learning
The non-deep learning denoising algorithm can be distinguished into 2DNR (2D Noise Reduction, two-dimensional denoising process) and 3DNR (3D Noise Reduction, three-dimensional denoising process).
2DNR is a spatial noise reduction algorithm that performs filtering or block matching within a window. If the denoising processing is two-dimensional, a window is set based on the denoising intensity and is used to filter or block-match the video data.
The size of the window is positively correlated with the intensity: the larger the intensity, the larger the window; conversely, the smaller the intensity, the smaller the window.
For block matching, when the input image data has height H and width W, the sliding search window is D×D, and the selected block size is d×d, the computational complexity is O(H·W·D²·d²).
For filtering, Gaussian filtering of input image data with height H and width W has complexity O(H·W·d²), where d×d is the filter window size, while the time complexity of guided filtering is O(H·W).
Further, when filtering, the original image data can be protected by an edge-preserving operation that uses guide image data, which may specifically include the following ways:
2.1, guided Filtering
Guided filtering filters input image data P (e.g., the original brightness image data) using guide image data I to obtain denoised image data Q. The mathematical formula of guided filtering is:

Q_i = Σ_j W_ij(I) · P_j

where i denotes a pixel point, j denotes a pixel point within the filtering window, and W_ij(I) indicates that the weights used in the weighted average are determined by the guide image data I. The guide image data may be separate single-frame image data or the input image data P itself (in which case the guided filter degrades into an edge-preserving filter).
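To make this concrete, the following is a minimal self-guided sketch in Python using the box-filter formulation, which gives the O(H·W) complexity noted above. It is an illustration, not the patent's implementation; the radius and eps values are assumptions chosen for demonstration.

    import cv2
    import numpy as np

    def guided_filter(I, p, radius=8, eps=1e-3):
        # Guided filtering of input p with guide I via box filters.
        # I == p gives the edge-preserving (self-guided) case above.
        # Inputs: float32 arrays scaled to [0, 1]; radius/eps illustrative.
        ksize = (2 * radius + 1, 2 * radius + 1)
        mean = lambda x: cv2.blur(x, ksize)            # normalized box filter
        mean_I, mean_p = mean(I), mean(p)
        var_I = mean(I * I) - mean_I * mean_I          # local variance of the guide
        cov_Ip = mean(I * p) - mean_I * mean_p
        a = cov_Ip / (var_I + eps)                     # local linear model q = a*I + b
        b = mean_p - a * mean_I
        return mean(a) * I + mean(b)                   # smooth coefficients, apply

    # Self-guided use on the luma plane to estimate illumination:
    # y = luma.astype(np.float32) / 255.0
    # illum = guided_filter(y, y)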
2.2 Joint bilateral Filtering
Joint bilateral filtering filters the input image data (e.g., the original brightness image data) using guide image data I to obtain denoised image data J (e.g., the original illumination image data). If the guide image data I is the input image data itself, joint bilateral filtering becomes bilateral filtering. The specific mathematical expression is:

J_p = (1 / k_p) · Σ_{q∈Ω} f(‖p − q‖) · g(‖I_p − I_q‖) · I_q

where p denotes the current pixel point, q denotes a pixel point in the filtering window Ω, f(·) is the spatial filter, whose weight is computed from the distance between the current pixel point and the surrounding pixel points, and g(·) is the range filter, whose weight is computed from the difference between the pixel values of the current pixel point and the surrounding pixel points in the guide image data. k_p is a normalization parameter. When the distance or the pixel-value difference is large, their product is small, which realizes the edge-preserving operation.
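As a usage illustration only (the parameter values are assumptions, not values from the patent), bilateral filtering of the luma plane can be delegated to OpenCV's built-in function, whose arguments map onto the window, the range filter g(·) and the spatial filter f(·) above:

    import cv2
    import numpy as np

    y = np.random.randint(0, 256, (480, 640), np.uint8)  # stand-in 8-bit luma plane
    # args: neighborhood diameter d, sigmaColor (range term g),
    # sigmaSpace (spatial term f) -- all illustrative values
    illum = cv2.bilateralFilter(y, 7, 30, 7)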
3DNR is a noise reduction algorithm based on both time and space. 3DNR assumes that the randomly generated noise in the video data varies with time, expressed as follows:
F(t) = F + N(t)
where F(t) is the image data with noise, F is the original image data, and N(t) is the time-varying noise, which follows a Gaussian distribution with mean 0. According to the law of large numbers, the more N(t) is accumulated over time, the closer the accumulated noise approaches zero.
For 3DNR, the video data in this embodiment is original video data, that is, video data that has not undergone other processing (such as white balance or brightness adjustment), usually in YUV format. At this time, the noise carried by the video data best conforms to a Gaussian distribution with mean 0, so the effect of 3DNR can be ensured.
Of course, if the operating system in the mobile terminal is limited, the original video data cannot be obtained, and the video data subjected to other processing (such as white balance, brightness adjustment, etc.) may also be subjected to denoising processing, which is not limited in this embodiment.
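The following is a minimal sketch of the 3DNR idea above: a running average over frames cancels zero-mean Gaussian noise N(t) under the model F(t) = F + N(t). It is illustrative only; a production 3DNR would add motion detection or compensation to avoid ghosting, and alpha is an assumed value.

    import numpy as np

    class TemporalDenoiser:
        # Running average of frames; alpha trades noise suppression
        # against motion ghosting (smaller alpha = stronger averaging).
        def __init__(self, alpha=0.2):
            self.alpha = alpha
            self.acc = None

        def push(self, frame):
            f = frame.astype(np.float32)
            if self.acc is None:
                self.acc = f
            else:
                self.acc = self.alpha * f + (1.0 - self.alpha) * self.acc
            return np.clip(self.acc, 0, 255).astype(np.uint8)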
And 103, carrying out image processing on the original illumination image data to generate target illumination image data.
In this embodiment, as shown in fig. 2, in S205, the original illumination image data Y'(I) may be subjected to image processing so as to adjust it according to the requirements of Retinex, thereby generating the target illumination image data I'.
In general, in consideration that the brightness value of the original illumination image data is increased in step 104, the image processing in step 103 may be other operations besides increasing the brightness, such as a filtering process, a sharpening process, and the like, which is not limited in this embodiment.
And step 104, improving the brightness value of the original illumination image data to be used as target brightness image data.
In this embodiment, as shown in fig. 2, in S206, besides being used to generate the target illumination image data, the original illumination image data Y' may also be brightened, i.e., the brightness value of each pixel point in it is increased, thereby generating the target brightness image data Y''.
And 105, if the original image data is the superposition between the original illumination image data and the reflection image data, synthesizing the target illumination image data and the reflection image data into characteristic image data.
In this embodiment, under the framework of the Retinex model, the feature image data may be generated with reference to the relationship between the target illumination image data and the original image data.
Retinex is a portmanteau of Retina and Cortex, and the theory is based on the following:
First, the real world is colorless; the perceived color is the result of the interaction of light with matter.
Second, each color area is composed of the three primary colors red, green and blue of given wavelengths.
Third, the three primary colors determine the color of each unit area.
In Retinex theory, the color of an object is determined by the reflection ability of the object for long-wave (red), medium-wave (green), and short-wave (blue) light, rather than the absolute value of the intensity of the reflected light, and the color of the object is not affected by illumination non-uniformity and has uniformity.
That is, Retinex theory is based on color sense uniformity (color constancy), which is the ability of human eyes to recognize the original color of an object even at different brightness levels.
As shown in fig. 3, in Retinex theory, image data obtained by the human eye depends on incident light and the reflection of the incident light by the surface of an object. The image data is first illuminated by the incident light and reflected by the object into the imaging system to form what is seen. In this process, the reflectivity is determined by the object itself, is not affected by the incident light, and can be expressed by the following formula:
L=I·T
where L represents the original image data received by the observer or camera, I represents the illumination component of the ambient light, i.e., the original illumination image data, and T represents the reflection component of the target object carrying the image detail information, i.e., the reflection image data.
In this embodiment, the original illumination image data I determines the dynamic range that each pixel in a frame of original image data can reach, while the reflection image data T expresses the intrinsic properties of the original image data. After the original illumination image data I in the original image data L is determined, as shown in fig. 2, in S207 the Retinex model is run and the reflection image data is decomposed from the original image data with reference to the original illumination image data. That is, the properties of the original illumination image data I are separated out of the original image data L, so that the original appearance of the object, i.e., the reflection image data T, is obtained and the influence of illumination non-uniformity is eliminated.
At this time, under the framework of the Retinex model, the target illumination image data and the reflection image data are synthesized into the characteristic image data.
Further, the original image data is in a first color space, and the Retinex model decomposed reflection image data is usually in a second color space, so that the original image data can be converted from the first color space to the second color space by a specified conversion relation.
In order to make those skilled in the art better understand the present embodiment, in the present embodiment, a YUV color space is taken as an example of the first color space, and an RGB (R represents red, G represents green, and B represents blue) color space is taken as an example of the second color space.
For example, raw image data may be converted from YUV color space to RGB color space by the following conversion relation:
R=Y+1.4075*(V-128)
G=Y-0.3455*(U-128)-0.7169*(V-128)
B=Y+1.779*(U-128)
and traversing each pixel point in the original image data and the original illumination image data, and respectively determining the original color component of the pixel point expressed in the second color space and the brightness value of the pixel point expressed in the first color space aiming at the pixel point at the same position.
And calculating the ratio of the original color component to the brightness value as the reflection color component represented by the pixel points in the reflection image data.
For example, the reflection color components R'G'B' represented by pixel points in the reflection image data are expressed as follows:
R'=R/Y
G'=G/Y
B'=B/Y
where RGB represents the original color components in the original image data and Y represents the luminance values.
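A minimal sketch of this decomposition, using the conversion coefficients stated above; the planes are assumed to be float32 and already at the same resolution (i.e., chroma upsampled), and eps is an assumed guard against division by zero:

    import numpy as np

    def decompose_reflection(y, u, v, eps=1e-6):
        # YUV -> RGB with the coefficients above, then T = RGB / Y
        # as the reflection components R', G', B'.
        r = y + 1.4075 * (v - 128.0)
        g = y - 0.3455 * (u - 128.0) - 0.7169 * (v - 128.0)
        b = y + 1.779 * (u - 128.0)
        denom = np.maximum(y, eps)
        return r / denom, g / denom, b / denom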
In this embodiment, as shown in fig. 2, in S207 the Retinex model is run to synthesize the brightened target illumination image data and the reflection image data into the feature image data, expressed as follows:
I' = I^α
L' = I' · T
where I^α denotes the original illumination image data I after image processing, I' denotes the target illumination image data, T denotes the reflection image data, and L' denotes the feature image data.
Thereafter, the feature image data may be converted from the second color space to the first color space by the specified conversion relation for subsequent processing.
For example, the feature image data may be converted from an RGB color space to a YUV color space by the following conversion relation:
Y=0.299*R+0.587*G+0.114*B
U=-0.169*R-0.331*G+0.5*B+128
V=0.5*R-0.419*G-0.081*B+128
the feature image data is converted into a second color space such as RGB, and the like, so that the feature image data after being brightened can be maximally ensured not to generate color difference.
And 106, synthesizing the target brightness image data and the characteristic chrominance image data into target image data.
Since the feature image data includes feature brightness image data representing luminance and feature chrominance image data U'V' representing chrominance, and the target brightness image data Y'' represents luminance, as shown in fig. 2, in S208 the target brightness image data Y'' may replace the feature brightness image data and be combined with the feature chrominance image data U'V' in the YUV color space as the target image data Y''U'V'.
For the target image data after the dim light enhancement, subsequent processing may be performed according to the service scene, which is not limited in this embodiment.
For example, as shown in fig. 2, other image processing such as face detection and beautification is performed on the brightened and denoised target image data. In S209, the target image data after image processing is displayed on the screen, and in S210 the target image data after image processing is encoded, i.e., the video data is encoded, for example in the H.264 format, packaged in the FLV (Flash Video) format, and waits to be transmitted to the device that plays the video data.
In this embodiment, original image data is collected, where the original image data includes original brightness image data representing luminance; denoising is performed on the original brightness image data to obtain original illumination image data; image processing is performed on the original illumination image data to generate target illumination image data; the brightness value of the original illumination image data is increased to serve as target brightness image data; if the original image data is the superposition of the original illumination image data and reflection image data, the target illumination image data and the reflection image data are synthesized into feature image data, where the feature image data includes feature chrominance image data representing chrominance; and the target brightness image data and the feature chrominance image data are synthesized into target image data. Generating the illumination image data by denoising the original brightness image data is simple to perform, suppresses the noise of the illumination image data, and reduces the amount of computation while ensuring the dim light enhancement effect. The feature chrominance image data is separated from the feature image data under the Retinex model and combined with the brightened target brightness image data to form the target image data, so that low-brightness original image data is gently brightened while true color is guaranteed, image quality is improved, the damage to image quality caused by aggressive brightening is avoided, and dim light enhancement becomes feasible on performance-limited devices such as mobile terminals.
Example two
Fig. 4 is a flowchart of a dim light enhancement method according to a second embodiment of the present invention, where the present embodiment further details operations of denoising processing, generating target illumination image data, a luminance value of original illumination image data, and Retinex based on the foregoing embodiments, and the method specifically includes the following steps:
step 401, collecting original image data.
As shown in fig. 5, in S501, the raw image data may be divided into raw luminance image data representing luminance Y and raw chrominance image data representing chrominance UV.
Step 402, filtering the original brightness image data to obtain the original illumination image data.
In a specific implementation, as shown in fig. 5, in S502, the original luminance image data Y is subjected to filtering processing so as to maintain edges and smooth noise reduction, and the image data after the filtering processing is the original illumination image data Y' (I) in the Retinex model.
The filtering process may include Fast Guide Filter (FGF), Bilateral Filter (BF), Edge Preserving Filter (EPF), mean filtering, weighted mean filtering, and the like.
The original image data is YUV. When YUV original image data is stored, the storage format is related to the sampling mode, of which there are three: YUV4:4:4, YUV4:2:2 and YUV4:2:0.
YUV does not require three independent signals to be transmitted simultaneously, so YUV transmission occupies little bandwidth. Therefore, in real-time service operations such as video calls, video data is mostly generated as YUV4:2:0; that is, in such service operations, most original image data is in the YUV4:2:0 format.
As shown in fig. 6, in the YUV4:2:0 format, four Y pixels 601 share one UV pixel 602, and at this time, weighted mean filtering may be used for processing.
For the original image data, if four original brightness pixel points Y share one original chrominance pixel point UV, a plurality of filtering windows with the same structure as the original brightness pixel points are generated, e.g., 2×2 filtering windows. In general, filtering reads the image data, reads the pixel points in it, filters the pixel points, and writes them back to the image data. If the filtering window has the same structure as the original brightness pixel points, the filtering can be performed directly on the original brightness image data once the original image data has been read, thereby reducing reads and writes of the image data.
The original brightness pixel points are pixel points in original brightness image data, and the original chrominance pixel points are pixel points in original chrominance image data.
When the filtering window traverses to the current original brightness pixel point, weights are assigned to the original brightness pixel points within the filtering window, the weighted sum of the original brightness pixel points in the filtering window is computed, and the sum is assigned to the current original brightness pixel point, expressed as follows:

Y_i' = Σ_{j∈w_i} W_j · Y_ij

where w_i is the filtering window of the ith pixel point, Y_ij is the jth pixel point within the window of the ith pixel point, W_j is the weight of the jth pixel point in the filtering window, and Y_i' is the pixel point after filtering.
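A minimal sketch of this 2×2 weighted-mean filtering follows; uniform weights are an assumption (any normalized weights work), and the bottom/right edge is extended by replication:

    import numpy as np

    def weighted_mean_2x2(y, w=(0.25, 0.25, 0.25, 0.25)):
        # Each output pixel is the weighted sum of a 2x2 window,
        # matching the four-Y-per-UV layout of YUV 4:2:0.
        y = y.astype(np.float32)
        pad = np.pad(y, ((0, 1), (0, 1)), mode="edge")
        return (w[0] * pad[:-1, :-1] + w[1] * pad[:-1, 1:]
              + w[2] * pad[1:, :-1] + w[3] * pad[1:, 1:])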
And step 403, down-sampling the original illumination image data to obtain target illumination image data.
In this embodiment, as shown in fig. 5, in S503, the original illumination image data Y '(I) is downsampled by a nearest neighbor method, a bilinear interpolation method, or the like, so as to smooth and suppress noise of the original illumination image data, obtain the target illumination image data I' meeting the requirement of the Retinex model, and ensure the quality of the feature image data.
In an example, if four filtered luminance pixel points Y share one original chrominance pixel point UV, that is, the format of the original illumination image data is YUV4:2:0, bilinear interpolation is performed on the original illumination image data to obtain target illumination image data.
The filtered brightness pixel points are pixel points in the original illumination image data, and the original chrominance pixel points are pixel points in the original chrominance image data.
Bilinear interpolation, also called twice linear interpolation, is the extension of linear interpolation to an interpolation function of two variables: linear interpolation is performed once in each of the two directions. Downsampling by bilinear interpolation preserves the YUV4:2:0 structure.
As shown in fig. 7, suppose the value of the unknown function f at the point P = (x, y) is wanted, and the values of f are known at the four points Q11 = (x1, y1), Q12 = (x1, y2), Q21 = (x2, y1) and Q22 = (x2, y2).
Linear interpolation in the x direction yields:
f(x, y1) ≈ ((x2 − x)/(x2 − x1))·f(Q11) + ((x − x1)/(x2 − x1))·f(Q21)
f(x, y2) ≈ ((x2 − x)/(x2 − x1))·f(Q12) + ((x − x1)/(x2 − x1))·f(Q22)
linear interpolation in the y direction yields:
f(x, y) ≈ ((y2 − y)/(y2 − y1))·f(x, y1) + ((y − y1)/(y2 − y1))·f(x, y2)
Combining the linear interpolations in the x and y directions gives f(x, y):

f(x, y) ≈ [f(Q11)·(x2 − x)·(y2 − y) + f(Q21)·(x − x1)·(y2 − y) + f(Q12)·(x2 − x)·(y − y1) + f(Q22)·(x − x1)·(y − y1)] / [(x2 − x1)·(y2 − y1)]
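In practice the bilinear downsampling can be delegated to a library call; the sketch below is illustrative only, with a downsampling factor of 2 assumed (the patent does not fix the factor):

    import cv2
    import numpy as np

    illum = np.random.rand(480, 640).astype(np.float32)   # stand-in Y'(I) plane
    target_illum = cv2.resize(illum, None, fx=0.5, fy=0.5,
                              interpolation=cv2.INTER_LINEAR)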
step 404, determining the gamma coefficient.
Step 405, performing gamma correction on the original illumination image data according to the gamma coefficient to obtain the target brightness image data.
In a specific implementation, as shown in fig. 5, in S504, gamma correction may be performed on the original illumination image data Y 'to increase the brightness value of each pixel in the original illumination image data Y', and the image data after gamma correction is the target brightness image data Y ″.
Gamma correction edits the gamma curve of the original illumination image data to perform nonlinear tone editing on it: the high-gray and low-gray portions of the original illumination image data are detected and their proportion is increased, thereby improving the contrast of the original illumination image data. It is expressed as Y'' = Y'^γ, where γ denotes the gamma coefficient of the gamma correction and Y'' denotes the brightness value after gamma correction.
A gamma value of 1 is the dividing point: the smaller the value below 1, the stronger the expansion of the low-gray portion of the original illumination image data; the larger the value above 1, the stronger the expansion of the high-gray portion. Different gamma values can thus enhance the details of the low-gray or high-gray portions.
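A minimal sketch of the gamma step Y'' = Y'^γ on a normalized 8-bit plane; gamma = 0.6 is an assumed value (any γ < 1 brightens, per the discussion above), and a 256-entry lookup table keeps the cost low on a mobile CPU:

    import numpy as np

    def gamma_correct(y, gamma=0.6):
        # Y'' = 255 * (Y'/255) ** gamma, applied via a lookup table.
        lut = (255.0 * (np.arange(256) / 255.0) ** gamma).astype(np.uint8)
        return lut[y]          # y: uint8 luma plane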
Step 406, determine the transformation relationship.
Step 407, in the first color space, processing the original image data according to the conversion relationship to obtain the feature image data.
The original image data is in the first color space (the YUV color space), while dim light brightening with the Retinex model is performed in a second color space (e.g., the RGB color space), requiring conversion from the first color space to the second and back again. Color space conversion incurs some loss of image quality and consumes computing resources.
In this embodiment, as shown in fig. 5, an operation applied in the first color space (YUV) that is equivalent to applying the Retinex model for dim light brightening in the second color space (e.g., RGB) may be provided. That is, the operation of applying the Retinex model in the second color space is mapped into the first color space and performed directly on the original image data, thereby reducing conversions between color spaces, reducing the loss of image quality, reducing the amount of computation, and reducing the consumption of computing resources.
In a particular implementation, a transformation relationship may be determined that represents a relationship for performing a mapping of a target operation in a second color space to the first color space.
The target operation is to apply a Retinex model to carry out dim light brightening, namely, referring to original illumination image data, and resolving reflected image data from the original image data; and synthesizing the target illumination image data and the reflection image data into characteristic image data.
After the conversion relationship is determined, the components of each pixel point in the original image data (such as the luminance component Y in the original luminance image data and the chrominance component UV in the original chrominance image data) may be input into the conversion relationship in the first color space, and the components after the target operation is performed are obtained as the feature image data.
For example, for YUV color space, RGB color space, the conversion relationship for performing the target operation is as follows:
suppose I ═ I{γ-1}Where I represents the original illumination image data, γ represents the gamma coefficient, and I' represents the original enhancement coefficient.
In the RGB color space, the reflection image data is decomposed with reference to the original illumination image data, and the target illumination image data and the reflection image data are synthesized into the feature image data, expressed as follows:

R' = I^γ · (R/I) = R · I^(γ−1) = R · I'
G' = I^γ · (G/I) = G · I^(γ−1) = G · I'
B' = I^γ · (B/I) = B · I^(γ−1) = B · I'

where I^γ is the target illumination image data, and R'G'B' are the feature color components of the feature image data in the RGB color space.
On the one hand, referring to the conversion relation from YUV color space to RGB color space:
R=Y+1.4075*(V-128)
G=Y-0.3455*(U-128)-0.7169*(V-128)
B=Y+1.779*(U-128)
on the other hand, referring to the conversion relation from RGB color space to YUV color space:
Y=0.299*R+0.587*G+0.114*B
U=-0.169*R-0.331*G+0.5*B+128
V=0.5*R-0.419*G-0.081*B+128
The following conversion relations can be obtained:

Y_R = 0.299*R' + 0.587*G' + 0.114*B'
    = (0.299*R + 0.587*G + 0.114*B) * I'
    = {0.299*[Y+1.4075*(V-128)] + 0.587*[Y-0.3455*(U-128)-0.7169*(V-128)] + 0.114*[Y+1.779*(U-128)]} * I'
    = Y * I'
U' = -0.169*R' - 0.331*G' + 0.5*B' + 128 = (U-128) * I' + 128
V' = 0.5*R' - 0.419*G' - 0.081*B' + 128 = (V-128) * I' + 128

where Y_R is the feature brightness image data of the feature image data in the luminance channel of the YUV space, and U'V' are the feature chrominance image data of the feature image data in the chrominance channels of the YUV space.
The above conversion relations simplify to:

I' = I^(γ-1)
Y_R = Y · I'
U' = (U-128) · downsample(I') + 128
V' = (V-128) · downsample(I') + 128

where downsample(·) denotes downsampling, and downsample(I') denotes the target enhancement coefficient.
Then, in this example, the coefficient that brightens the original illumination image data into the target illumination image data may be calculated as the original enhancement coefficient I'.
In one example, as shown in fig. 5, the gamma coefficient γ of the gamma correction used to brighten the original illumination image data into the target brightness image data may be determined.
The original illumination image data raised to the power of a target value is calculated as the original enhancement coefficient I^(γ-1), where the target value is the gamma coefficient γ minus one.
The first product Y·I' between the original brightness image data Y and the original enhancement coefficient I' is set as the feature brightness image data Y_R in the feature image data.
The original enhancement coefficient is downsampled to obtain the target enhancement coefficient downsample(I').
The sum of a second product and a preset first parameter (e.g., 128) is set as the feature chrominance image data U'V' in the feature image data, where the second product is the original chrominance image data minus a second parameter (e.g., 128), multiplied by the target enhancement coefficient downsample(I'), i.e., (U-128)·downsample(I') and (V-128)·downsample(I').
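Putting the simplified relations together, the following is a minimal sketch of the whole YUV-domain enhancement; gamma and eps are assumed values, and the clipping and typing choices are illustrative rather than mandated by the patent:

    import cv2
    import numpy as np

    def dim_light_enhance_yuv(y, u, v, gamma=0.6, eps=1.0):
        # y: denoised full-resolution luma (illumination I);
        # u, v: half-resolution chroma planes (YUV 4:2:0).
        i = np.maximum(y.astype(np.float32), eps) / 255.0
        i_prime = i ** (gamma - 1.0)                  # original enhancement coefficient I'
        y_out = np.clip(y * i_prime, 0, 255)          # Y_R = Y * I'
        i_small = cv2.resize(i_prime, (u.shape[1], u.shape[0]),
                             interpolation=cv2.INTER_LINEAR)   # downsample(I')
        u_out = np.clip((u.astype(np.float32) - 128.0) * i_small + 128.0, 0, 255)
        v_out = np.clip((v.astype(np.float32) - 128.0) * i_small + 128.0, 0, 255)
        return (y_out.astype(np.uint8), u_out.astype(np.uint8),
                v_out.astype(np.uint8))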
And step 408, synthesizing the target brightness image data and the characteristic chrominance image data into target image data.
As shown in fig. 5, in S506, the target brightness image data Y'' replaces the feature brightness image data and is combined with the feature chrominance image data U'V' to form the target image data Y''U'V'.
In order to help those skilled in the art better understand the embodiments of the present invention, the Retinex-based dim light enhancement of image data in the embodiments of the present invention is described below by way of a specific example.
As shown in fig. 8, a user makes a video call outdoors at night: the image data 801 is image data collected in the dim light environment, the image data 802 is image data that was brightened and then beautified, and the image data 803 is image data that was dim-light enhanced by applying this embodiment and then skin-smoothed.
With respect to the image data 801, it can be seen that in a dark light environment, low luminance causes a reduction in picture quality.
For the image data 802, it can be seen that pure brightness enhancement is performed without performing denoising processing, and the effect of noise on the image quality may cause the brightness enhancement effect to become negative.
For the image data 803, it can be seen that this embodiment simultaneously mitigates the impact of low brightness on subsequent image processing and the negative impact of noise on the brightening effect. The dim light brightening effect is thus guaranteed while denoising is integrated, the computational complexity stays modest with only a small amount of added computation, and application in real-time video communication becomes possible.
It should be noted that, for simplicity of description, the method embodiments are described as a series of acts or combination of acts, but those skilled in the art will recognize that the present invention is not limited by the illustrated order of acts, as some steps may occur in other orders or concurrently in accordance with the embodiments of the present invention. Further, those skilled in the art will appreciate that the embodiments described in the specification are presently preferred and that no particular act is required to implement the invention.
EXAMPLE III
Fig. 9 is a block diagram of a structure of a dim light enhancement device provided in the third embodiment of the present invention, which may specifically include the following modules:
an original image data collecting module 901, configured to collect original image data, where the original image data includes original luminance image data representing luminance;
a denoising processing module 902, configured to perform denoising processing on the original luminance image data to obtain original illumination image data;
a target illumination image data generation module 903, configured to perform image processing on the original illumination image data to generate target illumination image data;
a brightness increasing module 904, configured to increase a brightness value of the original illumination image data as target brightness image data;
a feature image data generation module 905, configured to synthesize the target illumination image data and the reflection image data into feature image data if the original image data is superposition between the original illumination image data and the reflection image data, where the feature image data includes feature chrominance image data representing chrominance;
a target image data synthesizing module 906, configured to synthesize the target luminance image data and the feature chrominance image data into target image data.
In one embodiment of the present invention, the denoising module 902 includes:
and the filtering processing submodule is used for filtering the original brightness image data to obtain original illumination image data.
In one embodiment of the present invention, the original image data further includes original chrominance image data representing chrominance; the filtering processing submodule comprises:
a filtering window generating unit, configured to generate a plurality of filtering windows with the same original luminance pixel point structure if four original luminance pixel points share one original chrominance pixel point, where the original luminance pixel point is a pixel point in the original luminance image data, and the original chrominance pixel point is a pixel point in the original chrominance image data;
the weight configuration unit is used for configuring the weight for the original brightness pixel point in the filtering window when the filtering window traverses to the current original brightness pixel point;
and the pixel point assignment unit is used for calculating the sum value of the original brightness pixel points in the filtering window after the weights are configured and assigning the sum value to the current original brightness pixel points.
In one embodiment of the present invention, the target illumination image data generation module 903 includes:
and the down-sampling sub-module is used for down-sampling the original illumination image data to obtain target illumination image data.
In one embodiment of the invention, the downsampling sub-module comprises:
the bilinear interpolation unit is used for carrying out bilinear interpolation on the original illumination image data to obtain target illumination image data if the four filtering brightness pixel points share one original chrominance pixel point;
the filtering brightness pixel points are pixel points in the original illumination image data, and the original chrominance pixel points are pixel points in the original chrominance image data.
In one embodiment of the present invention, the brightness enhancing module 904 comprises:
a gamma coefficient determination submodule for determining a gamma coefficient;
and the gamma correction sub-module is used for performing gamma correction on the original illumination image data according to the gamma coefficient to obtain the target brightness image data.
In one embodiment of the invention, the original image data and the feature image data are both located in a first color space;
the feature image data generation module 905 includes:
a conversion relation determining submodule for determining a conversion relation for representing a relation of performing a target operation mapping in a second color space to the first color space;
the conversion processing submodule is used for processing the original image data according to the conversion relation in the first color space to obtain characteristic image data;
wherein the target operation is to resolve reflected image data from the original image data with reference to the original illumination image data; and synthesizing the target illumination image data and the reflection image data into characteristic image data.
In one embodiment of the present invention, the original image data further includes original chrominance image data representing chrominance;
the conversion processing sub-module comprises:
an original enhancement coefficient calculation unit configured to calculate a coefficient for brightening the original illumination image data into the target luminance image data as an original enhancement coefficient;
a target luminance image data generation unit configured to set a first product between the original luminance image data and the original enhancement coefficient as characteristic luminance image data in characteristic image data;
the down-sampling unit is used for down-sampling the original enhancement coefficient to obtain a target enhancement coefficient;
and the down-sampling generation unit is used for setting a sum value between a second product and a preset first parameter as the characteristic chrominance image data in the characteristic image data, wherein the second product is the product of the original chrominance image data minus the second parameter and multiplied by the target enhancement coefficient.
In one embodiment of the present invention, the original enhancement coefficient calculation unit includes:
a gamma coefficient determination subunit, configured to determine a gamma coefficient in gamma correction, where the gamma correction is used to brighten the original illumination image data into the target brightness image data;
and the power calculating subunit is used for calculating the power of a target value of the original illumination image data as an original enhancement coefficient, wherein the target value is a difference value obtained by subtracting one from the gamma coefficient.
The dim light enhancement device provided by the embodiment of the invention can execute the dim light enhancement method provided by any embodiment of the invention, and has corresponding functional modules and beneficial effects of the execution method.
Example four
Fig. 10 is a schematic structural diagram of a mobile terminal according to a fourth embodiment of the present invention. FIG. 10 illustrates a block diagram of an exemplary mobile terminal 12 suitable for use in implementing embodiments of the present invention. The mobile terminal 12 shown in fig. 10 is only an example and should not bring any limitation to the function and the scope of use of the embodiments of the present invention.
As shown in fig. 10, the mobile terminal 12 is embodied in the form of a general purpose computing device. The components of the mobile terminal 12 may include, but are not limited to: one or more processors or processing units 16, a system memory 28, and a bus 18 that couples various system components including the system memory 28 and the processing unit 16.
Bus 18 represents one or more of any of several types of bus structures, including a memory bus or memory controller, a peripheral bus, an accelerated graphics port, a processor, or a local bus using any of a variety of bus architectures. By way of example, such architectures include, but are not limited to, Industry Standard Architecture (ISA) bus, Micro Channel Architecture (MCA) bus, enhanced ISA bus, Video Electronics Standards Association (VESA) local bus, and Peripheral Component Interconnect (PCI) bus.
The mobile terminal 12 typically includes a variety of computer system readable media. Such media may be any available media that is accessible by mobile terminal 12 and includes both volatile and nonvolatile media, removable and non-removable media.
The system memory 28 may include computer system readable media in the form of volatile memory, such as Random Access Memory (RAM)30 and/or cache memory 32. The mobile terminal 12 may further include other removable/non-removable, volatile/nonvolatile computer system storage media. By way of example only, storage system 34 may be used to read from and write to non-removable, nonvolatile magnetic media (not shown in FIG. 10, and commonly referred to as a "hard drive"). Although not shown in FIG. 10, a magnetic disk drive for reading from and writing to a removable, nonvolatile magnetic disk (e.g., a "floppy disk") and an optical disk drive for reading from or writing to a removable, nonvolatile optical disk (e.g., a CD-ROM, DVD-ROM, or other optical media) may be provided. In these cases, each drive may be connected to bus 18 by one or more data media interfaces. Memory 28 may include at least one program product having a set (e.g., at least one) of program modules that are configured to carry out the functions of embodiments of the invention.
A program/utility 40 having a set (at least one) of program modules 42 may be stored, for example, in memory 28; such program modules 42 include, but are not limited to, an operating system, one or more application programs, other program modules, and program data, and each of these examples, or some combination thereof, may comprise an implementation of a network environment. The program modules 42 generally carry out the functions and/or methodologies of the described embodiments of the invention.
The mobile terminal 12 may also communicate with one or more external devices 14 (e.g., a keyboard, a pointing device, a display 24, etc.), with one or more devices that enable a user to interact with the mobile terminal 12, and/or with any device (e.g., a network card, a modem, etc.) that enables the mobile terminal 12 to communicate with one or more other computing devices. Such communication may occur through an input/output (I/O) interface 22. Also, the mobile terminal 12 may communicate with one or more networks (e.g., a local area network (LAN), a wide area network (WAN), and/or a public network such as the Internet) via the network adapter 20. As shown, the network adapter 20 communicates with the other modules of the mobile terminal 12 via the bus 18. It should be appreciated that although not shown, other hardware and/or software modules may be used in conjunction with the mobile terminal 12, including but not limited to: microcode, device drivers, redundant processing units, external disk drive arrays, RAID systems, tape drives, and data backup storage systems.
The processing unit 16 executes programs stored in the system memory 28 to perform various functional applications and data processing, for example, implementing the dim light enhancement method provided by the embodiments of the present invention.
Example Five
An embodiment of the present invention further provides a computer-readable storage medium on which a computer program is stored. When executed by a processor, the computer program implements each process of the dim light enhancement method described above and achieves the same technical effects; to avoid repetition, the details are not repeated here.
A computer-readable storage medium may be, for example, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples (a non-exhaustive list) of the computer-readable storage medium include the following: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of this document, a computer-readable storage medium may be any tangible medium that can contain or store a program for use by or in connection with an instruction execution system, apparatus, or device.
It should be noted that the foregoing is merely illustrative of preferred embodiments of the present invention and of the technical principles employed. Those skilled in the art will understand that the present invention is not limited to the particular embodiments described herein, and that various obvious changes, rearrangements, and substitutions can be made without departing from the scope of the invention. Therefore, although the present invention has been described in some detail through the above embodiments, it is not limited to those embodiments and may include other equivalent embodiments without departing from its spirit; the scope of the present invention is determined by the appended claims.

Claims (11)

1. A method of dim light enhancement, comprising:
acquiring original image data, wherein the original image data comprises original brightness image data representing brightness;
denoising the original brightness image data to obtain original illumination image data;
down-sampling the original illumination image data to obtain target illumination image data;
increasing the brightness value of the original illumination image data to obtain target brightness image data;
if the original image data is the superposition of the original illumination image data and the reflection image data, synthesizing the target illumination image data and the reflection image data into characteristic image data, wherein the characteristic image data comprises characteristic chromaticity image data representing chromaticity;
and synthesizing the target brightness image data and the characteristic chrominance image data into target image data.
2. The method of claim 1, wherein the denoising of the original brightness image data to obtain original illumination image data comprises:
and filtering the original brightness image data to obtain original illumination image data.
3. The method of claim 2, wherein the original image data further comprises original chrominance image data representing chrominance;
the filtering of the original brightness image data to obtain original illumination image data comprises:
if four original brightness pixel points share one original chroma pixel point, generating a plurality of filtering windows of the same structure for the original brightness pixel points, wherein the original brightness pixel points are pixel points in the original brightness image data, and the original chroma pixel points are pixel points in the original chroma image data;
when the filtering window traverses to a current original brightness pixel point, configuring weights for the original brightness pixel points in the filtering window;
and calculating the sum of the weighted original brightness pixel points in the filtering window, and assigning the sum to the current original brightness pixel point.
4. The method of claim 3, wherein the original image data further comprises original chrominance image data representing chrominance; the down-sampling of the original illumination image data to obtain target illumination image data comprises the following steps:
if four filtering brightness pixel points share one original chrominance pixel point, performing bilinear interpolation on the original illumination image data to obtain target illumination image data;
the filtering brightness pixel points are pixel points in the original illumination image data, and the original chrominance pixel points are pixel points in the original chrominance image data.
5. The method according to claim 1, wherein the increasing of the brightness value of the original illumination image data to obtain target brightness image data comprises:
determining a gamma coefficient;
and carrying out gamma correction on the original illumination image data according to the gamma coefficient to obtain target brightness image data.
6. The method of any of claims 1-5, wherein the original image data and the characteristic image data are both located in a first color space;
if the original image data is the superposition between the original illumination image data and the reflection image data, synthesizing the target illumination image data and the reflection image data into characteristic image data, including:
determining a conversion relation, wherein the conversion relation represents how a target operation performed in a second color space maps into the first color space;
processing the original image data according to the conversion relation in the first color space to obtain characteristic image data;
wherein the target operation is to separate the reflection image data from the original image data with reference to the original illumination image data, and to synthesize the target illumination image data and the reflection image data into the characteristic image data.
7. The method of claim 6, wherein the original image data further comprises original chrominance image data representing chrominance;
the processing of the original image data according to the conversion relation in the first color space to obtain characteristic image data comprises:
calculating a coefficient for brightening the original illumination image data into the target brightness image data as an original enhancement coefficient;
setting a first product of the original brightness image data and the original enhancement coefficient as the characteristic brightness image data in the characteristic image data;
down-sampling the original enhancement coefficient to obtain a target enhancement coefficient;
setting the sum of a second product and 128 as the characteristic chrominance image data in the characteristic image data, wherein the second product is obtained by subtracting 128 from the original chrominance image data and multiplying the difference by the target enhancement coefficient.
8. The method of claim 7, wherein the calculating of a coefficient for brightening the original illumination image data into the target brightness image data as an original enhancement coefficient comprises:
determining a gamma coefficient in gamma correction, wherein the gamma correction is used to brighten the original illumination image data into the target brightness image data;
and raising the original illumination image data to the power of a target value to obtain the original enhancement coefficient, wherein the target value is the difference obtained by subtracting one from the gamma coefficient.
9. A dim light enhancement device, comprising:
the original image data acquisition module is used for acquiring original image data, wherein the original image data comprises original brightness image data representing brightness;
the de-noising processing module is used for de-noising the original brightness image data to obtain original illumination image data;
the target illumination image data generation module is used for down-sampling the original illumination image data to obtain target illumination image data;
the brightness improving module is used for improving the brightness value of the original illumination image data to serve as target brightness image data;
the characteristic image data generation module is used for synthesizing the target illumination image data and the reflection image data into characteristic image data if the original image data is the superposition of the original illumination image data and the reflection image data, wherein the characteristic image data comprises characteristic chromaticity image data representing chromaticity;
and the target image data synthesis module is used for synthesizing the target brightness image data and the characteristic chromaticity image data into target image data.
10. A mobile terminal, characterized in that the mobile terminal comprises:
one or more processors;
a memory for storing one or more programs,
wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the dim light enhancement method according to any one of claims 1-8.
11. A computer-readable storage medium, on which a computer program is stored which, when being executed by a processor, carries out the dim light enhancement method according to any one of claims 1-8.
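As a reading aid only, and not as part of the claims, the following self-contained NumPy sketch walks through the pipeline recited in claims 1-8 on 8-bit YUV 4:2:0 planes of even dimensions. The box filter standing in for the weighted filtering window, the 2x2 averaging used as bilinear downsampling, the gamma value of 0.6, and all function names are illustrative assumptions, not the patented implementation:

```python
import numpy as np

def box_filter(luma, radius=2):
    # Claims 2-3: denoise the luminance plane with filtering windows of
    # identical structure; equal weights (a box filter) are assumed here.
    k = 2 * radius + 1
    h, w = luma.shape
    padded = np.pad(luma, radius, mode="edge")
    out = np.zeros((h, w), dtype=np.float32)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + h, dx:dx + w]
    return out / (k * k)

def downsample_2x(illum):
    # Claim 4: bring the illumination map down to chroma resolution
    # (four luma pixels per chroma pixel in 4:2:0). Averaging each 2x2
    # block equals bilinear interpolation at the half-pixel centers.
    h, w = (illum.shape[0] // 2) * 2, (illum.shape[1] // 2) * 2
    i = illum[:h, :w]
    return (i[0::2, 0::2] + i[0::2, 1::2] +
            i[1::2, 0::2] + i[1::2, 1::2]) / 4.0

def dim_light_enhance(y, u, v, gamma=0.6):
    # Claim 1: y is the (H, W) brightness plane; u and v are the
    # (H/2, W/2) chrominance planes of one 4:2:0 frame.
    luma = y.astype(np.float32) / 255.0
    illum = np.clip(box_filter(luma), 1e-4, 1.0)   # claims 2-3
    target_illum = downsample_2x(illum)            # claim 4
    target_luma = np.power(illum, gamma)           # claim 5
    # Claims 7-8: the enhancement coefficient is illumination**(gamma-1),
    # applied to the chroma planes around their 128 zero point.
    target_coeff = np.power(target_illum, gamma - 1.0)

    def enhance_chroma(c):
        return (c.astype(np.float32) - 128.0) * target_coeff + 128.0

    out_y = np.clip(target_luma * 255.0, 0, 255).astype(np.uint8)
    out_u = np.clip(enhance_chroma(u), 0, 255).astype(np.uint8)
    out_v = np.clip(enhance_chroma(v), 0, 255).astype(np.uint8)
    return out_y, out_u, out_v
```

Feeding the three planes of a 4:2:0 frame through dim_light_enhance yields the brightened frame; in the patented method the gamma coefficient would be determined as the description provides, rather than fixed at 0.6.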
CN202010779098.0A 2020-08-05 2020-08-05 Dim light enhancement method and device, mobile terminal and storage medium Active CN111918095B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010779098.0A CN111918095B (en) 2020-08-05 2020-08-05 Dim light enhancement method and device, mobile terminal and storage medium

Publications (2)

Publication Number Publication Date
CN111918095A CN111918095A (en) 2020-11-10
CN111918095B (en) 2022-06-03

Family

ID=73287866

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010779098.0A Active CN111918095B (en) 2020-08-05 2020-08-05 Dim light enhancement method and device, mobile terminal and storage medium

Country Status (1)

Country Link
CN (1) CN111918095B (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112580672A (en) * 2020-12-28 2021-03-30 安徽创世科技股份有限公司 License plate recognition preprocessing method and device suitable for dark environment and storage medium
CN113344801A (en) * 2021-03-04 2021-09-03 北京市燃气集团有限责任公司 Image enhancement method, system, terminal and storage medium applied to gas metering facility environment
CN114998120B (en) * 2022-05-17 2024-01-12 深圳小湃科技有限公司 Dim light image optimization training method, intelligent terminal and computer readable storage medium
CN116684739A (en) * 2023-06-20 2023-09-01 广东电网有限责任公司广州供电局 Image acquisition method and device for outdoor operation robot and computer equipment
CN117408906B (en) * 2023-12-14 2024-03-19 中南大学 Low-light level image enhancement method and system

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105303532A (en) * 2015-10-21 2016-02-03 北京工业大学 Wavelet domain Retinex image defogging method

Family Cites Families (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN100562067C (en) * 2007-07-26 2009-11-18 上海交通大学 The real time digital image processing and enhancing method that has noise removal function
CN102318330B (en) * 2009-02-13 2014-07-23 奥西-技术有限公司 Image processing system for processing a digital image and image processing method of processing a digital image
US8526736B2 (en) * 2010-10-29 2013-09-03 JVC Kenwood Corporation Image processing apparatus for correcting luminance and method thereof
CN104346776B (en) * 2013-08-02 2017-05-24 杭州海康威视数字技术股份有限公司 Retinex-theory-based nonlinear image enhancement method and system
JP6160426B2 (en) * 2013-10-04 2017-07-12 Fuji Xerox Co., Ltd. Image processing apparatus and program
JP6786850B2 (en) * 2016-04-07 2020-11-18 Fuji Xerox Co., Ltd. Image processing equipment, image processing methods, image processing systems and programs
CN107871303B (en) * 2016-09-26 2020-11-27 北京金山云网络技术有限公司 Image processing method and device
CN107527332B (en) * 2017-10-12 2020-07-31 长春理工大学 Low-illumination image color retention enhancement method based on improved Retinex
CN108122213B (en) * 2017-12-25 2019-02-12 北京航空航天大学 A kind of soft image Enhancement Method based on YCrCb

Also Published As

Publication number Publication date
CN111918095A (en) 2020-11-10

Similar Documents

Publication Publication Date Title
CN111918095B (en) Dim light enhancement method and device, mobile terminal and storage medium
US9495582B2 (en) Digital makeup
US10719918B2 (en) Dynamically determining filtering strength for noise filtering in image processing
US8766999B2 (en) Systems and methods for local tone mapping of high dynamic range images
US9390478B2 (en) Real time skin smoothing image enhancement filter
WO2018126962A1 (en) Systems and methods for enhancing edges in images
KR20160102524A (en) Method for inverse tone mapping of an image
US10002408B2 (en) Restoring color and infrared images from mosaic data
CN110248242B (en) Image processing and live broadcasting method, device, equipment and storage medium
US11017511B2 (en) Method and system of haze reduction for image processing
CN111260580B (en) Image denoising method, computer device and computer readable storage medium
CN111970432A (en) Image processing method and image processing device
CN111556227A (en) Video denoising method and device, mobile terminal and storage medium
CN115035011A (en) Low-illumination image enhancement method for self-adaptive RetinexNet under fusion strategy
WO2016197323A1 (en) Video encoding and decoding method, and video encoder/decoder
US11656457B2 (en) Method and device for correcting chromatic aberration in multiple bands
CN111915528A (en) Image brightening method and device, mobile terminal and storage medium
CN111899197B (en) Image brightening and denoising method and device, mobile terminal and storage medium
WO2022111269A1 (en) Method and device for enhancing video details, mobile terminal, and storage medium
CN111915529A (en) Video dim light enhancement method and device, mobile terminal and storage medium
KR102585573B1 (en) Content-based image processing
CN110874816B (en) Image processing method, device, mobile terminal and storage medium
CN111899197A (en) Image brightening and denoising method and device, mobile terminal and storage medium
CN113284058A (en) Underwater image enhancement method based on migration theory
KR100520970B1 (en) Method and apparatus for transforming a high dynamic range image into a low dynamic range image

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20221207

Address after: 31A, 15/F, Building 30, Mapletree Business City, Pasir Panjang Road, Singapore

Patentee after: Baiguoyuan Technology (Singapore) Co.,Ltd.

Address before: 5-13/F, West Tower, Building C, 274 Xingtai Road, Shiqiao Street, Panyu District, Guangzhou City, Guangdong Province, 511402

Patentee before: GUANGZHOU BAIGUOYUAN INFORMATION TECHNOLOGY Co.,Ltd.