CN115239570A - Image processing method, image processing apparatus, and storage medium - Google Patents


Info

Publication number
CN115239570A
Authority
CN
China
Prior art keywords
image
brightness
processing
layer
image layer
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110441726.9A
Other languages
Chinese (zh)
Inventor
刘月雷
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Xiaomi Mobile Software Co Ltd
Original Assignee
Beijing Xiaomi Mobile Software Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Xiaomi Mobile Software Co Ltd filed Critical Beijing Xiaomi Mobile Software Co Ltd
Priority to CN202110441726.9A priority Critical patent/CN115239570A/en
Publication of CN115239570A publication Critical patent/CN115239570A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/90Dynamic range modification of images or parts thereof
    • G06T5/94Dynamic range modification of images or parts thereof based on local image properties, e.g. for local contrast enhancement
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/40Image enhancement or restoration using histogram techniques
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/50Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/70Denoising; Smoothing
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/90Determination of colour characteristics
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20024Filtering details
    • G06T2207/20028Bilateral filtering
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20212Image combination
    • G06T2207/20221Image fusion; Image merging

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Processing (AREA)

Abstract

The present disclosure relates to an image processing method, an image processing apparatus, and a storage medium. The method comprises the following steps: obtaining a first image layer containing first frequency information in a first image based on the first image; obtaining a second image layer containing second frequency information in the first image according to the information difference between the first image and the first image layer; an image frequency of the first frequency information is lower than an image frequency of the second frequency information; performing image enhancement processing on the second image layer; and carrying out fusion processing on the second image layer subjected to the image enhancement processing and the first image layer to obtain a second image.

Description

Image processing method, image processing apparatus, and storage medium
Technical Field
The present disclosure relates to the field of image technologies, and in particular, to an image processing method, an image processing apparatus, and a storage medium.
Background
In recent years, the camera has become an indispensable function of intelligent terminals, and users place ever higher demands on camera imaging quality; the quality of images and video has therefore become a focal point of mobile phone technology. The brightness of an image or video plays a crucial role in the imaging process and directly determines the final imaging effect of the camera. To obtain a better visual experience on a display device, the displayed image needs a wide dynamic range and high contrast; adjusting the dynamic range of an image is therefore an essential step in image processing.
At present, the dynamic range of an image is typically adjusted in one of two ways: a single adjustment curve is applied directly to the global dynamic range of the image, which often yields poor local contrast; or the dynamic range of local image information is adjusted, which increases local contrast but has higher computational complexity.
Disclosure of Invention
The present disclosure provides an image processing method, an image processing apparatus, and a storage medium.
According to a first aspect of embodiments of the present disclosure, there is provided an image processing method, including:
obtaining a first image layer containing first frequency information in a first image based on the first image;
obtaining a second image layer containing second frequency information in the first image according to the information difference between the first image and the first image layer; an image frequency of the first frequency information is lower than an image frequency of the second frequency information;
performing image enhancement processing on the second image layer;
and carrying out fusion processing on the second image layer subjected to the image enhancement processing and the first image layer to obtain a second image.
Optionally, the method further includes:
dividing each pixel in a third image into different brightness areas according to the brightness value of each pixel in the third image;
determining brightness adjusting functions corresponding to the brightness areas according to the different brightness areas; wherein, the brightness adjusting functions corresponding to different brightness areas are different;
and adjusting pixels in different brightness areas in the third image according to the brightness adjustment function corresponding to each brightness area to obtain the first image.
Optionally, the adjusting pixels in different brightness regions in the third image according to the brightness adjustment function corresponding to each brightness region to obtain the first image includes:
determining a first brightness adjustment curve according to the brightness adjustment function corresponding to each brightness area;
acquiring a histogram equalization curve of a Y-channel image corresponding to the third image;
obtaining a second brightness adjustment curve based on the histogram equalization curve and the first brightness adjustment curve;
and adjusting pixels in different brightness areas in the third image based on the second brightness adjustment curve to obtain the first image.
Optionally, the brightness region at least includes: a first luminance region, a second luminance region, and a third luminance region;
the brightness adjusting function corresponding to the first brightness area is a Gaussian function;
the brightness adjusting function corresponding to the second brightness area is a linear function;
the brightness adjusting function corresponding to the third brightness area is a sigmoid function;
the brightness value corresponding to the first brightness area is smaller than the brightness value corresponding to the second brightness area, and the brightness value corresponding to the second brightness area is smaller than the brightness value corresponding to the third brightness area.
Optionally, the method further includes:
determining a first proportional brightness parameter and a second proportional brightness parameter based on the number of pixel points with different brightness in the Y-channel image corresponding to the third image;
performing brightness stretching on the second image based on the first proportional brightness parameter and the second proportional brightness parameter to obtain a fourth image; the first proportional brightness parameter is a brightness value corresponding to a first proportional number of pixel points in the Y-channel image; the second proportion brightness parameter is the brightness value corresponding to the pixel points with the second proportion quantity in the Y channel image; the first proportional quantity is greater than the second proportional quantity.
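The proportional brightness stretch described above can be sketched as a percentile-based linear stretch. The concrete proportions (0.99 for the first proportional brightness parameter, 0.01 for the second) and the clamping to [0, L-1] are assumptions of this sketch.

```python
def percentile_stretch(pixels, p_hi=0.99, p_lo=0.01, L=256):
    """Stretch a flat list of luminance values so that the brightness at the
    p_lo cumulative proportion maps to 0 and the brightness at the p_hi
    cumulative proportion maps to L-1 (proportions are illustrative)."""
    n = len(pixels)
    srt = sorted(pixels)
    lo = srt[min(n - 1, int(p_lo * n))]   # second proportional brightness parameter
    hi = srt[min(n - 1, int(p_hi * n))]   # first proportional brightness parameter
    if hi == lo:
        return list(pixels)               # flat image: nothing to stretch
    scale = (L - 1) / (hi - lo)
    return [min(L - 1, max(0, round((v - lo) * scale))) for v in pixels]
```

Anchoring the stretch to the two cumulative-count brightness values rather than the absolute minimum and maximum makes it robust to a few outlier pixels.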
Optionally, the obtaining a first image layer including first frequency information in the first image based on the first image includes:
and performing guided filtering on the first image to obtain a first image layer.
Optionally, the method further includes:
performing brightness enhancement processing on the first image layer;
the fusion processing of the second image layer and the first image layer after the image enhancement processing to obtain the second image comprises:
and carrying out fusion processing on the second image layer subjected to the image enhancement processing and the first image layer subjected to the brightness enhancement processing to obtain a second image.
According to a second aspect of the embodiments of the present disclosure, there is provided an image processing apparatus, the apparatus including:
the processing module is used for obtaining a first image layer containing first frequency information in a first image based on the first image; obtaining a second image layer containing second frequency information in the first image according to the information difference between the first image and the first image layer; an image frequency of the first frequency information is lower than an image frequency of the second frequency information;
the enhancement module is used for carrying out image enhancement processing on the second image layer;
and the fusion module is used for carrying out fusion processing on the second image layer and the first image layer after the image enhancement processing to obtain a second image.
Optionally, the apparatus further comprises:
the determining module is used for dividing each pixel in a third image into different brightness areas according to the brightness value of each pixel in the third image; determining brightness adjusting functions corresponding to the brightness areas according to the different brightness areas; wherein, the brightness adjusting functions corresponding to different brightness areas are different;
and the adjusting module is used for adjusting pixels in different brightness areas in the third image according to the brightness adjusting function corresponding to each brightness area to obtain the first image.
Optionally, the adjusting module is further configured to:
determining a first brightness adjustment curve according to the brightness adjustment function corresponding to each brightness area;
acquiring a histogram equalization curve of a Y-channel image corresponding to the third image;
obtaining a second brightness adjustment curve based on the histogram equalization curve and the first brightness adjustment curve;
and adjusting pixels in different brightness areas in the third image based on the second brightness adjustment curve to obtain the first image.
Optionally, the brightness region at least includes: a first luminance region, a second luminance region, and a third luminance region;
the brightness adjusting function corresponding to the first brightness region is a Gaussian function;
the brightness adjusting function corresponding to the second brightness area is a linear function;
the brightness adjusting function corresponding to the third brightness area is a sigmoid function;
the brightness value corresponding to the first brightness region is smaller than the brightness value corresponding to the second brightness region, and the brightness value corresponding to the second brightness region is smaller than the brightness value corresponding to the third brightness region.
Optionally, the apparatus further comprises: a stretching module to:
determining a first proportional brightness parameter and a second proportional brightness parameter based on the number of pixel points with different brightness in the Y-channel image corresponding to the third image;
performing brightness stretching on the second image based on the first proportional brightness parameter and the second proportional brightness parameter to obtain a fourth image; the first proportional brightness parameter is a brightness value corresponding to a first proportional number of pixel points in the Y-channel image; the second proportion brightness parameter is the brightness value corresponding to the pixel points with the second proportion quantity in the Y channel image; the first proportional quantity is greater than the second proportional quantity.
Optionally, the processing module is further configured to:
and performing guided filtering on the first image to obtain a first image layer.
Optionally, the enhancing module is further configured to:
performing brightness enhancement processing on the first image layer;
the fusion module is further configured to:
and carrying out fusion processing on the second image layer subjected to the image enhancement processing and the first image layer subjected to the brightness enhancement processing to obtain a second image.
According to a third aspect of the embodiments of the present disclosure, there is provided an image processing apparatus including:
a processor;
a memory for storing processor-executable instructions;
wherein the processor is configured to execute the instructions to implement the steps of the method according to the first aspect of the embodiments of the present disclosure.
According to a fourth aspect of embodiments of the present disclosure, there is provided a non-transitory computer-readable storage medium having instructions which, when executed by a processor of a mobile terminal, enable the mobile terminal to perform the steps of the method according to the first aspect of embodiments of the present disclosure.
The technical scheme provided by the embodiment of the disclosure can have the following beneficial effects:
the image processing method provided by the embodiment of the disclosure obtains a first image layer including image background information based on a first image; causing the first image layer to lose image detail information with respect to the first image itself. In the embodiment of the disclosure, a second image layer comprising image detail information is obtained according to the information difference between the first image and the first image layer; fusing the second image layer information subjected to image enhancement processing with the first image layer by performing image enhancement processing on the second image layer to obtain a second image; because the second image layer comprising the image detail information is subjected to detail enhancement, the second image obtained by fusion can effectively highlight the image details, and the local contrast of the second image is effectively improved, so that the second image has better image quality relative to the first image before processing, and the optimization of the image visual effect is realized.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the invention and together with the description, serve to explain the principles of the invention.
Fig. 1 is a flowchart illustrating an image processing method according to an exemplary embodiment.
FIG. 2 is an original image shown in an exemplary embodiment.
FIG. 3 is a histogram of an original image shown in an exemplary embodiment.
FIG. 4 is an image with an adjusted global dynamic range, as shown in an exemplary embodiment.
FIG. 5 is a histogram of a global dynamic range adjusted image, shown in an exemplary embodiment.
FIG. 6 is an image with an adjusted local dynamic range, according to an exemplary embodiment.
Fig. 7 is a detail view of the region indicated by reference numeral 601 in fig. 6.
Fig. 8 is a detail view of the region indicated by reference numeral 602 in fig. 6.
Fig. 9 is a first flowchart of an image processing method according to an embodiment of the disclosure.
Fig. 10 is a second flowchart of an image processing method according to an embodiment of the disclosure.
Fig. 11 is a third flowchart of an image processing method shown in an exemplary embodiment.
Fig. 12 is a fourth flowchart of an image processing method according to an exemplary embodiment.
Fig. 13 is a flowchart illustrating a global brightness adjustment according to an exemplary embodiment.
FIG. 14 is a diagram illustrating a basic adjustment curve in accordance with an exemplary embodiment.
Fig. 15 is a diagram illustrating a histogram equalization curve in accordance with an exemplary embodiment.
Fig. 16 is a diagram illustrating a global brightness adjustment curve according to an exemplary embodiment.
Fig. 17 is a flow chart illustrating a local contrast adjustment according to an exemplary embodiment.
Fig. 18 is a flowchart illustrating an image brightness stretching according to an exemplary embodiment.
FIG. 19 is a diagram illustrating an image histogram in accordance with an exemplary embodiment.
Fig. 20 is a detail diagram of an image processed by an image processing method according to an exemplary embodiment.
Fig. 21 is a detail diagram of an image processed by an image processing method according to an exemplary embodiment.
Fig. 22 is a schematic configuration diagram illustrating an image processing apparatus according to an exemplary embodiment.
FIG. 23 is a block diagram of an electronic device shown in accordance with an example embodiment.
Detailed Description
Reference will now be made in detail to the exemplary embodiments, examples of which are illustrated in the accompanying drawings. The following description refers to the accompanying drawings in which the same numbers in different drawings represent the same or similar elements unless otherwise indicated. The implementations described in the following exemplary examples do not represent all implementations consistent with the present invention. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the invention, as detailed in the appended claims.
As shown in fig. 1, fig. 1 is a flowchart illustrating an image processing method according to an exemplary embodiment. Dynamic range adjustment methods can be divided into global dynamic range adjustment methods and local dynamic range adjustment methods. A global method usually performs gray mapping on the image directly based on an adjustment curve (such as a Gamma curve) to obtain the adjusted image. A local method generally divides the image into a basic image layer and a detail image layer, adjusts the basic image layer, and superimposes the adjusted basic image layer and the detail image layer to obtain the adjusted image.
Illustratively, the effect of adjusting the global dynamic range of the image can be achieved by performing histogram equalization processing on the image. The histogram equalization process is as follows:
For an image, its histogram Hist(x) is defined as follows:
Hist(x) = n_x;
where x is a luminance value, x = 0, 1, …, L-1, and n_x is the number of pixels in the image having luminance value x.
Determining a brightness probability density function of the image according to the histogram information of the image; the luminance probability density function may be expressed as:
p(x) = Hist(x) / N;
where p(x) is the luminance probability density function; x is a brightness value, x = 0, 1, …, L-1; Hist(x) is the number of pixels with brightness value x in the image; and N is the total number of pixels in the image.
Determining a cumulative distribution function of the image according to the brightness probability density function of the image; the cumulative distribution function may be expressed as:
c(x) = p(0) + p(1) + … + p(x);
where c(x) is the cumulative distribution function of the image, and p(k) is the luminance probability density value for luminance value k; x is a brightness value, x = 0, 1, …, L-1.
And performing gray mapping on the image based on the cumulative distribution function of the image to obtain an adjusted image. The adjusted image may be represented as:
f(x) = (L-1) · c(x);
where f(x) is the adjusted luminance value of an image pixel, and c(x) is the cumulative distribution function of the image.
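The histogram-equalization pipeline above, Hist(x), then p(x), then c(x), then the gray mapping f(x) = (L-1)·c(x), can be sketched in pure Python:

```python
def equalize(pixels, levels=256):
    """Histogram-equalize a flat list of integer luminance values in
    [0, levels-1], following Hist(x), p(x), c(x), and f(x) = (L-1)*c(x)."""
    n = len(pixels)
    hist = [0] * levels            # Hist(x): pixel count per luminance value
    for v in pixels:
        hist[v] += 1
    lut, cum = [0] * levels, 0.0
    for x in range(levels):
        cum += hist[x] / n                   # c(x): cumulative distribution
        lut[x] = round((levels - 1) * cum)   # f(x) = (L-1) * c(x)
    return [lut[v] for v in pixels]
```

The lookup table lut is exactly the gray mapping f(x), applied once per pixel.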
As shown in fig. 2, 3, 4 and 5, fig. 2 is an original image shown in an exemplary embodiment; FIG. 3 is a histogram of an original image shown in an exemplary embodiment. FIG. 4 is an image with an adjusted global dynamic range shown in an exemplary embodiment; FIG. 5 is a histogram of a global dynamic range adjusted image, shown in an exemplary embodiment.
The global dynamic range adjustment method adjusts the global dynamic range directly through a single curve; its computational complexity is low, but the local contrast of the adjusted image is poor.
Also illustratively, the image may be subject to local dynamic range adjustments via a local contrast enhancement algorithm based on image decomposition. The algorithm model may be represented by the following equation:
img_out = LFP(img_in) · gain + (img_in - LFP(img_in));
where img_out is the adjusted image; img_in is the input original image; LFP(·) is a low-pass filter, for which a bilateral filter is adopted; and gain is a preset adjustment parameter.
Performing low-pass filtering on an input image to obtain a smoothed image, and determining the smoothed image as a basic image layer; subtracting the basic image layer from the input image to obtain a detail image layer; adjusting the dynamic range of the basic image layer by presetting adjustment parameters; and fusing the adjusted basic image layer and the detail image layer to obtain an output image. Therefore, the dynamic range of the image is adjusted, and the detail information of the image can be kept.
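The decomposition pipeline above can be sketched in one dimension. A simple box average stands in for the bilateral low-pass filter LFP(·), and gain is the preset adjustment parameter:

```python
def box_lowpass(row, radius=1):
    """1-D box average as a stand-in low-pass filter (the text above uses a
    bilateral filter for LFP)."""
    n = len(row)
    out = []
    for i in range(n):
        lo, hi = max(0, i - radius), min(n, i + radius + 1)
        out.append(sum(row[lo:hi]) / (hi - lo))
    return out

def local_contrast(row, gain=0.8):
    """img_out = LFP(img_in) * gain + (img_in - LFP(img_in)): the basic layer
    is scaled by gain while the detail layer (the residual) is kept intact."""
    base = box_lowpass(row)
    return [b * gain + (v - b) for v, b in zip(row, base)]
```

Because only the basic layer is scaled, the dynamic range changes while the high-frequency detail survives unchanged.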
Fig. 6 is a local dynamic range adjusted image shown in an exemplary embodiment. Although the local dynamic range adjustment method can better preserve the local details of the image and achieves a better local contrast effect, its implementation complexity is higher, and a gradient inversion phenomenon easily occurs at image details, introducing artifacts into the image. As shown in fig. 7 and 8, fig. 7 is a detail view of the region indicated by reference numeral 601 in fig. 6, and fig. 8 is a detail view of the region indicated by reference numeral 602 in fig. 6. Color cast and white lines appear near the branches in the region indicated by reference numeral 701 in fig. 7; locally uneven, smear-like color patches appear on the glass in the region indicated by reference numeral 801 in fig. 8.
Based on this, the embodiment of the present disclosure provides an image processing method. Fig. 9 is a flowchart illustrating a first image processing method according to an embodiment of the disclosure, and as shown in fig. 9, the method includes the following steps:
step S101, obtaining a first image layer containing first frequency information in a first image based on the first image;
step S102, obtaining a second image layer containing second frequency information in the first image according to the information difference between the first image and the first image layer; an image frequency of the first frequency information is lower than an image frequency of the second frequency information;
step S103, carrying out image enhancement processing on the second image layer;
and step S104, carrying out fusion processing on the second image layer subjected to the image enhancement processing and the first image layer to obtain a second image.
It should be noted that the image processing method can be applied to a mobile terminal and can also be applied to a server. When the method is applied to the server, the mobile terminal can send the collected first image to the server, and after the server processes the first image by adopting the steps of S101-S104 to obtain a second image, the second image is sent to the mobile terminal so as to be conveniently displayed by the mobile terminal.
Taking the application of the image processing method to a mobile terminal as an example, the mobile terminal may be a smartphone, a tablet computer, a wearable electronic device, or the like. The mobile terminal includes an image acquisition device, which generally refers to a component of the mobile terminal capable of performing the photographing function; it includes a camera and the processing and storage modules necessary to complete image acquisition and transmission, and may also include additional processing function modules. The image acquisition device may be a front camera or a rear camera of a mobile phone.
In step S101, the first image may be an image of an RGB color mode.
RGB is a standard for expressing colors in the digital domain, also called a color space; each pixel value in an RGB-format image is expressed by three components: R (red), G (green), and B (blue). A particular color is represented by a combination of different luminance values of the three primary colors R, G, and B. If each component is represented by 8 bits, one pixel is represented by 3 × 8 = 24 bits in total.
Here, the image frequency is used to indicate how strongly the gray values of the image change. The first frequency information may be low-frequency information; the first image layer includes the low-frequency information of the first image. The low-frequency information may characterize basic information of the image, such as its brightness, contours, and depth. The first image layer may be a basic image layer.
In some embodiments, the obtaining a first image layer including first frequency information in the first image based on the first image in step S101 may include:
and carrying out image smoothing filtering on the first image to obtain a first image layer.
In the embodiment of the present disclosure, the smoothed basic image layer may be obtained by performing low-pass filtering processing on the first image.
In other embodiments, the image smoothing filtering the first image to obtain the first image layer may include:
and inputting the first image to a bilateral filter for filtering to obtain a first image layer. The bilateral filter may include: a high-speed low-pass filter and a value-domain filter after combination.
In the embodiment of the present disclosure, a first filtering result may be obtained by inputting the first image to a high-speed low-pass filter; the first filtering result is used for indicating the weight of the pixel value similarity of each pixel in the first image; inputting the first image into a value domain filter to obtain a second filtering result; the second filtering result is used for indicating the weight of the spatial proximity of each pixel in the first image; determining a target weight according to the product of the first filtering result and the second filtering result; and performing convolution operation on the first image based on the target weight to obtain a first image layer.
It should be noted that the bilateral filter is a filter that preserves edge information and has a good denoising effect. The bilateral filter is formed by combining two filters: the coefficient of the value-domain filter is determined by the difference between image pixel values, and the coefficient of the spatial filter is determined by the geometric spatial distance in the image, reflecting the influence of spatial distance on the pixel value.
In the embodiment of the disclosure, the bilateral filter is obtained by combining a value-domain filter and a Gaussian low-pass filter. The Gaussian low-pass filter works by assigning different weights to the pixel values within a certain range around a pixel point, computing a weighted average of all pixels in that region based on those weights, and taking the weighted average as the final value of the current pixel point.
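The combined weighting described above can be sketched for a 1-D signal. The radius and σ values here are illustrative, not taken from the disclosure:

```python
import math

def bilateral_1d(row, radius=2, sigma_s=2.0, sigma_r=20.0):
    """1-D bilateral filter: each output pixel is a weighted average whose
    weight is the product of a spatial-proximity Gaussian (the low-pass
    part) and a pixel-similarity Gaussian (the value-domain part)."""
    out = []
    n = len(row)
    for i in range(n):
        num = den = 0.0
        for j in range(max(0, i - radius), min(n, i + radius + 1)):
            ws = math.exp(-((i - j) ** 2) / (2.0 * sigma_s ** 2))            # spatial proximity
            wr = math.exp(-((row[i] - row[j]) ** 2) / (2.0 * sigma_r ** 2))  # value similarity
            w = ws * wr
            num += w * row[j]
            den += w
        out.append(num / den)
    return out
```

With a small sigma_r, the value-similarity term suppresses contributions from across an edge, which is why the filter smooths flat regions while keeping edges sharp.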
The image smoothing filtering reduces the degree of pixel value change between adjacent pixels in the first image and makes it more uniform. It should be noted that the first image layer is obtained by performing smoothing filtering on the whole first image; all information in the first image layer has therefore been blurred once, and the high-frequency detail information of the first image is lost.
In step S102, the second frequency information may be high-frequency information; the second image layer may include the high-frequency information filtered out when the first image was smoothed, so the second image layer can serve as a detail image layer.
The information difference between the first image and the first image layer can be obtained by performing residual processing on the first image and the first image layer; this information difference is the second image layer. In this way, detail information hidden in dark or bright regions of the first image is extracted.
In step S103, the high-frequency detail information in the first image is enhanced by performing image enhancement processing on the second image layer.
illustratively, the detail enhancement of the second image layer can be achieved by suppressing high frequency noise in the second image layer.
Further illustratively, the second image layer may be non-linearly adjusted using a gamma correction algorithm to adjust the proportion of different color regions in the second image layer, thereby enhancing the display effect of the image.
In step S104, the second image layer and the first image layer after the image enhancement processing are subjected to fusion processing to obtain a second image.
It should be noted that the human eye is sensitive to high-frequency detail information in an image; if the high-frequency information is embedded in a large amount of low-frequency background information, its visibility is reduced. If gray-scale enhancement is performed directly on the whole image, the enhancement algorithm cannot tell which gray levels should be enhanced and which suppressed; it simply adjusts the gray levels of all pixels toward a uniform distribution. Enhancing the gray scale of one part of the pixels inevitably compresses the gray scale of another part, so the high-frequency information (i.e., detail information) of the image may be enhanced and boosted in gray scale, but the low-frequency information (i.e., background noise) may be enhanced just as well.
In contrast, in the embodiment of the present disclosure, after the second image layer is obtained, the image enhancement processing is performed on the second image layer, and the enhanced second image layer is fused with the first image layer to obtain a second image; in this way only the detail information in the second image is enhanced, achieving a better image display effect.
In some embodiments, the performing of the image enhancement processing on the second image layer in step S103 may include:
performing contrast enhancement processing on the second image layer;
in step S104, the performing fusion processing on the second image layer after the image enhancement processing and the first image layer to obtain a second image may include:
and fusing the second image layer after the contrast enhancement processing and the first image layer to obtain a second image.
In the embodiment of the present disclosure, the contrast enhancement processing may be performed on the second image layer based on the contrast adjustment coefficient.
Here, the contrast adjustment coefficient may be set according to actual requirements. For example, the contrast adjustment coefficient may be determined based on the variance of the pixels in the second image layer;
the contrast enhancement process may include: based on the contrast adjustment coefficient, performing linear adjustment on the pixel value of each pixel of the second image layer to obtain the adjusted pixel value of each pixel; and reconstructing according to the adjusted pixel value of each pixel to obtain a second image layer after contrast enhancement processing.
In other embodiments, the performing contrast enhancement processing on the second image layer may include:
counting the pixel value of each pixel point in the second image layer, and determining the pixel compensation value of the second image layer;
determining difference value information according to the pixel value of each pixel point in the second image layer and the pixel compensation value;
performing contrast enhancement on the difference information based on a contrast adjustment coefficient;
and determining a second image layer after contrast enhancement processing according to the difference information after contrast enhancement and the pixel compensation value.
Here, the pixel compensation value may be set according to actual requirements, and for example, the pixel compensation value may include: a pixel mean of the second image layer. The embodiment of the disclosure performs linear adjustment on the difference information of the second image layer to change the contrast degree of the second image layer.
In the embodiment of the present disclosure, the difference information of the second image layer may be determined by calculating a difference between the pixel value of each pixel point in the second image layer and the pixel compensation value; adjusting the difference information of the second image layer according to the contrast adjustment coefficient; compensating the adjusted difference information based on the pixel compensation value to obtain the pixel value of each pixel after compensation; and reconstructing according to the compensated pixel value of each pixel to obtain a second image layer after contrast enhancement processing.
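The mean-compensated contrast enhancement described above can be sketched as follows (using the layer's pixel mean as the compensation value and a scalar adjustment coefficient; both are the choices the text suggests, applied to a float array):

```python
import numpy as np

def contrast_enhance(layer, coef):
    # Pixel compensation value: the layer's pixel mean.
    mean = layer.mean()
    diff = layer - mean          # difference information per pixel
    scaled = coef * diff         # linear contrast adjustment
    return mean + scaled         # compensate back with the pixel mean
```

With coef greater than 1, the spread of pixel values around the mean grows while the mean itself is preserved, which is exactly the local-contrast effect the embodiment aims for.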
After the second image layer is obtained, the contrast enhancement processing is performed on the second image layer, and the second image layer with the enhanced contrast and the first image layer are fused to obtain a second image; by enhancing the contrast between the pixels in the second image layer, a better local contrast effect is obtained.
In some embodiments, as shown in fig. 10, fig. 10 is a flowchart illustrating an image processing method according to an embodiment of the disclosure. Before the step S101, the method further includes:
step S105, dividing each pixel in a third image into different brightness areas according to the brightness value of each pixel in the third image;
step S106, determining brightness adjusting functions corresponding to the brightness areas according to the different brightness areas; wherein, the brightness adjusting functions corresponding to different brightness areas are different;
and step S107, adjusting pixels in different brightness areas in the third image according to the brightness adjustment function corresponding to each brightness area to obtain the first image.
In an embodiment of the present disclosure, the third image may be an image in YUV color mode, or an image in RGB color mode.
In an image in the YUV color mode, Y represents luminance and U and V represent chrominance. The Y channel alone yields a black-and-white image; adding the U and V channels yields a color image. Both YUV and RGB express color, but they describe it differently: RGB decomposes a color into a luminance-weighted combination of three primary colors, while YUV decomposes it into one luminance component and two chrominance components.
If the third image is an image in a YUV color mode, the third image can be directly divided into different brightness areas according to the brightness value of the Y-channel image corresponding to the third image.
If the third image is an image in an RGB color mode, the third image can be converted into an image in a YUV color mode, and the third image is divided into different brightness areas according to the brightness value of the Y-channel image.
The preset brightness threshold value can be determined according to the average brightness value of the image; and dividing each pixel in the third image into different brightness areas according to the brightness value of each pixel in the third image and a preset brightness threshold value.
In some embodiments, the dividing, in step S105, each pixel in the third image into different luminance regions according to the luminance value of each pixel in the third image includes:
clustering according to the brightness value of each pixel in the third image, and dividing different brightness regions; and the brightness difference between the pixel points in any brightness region is within a preset difference range.
In the embodiment of the present disclosure, the brightness values of the pixel points in the third image are clustered, and pixel points with similar brightness (that is, whose brightness difference is within a preset difference range) are grouped into one class, so as to partition the image into different regions. For the divided brightness regions, the average brightness value of each region can be computed, and the brightness level of each region determined from its average brightness value.
Here, the preset difference range may be set according to a requirement, and the present disclosure does not limit this.
It can be understood that, since the luminance difference of the pixels in each luminance region is within the preset difference range, the luminance level of each luminance region can be determined more accurately according to the average luminance value of each luminance region. Compared with a threshold-based method for dividing different brightness regions, the clustering method is more universal.
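The clustering step can be sketched with a one-dimensional k-means over luminance values (the cluster count k = 3 and the use of k-means itself are illustrative choices; the patent does not mandate a particular clustering method):

```python
import numpy as np

def cluster_luminance(y, k=3, iters=20):
    # 1-D k-means over pixel luminance values.
    vals = y.ravel().astype(np.float64)
    centers = np.linspace(vals.min(), vals.max(), k)   # spread initial centers
    for _ in range(iters):
        # assign each pixel to the nearest center
        labels = np.argmin(np.abs(vals[:, None] - centers[None, :]), axis=1)
        for j in range(k):
            members = vals[labels == j]
            if members.size:
                centers[j] = members.mean()            # region's mean luminance
    return labels.reshape(y.shape), centers
```

After convergence, pixels within one label share similar brightness, and each center is the region's average luminance, which can then be compared against brightness intervals to pick a per-region adjustment function.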
In step S106, an average brightness value of pixels in each brightness region may be obtained; and determining a brightness adjusting function corresponding to each brightness area according to the brightness interval to which the average brightness value of each brightness area belongs.
It can be understood that the embodiment of the present disclosure sets different brightness adjustment functions for different brightness intervals, which serve as the basis for adjusting the brightness of pixels in different brightness areas of the image. For example, suppose the third image is divided into three different luminance regions according to the luminance value of each pixel point: a first luminance region, a second luminance region and a third luminance region. The average brightness value of the pixel points in each region is acquired. If the first average brightness value of the first brightness area is larger than a first brightness threshold, the first brightness area is determined as a highlight area; if the second average brightness value of the second brightness area is smaller than a second brightness threshold, the second brightness area is determined as a dark area; if the third average brightness value of the third brightness area is smaller than the first brightness threshold and larger than the second brightness threshold, the third brightness area is determined as a medium-bright area. The brightness adjustment functions of the first, second and third brightness areas are then determined respectively, based on the correspondence between the highlight area, the dark area, the medium-bright area and the available brightness adjustment functions.
Because the brightness of the pixels in different brightness areas is different, the brightness of the pixels in different brightness areas can be adjusted by adopting different brightness adjustment functions, and the display effect of the image can be effectively improved.
In other embodiments, the determining the brightness adjustment function corresponding to each brightness region according to the different brightness regions may include:
respectively determining brightness adjusting parameters corresponding to the brightness areas according to the brightness mean value and the brightness standard deviation of the pixels in the different brightness areas;
and determining a brightness adjusting function corresponding to each brightness area according to the brightness adjusting parameters corresponding to each brightness area.
In the embodiment of the present disclosure, the mean luminance value of the pixels in the different luminance regions may be determined by the following formula:
μ_k = (1/n_k) · Σ_{i=1}^{n_k} x_i;
where μ_k is the luminance mean of the pixels in the k-th luminance region; x_i is the luminance value of the i-th pixel in the k-th luminance region; and n_k is the number of pixels in the k-th luminance region.
The standard deviation of the luminance of the pixels in the different luminance regions can be determined by the following equation:
σ_k = sqrt( (1/n_k) · Σ_{i=1}^{n_k} (x_i − μ_k)² );
where σ_k is the luminance standard deviation of the pixels in the k-th luminance region; x_i is the luminance value of the i-th pixel in the k-th luminance region; n_k is the number of pixels in the k-th luminance region; and μ_k is the luminance mean of the pixels in the k-th luminance region.
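The two per-region statistics can be computed as follows (assuming a per-pixel label map identifying the luminance regions, however it was produced):

```python
import numpy as np

def region_stats(y, labels, k):
    # Per-region luminance mean and standard deviation; population std
    # (the 1/n_k normalization), matching the definitions above.
    stats = []
    for j in range(k):
        region = y[labels == j].astype(np.float64)
        stats.append((region.mean(), region.std()))
    return stats
```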
The brightness adjustment parameter of the different brightness regions may be determined according to the following formula:
β_k = x_i · (β_max − β_min) + β_min;
where β_k is the brightness adjustment parameter of the k-th brightness region; x_i is the luminance value of the i-th pixel in the k-th luminance region; β_min is a first adjustment parameter, β_min = a + (σ_k / μ_k); β_max is a second adjustment parameter, β_max = β_min + b · (1 + σ_k / μ_k); and a and b are preset constants.
The brightness adjustment function for the different brightness regions may be determined by:
[equation image not reproduced: f_k(x_i) is defined in terms of β_k and x_i]
Here f_k(x_i) is the brightness adjustment function of the k-th brightness region; β_k is the brightness adjustment parameter of the k-th brightness region; and x_i is the luminance value of the i-th pixel in the k-th luminance region.
In step S107, the brightness values of pixels in different brightness regions of the Y-channel image corresponding to the third image may be adjusted according to the brightness adjustment function corresponding to each brightness region, so as to obtain an adjusted Y-channel image; obtaining an image of an adjusted YUV color mode based on the adjusted Y-channel image; and performing color space conversion on the image in the YUV color mode after adjustment to obtain the first image.
In some embodiments, in step S107, adjusting pixels in different luminance areas in the third image according to the luminance adjustment function corresponding to each luminance area to obtain the first image, includes:
determining a first brightness adjustment curve according to the brightness adjustment function corresponding to each brightness area;
acquiring a histogram equalization curve of a Y-channel image corresponding to the third image;
carrying out weighted fusion on the histogram equalization curve and the first brightness adjustment curve to obtain a second brightness adjustment curve;
adjusting the brightness value of each pixel in the third image based on the second brightness adjustment curve; and obtaining the first image according to the adjusted brightness value of each pixel.
In the embodiment of the present disclosure, a luminance range corresponding to each luminance region may be determined according to a luminance value of a pixel in each luminance region; and integrating the brightness adjusting functions corresponding to the brightness areas according to the brightness range corresponding to each brightness area to obtain a first brightness adjusting curve.
And performing histogram equalization processing on the Y-channel image corresponding to the third image to obtain a histogram equalization curve of the Y-channel image corresponding to the third image.
The principle of performing histogram equalization processing on the Y-channel image is to change the luminance histogram of the image from being concentrated in a certain luminance interval to being uniformly distributed over the entire luminance range. That is, histogram equalization performs a nonlinear stretch on the input image and redistributes its luminance values so that the number of pixels in each luminance range is approximately the same; the histogram of the original input image is transformed into a uniformly distributed form, which increases the dynamic range of the pixel luminance values and thereby enhances the overall contrast of the image.
In some embodiments, the obtaining a histogram equalization curve of a Y-channel image corresponding to the third image includes:
acquiring histogram information of a Y-channel image corresponding to the third image;
and preprocessing the histogram information of the Y-channel image, and performing equalization processing on the preprocessed histogram information of the Y-channel image to obtain a histogram equalization curve of the Y-channel image.
Here, the luminance value of each pixel in the Y channel image may be counted to obtain histogram information of the Y channel image;
it should be noted that a histogram can describe many things, such as the color distribution of an object, an edge gradient template of an object, or the probability distribution of hypotheses for a target's current position. Here, the histogram reflects statistics of the number of pixels in the Y-channel image whose luminance values fall within each specific range.
The preprocessing the histogram information of the Y-channel image may include:
comparing the histogram information of the Y-channel image with a first threshold value, and determining a target brightness interval value of which the number of pixels in the histogram is greater than the first threshold value;
and determining the number of pixels corresponding to the target brightness interval value as a first threshold value to obtain preprocessed histogram information.
Here, the first threshold may be set according to actual needs.
In some embodiments, the preprocessed histogram information may be determined by:
Hist_clip(x) = th1, if Hist(x) > th1; Hist_clip(x) = Hist(x), otherwise;
where Hist_clip(x) is the preprocessed histogram information; Hist(x) is the histogram information; th1 is the first threshold; and x is a luminance value.
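The clipping rule can be sketched in one line (assuming the histogram is held as a per-bin count array):

```python
import numpy as np

def clip_histogram(hist, th1):
    # Hist_clip(x) = min(Hist(x), th1): bins above the first threshold are
    # capped at the threshold, all other bins pass through unchanged.
    return np.minimum(hist, th1)
```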
The equalizing the histogram information of the preprocessed Y-channel image to obtain a histogram equalization curve of the Y-channel image includes:
residual error processing is carried out on the basis of the preprocessed histogram information and the histogram information of the Y-channel image, and a histogram information difference is obtained;
carrying out equalization processing on the histogram information difference;
fusing the equalized histogram information difference with the preprocessed histogram information to obtain limited histogram information;
determining the cumulative distribution function of the limited histogram information as the histogram equalization curve of the Y-channel image.
It should be noted that the cumulative distribution function represents the ratio of the number of pixels at or below each luminance interval value to the total number of pixels. During equalization, the ordering of pixel brightness must be preserved by the mapping: a brighter area in the image must remain brighter after mapping, and a darker area must remain darker. In addition, after the luminance values are mapped, they should still lie within the value range of the original image; the range must not be expanded. Mapping with the cumulative distribution function satisfies both requirements and therefore makes effective use of the original image's information.
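A monotone equalization curve built from the cumulative distribution can be sketched as follows (normalizing the curve to end at 1.0 is an assumption about the curve's output range):

```python
import numpy as np

def equalization_curve(hist_limited):
    # Cumulative distribution of the limited histogram; being a running sum of
    # non-negative bins, the curve is monotone, so brightness ordering is kept.
    cdf = np.cumsum(hist_limited).astype(np.float64)
    return cdf / cdf[-1]          # normalize so the curve ends at 1.0
```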
By obtaining the histogram information difference through residual processing of the preprocessed histogram information against the histogram information of the Y-channel image, and equalizing that difference, the contrast of the image can be effectively improved and the brightness differences between different brightness areas further amplified.
It can be understood that the feature points in the image can be made clearer by enlarging the brightness difference of different brightness areas in the image. And the histogram equalization processing mode can stretch the brightness of different brightness areas of the image to a proper degree without serious distortion of the image.
In the embodiment of the present disclosure, the weight coefficients of the histogram equalization curve and the first brightness adjustment curve may be obtained; and performing weighted fusion on the histogram equalization curve and the first brightness adjustment curve based on the weight coefficient to obtain a second brightness adjustment curve. Determining adjusted brightness values corresponding to pixels in different areas in the third image based on the second brightness adjustment curve; and determining the first image according to the adjusted brightness value of each pixel.
It can be understood that the second brightness adjustment curve is a curve corresponding to a piecewise function, and the second brightness adjustment curve may represent a relationship between the brightness of the pixel in the third image before the brightness adjustment and the brightness of the pixel in the third image after the brightness adjustment; substituting the brightness values of the pixels in different areas in the third image into the second brightness adjustment curve to determine an adjusted brightness value; and reconstructing according to the adjusted brightness value to obtain a first image.
In some embodiments, the luminance region includes at least: a first luminance region, a second luminance region, and a third luminance region;
the brightness adjusting function corresponding to the first brightness region is a Gaussian function;
the brightness adjusting function corresponding to the second brightness area is a linear function;
the brightness adjusting function corresponding to the third brightness area is a sigmoid function;
in an embodiment of the present disclosure, a brightness value corresponding to the first brightness region is smaller than a brightness value corresponding to the second brightness region, and a brightness value corresponding to the second brightness region is smaller than a brightness value corresponding to the third brightness region. For example, the first luminance region may be a dark region in an image, the second luminance region may be a medium-bright region in the image, and the third luminance region may be a high-bright region in the image.
The brightness adjustment function corresponding to the first brightness region may be represented by the following equation:
f_1(x) = 1 − exp(−x²);
where f_1(x) represents the adjusted brightness of the image, and x represents the brightness of the image.
The brightness of the first brightness region of the image is adjusted based on a Gaussian function, which softens the brightness value of each pixel in the first brightness region so that the brightness variation of the first brightness region is smoother.
The brightness adjustment function corresponding to the second brightness region may be represented by the following equation:
f_2(x) = A·x + B;
where f_2(x) represents the adjusted brightness of the image, and A and B are adjustment constants.
The values of A and B may be determined according to the pixel adjustment value in the first luminance region and the pixel adjustment value in the third luminance region.
The brightness of the second brightness area of the image is adjusted based on the linear function, so that the brightness difference of adjacent pixels in the second brightness area can be improved, and the layering sense of the second brightness area of the image can be adjusted.
The brightness adjustment function corresponding to the third brightness region may be determined by the following equation:
f_3(x) = 1 / (1 + exp(−x));
where f_3(x) represents the adjusted brightness of the image.
And adjusting the brightness of a third brightness area of the image based on a sigmoid function, and enhancing the contrast of each pixel in the third brightness area.
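The three adjustment functions can be sketched together (the standard logistic form of the sigmoid is an assumption, since the patent's exact expression for f_3 is not reproduced in the source; luminance x is assumed normalized):

```python
import numpy as np

def f1(x):
    # Gaussian-style curve for the dark (first) region, as given in the text.
    return 1.0 - np.exp(-x ** 2)

def f2(x, a, b):
    # Linear adjustment for the medium-bright (second) region.
    return a * x + b

def f3(x):
    # Sigmoid for the highlight (third) region; logistic form assumed.
    return 1.0 / (1.0 + np.exp(-x))
```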
In some embodiments, the method further comprises:
determining a first proportional brightness parameter and a second proportional brightness parameter based on the number of pixel points with different brightness in the Y-channel image corresponding to the third image;
performing brightness stretching on the second image based on the first proportional brightness parameter and the second proportional brightness parameter to obtain a fourth image; the first proportional brightness parameter is a brightness value corresponding to a first proportional number of pixel points in the Y-channel image; the second proportion brightness parameter is the brightness value corresponding to the pixels with the second proportion quantity in the Y-channel image; the first proportional quantity is greater than the second proportional quantity.
In the embodiment of the present disclosure, histogram information of a Y-channel image corresponding to the third image may be acquired; determining a brightness probability density function of the third image according to the histogram information; determining the first proportional number and the second proportional number based on the luminance probability density function; and determining the first proportional brightness parameter and the second proportional brightness parameter based on the first proportional quantity and the second proportional quantity.
Since the histogram information reflects the number of pixels in the Y-channel image whose luminance values lie within each specific range, the luminance probability density function of the third image can be used to indicate the proportion of pixels in the third image whose luminance values lie within a particular range.
Here, the first proportional quantity and the second proportional quantity may be set according to actual requirements; for example, the first proportional amount is determined to be 95% and the second proportional amount is determined to be 5%. For another example, the maximum value of the luminance probability density function of the third image may be determined as a first proportional number, and the minimum value of the luminance probability density function may be determined as a second proportional number.
After the first proportional quantity and the second proportional quantity are determined, the brightness values corresponding to them in the brightness probability density function can be determined as the first proportional brightness parameter and the second proportional brightness parameter, respectively.
Performing brightness stretching on the second image based on the first proportional brightness parameter and the second proportional brightness parameter to obtain a fourth image, including:
determining the brightness difference according to the first proportional brightness parameter and the second proportional brightness parameter;
and performing brightness stretching on the second image based on the pixel value of each pixel point in the second image and the brightness difference to obtain the fourth image.
In an embodiment of the present disclosure, the luminance difference is determined based on a difference between the first proportional luminance parameter and the second proportional luminance parameter.
The pixel value of each pixel point in the fourth image may be determined by the following equation:
img_4(i) = (img_2(i) − str_min) / Δstr;
where img_4(i) is the pixel value of the i-th pixel in the fourth image; img_2(i) is the pixel value of the i-th pixel in the second image; str_min is the second proportional brightness parameter; Δstr is the brightness difference, Δstr = str_max − str_min; and str_max is the first proportional brightness parameter.
In the embodiment of the disclosure, the brightness of the second image is stretched according to the first and second proportional brightness parameters, which are determined from the number of pixel points of different brightness in the second image; the degree of brightness stretching of each pixel point can thus be adapted to its brightness value, with relatively dark areas of the second image stretched strongly and relatively bright areas stretched weakly.
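The stretch follows the equation above directly and can be sketched as (assuming float pixel values and str_max > str_min):

```python
import numpy as np

def stretch(img2, str_max, str_min):
    # img_4(i) = (img_2(i) - str_min) / Δstr, with Δstr = str_max - str_min
    delta = str_max - str_min
    return (img2.astype(np.float64) - str_min) / delta
```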
In some embodiments, the obtaining, in step S101, a first image layer including first frequency information in a first image based on the first image includes:
and performing guiding filtering on the first image to obtain a first image layer.
It should be noted that guided filtering is an edge-preserving filtering algorithm. Filtering an image requires a guide image, which may be a separate image or the image to be filtered itself. If the guide image is the image to be filtered, guided filtering based on that guide yields an image whose chrominance channels contain continuous edges consistent with the luminance channel.
The first image layer obtained by guiding filtering and the second image layer after image enhancement processing are fused to obtain the second image, so that local detail information can be considered, a good local contrast effect is achieved, the problem of image defects caused by edge gradient inversion can be effectively avoided, and a better visual effect is achieved.
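A compact self-guided filter in the spirit of He et al.'s guided filter can be sketched as follows (the radius r and regularization eps are illustrative values, and the box-filter implementation favors clarity over speed):

```python
import numpy as np

def _box(img, r):
    # Mean filter with an edge-padded (2r+1) x (2r+1) window.
    k = 2 * r + 1
    pad = np.pad(img, r, mode="edge")
    h, w = img.shape
    out = np.zeros((h, w), dtype=np.float64)
    for dy in range(k):
        for dx in range(k):
            out += pad[dy:dy + h, dx:dx + w]
    return out / (k * k)

def guided_filter(p, guide=None, r=4, eps=1e-2):
    # Edge-preserving smoothing; when guide is None the image guides itself,
    # which is one of the two options the text describes.
    I = p if guide is None else guide
    mean_I, mean_p = _box(I, r), _box(p, r)
    cov_Ip = _box(I * p, r) - mean_I * mean_p
    var_I = _box(I * I, r) - mean_I * mean_I
    a = cov_Ip / (var_I + eps)        # local linear coefficients
    b = mean_p - a * mean_I
    return _box(a, r) * I + _box(b, r)
```

In flat areas (low local variance) the filter smooths strongly; near strong edges (high local variance) the local linear model tracks the guide, which is what avoids the edge-gradient-inversion defects mentioned above.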
In some embodiments, the method further comprises:
performing brightness enhancement processing on the first image layer;
in step S104, the fusion processing is performed on the second image layer and the first image layer after the image enhancement processing, so as to obtain a second image, including:
and carrying out fusion processing on the second image layer after the image enhancement processing and the first image layer after the brightness enhancement processing to obtain a second image.
In the embodiment of the present disclosure, the brightness enhancement processing may be performed on the first image layer based on a brightness enhancement coefficient. Here, the brightness enhancement coefficient may be set according to actual requirements.
The brightness of low-frequency information (namely background information) in the image is improved by performing brightness enhancement on the first image layer; the contrast between high-frequency information (namely detail information) in the image is improved by performing contrast enhancement on the second image layer; when the first image layer and the second image layer are fused, the brightness of the background information of the generated second image is improved compared with that of the first image, the detail information in the first image is also reserved, the contrast of the image is enhanced, and the display effect of the image is better.
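The fusion of the brightened base layer with the contrast-enhanced detail layer can be sketched as (the coefficient values and the final clipping range are illustrative assumptions):

```python
import numpy as np

def fuse_layers(base, detail, bright_coef=1.1, contrast_coef=1.5):
    # Brighten the low-frequency base, amplify the high-frequency detail,
    # then recombine into the output image.
    fused = bright_coef * base + contrast_coef * detail
    return np.clip(fused, 0.0, 255.0)
```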
The present disclosure also provides the following embodiments:
FIG. 11 is a flowchart of an image processing method according to an exemplary embodiment, and fig. 12 is a flowchart illustrating a fourth image processing method according to an exemplary embodiment. As illustrated in fig. 11 and 12, the method includes:
step S201, dividing the input third image into different brightness areas according to the brightness value of each pixel in the third image;
in this example, the input third image may be an image in an RGB color mode, and the third image is divided into three different luminance regions according to the luminance values of the respective pixels in the Y-channel image corresponding to the third image by converting the third image into an image in a YUV color mode; respectively a highlight area, a middle light area and a dark area.
Step S202, determining brightness adjusting functions corresponding to the brightness areas according to the different brightness areas; determining a global brightness adjustment curve based on the brightness adjustment functions and a histogram equalization curve of the third image.
As shown in fig. 13, fig. 13 is a schematic flowchart illustrating a global brightness adjustment according to an exemplary embodiment. In this example, the brightness adjustment functions corresponding to the different brightness regions are different, and a basic adjustment curve may be determined according to the brightness adjustment functions corresponding to the brightness regions; the brightness adjusting function corresponding to the dark area is a Gaussian function, the brightness adjusting function corresponding to the medium-brightness area is a linear function, and the brightness adjusting function corresponding to the high-brightness area is a Sigmoid function. As shown in fig. 14, fig. 14 is a schematic diagram of a basic adjustment curve according to an exemplary embodiment.
The base adjustment curve calib _ curve may be represented by the following equation:
calib_curve(x) = f_1(x) = 1 − exp(−x²), for x < dark_area (dark region: Gaussian function);
calib_curve(x) = f_2(x) = A·x + B, for dark_area ≤ x < bright_area (medium-bright region: linear function);
calib_curve(x) = f_3(x) = 1 / (1 + exp(−x)), for x ≥ bright_area (highlight region: sigmoid function);
wherein x represents the brightness of the image; the dark _ area and the bright _ area are respectively preset brightness threshold values and are used for controlling a dark area, a medium bright area and a high bright area which need to be adjusted.
Acquiring histogram information of a Y-channel image corresponding to a third image, and preprocessing the histogram information of the Y-channel image; the preprocessed histogram information may be represented by the following equation:
Hist_clip(x) = th1, if Hist(x) > th1; Hist_clip(x) = Hist(x), otherwise;
where Hist_clip(x) is the preprocessed histogram information; Hist(x) is the histogram information; th1 is the first threshold; and x is a luminance value.
And performing residual error processing based on the preprocessed histogram information and the histogram information of the Y-channel image to obtain a histogram information difference. The histogram information difference may be represented by:
sum(x) = Hist(x) - Hist_clip(x);
where sum(x) is the histogram information difference, Hist_clip(x) is the preprocessed histogram information, Hist(x) is the histogram information, and x is a brightness value.
The histogram information difference is then equalized, i.e., its total is distributed uniformly over all brightness bins, and the equalized difference is fused with the preprocessed histogram information to obtain the limited histogram information. The limited histogram information may be represented by the following equation:
Hist_limit(x) = Hist_clip(x) + S/256;
where Hist_limit(x) is the limited histogram information and S is the total of sum(y) over all brightness values y.
The cumulative distribution function of the limited histogram information is determined as the histogram equalization curve of the Y-channel image. The histogram equalization curve may be determined by:
he_curve(x) = (Σ_{i≤x} Hist_limit(i)) / (Σ_{i} Hist_limit(i));
where he_curve is the histogram equalization curve.
As shown in fig. 15, fig. 15 is a schematic diagram of a histogram equalization curve according to an exemplary embodiment.
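The clip-and-redistribute steps above can be sketched as follows; the default value for th1 is an assumption (the description only calls it "the first threshold"), and the uniform redistribution of the clipped residual follows the standard clip-limited equalization scheme the description appears to use:

```python
import numpy as np

def he_curve_from_y(y_img, th1=None):
    """Histogram equalization curve of a Y-channel image with clip limiting.

    Steps: clip the histogram at th1 (pre-processing), take the residual
    sum(x) = Hist(x) - Hist_clip(x), redistribute its total uniformly over
    all 256 bins, and return the normalized cumulative distribution of the
    limited histogram, scaled to [0, 255].
    """
    hist = np.bincount(np.asarray(y_img, dtype=np.uint8).ravel(),
                       minlength=256).astype(np.float64)
    if th1 is None:
        th1 = 2.0 * hist.mean()             # assumed default for th1
    hist_clip = np.minimum(hist, th1)       # Hist_clip(x)
    residual = hist - hist_clip             # sum(x), histogram difference
    hist_limit = hist_clip + residual.sum() / 256.0   # limited histogram
    cdf = np.cumsum(hist_limit)
    return 255.0 * cdf / cdf[-1]            # he_curve, monotone on [0, 255]
```

Clipping at th1 before accumulating the curve limits how steeply any one brightness range is stretched, which tempers the noise amplification of plain histogram equalization.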
And weighting and fusing the basic adjustment curve and the histogram equalization curve to obtain the global brightness adjustment curve. The global brightness adjustment curve may be represented by the following equation:
g_curve(x)=he_curve(x)·wt+(1-wt)·calib_curve(x);
wherein wt is preset weight information.
As shown in fig. 16, fig. 16 is a schematic diagram of a global brightness adjustment curve according to an exemplary embodiment.
Step S203, adjusting pixels in different brightness areas in the third image according to the global brightness adjustment curve to obtain a first image with the global brightness adjusted;
the global brightness adjusted first image may be determined by:
img_g = g_curve(img_in);
where img_g is the global-brightness-adjusted first image and img_in is the input third image.
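The weighted fusion of the two curves and its application to the input can be sketched as below; both curves are treated as 256-entry lookup tables over the Y channel, and wt = 0.5 is an illustrative value for the preset weight:

```python
import numpy as np

def apply_global_curve(y_in, he_curve, base_curve, wt=0.5):
    """g_curve(x) = he_curve(x) * wt + (1 - wt) * calib_curve(x), applied
    to the input image as a per-pixel lookup table on the Y channel."""
    g_curve = wt * np.asarray(he_curve, dtype=np.float64) \
        + (1.0 - wt) * np.asarray(base_curve, dtype=np.float64)
    lut = np.clip(np.rint(g_curve), 0, 255).astype(np.uint8)
    return lut[np.asarray(y_in, dtype=np.uint8)]   # img_g = g_curve(img_in)
```

Precomputing the fused curve as a lookup table makes the per-pixel adjustment a single indexing operation, which is how such global curves are usually applied in practice.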
Step S204, carrying out image smoothing filtering on the first image to obtain a first image layer and a second image layer; performing brightness enhancement processing on the first image layer, and performing contrast enhancement on the second image layer; fusing the first image layer after brightness enhancement processing and the second image layer after contrast enhancement processing to obtain a second image after local contrast adjustment;
as shown in fig. 17, fig. 17 is a flowchart illustrating a local contrast adjustment according to an exemplary embodiment. In this example, the first image after global brightness adjustment is low-pass filtered, and the low-pass filtering result is determined as the first image layer; the first image layer may be represented by the following formula:
base = LPF(img_g);
where base is the first image layer, and LPF(·) denotes low-pass filtering; guided filtering is employed in this example.
Obtaining a second image layer according to the information difference between the first image and the first image layer; the second image layer may be determined by:
detail = img_g - base;
where detail is the second image layer.
The local contrast adjusted second image may be determined by:
img_l = base·α + detail·β;
where img_l is the second image after local contrast adjustment, α is the brightness enhancement coefficient, and β is the contrast adjustment coefficient.
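A minimal sketch of the base/detail decomposition follows, substituting a simple box blur for the guided filter actually used (guided filtering preserves edges better and avoids halo artifacts, which is why it is preferred here); the α and β values are illustrative:

```python
import numpy as np

def box_blur(img, radius=2):
    """Edge-padded box low-pass filter; a stand-in for the guided filter."""
    arr = np.asarray(img, dtype=np.float64)
    k = 2 * radius + 1
    pad = np.pad(arr, radius, mode='edge')
    out = np.zeros_like(arr)
    for dy in range(k):            # accumulate the k x k neighborhood
        for dx in range(k):
            out += pad[dy:dy + arr.shape[0], dx:dx + arr.shape[1]]
    return out / (k * k)

def local_contrast(img_g, alpha=1.0, beta=1.5, radius=2):
    """img_l = base * alpha + detail * beta, where base = LPF(img_g) and
    detail = img_g - base. alpha (brightness enhancement) and beta
    (contrast adjustment) are illustrative coefficient values."""
    base = box_blur(img_g, radius)   # first image layer (low frequency)
    detail = img_g - base            # second image layer (high frequency)
    return base * alpha + detail * beta
```

Setting β > 1 amplifies the high-frequency layer, boosting local contrast; α = β = 1 reconstructs the input exactly, since the two layers sum to the original image.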
Step S205, obtaining histogram information of a Y-channel image corresponding to the third image, and determining a first proportional brightness parameter and a second proportional brightness parameter; and performing brightness stretching on the second image based on the first proportional brightness parameter and the second proportional brightness parameter to obtain a fourth image.
In this example, as shown in fig. 18, fig. 18 is a schematic flowchart illustrating image brightness stretching according to an exemplary embodiment. The third image is converted into an image in the YUV color mode, the brightness values of the pixels in the corresponding Y-channel image are counted, and the histogram information of the Y-channel image is determined; a first proportional brightness parameter at a first proportional quantity of 95% and a second proportional brightness parameter at a second proportional quantity of 5% are then determined from the histogram information of the Y-channel image. As shown in fig. 19, fig. 19 is a schematic diagram of an image histogram according to an exemplary embodiment, in which reference numeral 1901 indicates the second proportional quantity and reference numeral 1902 indicates the first proportional quantity.
And performing brightness stretching on the second image according to the first proportion brightness parameter and the second proportion brightness parameter to obtain a fourth image. The fourth image may be determined by:
img_out = (img_l - str_min)/(str_max - str_min);
where img_out is the fourth image, str_max is the first proportional brightness parameter, and str_min is the second proportional brightness parameter.
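The stretch can be sketched with percentile estimates of str_max and str_min (the 95% and 5% proportional brightness parameters); the small epsilon guard and the clip to [0, 1] are assumptions added for numerical safety:

```python
import numpy as np

def brightness_stretch(img_l, y_ref, p_first=95.0, p_second=5.0):
    """img_out = (img_l - str_min) / (str_max - str_min), where str_max and
    str_min are the brightness values at the first (95%) and second (5%)
    proportional quantities of the reference Y-channel histogram."""
    str_max = np.percentile(y_ref, p_first)    # first proportional parameter
    str_min = np.percentile(y_ref, p_second)   # second proportional parameter
    out = (np.asarray(img_l, dtype=np.float64) - str_min) / (str_max - str_min + 1e-12)
    return np.clip(out, 0.0, 1.0)              # normalized fourth image
```

Anchoring the stretch to the 95th and 5th percentiles rather than the absolute maximum and minimum makes the result robust to a handful of outlier pixels in the histogram tails.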
As shown in fig. 20 and 21, fig. 20 and fig. 21 are detail views of images processed by the image processing method according to an exemplary embodiment. Compared with the area denoted by reference numeral 701 in fig. 7, no color cast or white lines appear near the branches in the area denoted by reference numeral 2001 in fig. 20; compared with the area denoted by reference numeral 801 in fig. 8, the area denoted by reference numeral 2101 in fig. 21 likewise shows no locally uneven, smear-like color patches on the glass.
The image processing method provided by the embodiments of the present disclosure can adjust the global dynamic range of an image while also taking the local dynamic range into account, and effectively avoids image defects caused by edge gradient inversion, so that the processed image achieves a better visual effect.
The embodiment of the disclosure also provides an image processing device. Fig. 22 is a schematic diagram illustrating a configuration of an image processing apparatus according to an exemplary embodiment, and as shown in fig. 22, the image processing apparatus 100 includes:
the processing module 101 is configured to obtain a first image layer including first frequency information in a first image based on the first image; obtaining a second image layer containing second frequency information in the first image according to the information difference between the first image and the first image layer; an image frequency of the first frequency information is lower than an image frequency of the second frequency information;
the enhancement module 102 is configured to perform image enhancement processing on the second image layer;
and the fusion module 103 is configured to perform fusion processing on the second image layer and the first image layer after the image enhancement processing to obtain a second image.
In some embodiments, the apparatus further comprises:
the determining module is used for dividing each pixel in a third image into different brightness areas according to the brightness value of each pixel in the third image; determining brightness adjusting functions corresponding to the brightness areas according to the different brightness areas; wherein, the brightness adjusting functions corresponding to different brightness areas are different;
and the adjusting module is used for adjusting pixels in different brightness areas in the third image according to the brightness adjusting function corresponding to each brightness area to obtain the first image.
In some embodiments, the adjustment module is further configured to:
determining a first brightness adjustment curve according to the brightness adjustment function corresponding to each brightness area;
acquiring a histogram equalization curve of a Y-channel image corresponding to the third image;
obtaining a second brightness adjustment curve based on the histogram equalization curve and the first brightness adjustment curve;
and adjusting pixels in different brightness areas in the third image based on the second brightness adjustment curve to obtain the first image.
In some embodiments, the luminance region includes at least: a first luminance region, a second luminance region, and a third luminance region;
the brightness adjusting function corresponding to the first brightness region is a Gaussian function;
the brightness adjusting function corresponding to the second brightness area is a linear function;
the brightness adjusting function corresponding to the third brightness area is a sigmoid function;
the brightness value corresponding to the first brightness region is smaller than the brightness value corresponding to the second brightness region, and the brightness value corresponding to the second brightness region is smaller than the brightness value corresponding to the third brightness region.
In some embodiments, the apparatus further comprises: a stretching module for:
determining a first proportional brightness parameter and a second proportional brightness parameter based on the number of pixel points with different brightness in the Y-channel image corresponding to the third image;
performing brightness stretching on the second image based on the first proportional brightness parameter and the second proportional brightness parameter to obtain a fourth image; the first proportional brightness parameter is a brightness value corresponding to a first proportional number of pixel points in the Y-channel image; the second proportion brightness parameter is the brightness value corresponding to the pixel points with the second proportion quantity in the Y channel image; the first proportional quantity is greater than the second proportional quantity.
In some embodiments, the processing module 101 is further configured to:
and performing guiding filtering on the first image to obtain first image layer information.
In some embodiments, the enhancement module 102 is further configured to:
performing brightness enhancement processing on the first image layer;
the fusion module 103 is further configured to:
and carrying out fusion processing on the second image layer subjected to the image enhancement processing and the first image layer subjected to the brightness enhancement processing to obtain a second image.
With regard to the apparatus in the above-described embodiment, the specific manner in which each module performs the operation has been described in detail in the embodiment related to the method, and will not be elaborated here.
FIG. 23 is a block diagram illustrating an electronic device according to an exemplary embodiment. For example, the device 200 may be a mobile phone, a mobile computer, or the like.
Referring to fig. 23, the apparatus 200 may include one or more of the following components: processing components 202, memory 204, power components 206, multimedia components 208, audio components 210, input/output (I/O) interfaces 212, sensor components 214, and communication components 216.
The processing component 202 generally controls overall operation of the device 200, such as operations associated with display, telephone calls, data communications, camera operations, and recording operations. The processing component 202 may include one or more processors 220 to execute instructions to perform all or a portion of the steps of the methods described above. Further, the processing component 202 can include one or more modules that facilitate interaction between the processing component 202 and other components. For example, the processing component 202 may include a multimedia module to facilitate interaction between the multimedia component 208 and the processing component 202.
Memory 204 is configured to store various types of data to support operation at device 200. Examples of such data include instructions for any application or method operating on the device 200, contact data, phonebook data, messages, pictures, videos, and so forth. The memory 204 may be implemented by any type or combination of volatile or non-volatile memory devices, such as static random access memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic disks, or optical disks.
A power supply component 206 provides power to the various components of the device 200. The power components 206 may include a power management system, one or more power supplies, and other components associated with generating, managing, and distributing power for the apparatus 200.
The multimedia component 208 includes a screen that provides an output interface between the device 200 and the user. In some embodiments, the screen may include a Liquid Crystal Display (LCD) and a Touch Panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive an input signal from a user. The touch panel includes one or more touch sensors to sense touch, slide, and gestures on the touch panel. The touch sensor may not only sense the boundary of a touch or slide action, but also detect the duration and pressure associated with the touch or slide operation. In some embodiments, the multimedia component 208 includes a front facing camera and/or a rear facing camera. The front camera and/or the rear camera may receive external multimedia data when the device 200 is in an operating mode, such as a shooting mode or a video mode. Each front camera and rear camera may be a fixed optical lens system or have a focal length and optical zoom capability.
The audio component 210 is configured to output and/or input audio signals. For example, audio component 210 includes a microphone (MIC) configured to receive external audio signals when apparatus 200 is in an operational mode, such as a call mode, a recording mode, or a voice recognition mode. The received audio signal may further be stored in the memory 204 or transmitted via the communication component 216. In some embodiments, audio component 210 also includes a speaker for outputting audio signals.
The I/O interface 212 provides an interface between the processing component 202 and peripheral interface modules, which may be keyboards, click wheels, buttons, etc. These buttons may include, but are not limited to: a home button, a volume button, a start button, and a lock button.
The sensor assembly 214 includes one or more sensors for providing various aspects of status assessment for the device 200. For example, the sensor component 214 may detect an open/closed state of the device 200 and the relative positioning of components, such as the display and keypad of the apparatus 200; the sensor component 214 may also detect a change in position of the apparatus 200 or of a component of the apparatus 200, the presence or absence of user contact with the apparatus 200, the orientation or acceleration/deceleration of the apparatus 200, and a change in the temperature of the apparatus 200. The sensor assembly 214 may include a proximity sensor configured to detect the presence of a nearby object without any physical contact. The sensor assembly 214 may also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications. In some embodiments, the sensor assembly 214 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
The communication component 216 is configured to facilitate wired or wireless communication between the apparatus 200 and other devices. The device 200 may access a wireless network based on a communication standard, such as Wi-Fi, 2G, or 3G, or a combination thereof. In an exemplary embodiment, the communication component 216 receives a broadcast signal or broadcast-related information from an external broadcast management system via a broadcast channel. In an exemplary embodiment, the communication component 216 further includes a near field communication (NFC) module to facilitate short-range communications. For example, the NFC module may be implemented based on radio frequency identification (RFID) technology, infrared data association (IrDA) technology, ultra-wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
In an exemplary embodiment, the apparatus 200 may be implemented by one or more application specific integrated circuits (ASICs), digital signal processors (DSPs), digital signal processing devices (DSPDs), programmable logic devices (PLDs), field programmable gate arrays (FPGAs), controllers, micro-controllers, microprocessors, or other electronic components for performing the above-described methods.
In an exemplary embodiment, a non-transitory computer readable storage medium comprising instructions, such as memory 204, comprising instructions executable by processor 220 of device 200 to perform the above-described method is also provided. For example, the non-transitory computer readable storage medium may be a ROM, a Random Access Memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, and the like.
Other embodiments of the invention will be apparent to those skilled in the art from consideration of the specification and practice of the invention disclosed herein. This application is intended to cover any variations, uses, or adaptations of the invention following, in general, the principles of the invention and including such departures from the present disclosure as come within known or customary practice within the art to which the invention pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the invention being indicated by the following claims.
It will be understood that the invention is not limited to the precise arrangements that have been described above and shown in the drawings, and that various modifications and changes can be made without departing from the scope thereof. The scope of the invention is limited only by the appended claims.

Claims (18)

1. An image processing method, comprising:
obtaining a first image layer containing first frequency information in a first image based on the first image;
obtaining a second image layer containing second frequency information in the first image according to the information difference between the first image and the first image layer; an image frequency of the first frequency information is lower than an image frequency of the second frequency information;
performing image enhancement processing on the second image layer;
and carrying out fusion processing on the second image layer subjected to the image enhancement processing and the first image layer to obtain a second image.
2. The method according to claim 1, wherein the performing image enhancement processing on the second image layer comprises:
performing contrast enhancement processing on the second image layer;
the fusing the second image layer after the image enhancement processing and the first image layer to obtain the second image comprises:
and fusing the second image layer after the contrast enhancement processing and the first image layer to obtain a second image.
3. The method of claim 1, further comprising:
dividing each pixel in a third image into different brightness areas according to the brightness value of each pixel in the third image;
determining brightness adjusting functions corresponding to the brightness areas according to the different brightness areas; wherein, the brightness adjusting functions corresponding to different brightness areas are different;
and adjusting pixels in different brightness areas in the third image according to the brightness adjustment function corresponding to each brightness area to obtain the first image.
4. The method according to claim 3, wherein the adjusting pixels in different luminance regions in the third image according to the luminance adjustment function corresponding to each luminance region to obtain the first image comprises:
determining a first brightness adjustment curve according to the brightness adjustment function corresponding to each brightness area;
acquiring a histogram equalization curve of a Y-channel image corresponding to the third image;
obtaining a second brightness adjustment curve based on the histogram equalization curve and the first brightness adjustment curve;
and adjusting pixels in different brightness areas in the third image based on the second brightness adjustment curve to obtain the first image.
5. The method according to claim 3 or 4, wherein the luminance area comprises at least: a first luminance region, a second luminance region, and a third luminance region;
the brightness adjusting function corresponding to the first brightness region is a Gaussian function;
the brightness adjusting function corresponding to the second brightness area is a linear function;
the brightness adjusting function corresponding to the third brightness area is a sigmoid function;
the brightness value corresponding to the first brightness region is smaller than the brightness value corresponding to the second brightness region, and the brightness value corresponding to the second brightness region is smaller than the brightness value corresponding to the third brightness region.
6. The method of claim 3, further comprising:
determining a first proportional brightness parameter and a second proportional brightness parameter based on the number of pixel points with different brightness in the Y-channel image corresponding to the third image;
performing brightness stretching on the second image based on the first proportional brightness parameter and the second proportional brightness parameter to obtain a fourth image; the first proportional brightness parameter is a brightness value corresponding to a first proportional number of pixel points in the Y-channel image; the second proportion brightness parameter is the brightness value corresponding to the pixel points with the second proportion quantity in the Y channel image; the first proportional quantity is greater than the second proportional quantity.
7. The method of claim 1, wherein obtaining the first image layer containing the first frequency information in the first image based on the first image comprises:
and performing guide filtering on the first image to obtain a first image layer.
8. The method according to claim 1 or 2, characterized in that the method further comprises:
performing brightness enhancement processing on the first image layer;
the fusion processing of the second image layer and the first image layer after the image enhancement processing to obtain the second image comprises:
and carrying out fusion processing on the second image layer subjected to the image enhancement processing and the first image layer subjected to the brightness enhancement processing to obtain a second image.
9. An image processing apparatus, characterized in that the apparatus comprises:
the processing module is used for obtaining a first image layer containing first frequency information in a first image based on the first image; obtaining a second image layer containing second frequency information in the first image according to the information difference between the first image and the first image layer; an image frequency of the first frequency information is lower than an image frequency of the second frequency information;
the enhancement module is used for carrying out image enhancement processing on the second image layer;
and the fusion module is used for carrying out fusion processing on the second image layer and the first image layer after the image enhancement processing to obtain a second image.
10. The apparatus of claim 9, wherein the augmentation module is further configured to:
performing contrast enhancement processing on the second image layer;
the fusion module is further used for:
and fusing the second image layer after the contrast enhancement processing and the first image layer to obtain a second image.
11. The apparatus of claim 9, further comprising:
the determining module is used for dividing each pixel in a third image into different brightness areas according to the brightness value of each pixel in the third image; determining brightness adjusting functions corresponding to the brightness areas according to the different brightness areas; wherein, the brightness adjusting functions corresponding to different brightness areas are different;
and the adjusting module is used for adjusting pixels in different brightness areas in the third image according to the brightness adjusting function corresponding to each brightness area to obtain the first image.
12. The apparatus of claim 11, wherein the adjustment module is further configured to:
determining a first brightness adjustment curve according to the brightness adjustment function corresponding to each brightness area;
acquiring a histogram equalization curve of a Y-channel image corresponding to the third image;
obtaining a second brightness adjustment curve based on the histogram equalization curve and the first brightness adjustment curve;
and adjusting pixels in different brightness areas in the third image based on the second brightness adjustment curve to obtain the first image.
13. The apparatus according to claim 11 or 12, wherein the luminance area comprises at least: a first luminance region, a second luminance region, and a third luminance region;
the brightness adjusting function corresponding to the first brightness region is a Gaussian function;
the brightness adjusting function corresponding to the second brightness area is a linear function;
the brightness adjusting function corresponding to the third brightness area is a sigmoid function;
the brightness value corresponding to the first brightness region is smaller than the brightness value corresponding to the second brightness region, and the brightness value corresponding to the second brightness region is smaller than the brightness value corresponding to the third brightness region.
14. The apparatus of claim 9, further comprising: a stretching module for:
determining a first proportional brightness parameter and a second proportional brightness parameter based on the number of pixel points with different brightness in the Y-channel image corresponding to the third image;
performing brightness stretching on the second image based on the first proportional brightness parameter and the second proportional brightness parameter to obtain a fourth image; the first proportional brightness parameter is a brightness value corresponding to a first proportional number of pixel points in the Y-channel image; the second proportion brightness parameter is the brightness value corresponding to the pixels with the second proportion quantity in the Y-channel image; the first proportional quantity is greater than the second proportional quantity.
15. The apparatus of claim 9, wherein the processing module is further configured to:
and performing guide filtering on the first image to obtain first image layer information.
16. The apparatus of claim 9 or 10, wherein the enhancement module is further configured to:
performing brightness enhancement processing on the first image layer;
the fusion module is further configured to:
and carrying out fusion processing on the second image layer subjected to the image enhancement processing and the first image layer subjected to the brightness enhancement processing to obtain a second image.
17. An image processing apparatus characterized by comprising:
a processor;
a memory for storing executable instructions;
wherein the processor is configured to: implementing the image processing method of any one of claims 1 to 8 when executing executable instructions stored in the memory.
18. A non-transitory computer-readable storage medium in which instructions, when executed by a processor of an image processing apparatus, enable the image processing apparatus to perform the image processing method of any one of claims 1 to 8.
CN202110441726.9A 2021-04-23 2021-04-23 Image processing method, image processing apparatus, and storage medium Pending CN115239570A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110441726.9A CN115239570A (en) 2021-04-23 2021-04-23 Image processing method, image processing apparatus, and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110441726.9A CN115239570A (en) 2021-04-23 2021-04-23 Image processing method, image processing apparatus, and storage medium

Publications (1)

Publication Number Publication Date
CN115239570A true CN115239570A (en) 2022-10-25

Family

ID=83665730

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110441726.9A Pending CN115239570A (en) 2021-04-23 2021-04-23 Image processing method, image processing apparatus, and storage medium

Country Status (1)

Country Link
CN (1) CN115239570A (en)


Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116128413A (en) * 2023-04-17 2023-05-16 四川科斯特自动化设备有限公司 Intelligent warehouse material statistics system based on Bluetooth communication
CN116128413B (en) * 2023-04-17 2023-06-16 四川科斯特自动化设备有限公司 Intelligent warehouse material statistics system based on Bluetooth communication
CN116704316A (en) * 2023-08-03 2023-09-05 四川金信石信息技术有限公司 Substation oil leakage detection method, system and medium based on shadow image reconstruction

Similar Documents

Publication Publication Date Title
CN111418201B (en) Shooting method and equipment
US20210272251A1 (en) System and Method for Real-Time Tone-Mapping
CN104517268B (en) Adjust the method and device of brightness of image
CN109345485B (en) Image enhancement method and device, electronic equipment and storage medium
CN112614064B (en) Image processing method, device, electronic equipment and storage medium
CN106131441B (en) Photographing method and device and electronic equipment
CN105528765B (en) Method and device for processing image
CN108932696B (en) Signal lamp halo suppression method and device
CN104050645B (en) Image processing method and device
CN112950499B (en) Image processing method, device, electronic equipment and storage medium
JP7136956B2 (en) Image processing method and device, terminal and storage medium
CN112785537B (en) Image processing method, device and storage medium
CN115239570A (en) Image processing method, image processing apparatus, and storage medium
CN105574834B (en) Image processing method and device
CN111625213A (en) Picture display method, device and storage medium
CN106341613B (en) Wide dynamic range image method
CN117616774A (en) Image processing method, device and storage medium
US20220036518A1 (en) Method for processing image, electronic device and storage medium
CN107563957B (en) Eye image processing method and device
CN111383166A (en) Method and device for processing image to be displayed, electronic equipment and readable storage medium
CN113472997B (en) Image processing method and device, mobile terminal and storage medium
CN104992416A (en) Image enhancement method and device, and intelligent equipment
CN116866495A (en) Image acquisition method, device, terminal equipment and storage medium
Yun et al. A contrast enhancement method for HDR image using a modified image formation model
CN114331852A (en) Method and device for processing high dynamic range image and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination