CN114066748A - Image processing method and electronic device

Info

Publication number
CN114066748A
Authority
CN
China
Prior art keywords
image
enhanced
brightness
enhancement
determining
Prior art date
Legal status
Pending
Application number
CN202111217780.1A
Other languages
Chinese (zh)
Inventor
郭章
罗力华
李彩玲
Current Assignee
SHENZHEN CULTRAVIEW DIGITAL TECHNOLOGY CO LTD
Original Assignee
SHENZHEN CULTRAVIEW DIGITAL TECHNOLOGY CO LTD
Priority date
Filing date
Publication date
Application filed by SHENZHEN CULTRAVIEW DIGITAL TECHNOLOGY CO LTD
Priority to CN202111217780.1A
Publication of CN114066748A
Legal status: Pending

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 5/00 - Image enhancement or restoration
    • G06T 5/90 - Dynamic range modification of images or parts thereof
    • G06T 5/92 - Dynamic range modification of images or parts thereof based on global image properties
    • G06T 5/94 - Dynamic range modification of images or parts thereof based on local image properties, e.g. for local contrast enhancement
    • G06T 2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 - Special algorithmic details
    • G06T 2207/20024 - Filtering details
    • G06T 2207/20028 - Bilateral filtering

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Image Processing (AREA)

Abstract

The application provides an image processing method and an electronic device, relating to the field of image processing. The image processing method includes: performing global brightness enhancement processing on an input image to obtain a first enhanced image, and performing local contrast enhancement processing on the first enhanced image to obtain a second enhanced image. By adjusting the brightness and the local contrast of the image, the method raises the brightness of the dark field in the image and reduces the brightness of the bright field without affecting the sense of layering, refines the dynamic adjustment range, enhances picture quality, and improves the visual effect of the image.

Description

Image processing method and electronic device
Technical Field
The present application relates to the field of image processing, and in particular, to an image processing method and an electronic device.
Background
With the development of technology, more and more adjustment technologies are provided on existing electronic devices to meet people's visual needs. For example, taking a television as the electronic device, current televisions are generally provided with a plurality of image modes for users to select when watching, such as: bright mode, soft mode, sports mode, movie mode, etc. By switching the image mode, a user can select the desired visual effect.
However, when such preset modes are switched, image content with high brightness on the screen is easily overexposed, and image content with low brightness easily becomes hard to see.
Therefore, a new image processing method is urgently needed to effectively adjust the display effect and improve the user's visual experience.
Disclosure of Invention
The embodiments of the present application provide an image processing method and an electronic device that raise the brightness of the dark field in an image and reduce the brightness of the bright field without affecting the sense of layering, and refine the dynamic adjustment range, thereby enhancing picture quality and improving the visual effect of the image.
To achieve this purpose, the following technical solutions are adopted:
in a first aspect, an image processing method is provided, which is applied to an electronic device, and includes:
performing global brightness enhancement processing on an input image to obtain a first enhanced image, wherein the global brightness enhancement processing is used for indicating that a brightness value corresponding to each pixel in the input image is subjected to nonlinear enhancement, and the brightness value corresponding to each pixel in the first enhanced image is a first enhanced brightness value; and performing local contrast enhancement processing on the first enhanced image to obtain a second enhanced image, wherein the local contrast enhancement processing is used for indicating that a first enhanced brightness value of a local area of the first enhanced image is enhanced.
The embodiments of the present application provide an image processing method and an electronic device. By adjusting the global brightness and the local contrast of an image, the brightness of the dark field in the image is raised and the brightness of the bright field is reduced without affecting the sense of layering, and the dynamic adjustment range is refined, thereby enhancing picture quality and improving the visual effect of the image.
In one possible implementation, performing global brightness enhancement processing on an input image to obtain a first enhanced image includes: determining the maximum value of the three primary color pixel values corresponding to each pixel in the input image and taking the maximum value as the corresponding initial brightness value; normalizing the initial brightness value to obtain an intermediate brightness value; and determining the corresponding first enhanced brightness value from the intermediate brightness value by using a brightness enhancement formula. In this implementation, the brightness value corresponding to each pixel in the input image can be non-linearly enhanced using the brightness enhancement formula.
In one possible implementation, the brightness enhancement formula is:

[brightness enhancement formula: given in the original document only as images, expressing the first enhanced brightness value B as a nonlinear function of the intermediate brightness value V, parameterized by L]

wherein B is the first enhanced brightness value, V is the intermediate brightness value, and L is the initial brightness value at which, accumulating from small to large in the brightness cumulative histogram, the accumulated brightness reaches 10% of the total brightness of all pixel values.
In a possible implementation manner, performing local contrast enhancement processing on the first enhanced image to obtain a second enhanced image includes: determining an image standard deviation under a preset scale aiming at the first enhanced image; determining a doubling index corresponding to the image standard deviation according to a doubling index formula; determining a contrast enhancement index according to the first enhancement brightness value and the doubling index corresponding to each pixel in the first enhancement image; and determining a second enhanced brightness value according to the first enhanced brightness value and the contrast enhancement index corresponding to each pixel in the first enhanced image, and generating the second enhanced image. In this implementation, different doubling indexes corresponding to different local regions can be determined by using a doubling index formula, so that different contrast enhancement indexes are determined, and different enhancement can be performed on different regions of the first enhanced image.
In one possible implementation manner, determining, for the first enhanced image, an image standard deviation at a preset scale includes: and determining the image standard deviation under the preset scale by using Gaussian filtering, bilateral filtering or guided filtering aiming at the first enhanced image.
In one possible implementation, the doubling index formula is:
ρ = (p1 + p3*σ + p5*σ² + p7*σ³ + p9*σ⁴) / (1 + p2*σ + p4*σ² + p6*σ³ + p8*σ⁴)
wherein σ is the image standard deviation at the preset scale, p1 to p9 are preset values, and ρ is the doubling index.
In a possible implementation manner, determining a contrast enhancement index according to the first enhanced luminance value and the doubling index corresponding to each pixel in the first enhanced image includes: determining a corresponding convolution brightness value by utilizing self-adaptive scale bilateral filtering according to the first enhancement brightness value corresponding to each pixel in the first enhancement image; and determining the contrast enhancement index according to the convolution brightness value, the first enhancement brightness value and the doubling index.
In one possible implementation, the method further includes: and determining the corresponding enhancement saturation by using a saturation adjustment formula according to the brightness mean value of the second enhancement image.
In one possible implementation, the method further includes: converting the input image from an RGB domain to an HSV domain, and determining a hue angle, saturation and brightness corresponding to each pixel of the input image; determining a skin color image block in the input image in an HSV domain; converting the skin tone image patch from an HSV domain to an RGB domain; conducting guided filtering processing on the skin color image block located in the RGB domain to obtain a skin color enhanced image block; and obtaining an output image according to the skin color enhanced image block and the input image.
In a second aspect, there is provided an electronic device for performing the image processing method as in the first aspect above or any possible implementation manner of the first aspect.
In a third aspect, a computer-readable storage medium is provided, in which a computer program or instructions are stored, which, when read and executed by a computer, cause the computer to perform the image processing method as in the first aspect above or any possible implementation manner of the first aspect.
The embodiments of the present application provide an image processing method and an electronic device. By adjusting the global brightness, the local contrast, and the skin color of an image, the brightness of the dark field is raised and the brightness of the bright field is reduced without affecting the sense of layering, and the dynamic adjustment range is refined, thereby enhancing picture quality and improving the visual effect of the image.
Drawings
Fig. 1 is a schematic flowchart of an image processing method according to an embodiment of the present application;
fig. 2 is a luminance cumulative histogram provided in an embodiment of the present application;
FIG. 3 is a graph showing a variation trend of the intermediate luminance value and the first enhanced luminance value;
FIG. 4 is a graph of image standard deviation versus doubling index;
FIG. 5 is a schematic flowchart of another image processing method provided in the embodiments of the present application;
FIG. 6 is a table of adjustment relationships provided in an embodiment of the present application;
FIG. 7 is a schematic flowchart of another image processing method provided in the embodiments of the present application;
FIG. 8 is a schematic flowchart of another image processing method provided in the embodiments of the present application;
FIG. 9 is an HSV model;
FIG. 10 is a projection of the HSV model shown in FIG. 9.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application.
In the description of the embodiments of the present application, "/" means "or" unless otherwise specified; for example, A/B may mean A or B. "And/or" herein merely describes an association between associated objects and indicates that three relationships may exist; for example, A and/or B may mean: A exists alone, both A and B exist, or B exists alone.
In the following, the terms "first", "second" are used for descriptive purposes only and are not to be understood as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defined as "first" or "second" may explicitly or implicitly include one or more of that feature. In the description of the present embodiment, "a plurality" means two or more unless otherwise specified.
The directional terms "left", "right", "upper" and "lower" are defined relative to the orientation in which the display assembly is schematically positioned in the drawings, and it is to be understood that these directional terms are relative concepts that are used for descriptive and clarity relative to each other and that may vary accordingly depending on the orientation in which the display assembly is positioned.
With the development of technology, more and more adjustment technologies are provided on existing electronic devices to meet people's visual needs. For example, taking a television as the electronic device, current televisions are generally provided with a plurality of image modes for users to select when watching, such as: bright mode, soft mode, sports mode, movie mode, etc. By switching the image mode, a user can select the desired visual effect. In addition, televisions typically offer a separate sharpness adjustment.
However, when such preset modes are switched, image content with high brightness on the screen is easily overexposed, and image content with low brightness easily becomes hard to see.
In addition, existing televisions do not process skin color, so the facial colors in some displayed images are unevenly distributed and poorly rendered.
Therefore, a new image processing method is urgently needed to effectively adjust the display effect and improve the user's visual experience.
In view of this, an embodiment of the present application provides an image processing method that, by adjusting the global brightness, the local contrast, and the skin color of an image, raises the brightness of the dark field and reduces the brightness of the bright field without affecting the sense of layering, and refines the dynamic adjustment range to enhance picture quality, thereby improving the visual effect of the image.
The following describes an image processing method provided in an embodiment of the present application in detail with reference to the accompanying drawings.
Fig. 1 illustrates a flow diagram of an image processing method. As shown in fig. 1, the method includes the following S10 to S20.
And S10, carrying out global brightness enhancement processing on the input image to obtain a first enhanced image.
The global brightness enhancement processing is used for indicating that the brightness corresponding to each pixel in the input image is subjected to nonlinear enhancement. The brightness value corresponding to each pixel in the first enhanced image is a first enhanced brightness value.
It should be understood that non-linear enhancement means that the strength of the brightness enhancement differs from pixel to pixel, so the overall picture quality can be improved in a more refined way. The input image is in the RGB domain, and the first enhanced image is in the HSV domain.
And S20, carrying out local contrast enhancement processing on the first enhanced image to obtain a second enhanced image.
Wherein the local contrast enhancement processing is for instructing enhancement of a first enhanced luminance value of a local region of the first enhanced image.
It should be understood that after global brightness enhancement, the sense of picture layering in the first enhanced image is reduced relative to the input image, and in order to further improve it, the first enhanced image may be subjected to local contrast enhancement processing to improve the contrast of the local content. The second enhanced image is located in the HSV domain.
The embodiment of the present application provides an image processing method in which, by adjusting the global brightness and the local contrast of an image, the brightness of the dark field in the image is raised and the brightness of the bright field is reduced without affecting the sense of layering, and the dynamic adjustment range is refined, so that picture quality is enhanced and the visual effect of the image is improved.
Alternatively, as an implementable embodiment, the above S10 may include S11 to S13.
And S11, determining the maximum value of the three primary color pixel values corresponding to each pixel in the input image, and taking the maximum value as the corresponding initial brightness value.
And S12, carrying out normalization processing on the initial brightness value of each pixel to obtain an intermediate brightness value.
And S13, determining a corresponding first enhanced brightness value by utilizing a brightness enhancement formula according to the intermediate brightness value.
Wherein, the brightness enhancement formula is:

[brightness enhancement formula: given in the original document only as images, expressing the first enhanced brightness value B as a nonlinear function of the intermediate brightness value V, with a parameter λ whose value depends on the range in which L falls]

It is to be understood that B is the first enhanced brightness value; V is the intermediate brightness value obtained by normalizing the initial brightness value; and L is the initial brightness value at which, accumulating from small to large in the brightness cumulative histogram, the accumulated brightness reaches 10% of the total brightness of all pixel values.
It should be understood that a smaller L indicates a higher proportion of dark field in the input image, and a larger L indicates a higher proportion of bright field.
For example, if the three primary color pixel values corresponding to a pixel of the input image are the red pixel value r, the green pixel value g, and the blue pixel value b, the maximum of r, g, and b is determined and used as the initial brightness value of that pixel, which can be expressed as V(x, y) = max(r, g, b), where x and y denote the coordinates of the pixel.
Then, the initial brightness value corresponding to each pixel is normalized, that is, each initial brightness value is divided by the maximum possible initial brightness value to obtain the intermediate brightness value, which can be expressed as V(x, y)/255. It should be understood that the initial brightness values range from 0 to 255, so the maximum initial brightness value is 255.
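For illustration only (not part of the original disclosure), the following Python sketch computes the initial brightness value as the per-pixel maximum of the three primary color channels and then normalizes it; the function name and array conventions are assumptions.

```python
import numpy as np

def intermediate_brightness(rgb_image):
    """Initial brightness V(x, y) = max(r, g, b) per pixel, then normalization
    by 255 to obtain the intermediate brightness value in [0, 1]."""
    rgb = np.asarray(rgb_image, dtype=np.float32)  # shape (H, W, 3), values 0..255
    initial = rgb.max(axis=-1)                     # initial brightness value per pixel
    return initial / 255.0                         # intermediate brightness value
```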
Meanwhile, the initial brightness value is accumulated to generate a brightness accumulation histogram.
Illustratively, fig. 2 shows a brightness cumulative histogram. As shown in fig. 2, the initial brightness values range from 0 to 255, and correspondingly the value range of L is 0 to 255. If L is 30, the sum of the initial brightness values that are less than or equal to 30 accounts for 10% of the sum of the initial brightness values of the whole image. In this case L falls in the first value range (0, 30), so λ is 0; after λ is substituted into the above brightness enhancement formula, the first enhanced brightness value corresponding to each pixel can be determined with the formula.
If L is 150, the sum of the initial brightness values that are less than or equal to 150 accounts for 10% of the sum of the initial brightness values of the whole image, while the initial brightness values greater than 150 account for 90% of that sum, so the proportion of bright field is very high. In this case L falls in the second value range (30, 150), so λ is 1; after λ is substituted into the above brightness enhancement formula, the first enhanced brightness value corresponding to each pixel can be determined with the formula.
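As an editorial illustration (not part of the original disclosure), the threshold L can be read off a brightness cumulative histogram roughly as follows; this sketch assumes "accumulated brightness" means the running sum of the brightness values of all pixels at each level, taken from level 0 upward, and the function name is an assumption.

```python
import numpy as np

def brightness_threshold_L(initial_brightness, fraction=0.10):
    """Smallest level L (0..255) at which the accumulated brightness of pixels
    with values <= L reaches `fraction` of the total brightness of the image."""
    levels = np.arange(256, dtype=np.float64)
    counts, _ = np.histogram(initial_brightness.ravel(), bins=256, range=(0, 256))
    cumulative = np.cumsum(counts * levels)        # brightness accumulated level by level
    return int(np.searchsorted(cumulative, fraction * cumulative[-1]))
```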
Illustratively, fig. 3 illustrates a variation trend of the intermediate luminance value and the first enhanced luminance value. As shown in fig. 3, the horizontal axis represents the intermediate luminance value V and the vertical axis represents the first enhanced luminance value B.
As λ changes, smaller intermediate brightness values are noticeably raised after the global brightness enhancement processing; that is, for low-brightness regions the brightness is significantly increased. The brightness at dark field 0 is preserved, so the picture does not become overly washed out.
As can be seen from left to right in fig. 3, the strength of the enhancement gradually decreases as the intermediate brightness value increases, until the brightness value is finally left unchanged.
Alternatively, as an implementable embodiment, the above S20 may include S21 to S24.
And S21, determining the standard deviation of the image at the preset scale aiming at the first enhanced image.
Optionally, for the first enhanced image, the image standard deviation at the preset scale may be determined by using gaussian filtering, bilateral filtering or guided filtering. Of course, the standard deviation of the image at the preset scale may also be determined in other manners, which is not limited in this application.
The preset scale is used to indicate the size of the sample when determining the standard deviation of the image, for example, if the preset scale is 4 × 4, for every 4 × 4 pixels, a corresponding standard deviation of the image is determined.
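For illustration only, one common way to obtain a per-window standard deviation at a preset scale is box filtering of the first and second moments, as in the sketch below; the window size and function name are assumptions (the text only gives 4 × 4 as an example).

```python
import numpy as np
from scipy.ndimage import uniform_filter

def local_std(image, scale=4):
    """Standard deviation of brightness within each scale x scale neighborhood."""
    img = np.asarray(image, dtype=np.float64)
    mean = uniform_filter(img, size=scale)
    mean_sq = uniform_filter(img * img, size=scale)
    var = np.maximum(mean_sq - mean * mean, 0.0)  # guard against rounding below zero
    return np.sqrt(var)
```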
And S22, determining the doubling index corresponding to the image standard deviation according to the doubling index formula.
Optionally, the doubling index formula is: ρ = (p1 + p3*σ + p5*σ² + p7*σ³ + p9*σ⁴) / (1 + p2*σ + p4*σ² + p6*σ³ + p8*σ⁴)
wherein σ is the image standard deviation at the preset scale, p1 to p9 are preset values, and ρ is the doubling index.
Illustratively, fig. 4 shows a curve of image standard deviation versus doubling index. The doubling index formula was fitted to curves obtained from multiple experiments on actual effects (as shown in fig. 4).
When the image standard deviation is below 3, the doubling index is approximately 3; when the image standard deviation ranges from 3 to 10, the doubling index decreases toward 1; when the image standard deviation is greater than 10, the local contrast is already very strong and is not enhanced further, so the doubling index stays close to 1. A curve as shown in fig. 4 can be fitted according to this rule, and the preset values p1 to p9 can thereby be determined.
For example, p1 may be preset to 2.79346116062302, p2 may be preset to-0.368889384242114, p3 may be preset to-0.826063393568492, p4 may be preset to 0.0782179297518896, p5 may be preset to 0.119570046400767, p6 may be preset to-0.00875163735457361, p7 may be preset to-0.0102994601530972, p8 may be preset to 0.000377888114853294, and p9 may be preset to 0.000397578762299649.
Based on this, once the image standard deviation is determined and p1 to p9 are substituted into the above doubling index formula, the doubling index ρ corresponding to that image standard deviation can be determined. In actual calculation, ρ may be rounded to one decimal place to simplify the computation.
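The rational-polynomial doubling index can be evaluated directly. The sketch below transcribes the formula and the preset values listed above and applies the optional rounding to one decimal place; it is provided for illustration and operates on a single per-window standard deviation.

```python
def doubling_index(sigma):
    """Doubling index rho as a rational function of the image standard deviation sigma."""
    p1, p2, p3 = 2.79346116062302, -0.368889384242114, -0.826063393568492
    p4, p5, p6 = 0.0782179297518896, 0.119570046400767, -0.00875163735457361
    p7, p8, p9 = -0.0102994601530972, 0.000377888114853294, 0.000397578762299649
    num = p1 + p3 * sigma + p5 * sigma**2 + p7 * sigma**3 + p9 * sigma**4
    den = 1.0 + p2 * sigma + p4 * sigma**2 + p6 * sigma**3 + p8 * sigma**4
    return round(num / den, 1)  # keep one decimal place to simplify later computation

# For example, doubling_index(2.0) is close to 3 and doubling_index(12.0) is close to 1,
# matching the behavior described for fig. 4.
```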
And S23, determining a contrast enhancement index according to the first enhancement brightness value and the doubling index corresponding to each pixel in the first enhancement image.
Optionally, the S23 may include:
determining a corresponding convolution brightness value by utilizing self-adaptive scale bilateral filtering according to a first enhancement brightness value corresponding to each pixel in a first enhancement image; and then, determining the contrast enhancement index according to the convolution brightness value, the first enhancement brightness value and the doubling index.
It will be appreciated that with the adaptive bilateral filtering algorithm, the problem of dark edges at the edges of the image content in the first enhanced image may be avoided. Of course, other algorithms may be used to determine the convolution luminance values, and the embodiment of the present application does not limit this.
Illustratively, each pixel in the first enhanced image corresponds to a first enhanced luminance value B, and the corresponding convolution luminance value B_ac can be determined by using adaptive-scale bilateral filtering. The contrast enhancement index K is then determined according to a formula that is given in the original document only as an image and that combines B, B_ac, and the doubling index.
And S24, determining a second enhanced brightness value according to the first enhanced brightness value corresponding to each pixel in the first enhanced image and the contrast enhancement index, and generating a second enhanced image.
If the first enhanced luminance value is B and the contrast enhancement index is K, the corresponding second enhanced luminance value is F = B^K.
It will be appreciated that by virtue of the contrast enhancement index K, the bright field in the first enhanced image may be enhanced and the dark field reduced.
It should be understood that the image standard deviation σ at the preset scale reflects the distribution of the first enhanced image. The larger the image standard deviation, the steeper (more strongly varying) the picture, and the smaller the doubling index ρ calculated from it; the contrast enhancement index K therefore decreases and F is boosted only slightly, so the dark field does not become very dark and the bright field does not become very bright.
The smaller the image standard deviation, the flatter the picture, and the larger the doubling index ρ calculated from it; the contrast enhancement index K therefore increases and F is boosted correspondingly more, so the dark field becomes darker and the bright field becomes brighter.
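As a hedged sketch (not the patent's implementation), the final power step can be applied as below. OpenCV's fixed-parameter bilateral filter stands in for the adaptive-scale bilateral filtering, and the per-pixel contrast enhancement index K is treated as already computed, since its exact formula appears in the original only as an image; all parameter values are illustrative assumptions.

```python
import numpy as np
import cv2

def second_enhanced_brightness(B, K):
    """Second enhanced brightness F = B ** K, applied per pixel (B in [0, 1])."""
    B_safe = np.clip(B, 1e-6, 1.0)               # avoid 0 ** K at pure black
    return np.clip(np.power(B_safe, K), 0.0, 1.0)

B = np.random.rand(480, 640).astype(np.float32)  # first enhanced brightness map
B_ac = cv2.bilateralFilter(B, 9, 0.1, 5)         # stand-in for adaptive-scale bilateral filtering
K = np.full_like(B, 1.2)                         # placeholder; K would combine B, B_ac and rho
F = second_enhanced_brightness(B, K)
```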
In the prior art, when an electronic device adjusts the dynamic contrast, it usually uses three adjustment curves corresponding to different brightness intervals, for example a low-brightness adjustment curve, a medium-brightness adjustment curve, and a high-brightness adjustment curve.
The adjustment relationship between the three curves and the image is as follows:
when the average brightness value of the image is smaller than a first brightness value L1, the low-brightness adjustment curve is used for adjustment; when the average brightness value of the image is greater than the first brightness value L1 and smaller than a second brightness value L2, the medium-brightness adjustment curve is used for adjustment together with the high-brightness adjustment curve; and when the average brightness value of the image is greater than the second brightness value L2, the high-brightness adjustment curve is used for adjustment, assisted by the medium-brightness adjustment curve.
Given that these three traditional adjustment curves control brightness relatively coarsely, the image processing method provided by the embodiments of the present application refines the brightness-mean intervals and, by preset rules, applies different adjustments to images falling in each brightness-mean interval, making the control of image brightness finer and more accurate.
On the basis of the image processing method shown in fig. 1, as shown in fig. 5, the method may further include the following S30 to S40.
S10 and S20 are the same as the above steps and are not described in detail herein.
And S30, determining the brightness mean value of the second enhanced image.
I.e. the mean of all second enhanced luminance values of the second enhanced image is determined.
And S40, determining the adjustment amplitude of the dynamic contrast adjustment curve by using the adjustment relation table according to the brightness mean value. The dynamic contrast adjustment curve comprises a low brightness adjustment curve, a medium brightness adjustment curve and a high brightness adjustment curve; the adjustment relation table is used for indicating the corresponding relation between the brightness mean value and the dynamic contrast adjustment curve.
For example, fig. 6 is an adjustment relation table provided in an embodiment of the present application. As shown in fig. 6, when the brightness mean of the second enhanced image falls in the first brightness-mean range [0, 15], the proportion of dark field in the current second enhanced image is relatively high; therefore, the amplitude of the low-brightness adjustment curve can be increased by 30%, the amplitude of the medium-brightness adjustment curve can be increased by 15%, and the high-brightness adjustment curve is left unchanged, so that the dark field is adjusted while the bright field is not changed.
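A hedged sketch of how such an adjustment relation table might be applied in code is shown below. Only the [0, 15] row is described in the text; the remaining rows would come from the table in fig. 6 (not reproduced here), and the data structure and function name are assumptions.

```python
# (brightness-mean range) -> amplitude change, in percent, for the
# (low, medium, high) brightness adjustment curves
ADJUSTMENT_TABLE = [
    ((0, 15), (30, 15, 0)),  # dark-field-heavy picture: boost low/medium curves, keep high curve
    # ... further rows would be taken from the table in fig. 6
]

def curve_adjustments(brightness_mean):
    """Return the (low, medium, high) curve amplitude changes for a brightness mean."""
    for (lo, hi), amplitudes in ADJUSTMENT_TABLE:
        if lo <= brightness_mean <= hi:
            return amplitudes
    return (0, 0, 0)  # outside all listed ranges: leave the curves unchanged
```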
On the basis of the image processing method shown in fig. 5, as shown in fig. 7, the method may further include the following S50.
S10-S40 are the same as the above steps, and are not described herein.
And S50, determining the corresponding enhanced saturation by using a saturation adjustment formula according to the brightness mean value of the second enhanced image.
Optionally, the saturation adjustment formula is:

[saturation adjustment formula: given in the original document only as images, mapping the initial saturation S to the enhanced saturation S' using F, V, the brightness mean μ, and a saturation-compensation negative-feedback term]

wherein F is the second enhanced brightness value of the second enhanced image; V is the intermediate brightness value obtained by normalizing the initial brightness value; μ is the brightness mean of the second enhanced image; the negative-feedback term compensates the saturation; S is the initial saturation corresponding to the second enhanced image; and S' is the enhanced saturation corresponding to the second enhanced image.
It should be appreciated that, in order to prevent the saturation from deviating too much, the brightness mean of the second enhanced image is introduced as a regulating factor when the saturation is adjusted. When the brightness mean μ of the second enhanced image is small, that is, when the overall brightness of the second enhanced image is low, the regulating term is small, and its value gradually approaches 1 as the brightness increases.
It will be appreciated that the saturation-compensation negative feedback is used to prevent over-boosting. Its magnitude can be set and changed empirically, which is not limited by the embodiments of the present application.
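Because the saturation adjustment formula itself is given only as an image, the sketch below does not reproduce it; it merely illustrates the described behavior, a negative-feedback term that is small for dark pictures and approaches 1 as the brightness mean grows, using a hypothetical μ / (μ + c) regulator and a hypothetical gain. All of these choices are assumptions.

```python
import numpy as np

def adjust_saturation(S, F, V, mu, gain=0.5, c=32.0):
    """Hypothetical saturation boost driven by the brightness change (F - V),
    damped by a negative-feedback term in the brightness mean mu (0..255)."""
    feedback = mu / (mu + c)                        # small for dark images, approaches 1 as mu grows
    S_prime = S * (1.0 + gain * (F - V) * feedback)
    return np.clip(S_prime, 0.0, 1.0)
```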
Fig. 8 shows a flowchart of another image processing method provided in the embodiment of the present application, and as shown in fig. 8, the method may further include the following S110 to S130.
S110, converting the input image from the RGB domain to the HSV domain, and determining the hue angle, saturation and brightness corresponding to each pixel of the input image.
Wherein, the hue angle is H, the saturation is S, and the brightness is V.
It should be understood that the input image may be converted from the RGB domain to the HSV domain using the following conversion formula.
For example, the hue angle H corresponding to each pixel and the saturation S corresponding to each pixel are calculated according to conversion formulas that are given in the original document only as images, and the brightness V corresponding to each pixel is calculated as V = MAX, the maximum of the three primary color values.
It should be understood that the input image may be an image that has not been processed by the method shown in fig. 1, 5, and 7, or may also be an image that has been processed by the method shown in fig. 1, 5, and 7, and in this case, since the processed images are all located in the HSV domain, the hue angle, saturation, and brightness corresponding to each pixel may be directly determined.
And S120, determining a skin color image block in the input image in the HSV domain.
Optionally, a skin color reference area may be determined in the projection of the HSV model onto the horizontal plane, and the image blocks of the input image (in the HSV domain) that fall within this skin color reference range are then determined; these image blocks are the skin color image blocks.
Fig. 9 shows an HSV model and, correspondingly, fig. 10 shows the projection of that model. The colors in fig. 10 are divided by hue angle and by depth from the inner edge to the outer edge, and the skin color reference area obtained by this division is the local area marked in fig. 10.
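A hedged sketch of selecting skin color pixels in the HSV domain is given below; the hue, saturation, and brightness bounds are purely illustrative placeholders, since the actual skin color reference area is defined by the region marked in fig. 10 rather than by numeric thresholds stated in the text.

```python
import numpy as np

def skin_color_mask(h, s, v, hue_range=(0.0, 50.0), sat_range=(0.15, 0.70), min_v=0.2):
    """Boolean mask of pixels inside a (hypothetical) skin color reference area,
    expressed as bounds on hue angle (degrees), saturation, and brightness."""
    return ((h >= hue_range[0]) & (h <= hue_range[1]) &
            (s >= sat_range[0]) & (s <= sat_range[1]) &
            (v >= min_v))
```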
And S130, converting the skin color image block from the HSV domain to the RGB domain.
And S140, conducting guided filtering processing on the skin color image block located in the RGB domain to obtain a skin color enhanced image block.
And S150, obtaining an output image according to the skin color enhanced image block and the input image.
Alternatively, the guided filtering process may be performed using the following set of formulas, with part of the pixels of the sampled data selected as the filtering window:

[formulas for the linear coefficients a_k and b_k: given in the original document only as images]

q_i = a_k * p_k + b_k

wherein p_k is the pixel value of the skin color image block located in the RGB domain within the current window; the b_k coefficient uses the mean pixel value of the skin color image block located in the RGB domain within the current window; q_i is the skin color enhanced image block; I_k is the guide frame; ε is the filter coefficient; the a_k coefficient uses the variance computed within the current window; and a_k and b_k are the linear-equation coefficients for window k.
It should be understood that the original formula is q_i = a_k * I_k + b_k; when I_k = p_k, it becomes the formula above, giving a filter that both preserves edges and smooths.
It will be appreciated that at the edges of the image the variance is large and a_k tends to 1, so the color of the resulting skin color enhanced image block equals that of the skin color image block and edges are preserved; in flat or textured regions of the image the variance is small, a_k is small, the output is closer to b_k, and b_k tends to the mean value, so mean filtering is achieved.
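The coefficient formulas above are given only as images; the sketch below follows the standard guided-filter formulation with the skin color image block used as its own guide (I = p), which matches the I_k = p_k remark; the window radius and ε value are assumptions. Applied independently to each of the R, G, and B channels of the skin color image block, it yields a skin color enhanced image block.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def self_guided_filter(p, radius=4, eps=1e-3):
    """Standard guided filter with the input as its own guide (I = p):
    a_k = var(p) / (var(p) + eps), b_k = (1 - a_k) * mean(p), q = mean(a) * p + mean(b)."""
    p = np.asarray(p, dtype=np.float64)
    size = 2 * radius + 1
    mean_p = uniform_filter(p, size=size)
    mean_pp = uniform_filter(p * p, size=size)
    var_p = np.maximum(mean_pp - mean_p * mean_p, 0.0)
    a = var_p / (var_p + eps)   # tends to 1 at strong edges (edge preserving)
    b = (1.0 - a) * mean_p      # tends to the window mean in flat regions (smoothing)
    mean_a = uniform_filter(a, size=size)
    mean_b = uniform_filter(b, size=size)
    return mean_a * p + mean_b
```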
Optionally, after obtaining the skin color enhanced image block, an area corresponding to the skin color enhanced image block in the input image may be replaced with the skin color enhanced image block, and thus, an output image may be obtained.
It should be understood that, with the above image processing method, only the skin color image blocks of the input image that fall within the expected, preset skin color reference range receive the skin-beautifying effect, while the other colors of the background are left unchanged, so the overall effect is more natural.
The embodiment of the application also provides electronic equipment, and the electronic equipment is used for executing the image processing method provided by the embodiment.
The beneficial effects of the electronic device provided by the embodiment of the application are the same as the beneficial effects corresponding to the image processing method, and are not repeated herein.
The embodiment of the present application further provides a computer-readable storage medium, in which a computer program or an instruction is stored, and when the computer program or the instruction is read and executed by a computer, the computer is caused to execute the image processing method.
The beneficial effects of the computer-readable storage medium provided by the embodiment of the application are the same as the beneficial effects corresponding to the image processing method, and are not described herein again.
The above-mentioned embodiments are only used for illustrating the technical solutions of the present application, and not for limiting the same; although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; such modifications and substitutions do not cause the essential features of the corresponding technical solutions to depart from the spirit scope of the technical solutions of the embodiments of the present application, and are intended to be included within the scope of the present application.

Claims (10)

1. An image processing method applied to an electronic device, the image processing method comprising:
performing global brightness enhancement processing on an input image to obtain a first enhanced image, wherein the global brightness enhancement processing is used for indicating that a brightness value corresponding to each pixel in the input image is subjected to nonlinear enhancement, and the brightness value corresponding to each pixel in the first enhanced image is a first enhanced brightness value;
and performing local contrast enhancement processing on the first enhanced image to obtain a second enhanced image, wherein the local contrast enhancement processing is used for indicating that the first enhanced brightness value of a local area of the first enhanced image is enhanced.
2. The image processing method according to claim 1, wherein performing global luminance enhancement processing on the input image to obtain a first enhanced image comprises:
determining the maximum value of the three-primary-color pixel values corresponding to each pixel in the input image, and taking the maximum value as a corresponding initial brightness value;
carrying out normalization processing on the initial brightness value to obtain a middle brightness value;
and determining the corresponding first enhanced brightness value by utilizing a brightness enhancement formula according to the intermediate brightness value.
3. The image processing method according to claim 2, wherein the luminance enhancement formula is:
[brightness enhancement formula: given in the original document only as images]
wherein, B is the first enhanced luminance value, V is the intermediate luminance value, and L is the initial luminance value corresponding to when the sum of the accumulated luminance values accounts for 10% of the sum of the luminances of all the pixel values when the accumulation is performed from small to large in the luminance accumulation histogram.
4. The image processing method according to any one of claims 1 to 3, wherein performing local contrast enhancement processing on the first enhanced image to obtain a second enhanced image includes:
determining an image standard deviation under a preset scale aiming at the first enhanced image;
determining a doubling index corresponding to the image standard deviation according to a doubling index formula;
determining a contrast enhancement index according to the first enhancement brightness value and the doubling index corresponding to each pixel in the first enhancement image;
and determining a second enhanced brightness value according to the first enhanced brightness value and the contrast enhancement index corresponding to each pixel in the first enhanced image, and generating the second enhanced image.
5. The image processing method according to claim 4, wherein determining an image standard deviation at a preset scale for the first enhanced image comprises:
and determining the image standard deviation under the preset scale by using Gaussian filtering, bilateral filtering or guided filtering aiming at the first enhanced image.
6. The image processing method according to claim 4, wherein the doubling index formula is:
ρ = (p1 + p3*σ + p5*σ² + p7*σ³ + p9*σ⁴) / (1 + p2*σ + p4*σ² + p6*σ³ + p8*σ⁴)
wherein σ is the image standard deviation at the preset scale, p1 to p9 are preset values, and ρ is the doubling index.
7. The method according to claim 4, wherein determining a contrast enhancement index from the first enhanced luminance value and the doubling index for each pixel in the first enhanced image comprises:
determining a corresponding convolution brightness value by utilizing self-adaptive scale bilateral filtering according to the first enhancement brightness value corresponding to each pixel in the first enhancement image;
and determining the contrast enhancement index according to the convolution brightness value, the first enhancement brightness value and the doubling index.
8. The image processing method according to claim 1 or 7, characterized in that the method further comprises:
and determining the corresponding enhancement saturation by using a saturation adjustment formula according to the brightness mean value of the second enhancement image.
9. The image processing method according to claim 1, characterized in that the method further comprises:
converting the input image from an RGB domain to an HSV domain, and determining a hue angle, saturation and brightness corresponding to each pixel of the input image;
determining a skin color image block in the input image in an HSV domain;
converting the skin tone image patch from an HSV domain to an RGB domain;
conducting guided filtering processing on the skin color image block located in the RGB domain to obtain a skin color enhanced image block;
and obtaining an output image according to the skin color enhanced image block and the input image.
10. An electronic device, characterized in that it is adapted to perform the image processing method according to any of claims 1-9.
CN202111217780.1A 2021-10-19 2021-10-19 Image processing method and electronic device Pending CN114066748A (en)

Priority Applications (1)

CN202111217780.1A (priority and filing date 2021-10-19): Image processing method and electronic device (published as CN114066748A)

Publications (1)

CN114066748A, published 2022-02-18

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination