CN114723613A - Image processing method and device, electronic device and storage medium


Info

Publication number
CN114723613A
CN114723613A (application CN202110007717.9A)
Authority
CN
China
Prior art keywords
pixel
image
processed
pixels
chrominance
Prior art date
Legal status
Pending
Application number
CN202110007717.9A
Other languages
Chinese (zh)
Inventor
周群 (Zhou Qun)
Current Assignee
Beijing Xiaomi Mobile Software Co Ltd
Original Assignee
Beijing Xiaomi Mobile Software Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Xiaomi Mobile Software Co Ltd filed Critical Beijing Xiaomi Mobile Software Co Ltd
Priority to CN202110007717.9A priority Critical patent/CN114723613A/en
Publication of CN114723613A publication Critical patent/CN114723613A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/90 Determination of colour characteristics
    • G06T5/70
    • G06T7/10 Segmentation; Edge detection
    • G06T7/13 Edge detection

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Processing (AREA)

Abstract

The disclosure relates to an image processing method and device, an electronic device and a storage medium. The method includes: acquiring an image to be processed in a luminance-chrominance spatial domain, and performing a down-sampling operation on the image to be processed to obtain a low-frequency image of the image to be processed; determining a chrominance weight for each first pixel according to the degree of luminance difference between that first pixel and its first neighborhood pixels in the image to be processed, where the chrominance weight of each first pixel is positively correlated with the corresponding degree of luminance difference; and performing an up-sampling operation on the low-frequency image based on the chrominance weight corresponding to each first pixel, and synthesizing the image obtained through the up-sampling operation with the image to be processed to obtain a processed image. In this way, the chrominance values of the image can be adjusted during processing, and color overflow at color edges in the processed image is avoided.

Description

Image processing method and device, electronic device and storage medium
Technical Field
The present disclosure relates to the field of image processing, and in particular, to an image processing method and apparatus, an electronic device, and a storage medium.
Background
While collecting, transmitting and processing images, electronic equipment can hardly avoid interference from various factors, so the resulting images are accompanied by noise signals. The noise accompanying an image mainly consists of two kinds: luminance noise and color noise (also referred to as chrominance noise).
In the related art, mature noise reduction techniques exist for luminance noise. However, it is still difficult to effectively reduce color noise in an image; in particular, at color edges where the color span is large, color overflow or blurring often occurs.
Disclosure of Invention
In view of this, the present disclosure provides an image processing method and apparatus, an electronic device, and a storage medium, which can adjust a chrominance value of a low-frequency image according to a luminance distribution of a high-frequency image in an upsampling process of the low-frequency image, so as to avoid a color edge color overflow of the image obtained by the upsampling.
In order to achieve the above purpose, the present disclosure provides the following technical solutions:
according to a first aspect of the present disclosure, an image processing method is provided, including:
acquiring a to-be-processed image of a brightness-chrominance space domain, and performing down-sampling operation on the to-be-processed image to obtain a low-frequency image of the to-be-processed image;
respectively determining the chromaticity weight of each first pixel according to the brightness difference degree of each first pixel and each first neighborhood pixel in the image to be processed, wherein the chromaticity weight of each first pixel in each first pixel is positively correlated with the corresponding brightness difference degree;
and performing upsampling operation on the low-frequency image based on the chromaticity weight corresponding to each first pixel, and synthesizing the image obtained through the upsampling operation with the image to be processed to obtain a processed image.
According to a second aspect of the present disclosure, there is provided an image processing apparatus comprising:
the device comprises an acquisition unit, a processing unit and a processing unit, wherein the acquisition unit is used for acquiring an image to be processed in a brightness-chromaticity space domain and performing down-sampling operation on the image to be processed to obtain a low-frequency image of the image to be processed;
the determining unit is used for respectively determining the chromaticity weight of each first pixel according to the brightness difference degree of each first pixel and each first neighborhood pixel in the image to be processed, wherein the chromaticity weight of each first pixel in each first pixel is positively correlated with the corresponding brightness difference degree;
and the synthesizing unit is used for performing up-sampling operation on the low-frequency image based on the chrominance weight corresponding to each first pixel, and synthesizing the image obtained through the up-sampling operation with the image to be processed to obtain a processed image.
According to a third aspect of the present disclosure, there is provided an electronic device comprising:
a processor;
a memory for storing processor-executable instructions;
wherein the processor implements the method of the first aspect by executing the executable instructions.
According to a fourth aspect of the present disclosure, there is provided a computer readable storage medium having stored thereon computer instructions which, when executed by a processor, implement the steps of the method according to the first aspect.
The technical solution of the present disclosure provides an image processing method. On one hand, the method performs a down-sampling operation on the acquired luminance-chrominance-domain image to be processed to obtain a corresponding low-frequency image; on the other hand, it assigns a chrominance weight to each first pixel according to the degree of luminance difference between that first pixel and its first neighborhood pixels in the image to be processed, the chrominance weight of each first pixel being positively correlated with the corresponding degree of luminance difference. On this basis, an up-sampling operation can be performed on the low-frequency image based on the chrominance weight corresponding to each first pixel, yielding an image whose size matches the image to be processed; the two images are then synthesized to obtain the processed image.
It is understood that, in an image, changes of color influence changes of brightness: where the color change is large, the brightness change is generally also large. In other words, a change in brightness can reflect a change in color. For any pixel, a larger degree of brightness difference from its neighborhood pixels generally implies a larger chrominance difference from those pixels. In the present disclosure, the chrominance weight of any pixel is positively correlated with the corresponding degree of brightness difference, which amounts to assigning a higher chrominance weight to pixels in regions of large brightness change (color edges). On this basis, performing the up-sampling operation on the low-frequency image with the determined chrominance weights effectively raises the chrominance values of the pixels at color edges in the low-frequency image, so that the color change at color edges becomes more pronounced, thereby avoiding the color-overflow problem at color edges in the related art.
In short, color edges in the image are identified by analyzing brightness changes; higher chrominance weights are then assigned to pixels at color edges, which increases the difference between the chrominance values of pixels at color edges and those of pixels in color-flat regions. This highlights the contrast between color edges and flat regions and solves the color-overflow problem at color edges in the related art.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and, together with the description, serve to explain the principles of the disclosure.
FIG. 1 is a flow chart illustrating a method of image processing according to an exemplary embodiment of the present disclosure;
FIG. 2 is a schematic diagram of an image to be processed according to an exemplary embodiment of the present disclosure;
FIG. 3 is a schematic diagram of a low frequency image shown in an exemplary embodiment of the present disclosure;
FIG. 4A is a schematic diagram of an image obtained by an upsampling operation shown in an exemplary embodiment of the present disclosure;
FIG. 4B is a diagram illustrating a comparison between an upsampling operation by bilinear interpolation and a jointly guided upsampling operation according to an exemplary embodiment of the present disclosure;
FIG. 5 is a block diagram of an image processing apparatus shown in an exemplary embodiment of the present disclosure;
fig. 6 is a block diagram of another image processing apparatus shown in an exemplary embodiment of the present disclosure;
fig. 7 is a schematic structural diagram of an electronic device in an exemplary embodiment of the present disclosure.
Detailed Description
Reference will now be made in detail to the exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, like numbers in different drawings represent the same or similar elements unless otherwise indicated. The implementations described in the exemplary embodiments below are not intended to represent all implementations consistent with the present disclosure. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the present disclosure, as detailed in the appended claims.
The terminology used in the disclosure is for the purpose of describing particular embodiments only and is not intended to be limiting of the disclosure. As used in this disclosure and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used herein refers to and encompasses any and all possible combinations of one or more of the associated listed items.
It is to be understood that although the terms first, second, third, etc. may be used herein to describe various information, such information should not be limited to these terms. These terms are only used to distinguish one type of information from another. For example, first information may also be referred to as second information, and similarly, second information may also be referred to as first information, without departing from the scope of the present disclosure. The word "if" as used herein may be interpreted as "when" or "upon" or "in response to a determination", depending on the context.
While collecting, transmitting and processing images, electronic equipment can hardly avoid interference from various factors, so the resulting images are accompanied by noise signals. The noise accompanying an image mainly consists of two kinds: luminance noise and color noise (also referred to as chrominance noise).
In the related art, mature noise reduction techniques exist for luminance noise. However, it is still difficult to effectively reduce color noise in an image; in particular, at color edges where the color span is large, color overflow or blurring often occurs.
Specifically, since color noise is more obvious in a low-frequency image, the related art generally adopts a multi-scale noise reduction framework to denoise the image. In processing through this framework, at least one down-sampling operation is performed on the image to be processed to obtain at least one corresponding low-frequency image. Noise reduction is then performed on the obtained low-frequency image(s) and on the image to be processed, after which up-sampling operations are performed on the denoised low-frequency images to obtain the final processed image.
In this example, the image to be processed is referred to as the high-frequency image, the image obtained by the first down-sampling as the intermediate-frequency image, and the image obtained by the second down-sampling as the low-frequency image. Specifically:
First, a down-sampling operation is performed on the high-frequency image to obtain a smaller intermediate-frequency image, and a further down-sampling operation on the intermediate-frequency image yields a still smaller low-frequency image. The high-frequency, intermediate-frequency and low-frequency images can then each be denoised, allowing relatively fine noise reduction. After denoising, an up-sampling operation is performed on the low-frequency image to obtain an image matching the size of the intermediate-frequency image, which is synthesized with the denoised intermediate-frequency image; the synthesized image is then up-sampled to the size of the high-frequency image and synthesized with the denoised high-frequency image to obtain the processed image.
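The two-level pyramid described above can be sketched as follows. This is a minimal illustration, not the patent's implementation: the `denoise` function is left as a caller-supplied placeholder, and the box-filter down-sampling, nearest-neighbour up-sampling and `blend` factor are all illustrative assumptions.

```python
import numpy as np

def downsample(img):
    """Halve each dimension by 2x2 box averaging (one simple choice)."""
    h, w = img.shape[0] // 2 * 2, img.shape[1] // 2 * 2
    img = img[:h, :w]
    return (img[0::2, 0::2] + img[0::2, 1::2] +
            img[1::2, 0::2] + img[1::2, 1::2]) / 4.0

def upsample(img, shape):
    """Nearest-neighbour up-sampling to a target shape (placeholder)."""
    ys = (np.arange(shape[0]) * img.shape[0] // shape[0]).clip(0, img.shape[0] - 1)
    xs = (np.arange(shape[1]) * img.shape[1] // shape[1]).clip(0, img.shape[1] - 1)
    return img[np.ix_(ys, xs)]

def multiscale_denoise(high, denoise, blend=0.5):
    """High/mid/low pyramid: denoise each level, then fuse upward."""
    mid = downsample(high)
    low = downsample(mid)
    high_d, mid_d, low_d = denoise(high), denoise(mid), denoise(low)
    # Synthesize each up-sampled level with the denoised level above it.
    mid_fused = blend * mid_d + (1 - blend) * upsample(low_d, mid_d.shape)
    return blend * high_d + (1 - blend) * upsample(mid_fused, high_d.shape)
```

The output has the same size as the input high-frequency image, matching the "up-sample and synthesize" flow described above.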
When noise reduction is performed through this multi-scale framework, the up-sampling of a low-frequency image (including the up-sampling of the intermediate-frequency image above) is based only on the image's own chrominance information, so the problem of color overflow or blurring at color edges cannot be solved.
Worse, the series of "down-sampling → noise reduction → up-sampling" operations involves several changes of image scale, which lowers image resolution and can even introduce the problem into images that originally did not have it (for example, during up-sampling, interpolated values determined only from the image's own chrominance values reduce the chrominance differences between adjacent pixels, causing color blurring at color edges).
Therefore, the present disclosure provides a method that identifies the color edges of an image and adjusts the chrominance values of pixels at those edges so as to remove color noise there, thereby avoiding the color overflow or blurring at color edges seen in the related art.
Fig. 1 illustrates an image processing method according to an exemplary embodiment of the present disclosure. As shown in fig. 1, the method may include the steps of:
Step 102: acquire an image to be processed in a luminance-chrominance spatial domain, and perform a down-sampling operation on the image to be processed to obtain a low-frequency image of the image to be processed.
The technical solution of the present disclosure can be applied to any type of electronic device; for example, the electronic device may be a mobile terminal such as a smartphone or tablet computer, or a fixed terminal such as a smart television or PC (Personal Computer). It should be understood that any electronic device capable of image processing can serve as the electronic device in the present disclosure; which type of electronic device the technical solution is applied to can be determined by those skilled in the art according to actual needs, and the present disclosure does not limit this.
It is understood that the phenomenon of color overflow or blurring at color edges arises because the chrominance values at a color edge do not differ enough from those in adjacent color-flat areas. For example, suppose the border between color-flat area A and color-flat area B is color edge C. If the chrominance value of pixel X at the color edge is 5, and the chrominance value of pixel Y in area A close to edge C is 4, the two chrominance values differ little, so edge C is not very distinct: visually, color blurring easily occurs, or the color of edge C spreads into area A. The cause of color overflow at color edges in an image is therefore that the difference in pixel chrominance values between the color edge and the color-flat area is small.
In view of this, the present disclosure exploits the rule that "where the color change in an image is large, the brightness change is generally also large": color edges and color-flat areas are distinguished by the luminance values of the pixels, and chrominance weights for adjusting chrominance values are then assigned per region. Pixels at color edges receive higher chrominance weights and pixels in color-flat areas relatively lower ones, so that after adjustment the difference between the chrominance values of edge pixels and flat-area pixels increases, avoiding the color-overflow problem at color edges.
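One way to assign such luminance-based chrominance weights is sketched below. The eight-neighbour absolute difference and the specific monotone mapping to the range [1, 2] are illustrative assumptions; the disclosure only requires that the weight be positively correlated with the degree of luminance difference.

```python
import numpy as np

def chroma_weights(luma, eps=1e-6):
    """Give each pixel a chroma weight that grows with the mean absolute
    luminance difference to its 8 neighbours, so edge pixels get higher
    weights than flat-area pixels (illustrative mapping, not the
    patent's formula)."""
    pad = np.pad(luma.astype(float), 1, mode='edge')
    diff = np.zeros(luma.shape, dtype=float)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            if dy == 0 and dx == 0:
                continue
            shifted = pad[1 + dy: 1 + dy + luma.shape[0],
                          1 + dx: 1 + dx + luma.shape[1]]
            diff += np.abs(luma - shifted)
    diff /= 8.0
    # Monotonically increasing map: flat areas -> ~1, strong edges -> ~2.
    return 1.0 + diff / (diff.max() + eps)
```

Applied to an image with a vertical luminance step, the pixels along the step receive weights near 2 while the flat interior stays at 1, matching the "higher weight at color edges" behaviour described above.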
Since the present disclosure adjusts the chrominance values of pixels based on their luminance values, the image to be processed obviously needs to be an image in a luminance-chrominance spatial domain. Therefore, when the initially acquired image to be processed is in another spatial domain, it must first be converted into a luminance-chrominance-domain image. For example, when the initial image to be processed is in the RGB spatial domain, it should first be converted into an image in a luminance-chrominance spatial domain.
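Such a conversion can be done, for instance, with the classic BT.601 YUV coefficients. This is one common definition of a luminance-chrominance domain, used here only as an illustration; the disclosure does not prescribe a specific conversion, and offset-based YCbCr variants differ slightly.

```python
import numpy as np

def rgb_to_yuv(rgb):
    """Convert an H x W x 3 RGB array (float, 0..1) into Y/U/V channels
    using the classic BT.601 analog-YUV definition."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    y = 0.299 * r + 0.587 * g + 0.114 * b   # luminance
    u = 0.492 * (b - y)                     # blue-difference chrominance
    v = 0.877 * (r - y)                     # red-difference chrominance
    return np.stack([y, u, v], axis=-1)
```

A neutral gray pixel maps to its gray level in Y with zero chrominance, as expected for a luminance-chrominance decomposition.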
The image to be processed acquired in the present disclosure may be any type of image in a luminance-chrominance spatial domain; for example, it may be an image in the YUV spatial domain, or an image in the HSI spatial domain. The specific type of luminance-chrominance-domain image can be determined by those skilled in the art according to the actual situation, and the present disclosure does not limit this.
Since the present disclosure aims to remove color noise from the image to be processed, and color noise generally appears in low-frequency form, a down-sampling operation may first be performed on the acquired luminance-chrominance-domain image to obtain its low-frequency image, and the chrominance-value adjustment may then be performed on that low-frequency image.
Before the technical solution of the present disclosure is described in detail, note that, because many related concepts are involved, the following terms are used for convenience of distinction: a pixel in the image to be processed is referred to as a "first pixel", the neighborhood pixels of the pixels in the image to be processed as "first neighborhood pixels", and a window region obtained from the image to be processed as a "first window region"; likewise, a pixel in the low-frequency image is referred to as a "second pixel", the neighborhood pixels of the pixels in the low-frequency image as "second neighborhood pixels", and a window region obtained from the low-frequency image as a "second window region".
In fact, in the related art, besides color overflow or blurring at color edges, color noise also exists in color-flat regions. Therefore, the present disclosure may also include a step of performing noise reduction on the image to be processed and/or the low-frequency image. When denoising the image to be processed, the whole image can be traversed in a sliding-window manner; the chrominance values of all first pixels in any first window region acquired through the sliding window are weighted, and the computed first target value corresponding to that window region is used as the chrominance value of its center pixel.
The low-frequency image is denoised similarly: the whole low-frequency image can be traversed in a sliding-window manner, the chrominance values of all second pixels in any second window region acquired through the sliding window are weighted, and the computed second target value corresponding to that window region is used as the chrominance value of its center pixel.
In actual operation, both the image to be processed and the low-frequency image are usually denoised, though noise reduction may also be applied to only one of them; those skilled in the art can decide whether to denoise the image to be processed and/or the low-frequency image according to actual needs, and the present disclosure does not limit this.
It should be noted that the sliding window is a conventional sampling manner in the image field: a window of fixed size is slid by a specific step size, and the values of the pixels falling into the window (including per-channel values such as chrominance and luminance) are taken for computation. In the present disclosure, the entire image may be traversed with a step size of 1 pixel, so that every pixel in the image serves in turn as the center pixel for chrominance-value adjustment.
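The stride-1 traversal can be sketched as follows. Padding the border by edge replication so that border pixels also get full windows is an assumed convention, not something the patent specifies.

```python
import numpy as np

def sliding_windows(img, size=3):
    """Yield (y, x, window) for every pixel treated as the centre of a
    size x size window, traversing the image with a step of 1 pixel.
    The border is padded by edge replication (assumed convention)."""
    r = size // 2
    pad = np.pad(img, r, mode='edge')
    h, w = img.shape
    for y in range(h):
        for x in range(w):
            yield y, x, pad[y:y + size, x:x + size]
```

Every pixel appears exactly once as a window centre, which is what allows each pixel's chrominance value to be adjusted in turn.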
In actual calculation, the chrominance value of the center pixel of any of the above-described first window regions may be calculated in various ways.
In an embodiment, after obtaining any first window region from the image to be processed by means of a sliding window, differences between all first pixels in the any first window region and central pixels in the any first window region may be calculated, so as to assign noise reduction weights to all first pixels in the any first window region according to the differences, and the noise reduction weight of any first pixel in the any first window region is negatively correlated with the difference corresponding to the any first pixel. On this basis, the chrominance values of all the first pixels in any first window region may be weighted and averaged based on the noise reduction weight of each first pixel in any first window region, so that the calculated value is used as the chrominance value of the center pixel of any first window region.
After any second window area is obtained from the low-frequency image in a sliding window mode, the chromatic value of the central pixel of any second window area can be adjusted in a similar mode. For example, the difference between all second pixels in any of the second window regions and the central pixel in any of the second window regions may be calculated to assign noise reduction weights to all second pixels in any of the second window regions according to the difference, and the noise reduction weight of any of the second pixels in any of the second window regions is inversely related to the difference corresponding to any of the second pixels. On this basis, the chrominance values of all the second pixels in any one of the second window regions may be weighted and averaged based on the noise reduction weight of each second pixel in the any one of the second window regions, so that the calculated value is used as the chrominance value of the center pixel of the any one of the second window regions.
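A single-window sketch of this first, centre-based embodiment is shown below. The Gaussian fall-off is an illustrative assumption: the patent only requires that the noise reduction weight be negatively correlated with the difference from the centre pixel.

```python
import numpy as np

def denoise_window_center_based(window, sigma=10.0):
    """Weighted average over one window where each pixel's weight falls
    off with its chrominance difference from the *centre* pixel
    (illustrative Gaussian fall-off)."""
    window = window.astype(float)
    center = window[window.shape[0] // 2, window.shape[1] // 2]
    diff = np.abs(window - center)
    w = np.exp(-(diff / sigma) ** 2)   # larger difference -> smaller weight
    return float((window * w).sum() / w.sum())
```

Note that when the centre pixel is an isolated outlier, its own weight dominates and the output stays close to the outlier value, which illustrates the limitation of this embodiment discussed later for first window region B.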
For example, assuming that the chrominance values of the first pixels in the image to be processed are as shown in fig. 2, the entire image may be traversed in a sliding window manner to obtain a plurality of first window regions, and the chrominance value of the central pixel in any one of the first window regions is adjusted by the chrominance value of each of the first pixels in the first window region.
Taking the first window area a shown in fig. 2 as an example, how to adjust the chrominance value of the center pixel of any first window area in this embodiment is described:
as can be seen from fig. 2, the first window area a includes 9 first pixels, and the chrominance value of the central pixel X is 102. Then, noise reduction weights for the 9 first pixels in the first window area a are determined based on the chrominance values 102. The noise reduction weight of any first pixel in the first window region a is inversely related to the difference between the any first pixel and 102. For example, the chroma value of the first pixel M at the upper left corner in the first window area a is 95, and the difference value from 102 is 7; the chrominance value of the first pixel N in the lower left corner is 85, which differs from 102 by 17. Since 7 is less than 17, the noise reduction weight of the first pixel M is greater than the noise reduction weight of the first pixel N, for example, the noise reduction weight of the first pixel M may be 1.10, and then the noise reduction weight of the first pixel N may be 0.60. It is assumed that the noise reduction weights for all first pixels in the first window region a are determined in this manner as shown in table 1 below:
TABLE 1. Noise reduction weights of the nine first pixels in first window area A (chrominance value → noise reduction weight): 95 → 1.10, 94 → 1.00, 85 → 0.60, 83 → 0.55, 91 → 0.80, 93 → 0.95, 99 → 1.40, 108 → 1.20, 102 → 1.45
Then, on the basis of Table 1, the chrominance value of the center pixel after noise reduction can be calculated from the noise reduction weights of all first pixels in first window area A. For example, using a weighted average algorithm, the calculation is:
(95*1.1+94*1.0+85*0.60+83*0.55+91*0.80+93*0.95+99*1.4+108*1.2+102*1.45)/(1.1+1.0+0.60+0.55+0.80+0.95+1.4+1.2+1.45)=96.40。
At this time, 96.40 is used as the chrominance value of the center pixel of first window area A; that is, the original chrominance value 102 of the center pixel is adjusted to 96.40. It should be understood that, since the sliding window traverses the entire image, every first pixel in the image serves as a center pixel and thus undergoes the above chrominance-value adjustment. When the chrominance values of all first pixels have been adjusted, the noise reduction of the image to be processed is considered finished.
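The weighted-average computation above can be checked directly:

```python
# Chrominance values and noise reduction weights of the nine first
# pixels of first window area A, as listed in the worked example.
values  = [95, 94, 85, 83, 91, 93, 99, 108, 102]
weights = [1.10, 1.00, 0.60, 0.55, 0.80, 0.95, 1.40, 1.20, 1.45]

weighted_avg = sum(v * w for v, w in zip(values, weights)) / sum(weights)
new_center = round(weighted_avg, 2)  # replaces the centre value 102
```

Here `new_center` comes out to 96.40, matching the value stated above.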
The process of adjusting the chrominance of any second pixel in the low-frequency image follows the above example, with the "low-frequency image", "second pixel" and "second window region" processed analogously to the "image to be processed", "first pixel" and "first window region"; the example is not repeated here.
According to the adjustment process above, a pixel whose chrominance value differs more from the center pixel's receives a smaller noise reduction weight, so the chrominance difference between the adjusted center pixel and its neighboring pixels is reduced. Visually, color noise points in the image are filtered out, achieving noise reduction for color noise.
However, as can be seen from Table 1 above, since the noise reduction weight of each first pixel in first window area A is determined based on the chrominance value of the center pixel in the above embodiment, the center pixel's own noise reduction weight is always the largest, which makes it difficult to effectively remove isolated color noise in a color-flat area. Take the first window region B in fig. 2 as an example: the chrominance value of its center pixel is 125, much larger than those of the other first pixels in region B, while the chrominance values of the first neighborhood pixels around the center are all relatively close to one another. Visually, the center pixel is clearly an isolated color noise point in a color-flat region. Because the center pixel's noise reduction weight is always the largest in the above embodiment, after denoising the first pixel with chrominance value 125, its chrominance value remains much larger than those of its first neighborhood pixels; that is, the isolated noise in the color-flat area cannot be effectively removed.
In view of the above, the present disclosure proposes another noise reduction processing method.
In another embodiment, the noise reduction weights of the pixels in any window region are no longer determined based on the center pixel; instead, they are assigned based on the mean of the chrominance values of all pixels in that window region.
For example, in the process of performing noise reduction on the image to be processed, the following operations may be performed on any first window region: first, calculate the chrominance mean of all first pixels in the first window region, and determine the first noise reduction weight of each first pixel according to the first difference between that pixel's chrominance value and the chrominance mean, where the first noise reduction weight of any first pixel is negatively correlated with its corresponding first difference; then, based on the first noise reduction weights, perform a weighted average over the chrominance values of all first pixels in the window region to obtain a first target value, and use the first target value as the chrominance value of the central pixel of the window region.
The process of performing noise reduction on the low-frequency image is similar. The following operations may be performed on any second window region: first, calculate the chrominance mean of all second pixels in the second window region, and determine the second noise reduction weight of each second pixel according to the second difference between that pixel's chrominance value and the chrominance mean, where the second noise reduction weight of any second pixel is negatively correlated with its corresponding second difference; then, based on the second noise reduction weights, perform a weighted average over the chrominance values of all second pixels in the window region to obtain a second target value, and use the second target value as the chrominance value of the central pixel of the window region.
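The mean-based weighting in both procedures can be sketched for a single window as follows. The exact mapping from difference to weight is left open by the disclosure; a hypothetical reciprocal mapping is used here purely for illustration:

```python
import numpy as np

def denoise_window(window):
    """Weighted average of one window's chrominance values, with weights
    negatively correlated to each pixel's difference from the window mean."""
    window = np.asarray(window, dtype=float)
    mean = window.mean()
    diff = np.abs(window - mean)
    # Hypothetical weight function: any mapping where a larger difference
    # yields a smaller weight would do; the disclosure does not fix one.
    weights = 1.0 / (1.0 + diff)
    # Weighted average over all pixels becomes the new central-pixel value.
    return float((window * weights).sum() / weights.sum())
```

Applied to the window region B of fig. 2 (`[[98, 86, 99], [89, 85, 91], [66, 84, 125]]`), the outlier 125 receives a small weight and the result lands near the bulk of the window's values.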
Taking the first window region B in the image to be processed shown in fig. 2 as an example, the following describes how this embodiment adjusts the chrominance value of the central pixel of any first window region:
First, the chrominance mean of all first pixels in the first window region B is calculated: (98+86+99+89+85+91+66+84+125)/9 = 91.44 (for convenience of calculation, 91 is used as an example hereinafter). Then, the first difference between the chrominance value of each first pixel in the first window region B and the chrominance mean is calculated, so that a noise reduction weight can be assigned to each first pixel according to its corresponding first difference. For example, the assigned noise reduction weights may be as shown in Table 2 below:
First pixel chrominance value:          98    86    99    89    85    91    66    84    125
First difference from the mean (91):     7     5     8     2     6     0    25     7    34
Noise reduction weight:                1.1   1.3   1.0  1.35  1.20  1.45  0.40  1.10  0.30

TABLE 2
Then, on the basis of Table 2, the chrominance value of the central pixel after noise reduction can be calculated from the noise reduction weights of all first pixels in the first window region B. For example, the calculation may be performed by a weighted average algorithm, as follows:
(98*1.1+86*1.3+99*1.0+89*1.35+85*1.20+91*1.45+66*0.40+84*1.10+125*0.30)/(1.1+1.3+1.0+1.35+1.20+1.45+0.40+1.10+0.30)=90.11。
At this time, 90.11 is taken as the chrominance value of the central pixel; that is, the original chrominance value 125 of the central pixel of the first window region B is adjusted to 90.11. As in the above embodiment, the sliding window traverses the entire image to be processed, so that every first pixel in the image serves in turn as the central pixel and undergoes this chrominance adjustment. When the chrominance values of all first pixels have been adjusted, the noise reduction processing of the image to be processed is considered finished.
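The arithmetic of this worked example can be checked directly, with the noise reduction weights taken from Table 2:

```python
# Chrominance values of the first pixels in window region B, and the
# noise reduction weights assigned to them in Table 2.
values  = [98, 86, 99, 89, 85, 91, 66, 84, 125]
weights = [1.1, 1.3, 1.0, 1.35, 1.20, 1.45, 0.40, 1.10, 0.30]

mean = sum(values) / len(values)                          # chrominance mean
adjusted = sum(v * w for v, w in zip(values, weights)) / sum(weights)

print(round(mean, 2))      # 91.44
print(round(adjusted, 2))  # 90.11, the new central-pixel chrominance value
```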
In this embodiment, the process of adjusting the chrominance of any second pixel in the low-frequency image may likewise follow the above example, with the "low-frequency image", "second pixel" and "second window region" handled in the same manner as the "image to be processed", "first pixel" and "first window region" respectively; the example is not repeated here.
As can be seen from Table 2 above, in this embodiment a weight is assigned to each first pixel in the first window region B no longer based on the chrominance value 125 of the central pixel, so the first pixel with the largest weight is usually not the central pixel. In Table 2, for instance, since the calculated chrominance mean is 91, the first pixel with chrominance value 91 clearly has the largest noise reduction weight in window region B. This avoids the technical problem of the previous embodiment, in which the weight of the central pixel is always the largest and isolated color noise in a color flat region therefore cannot be removed.
It can be understood that, in the earlier embodiment, since the noise reduction weight is assigned to each pixel based on the central pixel, the noise reduction weight of the central pixel is always the largest. Obviously, if the central pixel is isolated color noise in a color flat region (i.e., its chrominance value differs greatly from those of the surrounding pixels), then even after its chrominance value is adjusted on that basis, it still differs considerably from the surrounding pixels, so the color noise cannot be removed.
In the present embodiment, because weights are assigned according to the mean chrominance value of all pixels in the window region, the noise reduction weight of the central pixel is usually not the largest. In particular, if the central pixel is isolated color noise in a color flat region, the chrominance values of its neighborhood pixels differ little from one another, so the calculated chrominance mean differs greatly from the central pixel but only slightly from the neighborhood pixels; the noise reduction weight of the central pixel is therefore small. In the first window region B of the above example, it is the smallest. Clearly, after the image is denoised by this embodiment, isolated color noise in a color flat region can be effectively eliminated: once the chrominance value of the central pixel of window region B is adjusted, the adjusted value 90.11 is much closer to the chrominance values of most first pixels in the region, and visually no isolated color noise remains in the color flat region.
It should be noted that the noise reduction weight values in Tables 1 and 2 above are only schematic, intended solely to convey that the noise reduction weight of any first pixel is negatively correlated with its difference from the central pixel (or from the chrominance mean). How the noise reduction weight of each first pixel is actually determined can be decided by those skilled in the art according to practical needs, and the present disclosure does not limit this. For example, a correspondence between difference values and noise reduction weights may be predefined and used as the basis for determining the weight of any first pixel; or the first pixels may be sorted by their corresponding difference values and the weights assigned according to that order. The same applies to the noise reduction processing of the low-frequency image, which is not described again here.
Step 104, determining the chromaticity weight of each first pixel according to the degree of luminance difference between each first pixel and its first neighborhood pixels in the image to be processed, wherein the chromaticity weight of each first pixel is positively correlated with the corresponding degree of luminance difference.
As can be seen from the above, color overflow or blurring at color edges in an image arises when the difference between the chrominance values of pixels at a color edge and those of pixels in a color flat region is too small. Therefore, in the present disclosure, color edges and color flat regions in an image are identified based on the luminance values of the pixels, and a higher chrominance weight is assigned to pixels at color edges while a relatively lower chrominance weight is assigned to pixels in color flat regions, so that the difference between the chrominance values of pixels at color edges and those in color flat regions increases, thereby addressing the color overflow or blurring at color edges.
It should be noted that, in the present disclosure, after the down-sampling operation is performed on the image to be processed to obtain the low-frequency image, a chrominance weight is assigned to each first pixel in the image to be processed according to its luminance value. Then, using the correspondence between the first pixels in the image to be processed and the second pixels in the low-frequency image, the chrominance value of each second pixel is adjusted during the up-sampling of the low-frequency image. In other words, the chrominance values of the second pixels in the low-frequency image are adjusted based on the luminance distribution of the image to be processed.
This is because the low-frequency image is derived from the image to be processed, so the luminance distributions of the two are almost identical, and the image to be processed can be regarded as a high-frequency image relative to the low-frequency image: it is larger in size and contains more luminance detail, so determining the chrominance weight of each first pixel from the image to be processed is more accurate.
Step 106, performing an up-sampling operation on the low-frequency image based on the chrominance weight corresponding to each first pixel, and synthesizing the image obtained through the up-sampling operation with the image to be processed to obtain a processed image.
In practical operation, the first neighborhood pixels of each first pixel may be determined in a sliding window manner, so that the chrominance weight of each first pixel is determined according to the degree of luminance difference between that pixel and its first neighborhood pixels. Taking the first window region A shown in fig. 2 as an example, all the first pixels except the central pixel may be regarded as the first neighborhood pixels of the central pixel (in the calculation process, the central pixel may also be regarded as a first neighborhood pixel of itself). Of course, the first window region A is only an example of a sliding window containing 9 first pixels; if the sliding window contained 25 first pixels, the number of first neighborhood pixels would increase accordingly. The specific manner of determining the first neighborhood pixels can be decided by those skilled in the art according to practical needs, and the present disclosure does not limit this.
In one embodiment, when assigning the chrominance weight to any first pixel, the assignment may be performed in combination with all first neighborhood pixels of that pixel. For example, the luminance difference value between the first pixel and any one of its first neighborhood pixels may first be calculated, and the weight parameter corresponding to that neighborhood pixel calculated from the luminance difference value. After the weight parameters of all first neighborhood pixels of the first pixel are obtained in this way, a chrominance weight can be assigned to the first pixel based on all the calculated weight parameters.
Still taking the first window region A shown in fig. 2 as an example, and assuming that the numerical value of each first pixel shown in fig. 2 is its luminance value, the following calculation may be performed for any first pixel in the first window region A. Take the first pixel L with luminance value 99 at the upper right corner (a first neighborhood pixel of the central pixel X in the first window region A, standing in for any first neighborhood pixel): the luminance difference between the first pixel L and the central pixel X is calculated to be 3, and a weight parameter is assigned to the first pixel L based on this difference of 3. After the same operation is completed for all first neighborhood pixels of the central pixel X in the first window region A, yielding the weight parameter of each first neighborhood pixel with respect to the central pixel X, the chrominance weight of the central pixel X can be determined based on all the obtained weight parameters.
It should be noted that, in this embodiment, a weight parameter is assigned to each first neighborhood pixel based on the luminance difference between any first pixel and that neighborhood pixel. In actual practice, however, any pixel can be regarded as its own neighborhood pixel, i.e., each first pixel is also assigned a weight parameter itself. In other words, when weight parameters are assigned to the first neighborhood pixels of the central pixel X in the example above, 9 weight parameters are assigned in total.
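The per-neighborhood-pixel weight parameters can be sketched as follows. The mapping from luminance difference to weight parameter is not fixed by the disclosure; a Gaussian with a hypothetical sigma is used here, and the center pixel is counted as its own neighborhood pixel as noted above:

```python
import numpy as np

def weight_params(window_luma, sigma=10.0):
    """Weight parameter for each first-neighborhood pixel of the window's
    central pixel, computed from the luminance difference with the center.
    The center itself gets a parameter too (difference 0 -> parameter 1)."""
    luma = np.asarray(window_luma, dtype=float)
    center = luma[luma.shape[0] // 2, luma.shape[1] // 2]
    diff = np.abs(luma - center)
    # Hypothetical Gaussian mapping; the disclosure only requires the
    # parameter to shrink as the luminance difference grows.
    return np.exp(-(diff ** 2) / (2 * sigma ** 2))
```

For a 3x3 window this returns 9 weight parameters, matching the count in the paragraph above.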
After the chrominance weights of all the first pixels in the image to be processed are obtained in the above manner, the chrominance values of the second pixels in the low-frequency image can be adjusted in the process of performing the up-sampling operation on the low-frequency image.
For example, when calculating the chrominance value of any pixel in the image obtained by the up-sampling operation, the luminance value of the first pixel corresponding to that pixel in the image to be processed and the chrominance value of the second pixel corresponding to that pixel in the low-frequency image may serve as the basis of the calculation. Since the size of the image obtained by the up-sampling operation matches that of the image to be processed, the up-sampling operation can also be expressed as: calculating the chrominance value of the target pixel corresponding to any first pixel in the up-sampled image according to the luminance value of that first pixel in the image to be processed and the chrominance value of the second pixel corresponding to it in the low-frequency image.
In actual operation, after the weight parameter of any first neighborhood pixel of any first pixel in the image to be processed is obtained in the above manner, the second pixel corresponding to that first pixel may be determined in the low-frequency image, and the second neighborhood pixel corresponding to the first neighborhood pixel may be further identified among the second neighborhood pixels of that second pixel. On this basis, the distance between the second neighborhood pixel and the second pixel can be calculated, and the chromaticity reference value corresponding to the first neighborhood pixel determined based on this distance and the weight parameter of the first neighborhood pixel. After the chrominance reference values corresponding to all first neighborhood pixels of the first pixel are determined, a weighted average of these chrominance reference values can be computed to obtain the chrominance value of the target pixel corresponding to the first pixel in the image obtained through the up-sampling operation.
For example, calculating the chroma value of any pixel in the image obtained by the upsampling operation may refer to the following formula:
$$\tilde{S}_p = \frac{1}{k_p}\sum_{q_\downarrow \in \Omega} S_{q_\downarrow}\, f\big(\lVert p_\downarrow - q_\downarrow \rVert\big)\, g\big(\lvert I_p - I_q \rvert\big)$$

wherein $\tilde{S}_p$ is the chroma value of any pixel in the image obtained by the up-sampling operation; $p$ is the first pixel corresponding to that pixel in the image to be processed, and $q$ is any one of the first neighborhood pixels of the first pixel $p$; $I_p$ is the luminance value of the first pixel $p$, and $I_q$ is the luminance value of the first neighborhood pixel $q$; $p_\downarrow$ is the second pixel corresponding to the first pixel $p$ in the low-frequency image, and $q_\downarrow$ is the second neighborhood pixel corresponding to the first neighborhood pixel $q$ in the low-frequency image (strictly speaking, $p_\downarrow$ and $q_\downarrow$ denote the coordinates of those pixels; for convenience they are used to represent the pixels themselves); $\lVert p_\downarrow - q_\downarrow \rVert$ is the distance between the second neighborhood pixel $q_\downarrow$ and the second pixel $p_\downarrow$; $S_{q_\downarrow}$ is the chroma value of the second neighborhood pixel $q_\downarrow$ in the low-frequency image; $\Omega$ is the second window region with the second pixel $p_\downarrow$ as its central pixel; $k_p$ is a normalizing constant; $f(\cdot)$ is a coefficient function acting on the distance; $g(\lvert I_p - I_q \rvert)$ is a weight function used to calculate the above weight parameter from the luminance difference value; and the product $S_{q_\downarrow}\, f(\lVert p_\downarrow - q_\downarrow \rVert)\, g(\lvert I_p - I_q \rvert)$ is the above chromaticity reference value.
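With these definitions, the jointly guided up-sampling can be sketched end to end. This is a minimal NumPy illustration under several assumptions the disclosure leaves open: a 2x scale factor, Gaussian choices for both f and g (with hypothetical sigma_d and sigma_r), the first-neighborhood pixel q taken at twice the low-frequency offset, and edge replication for window areas falling outside the low-frequency image:

```python
import numpy as np

def joint_bilateral_upsample(luma_hi, chroma_lo, radius=1,
                             sigma_d=1.0, sigma_r=10.0):
    """Upsample chroma_lo (half resolution) to the size of luma_hi,
    guided by the full-resolution luminance. f and g are Gaussians here;
    the disclosure leaves both functions open."""
    H, W = luma_hi.shape
    # Blank pixels outside the low-frequency image take the nearest
    # existing chrominance value (edge replication).
    lo = np.pad(np.asarray(chroma_lo, dtype=float), radius, mode='edge')
    out = np.empty((H, W))
    for y in range(H):
        for x in range(W):
            yd, xd = y // 2, x // 2              # second pixel p_down
            num = den = 0.0
            for dy in range(-radius, radius + 1):
                for dx in range(-radius, radius + 1):
                    # first-neighborhood pixel q, clamped to the image
                    qy = min(max(y + 2 * dy, 0), H - 1)
                    qx = min(max(x + 2 * dx, 0), W - 1)
                    # f: coefficient function of the distance ||p_down - q_down||
                    f = np.exp(-(dy * dy + dx * dx) / (2 * sigma_d ** 2))
                    # g: weight parameter from the luminance difference |I_p - I_q|
                    g = np.exp(-float(luma_hi[y, x] - luma_hi[qy, qx]) ** 2
                               / (2 * sigma_r ** 2))
                    w = f * g
                    num += w * lo[yd + dy + radius, xd + dx + radius]
                    den += w
            out[y, x] = num / den                # division by den plays the role of k_p
    return out
```

In a flat region (uniform luminance and chrominance) every weight parameter equals 1 and the output simply reproduces the low-frequency chrominance, as expected.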
For ease of understanding, the image to be processed shown in fig. 2 is again taken as an example, assuming that each numerical value in fig. 2 is the luminance value of the corresponding first pixel, and that the low-frequency image obtained by down-sampling the chromaticity diagram corresponding to fig. 2 is as shown in fig. 3. Obviously, there is a definite correspondence between the first pixels in fig. 2 and the second pixels in fig. 3. For example, the size of the image to be processed shown in fig. 2 is 8 × 8 and the size of the low-frequency image shown in fig. 3 is 4 × 4, meaning that any second pixel in the low-frequency image corresponds to 4 first pixels in the image to be processed, with the correspondence determined by pixel position. For example, the second pixel with chrominance value 56 in the upper left corner of the low-frequency image corresponds to the 4 first pixels in the upper left corner of the image to be processed, whose luminance values are 65, 56, 77 and 58.
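The size correspondence just described can be expressed as a simple coordinate mapping (a sketch assuming the 2x down-sampling factor of this example and row/column indexing from the top-left):

```python
# Each second pixel (r, c) in the 4x4 low-frequency image corresponds to the
# 2x2 block of first pixels (2r..2r+1, 2c..2c+1) in the 8x8 image to be
# processed; conversely, a first pixel (y, x) maps to second pixel (y//2, x//2).
def first_to_second(y, x):
    return y // 2, x // 2

def second_to_first_block(r, c):
    return [(2 * r + dy, 2 * c + dx) for dy in (0, 1) for dx in (0, 1)]

print(first_to_second(0, 7))        # (0, 3): upper-right first pixel -> upper-right second pixel
print(second_to_first_block(0, 0))  # the four upper-left first pixels
```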
It is known in advance that the image obtained by up-sampling the low-frequency image shown in fig. 3 must have the size shown in fig. 4A; so in the up-sampling process, all that needs to be found are the chrominance values of the individual pixels of the image shown in fig. 4A. Take finding the chrominance value of the pixel X″ in fig. 4A as an example. The pixel X″ corresponds to the central pixel of the first window region A in fig. 2 (i.e., the first pixel X), and the first pixel X is one of the 4 pixels in the upper right corner of fig. 2, so the second pixel corresponding to the first pixel X in fig. 3 is the second pixel X′ with chrominance value 91 in the upper right corner of fig. 3. Analogously to the first window region A, a second window region with the second pixel X′ as central pixel may be determined in the low-frequency image by means of a sliding window. Suppose a weight parameter is currently to be assigned to a first neighborhood pixel Y of the first pixel X: the weight parameter of Y may be determined based on the luminance difference value between the first pixel X and the first neighborhood pixel Y, and the weight parameters of the other neighborhood pixels of the pixel X are determined similarly. After the weight parameter of the first neighborhood pixel Y is determined, the second neighborhood pixel Y′ corresponding to Y may be identified among the second neighborhood pixels of the second pixel X′, and the chromaticity reference value of the first neighborhood pixel Y (which may equally be regarded as the chromaticity reference value of the second neighborhood pixel Y′) is then determined based on the distance between the second neighborhood pixel Y′ and the second pixel X′ together with the weight parameter of the first neighborhood pixel Y.
The formula specifically used for calculating the chromaticity reference value can be determined by those skilled in the art according to practical needs, and the present disclosure does not limit this.
It is to be noted that the second window region corresponding to the second pixel X′ shown in fig. 3 includes an area beyond the low-frequency image. In that partial area, the chrominance values of second pixels that do not exist in the original image (which may be called blank pixels) can be filled in by padding. Such padding techniques are well established in the art, and any method may be used to fill the blank pixels in the second window region; the present disclosure does not limit this. For example, for any blank pixel, the chrominance value of the nearest existing second pixel may be taken as its chrominance value.
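For instance, the nearest-pixel padding described above corresponds to edge replication, which can be realized directly (a sketch using NumPy's `pad`; the 2 × 2 values are illustrative):

```python
import numpy as np

lo = np.array([[56, 91],
               [44, 73]], dtype=float)
# Pad by one pixel so a 3x3 window centered on a corner pixel is fully
# defined; each blank pixel takes the value of the nearest existing
# second pixel (NumPy's 'edge' mode).
padded = np.pad(lo, 1, mode='edge')
print(padded[0, 0])  # 56.0, replicated from the top-left existing pixel
```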
After the chrominance reference values of all the first neighborhood pixels of the first pixel X are obtained in this way, the weighted average calculation may be performed on the plurality of chrominance reference values to obtain the chrominance value of the pixel X ″ corresponding to the first pixel X.
To illustrate the difference between the present disclosure and the related art, reference may be made to fig. 4B, in which diagram (a) shows a chromaticity diagram of the image to be processed, diagram (b) a luminance diagram of the image to be processed, diagram (c) the low-frequency image obtained by down-sampling diagram (a), diagram (d) the processed image obtained by up-sampling diagram (c) with the bilinear interpolation method of the related art, and diagram (e) the processed image obtained by up-sampling diagram (c) according to the present disclosure. Since the present disclosure uses the luminance map of the image to be processed to guide the up-sampling of the low-frequency image, this up-sampling operation may also be referred to as jointly guided up-sampling.
Comparing diagram (d) and diagram (e): in the image (e) obtained by jointly guided up-sampling, the pixel corresponding to the "pixel with chrominance value 136" (shown in dark color in the figure) has chrominance value 132, which differs from the neighborhood pixel on its left by 4; in the image (d) obtained by bilinear interpolation, the corresponding pixel has chrominance value 135, the pixel in the area to its left has chrominance value 133, and the difference between the two is 2. Clearly, the processed image obtained by jointly guided up-sampling has a larger chrominance difference between pixels at the color edge; compared with the related art, the chrominance difference at color edges is increased, thereby avoiding color overflow and blurring there.
As can be seen from the above description, in this embodiment the determination of the chrominance weights and the adjustment of the chrominance values by those weights are both performed during the up-sampling operation; from the computational perspective of the electronic device, both steps are completed by a single formula. Alternatively, instead of being completed within the up-sampling of the low-frequency image, these steps may also be carried out in a stepwise manner.
In another embodiment, the luminance mean of any first pixel and all of its first neighborhood pixels in the image to be processed may first be calculated, together with the second difference between the luminance value of that first pixel and the luminance mean; the chrominance weight of the first pixel is then determined based on the second difference, for example as the product of the second difference and a predefined coefficient. The chrominance weights of all first pixels in the image to be processed can be obtained in the same way.
Accordingly, a conventional up-sampling operation may be performed on the low-frequency image. For example, the low-frequency image may be processed with an up-sampling interpolation algorithm to obtain an image to be weighted whose size matches that of the image to be processed. Any type of interpolation algorithm, such as a two-dimensional interpolation algorithm or a linear interpolation algorithm, can serve as the interpolation algorithm in this embodiment. After the image to be weighted is obtained, its pixels can be weighted based on the chrominance weight of each corresponding first pixel in the image to be processed, and the image obtained through this weighted calculation is used as the image obtained through the up-sampling operation.
Still taking the image to be processed shown in fig. 2 as an example, suppose the chrominance weight of the first pixel X currently needs to be determined. The luminance mean of all first pixels in the first window region A may be calculated as (95+94+85+83+91+93+99+108+102)/9 ≈ 94.4. Since the luminance value of the first pixel X is 102, the second difference corresponding to the first pixel X is 102 - 94.4 = 7.6, and the chrominance weight of the first pixel X is determined based on this second difference of 7.6. In the same way, the chrominance weights of the other first pixels in the image to be processed can be obtained.
On the other hand, it is still assumed that the image shown in fig. 3 is the low-frequency image obtained by down-sampling the chromaticity diagram of the image to be processed shown in fig. 2, where the numerical values in fig. 3 represent the chrominance values of the second pixels. In this embodiment, a conventional up-sampling operation may be performed on the low-frequency image to obtain an image to be weighted whose size matches that of the image to be processed, for example as shown in fig. 4A; note that although fig. 4A does not show the chrominance values of the individual pixels, the chrominance values of the image to be weighted obtained in this embodiment are determined. On this basis, the chrominance value of each pixel in the image to be weighted may be adjusted based on the chrominance weight of the corresponding first pixel in the image to be processed; for example, the chrominance value of the pixel X″ may be adjusted based on the chrominance weight of the first pixel X.
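The weight calculation of this stepwise variant can be sketched as follows. The coefficient value is hypothetical, and the window is written with the first pixel X (luminance 102, inferred from the worked difference 102 - 94.4 = 7.6 above) at its center:

```python
import numpy as np

def chroma_weight(window_luma, coeff=0.1):
    """Chrominance weight of the window's central pixel: the absolute
    second difference between its luminance and the window's luminance
    mean, times a predefined coefficient (coeff is a hypothetical value)."""
    luma = np.asarray(window_luma, dtype=float)
    center = luma[luma.shape[0] // 2, luma.shape[1] // 2]
    return coeff * abs(center - luma.mean())
```

For the window region A of the example (luminance mean about 94.4, center 102), the second difference is about 7.6 and the resulting weight is about 0.76 with this coefficient.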
According to the technical scheme of the present disclosure, on one hand, a down-sampling operation is performed on the acquired image to be processed in the luminance-chrominance spatial domain to obtain a corresponding low-frequency image; on the other hand, a chrominance weight is assigned to each first pixel according to the degree of luminance difference between that pixel and its first neighborhood pixels in the image to be processed, the chrominance weight of each first pixel being positively correlated with the corresponding degree of luminance difference. On this basis, an up-sampling operation can be performed on the low-frequency image based on the chrominance weights of the first pixels in the image to be processed, so as to obtain an image whose size matches that of the image to be processed, which is then synthesized with the image to be processed to obtain the processed image.
It should be understood that, in the field of images, changes in color affect changes in luminance: in regions where the color changes greatly, the luminance also usually changes greatly. Therefore, for any pixel, a larger degree of luminance difference from its neighborhood pixels means the pixel lies in a region of large color change, i.e., at a color edge. In the present disclosure, the chrominance weight of any pixel is positively correlated with the corresponding degree of luminance difference, which amounts to assigning a higher chrominance weight to pixels in regions of large luminance change (color edges). On this basis, performing the up-sampling operation on the low-frequency image with the determined chrominance weights effectively raises the chrominance values of the pixels at color edges in the low-frequency image, making the color change at color edges more prominent and thereby avoiding the color overflow problem at color edges in the related art.
In short, the color edges in the image are identified by analyzing how the luminance varies across the image, and a higher chrominance weight is then assigned to the pixels at the color edges, increasing the difference between the chrominance values of pixels at color edges and those at color-flat positions. This highlights the distinction between color edges and color flat areas and thereby solves the problem of color overflow at color edges in the related art.
Optionally, the present disclosure may also perform noise reduction processing on the image to be processed and/or the low-frequency image. For example, the image to be processed and/or the low-frequency image may be traversed by sliding a window, and the chrominance values of all pixels in any window region acquired by sliding the window are subjected to weighted calculation, so that the obtained value is used as the chrominance value of the center pixel of the any window region. In the process of weighting calculation, a noise reduction weight may be assigned to each pixel according to a mean value of all pixels in any window region, and the noise reduction weight of any pixel is inversely related to a difference value between the pixel and the mean value.
Since the noise reduction weight is assigned to each pixel with the mean of all pixels in the window region as the standard, the noise reduction weight of the central pixel is not necessarily the largest. When denoising isolated color noise in a color flat region, the chrominance value of the central pixel corresponding to the color noise differs greatly from those of its neighborhood pixels, so the computed chrominance mean differs little from the chrominance values of the neighborhood pixels but greatly from that of the central pixel, making the noise reduction weight of the central pixel the smallest. Therefore, by this method, the difference between the chrominance value of the pixel corresponding to the color noise and the chrominance values of its neighboring pixels can be effectively reduced, and the isolated color noise in the color flat region can be effectively removed.
Corresponding to the embodiment of the image processing method, the disclosure also provides an embodiment of the image processing device.
Fig. 5 is a block diagram of an image processing apparatus shown in an exemplary embodiment of the present disclosure. Referring to fig. 5, the apparatus includes an acquisition unit 501, a determination unit 502, and a synthesis unit 503.
An obtaining unit 501, configured to obtain a to-be-processed image in a luminance-chrominance spatial domain, and perform downsampling operation on the to-be-processed image to obtain a low-frequency image of the to-be-processed image;
a determining unit 502, configured to determine a chromaticity weight of each first pixel according to a luminance difference degree between each first pixel and each first neighboring pixel in the image to be processed, where the chromaticity weight of each first pixel in each first pixel is positively correlated to the corresponding luminance difference degree;
a synthesizing unit 503, configured to perform an upsampling operation on the low-frequency image based on the chrominance weight corresponding to each first pixel, and synthesize an image obtained through the upsampling operation with the image to be processed, so as to obtain a processed image.
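Read together, the three units describe a pipeline: downsample, derive luminance-based chroma weights, upsample with those weights, and synthesize. A rough sketch follows; the decimation factor, the 3x3 neighborhood, the replication-based upsampling, and the 50/50 synthesis blend are all illustrative assumptions, not details taken from the disclosure.

```python
import numpy as np

def process(luma, chroma, factor=2):
    """Rough sketch of the acquisition/determination/synthesis pipeline."""
    # acquisition unit: low-frequency image by simple decimation
    low = chroma[::factor, ::factor]

    # determination unit: chroma weight ~ |luma - 3x3 neighborhood mean|
    h, w = luma.shape
    pad = np.pad(luma.astype(np.float64), 1, mode="edge")
    weight = np.empty((h, w))
    for y in range(h):
        for x in range(w):
            weight[y, x] = abs(luma[y, x] - pad[y:y + 3, x:x + 3].mean())
    weight /= weight.max() + 1e-9  # normalize to [0, 1]

    # synthesis unit: upsample by replication, weight toward the original
    # chroma at edges, then blend with the image to be processed
    up = np.repeat(np.repeat(low, factor, axis=0), factor, axis=1)[:h, :w]
    weighted = weight * chroma + (1.0 - weight) * up
    return 0.5 * (weighted + chroma)
```

For a flat input the weights are zero everywhere, so the pipeline returns the input chroma unchanged; only pixels with local luminance contrast (candidate color edges) are pulled toward the full-resolution chroma.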
Optionally, the determining unit 502 is further configured to:
calculating the brightness difference value of any first pixel and any first neighborhood pixel thereof, and calculating a weight parameter corresponding to any first neighborhood pixel based on the brightness difference value;
and determining the chrominance weight of any first pixel based on the calculated weight parameters of all first neighborhood pixels of the any first pixel.
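A minimal sketch of this per-neighbor computation, assuming a 3x3 first neighborhood and an exponential mapping from luminance difference to weight parameter (the disclosure fixes neither choice):

```python
import numpy as np

def chroma_weight(luma, y, x, sigma=10.0):
    """Compute a weight parameter for each first-neighborhood pixel from its
    luminance difference with the center pixel, then aggregate the parameters
    into the center pixel's chroma weight."""
    h, w = luma.shape
    params = []
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            if dy == 0 and dx == 0:
                continue
            # clamp neighbors at the image border
            ny, nx = min(max(y + dy, 0), h - 1), min(max(x + dx, 0), w - 1)
            diff = abs(float(luma[y, x]) - float(luma[ny, nx]))
            # weight parameter grows with the luminance difference
            params.append(1.0 - np.exp(-diff / sigma))
    # chroma weight: aggregate of all neighborhood weight parameters
    return sum(params) / len(params)
```

A pixel sitting on a luminance step gets a large chroma weight, while a pixel in a flat region gets a weight of zero, matching the positive correlation recited above.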
Optionally, the synthesis unit 503 is further configured to:
performing the following operations on all first pixels in the image to be processed: acquiring a second pixel corresponding to any first pixel in the low-frequency image, and determining a second neighborhood pixel corresponding to any first neighborhood pixel in all second neighborhood pixels of the second pixel;
calculating the distance between the second neighborhood pixels and the second pixels, and determining a chromaticity reference value corresponding to any one first neighborhood pixel based on the distance and the weight parameter of any one first neighborhood pixel;
performing weighted average calculation on a plurality of determined chromaticity reference values of all first neighborhood pixels corresponding to any one first pixel to obtain a chromaticity value of a target pixel corresponding to any one first pixel;
after obtaining the chrominance values of all target pixels corresponding to all first pixels in the image to be processed, generating an image obtained through the upsampling operation based on the chrominance values of all the target pixels.
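This reads like a joint-bilateral-style upsampling: each chroma reference value combines a spatial distance term between the second pixel and its second-neighborhood pixels with the luminance-derived weight parameter. A sketch under that interpretation; the Gaussian distance kernel, the 2x scale factor, and the 3x3 low-resolution neighborhood are assumptions.

```python
import numpy as np

def upsample_chroma(low_chroma, luma, factor=2, sigma_d=1.0, sigma_l=10.0):
    """For each full-resolution (first) pixel, average low-resolution chroma
    neighbors weighted by spatial distance and by luminance similarity."""
    h, w = luma.shape
    lh, lw = low_chroma.shape
    out = np.empty((h, w))
    for y in range(h):
        for x in range(w):
            cy, cx = y // factor, x // factor  # corresponding second pixel
            num = den = 0.0
            for dy in (-1, 0, 1):
                for dx in (-1, 0, 1):
                    ny = min(max(cy + dy, 0), lh - 1)
                    nx = min(max(cx + dx, 0), lw - 1)
                    # spatial term: distance between second pixel and neighbor
                    wd = np.exp(-(dy * dy + dx * dx) / (2 * sigma_d ** 2))
                    # luminance term from the corresponding first pixels
                    fy, fx = min(ny * factor, h - 1), min(nx * factor, w - 1)
                    wl = np.exp(-abs(float(luma[y, x]) - float(luma[fy, fx])) / sigma_l)
                    num += wd * wl * low_chroma[ny, nx]  # chroma reference value
                    den += wd * wl
            out[y, x] = num / den  # weighted average -> target pixel chroma
    return out
```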
Optionally, the determining unit 502 is further configured to:
calculating the brightness mean value of any first pixel and all first neighborhood pixels thereof and a second difference value of the brightness value of any first pixel and the brightness mean value;
and taking the product of the second difference value and a predefined coefficient as the chroma weight of any first pixel.
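This variant reduces to a single expression per pixel. A sketch with the neighborhood fixed at 3x3 and an arbitrary predefined coefficient; taking the absolute value of the second difference is an assumption made here so the weight stays non-negative.

```python
import numpy as np

def mean_diff_weight(luma, coeff=0.1):
    """Chroma weight = coeff * |luma - mean(luma over the pixel and its 3x3
    neighborhood)| for every pixel of the luminance plane."""
    h, w = luma.shape
    pad = np.pad(luma.astype(np.float64), 1, mode="edge")
    out = np.empty((h, w))
    for y in range(h):
        for x in range(w):
            mean = pad[y:y + 3, x:x + 3].mean()  # includes the pixel itself
            out[y, x] = coeff * abs(luma[y, x] - mean)
    return out
```

On a flat plane every weight is zero; near a luminance step the pixel deviates from its neighborhood mean, so its chroma weight is positive, as the positive correlation above requires.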
Optionally, the synthesis unit 503 is further configured to:
calculating the low-frequency image through an up-sampling interpolation algorithm to obtain an image to be weighted with the size consistent with that of the image to be processed;
based on the chroma weight of each first pixel in the image to be processed, carrying out weighted calculation on the chroma value of the corresponding pixel in the image to be weighted;
and taking the image obtained by the weighting calculation as the image obtained by the up-sampling operation.
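A sketch of this simpler variant: interpolate the low-frequency image back to the full size, then apply the per-pixel chroma weights. Nearest-neighbor replication stands in for the unspecified up-sampling interpolation algorithm, and plain multiplication stands in for the unspecified weighted calculation.

```python
import numpy as np

def upsample_and_weight(low_chroma, weights, factor=2):
    """Resize the low-frequency chroma plane to the full-resolution size by
    replication, then scale each pixel's chroma by its chroma weight."""
    up = np.repeat(np.repeat(low_chroma, factor, axis=0), factor, axis=1)
    h, w = weights.shape
    up = up[:h, :w]  # crop in case the sizes do not divide evenly
    return weights * up  # weighted calculation per corresponding pixel
```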
As shown in fig. 6, fig. 6 is a block diagram of another image processing apparatus shown in an exemplary embodiment of the disclosure, which further includes, on the basis of the foregoing embodiment shown in fig. 5: a noise reduction unit 504.
Optionally, the apparatus further includes:
the noise reduction unit 504 is configured to traverse the image to be processed in a sliding window manner, and perform weighted calculation on chrominance values of all first pixels in any first window region acquired through the sliding window to obtain a first target value corresponding to the any first window region; taking the first target value as a chrominance value of a center pixel of any one of the first window regions; and/or traversing the low-frequency image in a sliding window mode, and performing weighted calculation on chrominance values of all second pixels in any second window area acquired through the sliding window to obtain a second target value corresponding to any second window area; and taking the second target value as the chrominance value of the central pixel of the any second window area.
Optionally, the noise reduction unit 504 is further configured to:
calculating the chrominance mean value of all the first pixels in any one of the first window regions, and determining a first noise reduction weight of each first pixel according to a first difference value between the chrominance value of each first pixel in any one of the first window regions and the chrominance mean value; the noise reduction weight of any first pixel in any first window region is in negative correlation with a first difference value corresponding to any first pixel; based on the noise reduction weight of each first pixel in any first window region, performing weighted average calculation on chrominance values of all first pixels in any first window region to obtain a first target value corresponding to any first window region; and/or,
calculating the chrominance mean value of all second pixels in any second window region, and determining a second noise reduction weight of each second pixel according to a second difference value between the chrominance value of each second pixel in any second window region and the chrominance mean value; the noise reduction weight of any second pixel in any second window region is in negative correlation with a second difference value corresponding to any second pixel; and performing weighted average calculation on the chrominance values of all the second pixels in any second window region based on the noise reduction weight of each second pixel in any second window region to obtain a second target value corresponding to any second window region.
With regard to the apparatus in the above-described embodiment, the specific manner in which each module performs the operation has been described in detail in the embodiment related to the method, and will not be elaborated here.
For the device embodiments, since they substantially correspond to the method embodiments, reference may be made to the partial description of the method embodiments for relevant points. The above-described embodiments of the apparatus are merely illustrative, and the units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the modules can be selected according to actual needs to achieve the purpose of the disclosed solution. One of ordinary skill in the art can understand and implement it without inventive effort.
Correspondingly, the present disclosure also provides an image processing apparatus, comprising: a processor; a memory for storing processor-executable instructions; wherein the processor is configured to implement the image processing method as in any one of the above embodiments, such as the method may include: acquiring a to-be-processed image of a brightness-chrominance space domain, and performing down-sampling operation on the to-be-processed image to obtain a low-frequency image of the to-be-processed image; respectively determining the chromaticity weight of each first pixel according to the brightness difference degree of each first pixel and each first neighborhood pixel in the image to be processed, wherein the chromaticity weight of each first pixel in each first pixel is positively correlated with the corresponding brightness difference degree; and performing upsampling operation on the low-frequency image based on the chromaticity weight corresponding to each first pixel, and synthesizing the image obtained through the upsampling operation with the image to be processed to obtain a processed image.
Accordingly, the present disclosure also provides an electronic device comprising a memory, and one or more programs, wherein the one or more programs are stored in the memory and configured to be executed by one or more processors, the one or more programs including instructions for implementing the image processing method according to any of the above embodiments, such as the method may include: acquiring a to-be-processed image of a brightness-chrominance space domain, and performing down-sampling operation on the to-be-processed image to obtain a low-frequency image of the to-be-processed image; respectively determining the chromaticity weight of each first pixel according to the brightness difference degree of each first pixel and each first neighborhood pixel in the image to be processed, wherein the chromaticity weight of each first pixel in each first pixel is positively correlated with the corresponding brightness difference degree; and performing upsampling operation on the low-frequency image based on the chromaticity weight corresponding to each first pixel, and synthesizing the image obtained through the upsampling operation with the image to be processed to obtain a processed image. Fig. 7 is a block diagram illustrating an apparatus 700 for implementing an image processing method according to an example embodiment. For example, the apparatus 700 may be a mobile phone, a computer, a digital broadcast terminal, a messaging device, a game console, a tablet device, a medical device, an exercise device, a personal digital assistant, and the like.
Referring to fig. 7, apparatus 700 may include one or more of the following components: a processing component 702, a memory 704, a power component 706, a multimedia component 708, an audio component 710, an input/output (I/O) interface 712, a sensor component 714, and a communication component 716.
The processing component 702 generally controls overall operation of the device 700, such as operations associated with display, telephone calls, data communications, camera operations, and recording operations. The processing components 702 may include one or more processors 720 to execute instructions to perform all or a portion of the steps of the methods described above. Further, processing component 702 may include one or more modules that facilitate interaction between processing component 702 and other components. For example, the processing component 702 may include a multimedia module to facilitate interaction between the multimedia component 708 and the processing component 702.
The memory 704 is configured to store various types of data to support operations at the apparatus 700. Examples of such data include instructions for any application or method operating on device 700, contact data, phonebook data, messages, pictures, videos, and so forth. The memory 704 may be implemented by any type or combination of volatile or non-volatile memory devices such as Static Random Access Memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic or optical disks.
The power supply component 706 provides power to the various components of the device 700. The power components 706 may include a power management system, one or more power supplies, and other components associated with generating, managing, and distributing power for the apparatus 700.
The multimedia component 708 includes a screen that provides an output interface between the device 700 and a user. In some embodiments, the screen may include a Liquid Crystal Display (LCD) and a Touch Panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive an input signal from a user. The touch panel includes one or more touch sensors to sense touch, slide, and gestures on the touch panel. The touch sensor may not only sense the boundary of a touch or slide action, but also detect the duration and pressure associated with the touch or slide operation. In some embodiments, the multimedia component 708 includes a front facing camera and/or a rear facing camera. The front camera and/or the rear camera may receive external multimedia data when the device 700 is in an operation mode, such as a photographing mode or a video mode. Each front camera and rear camera may be a fixed optical lens system or have a focal length and optical zoom capability.
The audio component 710 is configured to output and/or input audio signals. For example, audio component 710 includes a Microphone (MIC) configured to receive external audio signals when apparatus 700 is in an operational mode, such as a call mode, a recording mode, and a voice recognition mode. The received audio signal may further be stored in the memory 704 or transmitted via the communication component 716. In some embodiments, audio component 710 also includes a speaker for outputting audio signals.
The I/O interface 712 provides an interface between the processing component 702 and peripheral interface modules, which may be keyboards, click wheels, buttons, etc. These buttons may include, but are not limited to: a home button, a volume button, a start button, and a lock button.
The sensor assembly 714 includes one or more sensors for providing status assessment of various aspects of the apparatus 700. For example, the sensor assembly 714 may detect an open/closed state of the device 700, the relative positioning of components, such as a display and keypad of the device 700, the sensor assembly 714 may also detect a change in position of the device 700 or a component of the device 700, the presence or absence of user contact with the device 700, orientation or acceleration/deceleration of the device 700, and a change in temperature of the device 700. The sensor assembly 714 may include a proximity sensor configured to detect the presence of nearby objects in the absence of any physical contact. The sensor assembly 714 may also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications. In some embodiments, the sensor assembly 714 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
The communication component 716 is configured to facilitate wired or wireless communication between the apparatus 700 and other devices. The apparatus 700 may access a wireless network based on a communication standard, such as WiFi, 2G or 3G, 4G LTE, 5G NR (New Radio), or a combination thereof. In an exemplary embodiment, the communication component 716 receives a broadcast signal or broadcast related information from an external broadcast management system via a broadcast channel. In an exemplary embodiment, the communication component 716 further includes a Near Field Communication (NFC) module to facilitate short-range communications. For example, the NFC module may be implemented based on Radio Frequency Identification (RFID) technology, infrared data association (IrDA) technology, Ultra Wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
In an exemplary embodiment, the apparatus 700 may be implemented by one or more Application Specific Integrated Circuits (ASICs), Digital Signal Processors (DSPs), Digital Signal Processing Devices (DSPDs), Programmable Logic Devices (PLDs), Field Programmable Gate Arrays (FPGAs), controllers, micro-controllers, microprocessors or other electronic components for performing the above-described methods.
In an exemplary embodiment, a non-transitory computer readable storage medium comprising instructions, such as the memory 704 comprising instructions, executable by the processor 720 of the device 700 to perform the above-described method is also provided. For example, the non-transitory computer readable storage medium may be a ROM, a Random Access Memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, and the like.
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure disclosed herein. This disclosure is intended to cover any variations, uses, or adaptations of the disclosure following, in general, the principles of the disclosure and including such departures from the present disclosure as come within known or customary practice within the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims.
It will be understood that the present disclosure is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the present disclosure is limited only by the appended claims.
The above description is only exemplary of the present disclosure and should not be taken as limiting the disclosure, as any modification, equivalent replacement, or improvement made within the spirit and principle of the present disclosure should be included in the scope of the present disclosure.

Claims (10)

1. An image processing method, characterized by comprising:
acquiring a to-be-processed image of a brightness-chrominance space domain, and performing down-sampling operation on the to-be-processed image to obtain a low-frequency image of the to-be-processed image;
respectively determining the chromaticity weight of each first pixel according to the brightness difference degree of each first pixel and each first neighborhood pixel in the image to be processed, wherein the chromaticity weight of each first pixel in each first pixel is positively correlated with the corresponding brightness difference degree;
and performing upsampling operation on the low-frequency image based on the chromaticity weight corresponding to each first pixel, and synthesizing the image obtained through the upsampling operation with the image to be processed to obtain a processed image.
2. The method of claim 1, further comprising, prior to the upsampling operation on the low frequency image:
traversing the image to be processed in a sliding window mode, and performing weighted calculation on chrominance values of all first pixels in any first window area acquired through the sliding window to obtain a first target value corresponding to any first window area; taking the first target value as a chrominance value of a center pixel of any one of the first window regions; and/or,
traversing the low-frequency image in a sliding window mode, and performing weighted calculation on chrominance values of all second pixels in any second window area acquired through the sliding window to obtain a second target value corresponding to any second window area; the second target value is taken as the chrominance value of the center pixel of said any second window area.
3. The method of claim 2, wherein
the obtaining a first target value corresponding to any window region by performing weighted calculation on chrominance values of all first pixels in any first window region acquired through a sliding window includes:
calculating the chrominance mean value of all the first pixels in any one of the first window regions, and determining a first noise reduction weight of each first pixel according to a first difference value between the chrominance value of each first pixel in any one of the first window regions and the chrominance mean value; the noise reduction weight of any first pixel in any first window region is in negative correlation with a first difference value corresponding to any first pixel; based on the noise reduction weight of each first pixel in any first window region, performing weighted average calculation on chrominance values of all first pixels in any first window region to obtain a first target value corresponding to any first window region;
the performing weighted calculation on the chrominance values of all second pixels in any second window region acquired through the sliding window to obtain a second target value corresponding to any second window region includes:
calculating the chrominance mean value of all second pixels in any second window region, and determining a second noise reduction weight of each second pixel according to a second difference value between the chrominance value of each second pixel in any second window region and the chrominance mean value; the noise reduction weight of any second pixel in any second window region is in negative correlation with a second difference value corresponding to any second pixel; and performing weighted average calculation on the chrominance values of all the second pixels in any second window region based on the noise reduction weight of each second pixel in any second window region to obtain a second target value corresponding to any second window region.
4. The method of claim 1, wherein determining the chroma weight of each first pixel according to the brightness difference degree between the first pixel and the first neighboring pixel in the image to be processed comprises:
calculating the brightness difference value of any first pixel and any first neighborhood pixel thereof, and calculating a weight parameter corresponding to any first neighborhood pixel based on the brightness difference value;
and determining the chrominance weight of any first pixel based on the calculated weight parameters of all first neighborhood pixels of the any first pixel.
5. The method of claim 4, wherein the upsampling the low-frequency image based on the chrominance weights corresponding to the first pixels comprises:
performing the following operations on all first pixels in the image to be processed: acquiring a second pixel corresponding to any first pixel in the low-frequency image, and determining a second neighborhood pixel corresponding to any first neighborhood pixel in all second neighborhood pixels of the second pixel;
calculating the distance between the second neighborhood pixels and the second pixels, and determining a chromaticity reference value corresponding to any one first neighborhood pixel based on the distance and the weight parameter of any one first neighborhood pixel;
performing weighted average calculation on a plurality of determined chromaticity reference values of all first neighborhood pixels corresponding to any one first pixel to obtain a chromaticity value of a target pixel corresponding to any one first pixel;
after obtaining the chrominance values of all target pixels corresponding to all first pixels in the image to be processed, generating an image obtained through the upsampling operation based on the chrominance values of all the target pixels.
6. The method of claim 1, wherein determining the chrominance weight of each first pixel according to the degree of the luminance difference between the first pixel and the respective first neighboring pixel in the image to be processed comprises:
calculating the brightness mean value of any first pixel and all first neighborhood pixels thereof and a second difference value of the brightness value of any first pixel and the brightness mean value;
and taking the product of the second difference value and a predefined coefficient as the chroma weight of any first pixel.
7. The method of claim 1, wherein the upsampling the low-frequency image based on the chrominance weights corresponding to the first pixels comprises:
calculating the low-frequency image through an up-sampling interpolation algorithm to obtain an image to be weighted with the size consistent with that of the image to be processed;
based on the chroma weight of each first pixel in the image to be processed, carrying out weighted calculation on the chroma value of the corresponding pixel in the image to be weighted;
and taking the image obtained by the weighting calculation as the image obtained by the up-sampling operation.
8. An image processing apparatus characterized by comprising:
the device comprises an acquisition unit, a processing unit and a processing unit, wherein the acquisition unit is used for acquiring an image to be processed in a brightness-chromaticity space domain and performing down-sampling operation on the image to be processed to obtain a low-frequency image of the image to be processed;
the determining unit is used for respectively determining the chromaticity weight of each first pixel according to the brightness difference degree of each first pixel and each first neighborhood pixel in the image to be processed, wherein the chromaticity weight of each first pixel in each first pixel is positively correlated with the corresponding brightness difference degree;
and the synthesizing unit is used for performing up-sampling operation on the low-frequency image based on the chrominance weight corresponding to each first pixel, and synthesizing the image obtained through the up-sampling operation with the image to be processed to obtain a processed image.
9. An electronic device, comprising:
a processor;
a memory for storing processor-executable instructions;
wherein the processor implements the method of any one of claims 1-7 by executing the executable instructions.
10. A computer-readable storage medium having stored thereon computer instructions, which when executed by a processor, perform the steps of the method according to any one of claims 1-7.
CN202110007717.9A 2021-01-05 2021-01-05 Image processing method and device, electronic device and storage medium Pending CN114723613A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110007717.9A CN114723613A (en) 2021-01-05 2021-01-05 Image processing method and device, electronic device and storage medium


Publications (1)

Publication Number Publication Date
CN114723613A true CN114723613A (en) 2022-07-08


Similar Documents

Publication Publication Date Title
CN107230182B (en) Image processing method and device and storage medium
CN111709890B (en) Training method and device for image enhancement model and storage medium
CN110958401B (en) Super night scene image color correction method and device and electronic equipment
CN106530252B (en) Image processing method and device
CN108932696B (en) Signal lamp halo suppression method and device
CN106713696B (en) Image processing method and device
CN104380727B (en) Image processing apparatus and image processing method
JP7136956B2 (en) Image processing method and device, terminal and storage medium
CN111625213B (en) Picture display method, device and storage medium
CN107507128B (en) Image processing method and apparatus
US10951816B2 (en) Method and apparatus for processing image, electronic device and storage medium
US11756167B2 (en) Method for processing image, electronic device and storage medium
CN115239570A (en) Image processing method, image processing apparatus, and storage medium
CN107730443B (en) Image processing method and device and user equipment
CN107613210B (en) Image display method and device, terminal and storage medium
CN111383166A (en) Method and device for processing image to be displayed, electronic equipment and readable storage medium
CN105472228B (en) Image processing method and device and terminal
CN110728180B (en) Image processing method, device and storage medium
CN111741187A (en) Image processing method, device and storage medium
CN114723613A (en) Image processing method and device, electronic device and storage medium
CN110807745B (en) Image processing method and device and electronic equipment
CN114338956A (en) Image processing method, image processing apparatus, and storage medium
CN114596232A (en) Image enhancement method and device, electronic equipment and computer readable storage medium
CN117455782A (en) Image enhancement method, image enhancement device and storage medium
CN112465721A (en) Image correction method and device, mobile terminal and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination