WO2020186848A1 - Image enhancement method and apparatus - Google Patents

Image enhancement method and apparatus

Info

Publication number
WO2020186848A1
Authority
WO
WIPO (PCT)
Prior art keywords
component
layer
image
low
chrominance
Prior art date
Application number
PCT/CN2019/126273
Other languages
English (en)
French (fr)
Inventor
洪明
张蕊
林恒杰
Original Assignee
上海立可芯半导体科技有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 上海立可芯半导体科技有限公司
Priority to US17/440,261 (US11915392B2)
Publication of WO2020186848A1


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 5/00 Image enhancement or restoration
    • G06T 5/10 Image enhancement or restoration using non-spatial domain filtering
    • G06T 5/20 Image enhancement or restoration using local operators
    • G06T 5/70 Denoising; Smoothing
    • G06T 7/00 Image analysis
    • G06T 7/10 Segmentation; Edge detection
    • G06T 7/13 Edge detection
    • G06T 7/60 Analysis of geometric attributes
    • G06T 7/90 Determination of colour characteristics
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10024 Color image
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20024 Filtering details
    • G06T 2207/20028 Bilateral filtering
    • G06T 2207/20048 Transform domain processing
    • G06T 2207/20064 Wavelet transform [DWT]
    • G06T 2207/20172 Image enhancement details
    • G06T 2207/20192 Edge enhancement; Edge preservation

Definitions

  • The present invention relates to the field of image processing, and in particular to an image enhancement method and device.
  • In order to increase brightness, a camera needs to extend the exposure time or increase the signal gain. Extending the exposure time slows image formation and degrades the user experience. Increasing the signal gain amplifies the normal signal but also amplifies noise (especially chrominance noise, to which the human eye is more sensitive), reducing image quality.
  • The wavelet transform has good time- and frequency-localization properties and is widely used in the field of denoising.
  • The commonly used wavelet-domain denoising method applies threshold shrinkage to each high-frequency subband after the wavelet transform, i.e., the wavelet-threshold denoising method.
  • However, this method depends on the choice of threshold. Because different decomposition scales and different high-frequency subbands require different thresholds, an improperly chosen threshold easily causes mosaic artifacts or edge ringing in the denoised image due to the loss of a large amount of high-frequency content.
  • The technical problem to be solved by the present invention is to provide an image enhancement method and device that eliminate edge ringing in the image and improve its denoising performance.
  • One aspect of the present invention provides an image enhancement method.
  • The method includes: acquiring a YUV-format image and separating its luminance component and chrominance components; performing an Nmax-level wavelet decomposition on the luminance component and the chrominance components respectively, to obtain the low-frequency and high-frequency subbands of the luminance component and of the chrominance components at each level, where Nmax is a positive integer greater than 1; starting from level Nmax, performing edge-preserving filtering on the low-frequency subbands of the chrominance components of each level according to the low-frequency subband of the luminance component of the corresponding level, and then performing wavelet reconstruction with the chrominance high-frequency subbands level by level upward until an image of the original size is obtained; performing edge-preserving filtering on the wavelet-reconstructed chrominance components according to the luminance component; and integrating the luminance component with the edge-preserving-filtered chrominance components.
  • The step of performing edge-preserving filtering on the low-frequency subbands of the chrominance components according to the low-frequency subband of the luminance component of the corresponding level includes: computing weights from the low-frequency subband of the luminance component of the corresponding level, and performing edge-preserving filtering on the low-frequency subbands of the chrominance components according to those weights.
  • In an embodiment of the present invention, the chrominance components include a U component and a V component.
  • In an embodiment of the present invention, the edge-preserving filtering is bilateral filtering.
  • The step of performing edge-preserving filtering on the low-frequency subbands of the chrominance components according to the weights is given by equations (1) and (2) (rendered as images in the original publication), in which:
  • N is the current wavelet level, 1 ≤ N ≤ Nmax, and N is a natural number;
  • (x, y) are the coordinates of a pixel;
  • (i, j) is the offset within the window, with -WIN_N ≤ i, j ≤ WIN_N;
  • WIN_N is the Gaussian kernel radius of level N;
  • U_LL_N(x, y) is the low-frequency subband of the U component at pixel (x, y) of level N;
  • V_LL_N(x, y) is the low-frequency subband of the V component at pixel (x, y) of level N;
  • U_LL_N(x+i, y+j) and V_LL_N(x+i, y+j) are the sets of U- and V-component pixels within the Gaussian kernel radius WIN_N around U_LL_N(x, y) and V_LL_N(x, y), respectively;
  • W_N(x+i, y+j) is the corresponding weight.
  • The step of computing the weights from the low-frequency subband of the luminance component of the corresponding level includes:
  • W_N(x+i, y+j) = WD_N(x+i, y+j) * WR_N(x+i, y+j)   (3)
  • together with equations (4), (5) and (6) (rendered as images in the original publication), in which:
  • δ_G_N is the Gaussian kernel variance of the bilateral filter at level N; δ_Y_N, δ_U_N and δ_V_N are the variances of the Y, U and V components at level N;
  • WD_N(x+i, y+j) is the domain weight in the bilateral filter, and WR_N(x+i, y+j) is the range weight in the bilateral filter;
  • EI_N(x+i, y+j) is the integrated edge indicator;
  • Y_LL_N(x, y) is the low-frequency subband of the Y component at pixel (x, y) of level N;
  • Y_LL_N(x+i, y+j) is the set of Y-component pixels within the Gaussian kernel radius WIN_N around Y_LL_N(x, y).
  • In an embodiment of the present invention, the step of obtaining the YUV-format image includes: obtaining an RGB-format image and converting the RGB-format image into a YUV-format image.
  • In an embodiment of the present invention, the Haar wavelet function is used for the wavelet decomposition and the wavelet reconstruction.
  • Another aspect of the present invention provides an image enhancement device. The device includes: an image acquisition unit, which acquires a YUV-format image and separates its luminance component and chrominance components; a wavelet decomposition unit, which performs an N-level wavelet decomposition on the luminance component and the chrominance components respectively, to obtain the low-frequency and high-frequency subbands of the luminance component and of the chrominance components at each level, where N is a positive integer greater than 1; a wavelet reconstruction unit, which, starting from level Nmax, performs edge-preserving filtering on the low-frequency subbands of the chrominance components of each level according to the low-frequency subband of the luminance component of the corresponding level and then performs wavelet reconstruction with the chrominance high-frequency subbands level by level upward until an image of the original size is obtained; an edge-preserving filtering unit, which performs edge-preserving filtering on the wavelet-reconstructed chrominance components according to the luminance component; and an integration unit, which integrates the luminance component with the edge-preserving-filtered chrominance components.
  • In an embodiment of the present invention, the wavelet reconstruction unit computes weights from the low-frequency subband of the luminance component of the corresponding level and performs edge-preserving filtering on the low-frequency subbands of the chrominance components according to the weights.
  • In an embodiment of the present invention, the image acquisition unit acquires an RGB-format image and converts it into a YUV-format image.
  • In an embodiment of the present invention, the wavelet decomposition unit uses the Haar wavelet function for wavelet decomposition, and the wavelet reconstruction unit uses the Haar wavelet function for wavelet reconstruction.
  • Compared with the prior art, the present invention has the following advantages. The present invention provides an image enhancement method and device: by performing wavelet decomposition on a YUV-format image, the low-frequency and high-frequency subbands of the luminance and chrominance components of each level are obtained; the low-frequency subbands of the chrominance components of each level are edge-preserving filtered according to the low-frequency subband of the luminance component of the corresponding level, and wavelet reconstruction proceeds level by level upward until an image of the original size is obtained. This effectively preserves and enhances the edge information of the chrominance low-frequency subbands, avoids dependence on a threshold, eliminates edge ringing in the image, and improves its denoising performance.
  • Fig. 1 is a flowchart of an image enhancement method according to an embodiment of the present invention;
  • Fig. 2 is a schematic structural diagram of an image enhancement device according to an embodiment of the present invention;
  • Fig. 3 is a flowchart of step 130 in the image enhancement method shown in Fig. 1;
  • Fig. 4 is an image of the original luminance Y component according to an embodiment of the present invention;
  • Fig. 5A is an image of the original chrominance U component according to an embodiment of the present invention;
  • Fig. 5B is an image of the chrominance U component after processing by the image enhancement method according to an embodiment of the present invention;
  • Fig. 6A is an image of the original chrominance V component according to an embodiment of the present invention;
  • Fig. 6B is an image of the chrominance V component after processing by the image enhancement method according to an embodiment of the present invention.
  • Flowcharts are used in the present invention to illustrate the operations performed by the system according to the embodiments of the present invention. It should be understood that the preceding or following operations are not necessarily performed exactly in order; instead, the various steps may be processed in reverse order or simultaneously, other operations may be added to these processes, or one or more steps may be removed from them.
  • Fig. 1 is a flowchart of an image enhancement method according to an embodiment of the present invention.
  • Referring to Fig. 1, the image enhancement method includes the following steps:
  • Step 110: Obtain a YUV-format image.
  • Step 120: Perform an Nmax-level wavelet decomposition on the luminance component and the chrominance components respectively.
  • Step 130: Starting from level Nmax, perform edge-preserving filtering on the low-frequency subbands of the chrominance components of each level according to the low-frequency subband of the luminance component of the corresponding level, and then perform wavelet reconstruction with the chrominance high-frequency subbands level by level upward until an image of the original size is obtained.
  • Step 140: Perform edge-preserving filtering on the wavelet-reconstructed chrominance components according to the luminance component.
  • Step 150: Integrate the luminance component with the edge-preserving-filtered chrominance components.
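  • To make the flow of steps 110 through 150 concrete, the sketch below outlines the pipeline in Python for an NV12 input. It is illustrative only: the helper functions (split_nv12, haar_decompose, bilateral_chroma_ll, params_for, filter_and_reconstruct, merge_nv12) are hypothetical names sketched later in this document, and NumPy/PyWavelets are implementation choices, not part of the original disclosure.

```python
# Illustrative sketch of steps 110-150; not the original implementation.
# The helpers referenced here are hypothetical and sketched in later sections.
def enhance_nv12(frame_bytes, width, height, n_max, gain):
    # Step 110: obtain the YUV image and separate Y from U/V.
    y, u, v = split_nv12(frame_bytes, width, height)

    # The chroma planes of NV12 are 2x2 subsampled, so the luma guide is
    # decimated to the same resolution (a simplification for this sketch).
    y_guide = y[::2, ::2]

    # Step 120: Nmax-level wavelet decomposition of each component.
    y_sub = haar_decompose(y_guide, n_max)
    u_sub = haar_decompose(u, n_max)
    v_sub = haar_decompose(v, n_max)

    # Step 130: filter the chroma LL subbands level by level, guided by the
    # luma LL of the same level, reconstructing upward to full chroma size.
    u_rec, v_rec = filter_and_reconstruct(y_sub, u_sub, v_sub, n_max, gain)

    # Step 140: a final edge-preserving pass guided by the luminance plane.
    win, d_g, d_y, d_u, d_v = params_for(gain, 1)
    u_out, v_out = bilateral_chroma_ll(u_rec, v_rec, y_guide,
                                       win, d_g, d_y, d_u, d_v)

    # Step 150: integrate Y with the filtered U/V into the output image.
    return merge_nv12(y, u_out, v_out)
```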
  • Fig. 2 is a schematic structural diagram of an image enhancement device according to an embodiment of the present invention.
  • Referring to Fig. 2, the image enhancement device 200 of this embodiment includes an image acquisition unit 210, a wavelet decomposition unit 220, a wavelet reconstruction unit 230, an edge-preserving filtering unit 240 and an integration unit 250.
  • The image enhancement device 200 can use the image enhancement method shown in Fig. 1 to perform image enhancement processing on an input image. The steps of the image enhancement method shown in Fig. 1 are described in detail below in conjunction with Fig. 2.
  • In step 110, a YUV-format image is acquired.
  • In some embodiments, step 110 may be performed by the image acquisition unit 210 in the image enhancement device 200.
  • YUV is a color-coding method for images or video signals, in which Y represents the luminance of the image and U and V represent its chrominance (chroma).
  • An image obtained with the YUV color-coding method is called a YUV-format image.
  • In some embodiments, the image input to the image acquisition unit 210 may be in another color-space format, such as but not limited to RGB or CMYK.
  • In these embodiments, the image in the other format needs to be converted into the YUV format, and this conversion step may be performed by the image acquisition unit 210.
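  • Where the input is an RGB image, a color-space conversion precedes the separation. The patent does not specify a conversion matrix; the snippet below assumes the common BT.601 full-range coefficients purely as an example.

```python
import numpy as np

def rgb_to_yuv(rgb):
    """Convert an HxWx3 RGB array (0..255) to Y, U, V planes (BT.601 full range, assumed)."""
    rgb = np.asarray(rgb, dtype=np.float32)
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    y = 0.299 * r + 0.587 * g + 0.114 * b
    u = -0.169 * r - 0.331 * g + 0.500 * b + 128.0
    v = 0.500 * r - 0.419 * g - 0.081 * b + 128.0
    return y, u, v
```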
  • Step 110 also includes separating the acquired YUV-format image to obtain the luminance component (Y component) and the chrominance components (U and V components) of the YUV-format image. This step may be performed by the image acquisition unit 210.
  • The method of separating a YUV-format image depends on the specific YUV format.
  • Common YUV formats include, but are not limited to, NV12, NV21 and YV12.
  • Different formats correspond to different data-storage layouts.
  • When the image data input to the image acquisition unit 210 is read, the luminance component and the chrominance components of the image can be separated according to the storage layout of the image data and stored as separate quantities for subsequent processing.
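  • As an illustration of the storage-layout-dependent separation, the sketch below splits an NV12 buffer (a full-resolution Y plane followed by an interleaved, 2x2-subsampled UV plane) into Y, U and V planes. The layout handling is standard NV12 and the function name is hypothetical; nothing here is specific to the patent.

```python
import numpy as np

def split_nv12(frame_bytes, width, height):
    """Split a raw NV12 frame into float Y, U, V planes (U/V are half resolution)."""
    buf = np.frombuffer(frame_bytes, dtype=np.uint8)
    y = buf[:width * height].reshape(height, width)
    uv = buf[width * height:width * height * 3 // 2].reshape(height // 2, width)
    u = uv[:, 0::2]                  # NV12 interleaves U first; NV21 swaps U and V
    v = uv[:, 1::2]
    return y.astype(np.float32), u.astype(np.float32), v.astype(np.float32)
```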
  • In step 120, the luminance component and the chrominance components are respectively subjected to an Nmax-level wavelet decomposition.
  • In some embodiments, step 120 may be performed by the wavelet decomposition unit 220 in the image enhancement device 200.
  • After the wavelet decomposition unit 220 performs the Nmax-level wavelet decomposition on the luminance and chrominance components, the low-frequency subband (Y_LL_level) and high-frequency subbands (Y_LH_level, Y_HL_level, Y_HH_level) of the luminance component, as well as the low-frequency subbands (U_LL_level, V_LL_level) and high-frequency subbands (U_LH_level, U_HL_level, U_HH_level, V_LH_level, V_HL_level, V_HH_level) of the chrominance components, are obtained for each wavelet level, where "level" denotes the corresponding wavelet level. For example, Y_LL_N is the low-frequency subband of the luminance component on wavelet level N, and U_LL_N and V_LL_N are the low-frequency subbands of the chrominance components on wavelet level N. N is a positive integer greater than 1.
  • In some embodiments, the wavelet decomposition may be implemented with any function of the wavelet transform, such as but not limited to the Haar, Shannon or Meyer wavelet.
  • In a preferred embodiment of the present invention, the Haar wavelet is used to perform the wavelet decomposition in step 120.
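  • A minimal sketch of the per-component Nmax-level Haar decomposition using PyWavelets is shown below; the subband naming mirrors the text (LL, LH, HL, HH per level), while the library choice and the function name are assumptions.

```python
import numpy as np
import pywt

def haar_decompose(plane, n_max):
    """Return {level: {'LL', 'LH', 'HL', 'HH'}} for levels 1..n_max of one component."""
    subbands = {}
    ll = np.asarray(plane, dtype=np.float32)
    for level in range(1, n_max + 1):
        # pywt.dwt2 returns the approximation and three detail subbands;
        # they are stored under the LH/HL/HH names used in the text.
        ll, (lh, hl, hh) = pywt.dwt2(ll, 'haar')
        subbands[level] = {'LL': ll, 'LH': lh, 'HL': hl, 'HH': hh}
    return subbands
```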
  • In step 130, starting from level Nmax, the low-frequency subbands of the chrominance components of each level are edge-preserving filtered according to the low-frequency subband of the luminance component of the corresponding level, and then wavelet-reconstructed with the chrominance high-frequency subbands level by level upward until an image of the original size is obtained.
  • The edge-preserving filtering may be any edge-preserving filtering algorithm, for example a bilateral filtering algorithm or a guided filtering algorithm.
  • In the embodiments of the present invention, the edge-preserving filtering algorithm may be a bilateral filter.
  • The image enhancement algorithm of the embodiments of the present invention is described below taking the bilateral filtering algorithm as an example.
  • In some embodiments, step 130 may be performed by the wavelet reconstruction unit 230 in the image enhancement device 200.
  • In some embodiments, the step of performing edge-preserving filtering on the low-frequency subbands of the chrominance components according to the low-frequency subband of the luminance component of the corresponding level includes: computing weights from the low-frequency subband of the luminance component of the corresponding level, and performing edge-preserving filtering on the low-frequency subbands of the chrominance components according to those weights.
  • In some embodiments, the edge-preserving filtering may be performed by the edge-preserving filtering unit 240 in the image enhancement device 200.
  • Specifically, step 130 is described taking the bilateral filtering algorithm and wavelet level N (1 ≤ N ≤ Nmax, N a natural number) as an example.
  • Let U_LL_N(x, y) and V_LL_N(x, y) be the low-frequency subbands of the chrominance components at the pixel with coordinates (x, y) on wavelet level N; after bilateral-filter denoising they become U_LL_N'(x, y) and V_LL_N'(x, y), respectively.
  • The bilateral filtering process can be expressed by equations (1) and (2) (rendered as images in the original publication), in which:
  • N is the current wavelet level, 1 ≤ N ≤ Nmax, and N is a natural number;
  • (x, y) are the coordinates of a pixel;
  • (i, j) is the offset within the window, with -WIN_N ≤ i, j ≤ WIN_N;
  • WIN_N is the Gaussian kernel radius of level N;
  • U_LL_N(x, y) is the low-frequency subband of the U component at pixel (x, y) of level N;
  • V_LL_N(x, y) is the low-frequency subband of the V component at pixel (x, y) of level N;
  • U_LL_N(x+i, y+j) and V_LL_N(x+i, y+j) are the sets of U- and V-component pixels within the Gaussian kernel radius WIN_N around U_LL_N(x, y) and V_LL_N(x, y), respectively;
  • W_N(x+i, y+j) is the corresponding weight.
  • In some embodiments, the weight W_N(x+i, y+j) can be calculated from a domain weight and a range weight.
  • In other embodiments, the weight W_N(x+i, y+j) may also be obtained from the ratios of U_LL_N(x, y), V_LL_N(x, y) and Y_LL_N(x, y) to the sum U_LL_N(x, y) + V_LL_N(x, y) + Y_LL_N(x, y).
  • Taking the calculation from the domain weight and the range weight as an example, the step of computing the weight W_N(x+i, y+j) from the low-frequency subband of the luminance component of the corresponding level can include:
  • W_N(x+i, y+j) = WD_N(x+i, y+j) * WR_N(x+i, y+j)   (3)
  • where WD_N(x+i, y+j) is the domain weight in the bilateral filter, which can be calculated with equation (4) (rendered as an image in the original publication);
  • δ_G_N is the Gaussian kernel variance of the bilateral filter at level N;
  • WIN_N is the Gaussian kernel radius of level N, generally a power of two such as 1, 2, 4 or 8;
  • WR_N(x+i, y+j) is the range weight in the bilateral filter, which can be calculated with equation (5) (rendered as an image in the original publication);
  • EI_N(x+i, y+j) is the integrated edge indicator, which can be calculated with equation (6) (rendered as an image in the original publication);
  • δ_Y_N, δ_U_N and δ_V_N are the variances of the Y, U and V components at level N;
  • Y_LL_N(x, y) is the low-frequency subband of the Y component at pixel (x, y) of level N;
  • Y_LL_N(x+i, y+j) is the set of Y-component pixels within the Gaussian kernel radius WIN_N around Y_LL_N(x, y).
  • It should be noted that the five parameters WIN_N, δ_G_N, δ_Y_N, δ_U_N and δ_V_N determine the noise-smoothing strength and the edge-preservation level, while the amount of noise depends on the gain value (gain level) used when the camera captures the image. The values of these five parameters can therefore be determined by establishing a mapping between each parameter and the gain value. After calibration, five lookup tables are obtained, as shown below:
  • WIN_N = WIN_Table(gain, N)
  • δ_G_N = δ_G_Table(gain, N)
  • δ_Y_N = δ_Y_Table(gain, N)
  • δ_U_N = δ_U_Table(gain, N)
  • δ_V_N = δ_V_Table(gain, N)   (7)
  • When the low-frequency subbands of the chrominance components of wavelet level N are bilaterally filtered, the five parameters WIN_N, δ_G_N, δ_Y_N, δ_U_N and δ_V_N corresponding to level N are looked up in these five tables.
  • The integrated edge indicator EI_N(x+i, y+j) of level N is jointly determined by the low-frequency subband Y_LL_N of the luminance component and the low-frequency subbands U_LL_N and V_LL_N of the chrominance components on wavelet level N, and is used in the bilateral filtering of the corresponding chrominance low-frequency subbands on level N.
  • Using the integrated edge indicator EI_N(x+i, y+j) effectively preserves and enhances the edge information of the chrominance low-frequency subbands.
  • With the filtering steps above, all pixels in the low-frequency subbands of the level-N chrominance components can be filtered; once the bilateral filtering of level N is complete, the filtered subbands are wavelet-reconstructed with the level-N chrominance high-frequency subbands (U_LH_N, U_HL_N, U_HH_N, V_LH_N, V_HL_N, V_HH_N) obtained in step 120, to obtain the level-(N-1) chrominance low-frequency subbands U_LL_{N-1}' and V_LL_{N-1}'.
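  • The exact kernels of equations (1) through (6) are reproduced only as images in the original publication, so the sketch below implements a standard joint bilateral filter that matches the textual definitions above: a Gaussian domain weight over the window offsets and a range weight driven by an integrated Y/U/V edge indicator normalized by the per-level parameters. The precise forms of WR_N and EI_N, and the requirement that the guide y_ll has the same size as the chroma subbands, are assumptions.

```python
import numpy as np

def bilateral_chroma_ll(u_ll, v_ll, y_ll, win, d_g, d_y, d_u, d_v):
    """Jointly filter the level-N chroma LL subbands, guided by Y/U/V (sketch).

    win           : Gaussian kernel radius WIN_N.
    d_g           : spatial parameter delta_G_N.
    d_y, d_u, d_v : per-component parameters delta_Y_N, delta_U_N, delta_V_N.
    The kernel forms are standard-bilateral assumptions, not the patent's
    exact (image-only) equations; y_ll is assumed to match the chroma size.
    """
    h, w = u_ll.shape
    yp = np.pad(y_ll, win, mode='reflect')
    up = np.pad(u_ll, win, mode='reflect')
    vp = np.pad(v_ll, win, mode='reflect')

    num_u = np.zeros((h, w), dtype=np.float32)
    num_v = np.zeros((h, w), dtype=np.float32)
    den = np.zeros((h, w), dtype=np.float32)

    for i in range(-win, win + 1):
        for j in range(-win, win + 1):
            wd = np.exp(-(i * i + j * j) / (2.0 * d_g ** 2))   # domain weight WD_N
            ys = yp[win + i: win + i + h, win + j: win + j + w]
            us = up[win + i: win + i + h, win + j: win + j + w]
            vs = vp[win + i: win + i + h, win + j: win + j + w]
            # Integrated edge indicator EI_N: joint Y/U/V differences,
            # each normalized by its per-level parameter (assumed form).
            ei = (np.abs(ys - y_ll) / d_y
                  + np.abs(us - u_ll) / d_u
                  + np.abs(vs - v_ll) / d_v)
            wr = np.exp(-0.5 * ei ** 2)                         # range weight WR_N
            wgt = wd * wr                                       # W_N = WD_N * WR_N
            num_u += wgt * us
            num_v += wgt * vs
            den += wgt
    return num_u / den, num_v / den
```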
  • Fig. 3 is a flowchart of the method used in step 130: starting from level Nmax, the low-frequency subbands of the chrominance components of each level are bilaterally filtered according to the low-frequency subband of the luminance component of the corresponding level and then wavelet-reconstructed with the chrominance high-frequency subbands level by level upward until an image of the original size is obtained.
  • In general, the method includes: bilaterally filtering the low-frequency subbands of the level-Nmax chrominance components according to the low-frequency subband of the level-Nmax luminance component; performing wavelet reconstruction with the level-Nmax chrominance high-frequency subbands toward level Nmax-1 to obtain the low-frequency subbands of the level-(Nmax-1) chrominance components; bilaterally filtering the reconstructed chrominance low-frequency subbands according to the low-frequency subband of the level-(Nmax-1) luminance component; and continuing the wavelet reconstruction with the level-(Nmax-1) chrominance high-frequency subbands toward level Nmax-2, and so on, until an image of the original size is obtained. The bilateral filtering and wavelet reconstruction here are the same as described above; in a preferred embodiment of the present invention, the Haar wavelet is used for the wavelet reconstruction.
  • Referring to Fig. 3, the method specifically includes the following steps:
  • In step 310, the low-frequency subbands of the level-Nmax chrominance components are bilaterally filtered according to the low-frequency subband of the level-Nmax luminance component.
  • In step 320, wavelet reconstruction is performed on the filtered low-frequency subbands of the level-Nmax chrominance components together with the level-Nmax chrominance high-frequency subbands, to obtain the low-frequency subbands of the level-(Nmax-1) chrominance components.
  • In step 330, the reconstructed level-(Nmax-1) chrominance low-frequency subbands are bilaterally filtered according to the level-(Nmax-1) luminance low-frequency subband.
  • In step 340, wavelet reconstruction is performed on the filtered level-(Nmax-1) chrominance low-frequency subbands together with the level-(Nmax-1) chrominance high-frequency subbands.
  • For wavelet level N, when N > 1, set N = N - 1 and repeat steps 330 and 340; step 350 is performed only when N = 1.
  • In step 350, the reconstructed level-1 chrominance low-frequency subbands are bilaterally filtered according to the level-1 luminance low-frequency subband.
  • In step 360, wavelet reconstruction is performed on the filtered level-1 chrominance low-frequency subbands together with the level-1 chrominance high-frequency subbands.
  • The steps above complete the bilateral filtering and wavelet reconstruction of the chrominance low-frequency subbands of the YUV-format image at Nmax scales.
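  • The loop of steps 310 through 360 can be sketched as follows; the helper names (haar_decompose, bilateral_chroma_ll, params_for) and the use of PyWavelets are assumptions introduced for illustration elsewhere in this document, and subband shapes are assumed to match exactly at each level.

```python
import pywt

def filter_and_reconstruct(y_sub, u_sub, v_sub, n_max, gain):
    """Steps 310-360: per-level guided bilateral filtering plus wavelet reconstruction.

    y_sub/u_sub/v_sub are per-level subband dicts such as those returned by the
    hypothetical haar_decompose() sketch: {level: {'LL', 'LH', 'HL', 'HH'}}.
    """
    u_ll = u_sub[n_max]['LL']
    v_ll = v_sub[n_max]['LL']
    for n in range(n_max, 0, -1):
        # Steps 310/330/350: bilateral filtering guided by the level-n luma LL,
        # with parameters looked up from the calibrated gain tables (eq. (7)).
        win, d_g, d_y, d_u, d_v = params_for(gain, n)
        u_ll, v_ll = bilateral_chroma_ll(u_ll, v_ll, y_sub[n]['LL'],
                                         win, d_g, d_y, d_u, d_v)
        # Steps 320/340/360: inverse transform with this level's detail subbands,
        # yielding the next finer level's LL subband (or the full-size plane at n=1).
        u_ll = pywt.idwt2((u_ll, (u_sub[n]['LH'], u_sub[n]['HL'], u_sub[n]['HH'])), 'haar')
        v_ll = pywt.idwt2((v_ll, (v_sub[n]['LH'], v_sub[n]['HL'], v_sub[n]['HH'])), 'haar')
    return u_ll, v_ll
```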
  • Next, in step 140 shown in Fig. 1, the wavelet-reconstructed chrominance components are edge-preserving filtered according to the luminance component; here the luminance component is that of the original YUV-format image obtained in step 110, and the wavelet-reconstructed chrominance components are those obtained in step 360.
  • To illustrate the effect of the image enhancement method, Figs. 4 to 6B show an exemplary image before and after the image enhancement processing.
  • Fig. 4 is the original luminance Y component image of the exemplary image.
  • Fig. 5A is the original chrominance U component image of the exemplary image, and Fig. 5B is the chrominance U component image after the image shown in Fig. 5A is processed by the image enhancement method according to an embodiment of the present invention.
  • Fig. 6A is the original chrominance V component image of the exemplary image, and Fig. 6B is the chrominance V component image after the image shown in Fig. 6A is processed by the image enhancement method according to an embodiment of the present invention.
  • To show the removal of the spot-like noise more clearly, the images in Figs. 5A-6B have been gray-scale normalized. Comparing Figs. 5A and 5B, and Figs. 6A and 6B, the spot-like noise is clearly removed from the images processed by the image enhancement method of the present invention, and the image quality is significantly improved.
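  • The gray-scale normalization used for the comparisons in Figs. 5A through 6B is not specified in detail; a simple min-max stretch of the kind sketched below is a typical choice and is shown here only for orientation.

```python
import numpy as np

def normalize_for_display(plane):
    """Min-max stretch a chroma plane to 0..255 for visual comparison."""
    p = np.asarray(plane, dtype=np.float32)
    lo, hi = float(p.min()), float(p.max())
    if hi == lo:
        return np.zeros_like(p, dtype=np.uint8)
    return ((p - lo) * (255.0 / (hi - lo))).astype(np.uint8)
```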
  • In step 150, the luminance component and the edge-preserving-filtered chrominance components are integrated.
  • In some embodiments, step 150 may be performed by the integration unit 250 in the image enhancement device 200. Similarly to step 110, the process of integrating the luminance component with the edge-preserving-filtered chrominance components depends on the specific YUV format; the integration unit 250 assembles and outputs the luminance and chrominance components according to the specific YUV format to be output.
  • If an image format of another color space is required, such as but not limited to RGB or CMYK, the integration unit 250 may first produce the YUV-format image and then convert it into the required image format.
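  • For an NV12 output, the integration mirrors the split shown earlier; the sketch below re-interleaves the planes, with rounding and clipping choices that are assumptions rather than part of the disclosure.

```python
import numpy as np

def merge_nv12(y, u, v):
    """Re-interleave Y (HxW) and half-resolution U/V (H/2 x W/2) into NV12 bytes."""
    h, w = y.shape
    to_u8 = lambda p: np.clip(np.rint(p), 0, 255).astype(np.uint8)
    uv = np.empty((h // 2, w), dtype=np.uint8)
    uv[:, 0::2] = to_u8(u)
    uv[:, 1::2] = to_u8(v)
    return np.concatenate([to_u8(y).ravel(), uv.ravel()]).tobytes()
```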
  • In summary, the present invention provides an image enhancement method and device: by performing wavelet decomposition on a YUV-format image, the low-frequency and high-frequency subbands of the luminance and chrominance components of each level are obtained; the low-frequency subbands of the chrominance components of each level are edge-preserving filtered according to the low-frequency subband of the luminance component of the corresponding level, and wavelet reconstruction proceeds level by level upward until an image of the original size is obtained. This effectively preserves and enhances the edge information of the chrominance low-frequency subbands, avoids dependence on a threshold, eliminates edge ringing in the image, and improves its denoising performance.
  • The present invention uses specific terms to describe its embodiments.
  • Terms such as "one embodiment", "an embodiment" and/or "some embodiments" refer to a certain feature, structure or characteristic related to at least one embodiment of the present invention. Therefore, it should be emphasized and noted that "an embodiment", "one embodiment" or "an alternative embodiment" mentioned two or more times in different places in this specification does not necessarily refer to the same embodiment.
  • In addition, certain features, structures or characteristics of one or more embodiments of the present invention may be combined as appropriate.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Geometry (AREA)
  • Image Processing (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)
  • Facsimile Image Signal Circuits (AREA)
  • Compression Of Band Width Or Redundancy In Fax (AREA)

Abstract

An image enhancement method, comprising: acquiring a YUV-format image (110); performing an Nmax-level wavelet decomposition on the luminance component and the chrominance components respectively (120); starting from level Nmax, performing edge-preserving filtering on the low-frequency subbands of the chrominance components of each level according to the low-frequency subband of the luminance component of the corresponding level, and then performing wavelet reconstruction with the chrominance high-frequency subbands level by level upward until an image of the original size is obtained (130); performing edge-preserving filtering on the wavelet-reconstructed chrominance components according to the luminance component (140); and integrating the luminance component with the edge-preserving-filtered chrominance components (150).

Description

Image enhancement method and apparatus
Technical Field
The present invention relates to the field of image processing, and in particular to an image enhancement method and device.
Background
With the growing popularity of mobile phones, users' requirements on image quality keep rising. To meet this demand, the resolution of the cameras fitted to mobile phones has increased year by year, but image quality has not improved substantially with the increase in resolution, especially in low-light environments.
To increase brightness, a camera needs to extend the exposure time or increase the signal gain. Extending the exposure time slows image formation and degrades the user experience. Increasing the signal gain amplifies the normal signal but also amplifies noise (especially chrominance noise, to which the human eye is more sensitive), reducing image quality.
The camera imaging process is rather complex and introduces noise at different frequencies simultaneously, producing superimposed noise that ordinary bilateral filtering alone can hardly remove completely. The wavelet transform has good time- and frequency-localization properties and is widely used in the field of denoising.
The commonly used wavelet-domain denoising method applies threshold shrinkage to each high-frequency subband after the wavelet transform, i.e., wavelet-threshold denoising. This method, however, depends on the choice of threshold: different decomposition scales and different high-frequency subbands require different thresholds, and an improperly chosen threshold easily causes mosaic artifacts or edge ringing in the denoised image due to the loss of a large amount of high-frequency content.
Summary of the Invention
The technical problem to be solved by the present invention is to provide an image enhancement method and device that eliminate edge ringing in the image and improve its denoising performance.
To solve the above technical problem, one aspect of the present invention provides an image enhancement method. The method includes: acquiring a YUV-format image and separating the luminance component and the chrominance components of the YUV-format image; performing an Nmax-level wavelet decomposition on the luminance component and the chrominance components respectively, to obtain the low-frequency and high-frequency subbands of the luminance component and of the chrominance components at each level, where Nmax is a positive integer greater than 1; starting from level Nmax, performing edge-preserving filtering on the low-frequency subbands of the chrominance components of each level according to the low-frequency subband of the luminance component of the corresponding level, and then performing wavelet reconstruction with the chrominance high-frequency subbands level by level upward until an image of the original size is obtained; performing edge-preserving filtering on the wavelet-reconstructed chrominance components according to the luminance component; and integrating the luminance component with the edge-preserving-filtered chrominance components.
In an embodiment of the present invention, the step of performing edge-preserving filtering on the low-frequency subbands of the chrominance components according to the low-frequency subband of the luminance component of the corresponding level includes: computing weights from the low-frequency subband of the luminance component of the corresponding level, and performing edge-preserving filtering on the low-frequency subbands of the chrominance components according to the weights.
In an embodiment of the present invention, the chrominance components include a U component and a V component, the edge-preserving filtering is bilateral filtering, and the step of performing edge-preserving filtering on the low-frequency subbands of the chrominance components according to the weights includes:
[Equation (1): image PCTCN2019126273-appb-000001 in the original]
[Equation (2): image PCTCN2019126273-appb-000002 in the original]
where N is the current wavelet level, 1 ≤ N ≤ Nmax, and N is a natural number; (x, y) are the coordinates of a pixel; (i, j) is the offset within the window, with -WIN_N ≤ i, j ≤ WIN_N; WIN_N is the Gaussian kernel radius of level N; U_LL_N(x, y) is the low-frequency subband of the U component at pixel (x, y) of level N; V_LL_N(x, y) is the low-frequency subband of the V component at pixel (x, y) of level N; U_LL_N(x+i, y+j) and V_LL_N(x+i, y+j) are the sets of U- and V-component pixels within the Gaussian kernel radius WIN_N around U_LL_N(x, y) and V_LL_N(x, y), respectively; and W_N(x+i, y+j) is the corresponding weight.
In an embodiment of the present invention, the step of computing the weights from the low-frequency subband of the luminance component of the corresponding level includes:
W_N(x+i, y+j) = WD_N(x+i, y+j) * WR_N(x+i, y+j)   (3)
[Equation (4): image PCTCN2019126273-appb-000003 in the original]
[Equation (5): image PCTCN2019126273-appb-000004 in the original]
[Equation (6): image PCTCN2019126273-appb-000005 in the original]
where δ_G_N is the Gaussian kernel variance of the bilateral filter at level N; δ_Y_N, δ_U_N and δ_V_N are the variances of the Y, U and V components at level N; WD_N(x+i, y+j) is the domain weight in the bilateral filter; WR_N(x+i, y+j) is the range weight in the bilateral filter; EI_N(x+i, y+j) is the integrated edge indicator; Y_LL_N(x, y) is the low-frequency subband of the Y component at pixel (x, y) of level N; and Y_LL_N(x+i, y+j) is the set of Y-component pixels within the Gaussian kernel radius WIN_N around Y_LL_N(x, y).
In an embodiment of the present invention, the step of acquiring the YUV-format image includes: acquiring an RGB-format image and converting the RGB-format image into a YUV-format image.
In an embodiment of the present invention, the Haar wavelet function is used for the wavelet decomposition and the wavelet reconstruction.
Another aspect of the present invention provides an image enhancement device. The device includes: an image acquisition unit, which acquires a YUV-format image and separates the luminance component and the chrominance components of the YUV-format image; a wavelet decomposition unit, which performs an N-level wavelet decomposition on the luminance component and the chrominance components respectively, to obtain the low-frequency and high-frequency subbands of the luminance component and of the chrominance components at each level, where N is a positive integer greater than 1; a wavelet reconstruction unit, which, starting from level Nmax, performs edge-preserving filtering on the low-frequency subbands of the chrominance components of each level according to the low-frequency subband of the luminance component of the corresponding level and then performs wavelet reconstruction with the chrominance high-frequency subbands level by level upward until an image of the original size is obtained; an edge-preserving filtering unit, which performs edge-preserving filtering on the wavelet-reconstructed chrominance components according to the luminance component; and an integration unit, which integrates the luminance component with the edge-preserving-filtered chrominance components.
In an embodiment of the present invention, the wavelet reconstruction unit computes weights from the low-frequency subband of the luminance component of the corresponding level and performs edge-preserving filtering on the low-frequency subbands of the chrominance components according to the weights.
In an embodiment of the present invention, the image acquisition unit acquires an RGB-format image and converts the RGB-format image into a YUV-format image.
In an embodiment of the present invention, the wavelet decomposition unit uses the Haar wavelet function for wavelet decomposition, and the wavelet reconstruction unit uses the Haar wavelet function for wavelet reconstruction.
Compared with the prior art, the present invention has the following advantages. The present invention provides an image enhancement method and device: by performing wavelet decomposition on a YUV-format image, the low-frequency and high-frequency subbands of the luminance and chrominance components of each level are obtained; the low-frequency subbands of the chrominance components of each level are edge-preserving filtered according to the low-frequency subband of the luminance component of the corresponding level, and wavelet reconstruction proceeds level by level upward until an image of the original size is obtained. This effectively preserves and enhances the edge information of the chrominance low-frequency subbands, avoids dependence on a threshold, eliminates edge ringing in the image, and improves its denoising performance.
Brief Description of the Drawings
To make the above objects, features and advantages of the present invention easier to understand, specific embodiments of the present invention are described in detail below with reference to the accompanying drawings, in which:
Fig. 1 is a flowchart of an image enhancement method according to an embodiment of the present invention;
Fig. 2 is a schematic structural diagram of an image enhancement device according to an embodiment of the present invention;
Fig. 3 is a flowchart of step 130 in the image enhancement method shown in Fig. 1;
Fig. 4 is an image of the original luminance Y component according to an embodiment of the present invention;
Fig. 5A is an image of the original chrominance U component according to an embodiment of the present invention;
Fig. 5B is an image of the chrominance U component after processing by the image enhancement method according to an embodiment of the present invention;
Fig. 6A is an image of the original chrominance V component according to an embodiment of the present invention;
Fig. 6B is an image of the chrominance V component after processing by the image enhancement method according to an embodiment of the present invention.
Detailed Description of the Preferred Embodiments
To make the above objects, features and advantages of the present invention easier to understand, specific embodiments of the present invention are described in detail below with reference to the accompanying drawings.
Many specific details are set forth in the following description to facilitate a full understanding of the present invention, but the present invention can also be implemented in other ways different from those described here, so the present invention is not limited by the specific embodiments disclosed below.
As used in this application and the claims, unless the context clearly indicates otherwise, words such as "a", "an", "one" and/or "the" do not specifically denote the singular and may also include the plural. In general, the terms "comprise" and "include" merely indicate that the explicitly identified steps and elements are included; these steps and elements do not constitute an exclusive list, and a method or device may also include other steps or elements.
Although the present invention makes various references to certain modules in the system according to the embodiments of the present invention, any number of different modules may be used and run on an imaging system and/or a processor. The modules are merely illustrative, and different aspects of the system and method may use different modules.
Flowcharts are used in the present invention to illustrate the operations performed by the system according to the embodiments of the present invention. It should be understood that the preceding or following operations are not necessarily performed exactly in order; instead, the various steps may be processed in reverse order or simultaneously, other operations may be added to these processes, or one or more steps may be removed from them.
Fig. 1 is a flowchart of an image enhancement method according to an embodiment of the present invention. Referring to Fig. 1, the image enhancement method includes the following steps:
Step 110: Obtain a YUV-format image.
Step 120: Perform an Nmax-level wavelet decomposition on the luminance component and the chrominance components respectively.
Step 130: Starting from level Nmax, perform edge-preserving filtering on the low-frequency subbands of the chrominance components of each level according to the low-frequency subband of the luminance component of the corresponding level, and then perform wavelet reconstruction with the chrominance high-frequency subbands level by level upward until an image of the original size is obtained.
Step 140: Perform edge-preserving filtering on the wavelet-reconstructed chrominance components according to the luminance component.
Step 150: Integrate the luminance component with the edge-preserving-filtered chrominance components.
Fig. 2 is a schematic structural diagram of an image enhancement device according to an embodiment of the present invention. Referring to Fig. 2, the image enhancement device 200 of this embodiment includes an image acquisition unit 210, a wavelet decomposition unit 220, a wavelet reconstruction unit 230, an edge-preserving filtering unit 240 and an integration unit 250. The image enhancement device 200 can use the image enhancement method shown in Fig. 1 to perform image enhancement processing on an input image. The steps of the image enhancement method shown in Fig. 1 are described in detail below in conjunction with Fig. 2.
In step 110, a YUV-format image is acquired.
In some embodiments, step 110 may be performed by the image acquisition unit 210 in the image enhancement device 200. YUV is a color-coding method for images or video signals, in which Y represents the luminance of the image and U and V represent its chrominance (chroma). An image obtained with the YUV color-coding method is called a YUV-format image.
In some embodiments, the image input to the image acquisition unit 210 may be in another color-space format, such as but not limited to RGB or CMYK. In these embodiments, the image in the other format needs to be converted into the YUV format, and this conversion step may be performed by the image acquisition unit 210.
Step 110 also includes separating the acquired YUV-format image to obtain the luminance component (Y component) and the chrominance components (UV components) of the YUV-format image; this step may be performed by the image acquisition unit 210.
The method of separating a YUV-format image depends on the specific YUV format. Common YUV formats include, but are not limited to, NV12, NV21 and YV12. Different formats correspond to different data-storage layouts; when the image data input to the image acquisition unit 210 is read, the luminance component and the chrominance components of the image can be separated according to the storage layout of the image data and stored as separate quantities for subsequent processing.
In step 120, the luminance component and the chrominance components are respectively subjected to an Nmax-level wavelet decomposition.
In some embodiments, step 120 may be performed by the wavelet decomposition unit 220 in the image enhancement device 200.
After the wavelet decomposition unit 220 performs the Nmax-level wavelet decomposition on the luminance and chrominance components, the low-frequency subband (Y_LL_level) and high-frequency subbands (Y_LH_level, Y_HL_level, Y_HH_level) of the luminance component, as well as the low-frequency subbands (U_LL_level, V_LL_level) and high-frequency subbands (U_LH_level, U_HL_level, U_HH_level, V_LH_level, V_HL_level, V_HH_level) of the chrominance components, are obtained for each wavelet level, where "level" denotes the corresponding wavelet level. For example, Y_LL_N is the low-frequency subband of the luminance component on wavelet level N, and U_LL_N and V_LL_N are the low-frequency subbands of the chrominance components on wavelet level N. N is a positive integer greater than 1.
In some embodiments, the wavelet decomposition may be implemented with any function of the wavelet transform, such as but not limited to the Haar, Shannon or Meyer wavelet. In a preferred embodiment of the present invention, the Haar wavelet is used for the wavelet decomposition of step 120.
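For reference, the Haar wavelet mentioned above uses the standard analysis filter pair shown below; this is textbook background rather than additional matter disclosed by the patent. A single 2-D level applies the low-pass or high-pass filter along rows and along columns, giving the LL, LH, HL and HH subbands, each followed by downsampling by two in both directions.

```latex
% Standard Haar analysis filters (background material, not part of the disclosure)
h = \tfrac{1}{\sqrt{2}}\,[\,1,\ 1\,] \ \text{(low-pass)}, \qquad
g = \tfrac{1}{\sqrt{2}}\,[\,1,\ -1\,] \ \text{(high-pass)}
```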
In step 130, starting from level Nmax, the low-frequency subbands of the chrominance components of each level are edge-preserving filtered according to the low-frequency subband of the luminance component of the corresponding level and then wavelet-reconstructed with the chrominance high-frequency subbands level by level upward until an image of the original size is obtained. The edge-preserving filtering may be any edge-preserving filtering algorithm, for example a bilateral filtering algorithm or a guided filtering algorithm. In the embodiments of the present invention the edge-preserving filtering algorithm may be a bilateral filter, and the image enhancement algorithm of the embodiments of the present invention is described below taking the bilateral filter as an example.
In some embodiments, step 130 may be performed by the wavelet reconstruction unit 230 in the image enhancement device 200.
In some embodiments, the step of performing edge-preserving filtering on the low-frequency subbands of the chrominance components according to the low-frequency subband of the luminance component of the corresponding level includes: computing weights from the low-frequency subband of the luminance component of the corresponding level, and performing edge-preserving filtering on the low-frequency subbands of the chrominance components according to the weights.
In some embodiments, the edge-preserving filtering may be performed by the edge-preserving filtering unit 240 in the image enhancement device 200.
Specifically, step 130 is described taking the bilateral filtering algorithm and wavelet level N (1 ≤ N ≤ Nmax, N a natural number) as an example. Let the low-frequency subbands of the chrominance components at the pixel with coordinates (x, y) on wavelet level N be U_LL_N(x, y) and V_LL_N(x, y); after bilateral-filter denoising they become, respectively,
[filtered subbands U_LL_N'(x, y) and V_LL_N'(x, y): images PCTCN2019126273-appb-000006 and PCTCN2019126273-appb-000007 in the original]
The bilateral filtering process can be expressed by the following formulas:
[Equation (1): image PCTCN2019126273-appb-000008 in the original]
[Equation (2): image PCTCN2019126273-appb-000009 in the original]
where N is the current wavelet level, 1 ≤ N ≤ Nmax, and N is a natural number; (x, y) are the coordinates of a pixel; (i, j) is the offset within the window, with -WIN_N ≤ i, j ≤ WIN_N; WIN_N is the Gaussian kernel radius of level N; U_LL_N(x, y) is the low-frequency subband of the U component at pixel (x, y) of level N; V_LL_N(x, y) is the low-frequency subband of the V component at pixel (x, y) of level N; U_LL_N(x+i, y+j) and V_LL_N(x+i, y+j) are the sets of U- and V-component pixels within the Gaussian kernel radius WIN_N around U_LL_N(x, y) and V_LL_N(x, y), respectively; and W_N(x+i, y+j) is the corresponding weight.
In some embodiments, the weight W_N(x+i, y+j) can be calculated from a domain weight and a range weight. In other embodiments, the weight W_N(x+i, y+j) may also be obtained from the ratios of U_LL_N(x, y), V_LL_N(x, y) and Y_LL_N(x, y) to the sum U_LL_N(x, y) + V_LL_N(x, y) + Y_LL_N(x, y). Taking the calculation from the domain weight and the range weight as an example, the step of computing the weight W_N(x+i, y+j) from the low-frequency subband of the luminance component of the corresponding level may include:
W_N(x+i, y+j) = WD_N(x+i, y+j) * WR_N(x+i, y+j)   (3)
where WD_N(x+i, y+j) is the domain weight in the bilateral filter, which can be calculated with the following formula:
[Equation (4): image PCTCN2019126273-appb-000010 in the original]
where δ_G_N is the Gaussian kernel variance of the bilateral filter at level N, and WIN_N is the Gaussian kernel radius of level N, generally a power of two such as 1, 2, 4 or 8.
WR_N(x+i, y+j) is the range weight in the bilateral filter, which can be calculated with the following formula:
[Equation (5): image PCTCN2019126273-appb-000011 in the original]
where EI_N(x+i, y+j) is the integrated edge indicator, which can be calculated with the following formula:
[Equation (6): image PCTCN2019126273-appb-000012 in the original]
where δ_Y_N, δ_U_N and δ_V_N are the variances of the Y, U and V components at level N, Y_LL_N(x, y) is the low-frequency subband of the Y component at pixel (x, y) of level N, and Y_LL_N(x+i, y+j) is the set of Y-component pixels within the Gaussian kernel radius WIN_N around Y_LL_N(x, y).
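Equations (1) through (6) appear only as images in the original publication, so the exact expressions are not available in this text. For readability, the block below sketches a standard joint-bilateral form that is consistent with the definitions above; it is an illustrative reconstruction, not the authoritative formulas of the patent, and the precise shapes of WR_N and EI_N in particular are assumptions.

```latex
% Illustrative reconstruction only; the authoritative equations (1)-(6) are images in the original.
\mathrm{U\_LL}'_N(x,y) = \frac{\sum_{i,j=-\mathrm{WIN}_N}^{\mathrm{WIN}_N} W_N(x{+}i,y{+}j)\,\mathrm{U\_LL}_N(x{+}i,y{+}j)}
                              {\sum_{i,j=-\mathrm{WIN}_N}^{\mathrm{WIN}_N} W_N(x{+}i,y{+}j)} \qquad (1)

\mathrm{V\_LL}'_N(x,y) = \frac{\sum_{i,j} W_N(x{+}i,y{+}j)\,\mathrm{V\_LL}_N(x{+}i,y{+}j)}
                              {\sum_{i,j} W_N(x{+}i,y{+}j)} \qquad (2)

W_N(x{+}i,y{+}j) = \mathrm{WD}_N(x{+}i,y{+}j)\cdot \mathrm{WR}_N(x{+}i,y{+}j) \qquad (3)

\mathrm{WD}_N(x{+}i,y{+}j) = \exp\!\Big(-\frac{i^2+j^2}{2\,\delta\_G_N^{\,2}}\Big) \qquad (4)

\mathrm{WR}_N(x{+}i,y{+}j) = \exp\!\Big(-\tfrac{1}{2}\,\mathrm{EI}_N(x{+}i,y{+}j)^2\Big) \qquad (5)

\mathrm{EI}_N(x{+}i,y{+}j) = \frac{|\mathrm{Y\_LL}_N(x{+}i,y{+}j)-\mathrm{Y\_LL}_N(x,y)|}{\delta\_Y_N}
  + \frac{|\mathrm{U\_LL}_N(x{+}i,y{+}j)-\mathrm{U\_LL}_N(x,y)|}{\delta\_U_N}
  + \frac{|\mathrm{V\_LL}_N(x{+}i,y{+}j)-\mathrm{V\_LL}_N(x,y)|}{\delta\_V_N} \qquad (6)
```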
It should be noted that the five parameters WIN_N, δ_G_N, δ_Y_N, δ_U_N and δ_V_N mentioned above determine the noise-smoothing strength and the edge-preservation level, while the amount of noise depends on the gain value (gain level) used by the camera when capturing the image. The values of these five parameters can therefore be determined by establishing a mapping between each parameter and the gain value. After calibration, five lookup tables are obtained, as shown in the following formulas:
WIN_N = WIN_Table(gain, N)
δ_G_N = δ_G_Table(gain, N)
δ_Y_N = δ_Y_Table(gain, N)
δ_U_N = δ_U_Table(gain, N)
δ_V_N = δ_V_Table(gain, N)   (7)
When the low-frequency subbands of the chrominance components of wavelet level N are bilaterally filtered, the five parameters WIN_N, δ_G_N, δ_Y_N, δ_U_N and δ_V_N corresponding to wavelet level N are looked up in these five tables.
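A sketch of how the calibrated mappings of formula (7) could be stored and queried is shown below. The table values and the linear interpolation over gain are invented placeholders for illustration only; the patent merely states that the five tables are obtained by calibration against the camera gain.

```python
import numpy as np

# Hypothetical calibration data: rows are gain sample points, columns are wavelet levels 1..3.
GAIN_POINTS = np.array([1.0, 4.0, 16.0, 64.0])
WIN_TABLE = np.array([[1, 1, 2], [1, 2, 2], [2, 2, 4], [2, 4, 4]])          # WIN_Table(gain, N)
D_G_TABLE = np.array([[0.8, 1.0, 1.2], [1.0, 1.2, 1.5], [1.2, 1.5, 2.0], [1.5, 2.0, 2.5]])
D_Y_TABLE = np.array([[2.0, 2.5, 3.0], [3.0, 4.0, 5.0], [5.0, 6.0, 8.0], [8.0, 10.0, 12.0]])
D_U_TABLE = D_Y_TABLE.copy()                                                # placeholder values
D_V_TABLE = D_Y_TABLE.copy()                                                # placeholder values

def params_for(gain, level):
    """Look up (WIN_N, delta_G_N, delta_Y_N, delta_U_N, delta_V_N) for a given gain and level."""
    col = level - 1
    interp = lambda table: float(np.interp(gain, GAIN_POINTS, table[:, col]))
    win = int(round(interp(WIN_TABLE)))       # WIN_N is typically a power of two (1, 2, 4, 8)
    return win, interp(D_G_TABLE), interp(D_Y_TABLE), interp(D_U_TABLE), interp(D_V_TABLE)
```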
It can be understood that the integrated edge indicator EI_N(x+i, y+j) of level N is jointly determined by the low-frequency subband Y_LL_N of the luminance component and the low-frequency subbands U_LL_N and V_LL_N of the chrominance components on wavelet level N, and is used in the bilateral filtering of the corresponding chrominance low-frequency subbands on wavelet level N. Using the integrated edge indicator EI_N(x+i, y+j) effectively preserves and enhances the edge information of the chrominance low-frequency subbands.
With the filtering steps above, all pixels in the low-frequency subbands of the level-N chrominance components can be filtered. After the bilateral filtering of the level-N chrominance low-frequency subbands is completed, the filtered subbands
[U_LL_N'(x, y) and V_LL_N'(x, y): images PCTCN2019126273-appb-000013 and PCTCN2019126273-appb-000014 in the original]
are wavelet-reconstructed with the level-N chrominance high-frequency subbands (U_LH_N, U_HL_N, U_HH_N, V_LH_N, V_HL_N, V_HH_N) obtained in step 120, to obtain the low-frequency subbands U_LL_{N-1}' and V_LL_{N-1}' of the level-(N-1) chrominance components.
Fig. 3 is a flowchart of the method used in step 130: starting from level Nmax, the low-frequency subbands of the chrominance components of each level are bilaterally filtered according to the low-frequency subband of the luminance component of the corresponding level and then wavelet-reconstructed with the chrominance high-frequency subbands level by level upward until an image of the original size is obtained. In general, the method includes: bilaterally filtering the low-frequency subbands of the level-Nmax chrominance components according to the low-frequency subband of the level-Nmax luminance component; performing wavelet reconstruction with the level-Nmax chrominance high-frequency subbands toward level Nmax-1 to obtain the low-frequency subbands of the level-(Nmax-1) chrominance components; bilaterally filtering the reconstructed chrominance low-frequency subbands according to the low-frequency subband of the level-(Nmax-1) luminance component; and continuing the wavelet reconstruction with the level-(Nmax-1) chrominance high-frequency subbands toward level Nmax-2, and so on, until an image of the original size is obtained. It can be understood that the bilateral filtering and wavelet reconstruction here are the same as described above. In a preferred embodiment of the present invention, the Haar wavelet is used for the wavelet reconstruction.
Referring to Fig. 3, the method specifically includes the following steps:
In step 310, the low-frequency subbands of the level-Nmax chrominance components are bilaterally filtered according to the low-frequency subband of the level-Nmax luminance component.
In step 320, wavelet reconstruction is performed on the filtered low-frequency subbands of the level-Nmax chrominance components together with the level-Nmax chrominance high-frequency subbands, to obtain the low-frequency subbands of the level-(Nmax-1) chrominance components.
In step 330, the reconstructed level-(Nmax-1) chrominance low-frequency subbands are bilaterally filtered according to the level-(Nmax-1) luminance low-frequency subband.
In step 340, wavelet reconstruction is performed on the filtered level-(Nmax-1) chrominance low-frequency subbands together with the level-(Nmax-1) chrominance high-frequency subbands. For wavelet level N, when N > 1, set N = N - 1 and repeat steps 330 and 340; step 350 is performed only when N = 1.
In step 350, the reconstructed level-1 chrominance low-frequency subbands are bilaterally filtered according to the level-1 luminance low-frequency subband.
In step 360, wavelet reconstruction is performed on the filtered level-1 chrominance low-frequency subbands together with the level-1 chrominance high-frequency subbands.
The steps above complete the bilateral filtering and wavelet reconstruction of the chrominance low-frequency subbands of the YUV-format image at Nmax scales.
Next, continuing with step 140 shown in Fig. 1, the wavelet-reconstructed chrominance components are edge-preserving filtered according to the luminance component. Here the luminance component is that of the original YUV-format image obtained in step 110, and the wavelet-reconstructed chrominance components are those obtained in step 360. To illustrate the effect of the image enhancement method of the present invention, Figs. 4 to 6B show an exemplary image before and after the image enhancement processing. Fig. 4 is the original luminance Y component image of the exemplary image. Fig. 5A is the original chrominance U component image of the exemplary image, and Fig. 5B is the chrominance U component image after the image shown in Fig. 5A is processed by the image enhancement method according to an embodiment of the present invention. Fig. 6A is the original chrominance V component image of the exemplary image, and Fig. 6B is the chrominance V component image after the image shown in Fig. 6A is processed by the image enhancement method according to an embodiment of the present invention. It should be noted that, to show the removal of the spot-like noise more clearly, the images shown in Figs. 5A-6B have been gray-scale normalized. Comparing Figs. 5A and 5B, and Figs. 6A and 6B, the spot-like noise is clearly removed from the images processed by the image enhancement method of the present invention, and the image quality is significantly improved.
In step 150, the luminance component and the edge-preserving-filtered chrominance components are integrated.
In some embodiments, step 150 may be performed by the integration unit 250 in the image enhancement device 200. Similarly to step 110, the process of integrating the luminance component with the edge-preserving-filtered chrominance components depends on the specific YUV format; the integration unit 250 assembles and outputs the luminance and chrominance components according to the specific YUV format to be output.
If an image format of another color space is required, such as but not limited to RGB or CMYK, the integration unit 250 may first produce the YUV-format image and then convert the YUV-format image into the required image format.
The present invention provides an image enhancement method and device: by performing wavelet decomposition on a YUV-format image, the low-frequency and high-frequency subbands of the luminance and chrominance components of each level are obtained; the low-frequency subbands of the chrominance components of each level are edge-preserving filtered according to the low-frequency subband of the luminance component of the corresponding level, and wavelet reconstruction proceeds level by level upward until an image of the original size is obtained. This effectively preserves and enhances the edge information of the chrominance low-frequency subbands, avoids dependence on a threshold, eliminates edge ringing in the image, and improves its denoising performance.
The basic concepts have been described above. Obviously, for those skilled in the art, the above disclosure is merely an example and does not constitute a limitation of the present invention. Although not explicitly stated here, those skilled in the art may make various modifications, improvements and corrections to the present invention. Such modifications, improvements and corrections are suggested in the present invention and therefore still fall within the spirit and scope of its exemplary embodiments.
Meanwhile, the present invention uses specific terms to describe its embodiments. Terms such as "one embodiment", "an embodiment" and/or "some embodiments" refer to a certain feature, structure or characteristic related to at least one embodiment of the present invention. Therefore, it should be emphasized and noted that "an embodiment", "one embodiment" or "an alternative embodiment" mentioned two or more times in different places in this specification does not necessarily refer to the same embodiment. In addition, certain features, structures or characteristics of one or more embodiments of the present invention may be combined as appropriate.
Although the present invention has been described with reference to the current specific embodiments, those of ordinary skill in the art should recognize that the above embodiments are merely intended to illustrate the present invention, and that various equivalent changes or substitutions may be made without departing from the spirit of the present invention; therefore, changes and variations of the above embodiments made within the essential spirit of the present invention shall all fall within the scope of the claims of this application.

Claims (10)

  1. An image enhancement method, the method comprising:
    acquiring a YUV-format image, and separating the luminance component and the chrominance components of the YUV-format image;
    performing an Nmax-level wavelet decomposition on the luminance component and the chrominance components respectively, to obtain the low-frequency and high-frequency subbands of the luminance component and of the chrominance components at each level, wherein Nmax is a positive integer greater than 1;
    starting from level Nmax, performing edge-preserving filtering on the low-frequency subbands of the chrominance components of each level according to the low-frequency subband of the luminance component of the corresponding level, and then performing wavelet reconstruction with the chrominance high-frequency subbands level by level upward, until an image of the original size is obtained;
    performing edge-preserving filtering on the wavelet-reconstructed chrominance components according to the luminance component;
    integrating the luminance component and the edge-preserving-filtered chrominance components.
  2. The image enhancement method according to claim 1, wherein the step of performing edge-preserving filtering on the low-frequency subbands of the chrominance components according to the low-frequency subband of the luminance component of the corresponding level comprises: computing weights from the low-frequency subband of the luminance component of the corresponding level, and performing edge-preserving filtering on the low-frequency subbands of the chrominance components according to the weights.
  3. The image enhancement method according to claim 2, wherein the chrominance components comprise a U component and a V component, the edge-preserving filtering is bilateral filtering, and the step of performing edge-preserving filtering on the low-frequency subbands of the chrominance components according to the weights comprises:
    [Equation (1): image PCTCN2019126273-appb-100001 in the original]
    [Equation (2): image PCTCN2019126273-appb-100002 in the original]
    wherein N is the current wavelet level, 1 ≤ N ≤ Nmax, and N is a natural number; (x, y) are the coordinates of a pixel; (i, j) is the offset within the window, with -WIN_N ≤ i, j ≤ WIN_N; WIN_N is the Gaussian kernel radius of level N; U_LL_N(x, y) is the low-frequency subband of the U component at pixel (x, y) of level N; V_LL_N(x, y) is the low-frequency subband of the V component at pixel (x, y) of level N; U_LL_N(x+i, y+j) and V_LL_N(x+i, y+j) are the sets of U- and V-component pixels within the Gaussian kernel radius WIN_N around U_LL_N(x, y) and V_LL_N(x, y), respectively; and W_N(x+i, y+j) is the corresponding weight.
  4. The image enhancement method according to claim 3, wherein the step of computing the weights from the low-frequency subband of the luminance component of the corresponding level comprises:
    W_N(x+i, y+j) = WD_N(x+i, y+j) * WR_N(x+i, y+j)   (3)
    [Equation (4): image PCTCN2019126273-appb-100003 in the original]
    [Equation (5): image PCTCN2019126273-appb-100004 in the original]
    [Equation (6): image PCTCN2019126273-appb-100005 in the original]
    wherein δ_G_N is the Gaussian kernel variance of the bilateral filter at level N; δ_Y_N, δ_U_N and δ_V_N are the variances of the Y, U and V components at level N; WD_N(x+i, y+j) is the domain weight in the bilateral filter; WR_N(x+i, y+j) is the range weight in the bilateral filter; EI_N(x+i, y+j) is the integrated edge indicator; Y_LL_N(x, y) is the low-frequency subband of the Y component at pixel (x, y) of level N; and Y_LL_N(x+i, y+j) is the set of Y-component pixels within the Gaussian kernel radius WIN_N around Y_LL_N(x, y).
  5. The image enhancement method according to claim 1, wherein the step of acquiring a YUV-format image comprises: acquiring an RGB-format image and converting the RGB-format image into a YUV-format image.
  6. The image enhancement method according to claim 1, wherein a Haar wavelet function is used for the wavelet decomposition and the wavelet reconstruction.
  7. An image enhancement device, the device comprising:
    an image acquisition unit, which acquires a YUV-format image and separates the luminance component and the chrominance components of the YUV-format image;
    a wavelet decomposition unit, which performs an N-level wavelet decomposition on the luminance component and the chrominance components respectively, to obtain the low-frequency and high-frequency subbands of the luminance component and of the chrominance components at each level, wherein N is a positive integer greater than 1;
    a wavelet reconstruction unit, which, starting from level Nmax, performs edge-preserving filtering on the low-frequency subbands of the chrominance components of each level according to the low-frequency subband of the luminance component of the corresponding level, and then performs wavelet reconstruction with the chrominance high-frequency subbands level by level upward, until an image of the original size is obtained;
    an edge-preserving filtering unit, which performs edge-preserving filtering on the wavelet-reconstructed chrominance components according to the luminance component;
    an integration unit, which integrates the luminance component and the edge-preserving-filtered chrominance components.
  8. The image enhancement device according to claim 7, wherein the wavelet reconstruction unit computes weights from the low-frequency subband of the luminance component of the corresponding level and performs edge-preserving filtering on the low-frequency subbands of the chrominance components according to the weights.
  9. The image enhancement device according to claim 7, wherein the image acquisition unit acquires an RGB-format image and converts the RGB-format image into a YUV-format image.
  10. The image enhancement device according to claim 7, wherein the wavelet decomposition unit uses a Haar wavelet function for wavelet decomposition, and the wavelet reconstruction unit uses a Haar wavelet function for wavelet reconstruction.
PCT/CN2019/126273 2019-03-18 2019-12-18 Image enhancement method and apparatus WO2020186848A1 (zh)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US17/440,261 US11915392B2 (en) 2019-03-18 2019-12-18 Image enhancement method and apparatus

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201910203085.6A CN110599406B (zh) 2019-03-18 2019-03-18 一种图像增强方法及装置
CN201910203085.6 2019-03-18

Publications (1)

Publication Number Publication Date
WO2020186848A1 true WO2020186848A1 (zh) 2020-09-24

Family

ID=68852458

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/126273 WO2020186848A1 (zh) 2019-03-18 2019-12-18 一种图像增强方法及装置

Country Status (3)

Country Link
US (1) US11915392B2 (zh)
CN (1) CN110599406B (zh)
WO (1) WO2020186848A1 (zh)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2023222769A1 (en) * 2022-05-18 2023-11-23 Xavier Baele Method for processing image data
CN117834925A (zh) * 2023-12-27 2024-04-05 北京中星天视科技有限公司 Method and apparatus for enhancing the quality of compressed video, electronic device, and readable medium

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112614471B (zh) * 2020-12-24 2022-04-22 上海立可芯半导体科技有限公司 Tone mapping method and system

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102156964A (zh) * 2011-03-31 2011-08-17 杭州海康威视软件有限公司 Color image denoising method and system
CN104680485A (zh) * 2013-11-27 2015-06-03 展讯通信(上海)有限公司 Multi-resolution-based image denoising method and device
CN105243641A (zh) * 2015-08-18 2016-01-13 西安电子科技大学 Low-illumination image enhancement method based on the dual-tree complex wavelet transform
US20160173884A1 (en) * 2014-12-16 2016-06-16 Thomson Licensing Device and a method for encoding an image and corresponding decoding method and decoding device
CN108259873A (zh) * 2018-02-01 2018-07-06 电子科技大学 Gradient-domain video contrast enhancement method
CN109389560A (zh) * 2018-09-27 2019-02-26 深圳开阳电子股份有限公司 Adaptive weighted-filtering image noise reduction method and device, and image processing apparatus

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7257273B2 (en) * 2001-04-09 2007-08-14 Mingjing Li Hierarchical scheme for blur detection in digital image using wavelet transform
US7676103B2 (en) * 2005-06-20 2010-03-09 Intel Corporation Enhancing video sharpness and contrast by luminance and chrominance transient improvement
KR101828411B1 (ko) * 2011-09-21 2018-02-13 삼성전자주식회사 Image processing method and image processing apparatus
US9111339B1 (en) * 2013-07-31 2015-08-18 Marvell International Ltd. System and method for reducing noise from an image
CN105096280B (zh) * 2015-06-17 2019-01-11 浙江宇视科技有限公司 Method and device for processing image noise
CN105427257A (zh) * 2015-11-18 2016-03-23 四川汇源光通信有限公司 Image enhancement method and device
CN106851399B (zh) * 2015-12-03 2021-01-22 阿里巴巴(中国)有限公司 Video resolution enhancement method and device

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102156964A (zh) * 2011-03-31 2011-08-17 杭州海康威视软件有限公司 Color image denoising method and system
CN104680485A (zh) * 2013-11-27 2015-06-03 展讯通信(上海)有限公司 Multi-resolution-based image denoising method and device
US20160173884A1 (en) * 2014-12-16 2016-06-16 Thomson Licensing Device and a method for encoding an image and corresponding decoding method and decoding device
CN105243641A (zh) * 2015-08-18 2016-01-13 西安电子科技大学 Low-illumination image enhancement method based on the dual-tree complex wavelet transform
CN108259873A (zh) * 2018-02-01 2018-07-06 电子科技大学 Gradient-domain video contrast enhancement method
CN109389560A (zh) * 2018-09-27 2019-02-26 深圳开阳电子股份有限公司 Adaptive weighted-filtering image noise reduction method and device, and image processing apparatus

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2023222769A1 (en) * 2022-05-18 2023-11-23 Xavier Baele Method for processing image data
CN117834925A (zh) * 2023-12-27 2024-04-05 北京中星天视科技有限公司 Method and apparatus for enhancing the quality of compressed video, electronic device, and readable medium
CN117834925B (zh) * 2023-12-27 2024-08-06 北京中星天视科技有限公司 Method and apparatus for enhancing the quality of compressed video, electronic device, and readable medium

Also Published As

Publication number Publication date
CN110599406B (zh) 2022-05-03
CN110599406A (zh) 2019-12-20
US20220156890A1 (en) 2022-05-19
US11915392B2 (en) 2024-02-27

Similar Documents

Publication Publication Date Title
EP2852152B1 (en) Image processing method, apparatus and shooting terminal
EP3087730B1 (en) Method for inverse tone mapping of an image
WO2020186848A1 (zh) 一种图像增强方法及装置
CN105096280B (zh) 处理图像噪声的方法及装置
CN106846276B (zh) 一种图像增强方法及装置
US8878963B2 (en) Apparatus and method for noise removal in a digital photograph
CN104021532B (zh) 一种红外图像的图像细节增强方法
TWI596573B (zh) 影像處理裝置及其影像雜訊抑制方法
CN110246088B (zh) 基于小波变换的图像亮度降噪方法及其图像降噪系统
CN104157003B (zh) 一种基于正态分布调节的热图像细节增强方法
CN109978775B (zh) 颜色去噪方法及装置
CN111260580A (zh) 一种基于图像金字塔的图像去噪方法、计算机装置及计算机可读存储介质
US9111339B1 (en) System and method for reducing noise from an image
Karunakar et al. Discrete wavelet transform-based satellite image resolution enhancement
CN104504659B (zh) 一种基于提升小波变换的快速iso去噪方法及系统
Sun et al. Readability enhancement of low light videos based on discrete wavelet transform
CN109064413A (zh) 图像对比度增强方法及采用其的图像采集医疗设备
CN106447616A (zh) 一种实现小波去噪的方法和装置
Ge et al. A hybrid DCT-CLAHE approach for brightness enhancement of uneven-illumination underwater images
Kawasaki et al. A multiscale retinex with low computational cost
US20140118585A1 (en) Color processing of digital images
Momoh LWT-CLAHE Based Color Image Enhancement Technique: An Improved Design
CN105894456A (zh) 一种基于正规化分层的高动态范围图像阶调映射方法
Chae et al. Frequency-domain analysis of discrete wavelet transform coefficients and their adaptive shrinkage for anti-aliasing
Wang et al. Image denoising algorithms for dofp polarization image sensors with non-gaussian noises

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19919779

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19919779

Country of ref document: EP

Kind code of ref document: A1