WO2021128498A1 - Image adaptive noise reduction method and device (图像自适应降噪方法及装置) - Google Patents

Image adaptive noise reduction method and device (图像自适应降噪方法及装置)

Info

Publication number
WO2021128498A1
WO2021128498A1 (PCT/CN2020/071307, CN2020071307W)
Authority
WO
WIPO (PCT)
Prior art keywords
image
sub
salient
block
value
Prior art date
Application number
PCT/CN2020/071307
Other languages
English (en)
French (fr)
Inventor
陈云娜
Original Assignee
Tcl华星光电技术有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tcl华星光电技术有限公司 filed Critical Tcl华星光电技术有限公司
Priority to US16/646,054 priority Critical patent/US11348204B2/en
Publication of WO2021128498A1 publication Critical patent/WO2021128498A1/zh

Links

Images

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/70Denoising; Smoothing
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/11Region-based segmentation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/10Image enhancement or restoration using non-spatial domain filtering
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/20Image enhancement or restoration using local operators
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/50Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/136Segmentation; Edge detection involving thresholding
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10024Color image
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20004Adaptive image processing
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20021Dividing image into blocks, subimages or windows
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20024Filtering details
    • G06T2207/20028Bilateral filtering
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20076Probabilistic image processing
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20212Image combination
    • G06T2207/20221Image fusion; Image merging

Definitions

  • This application relates to the field of image processing technology, and in particular to an image adaptive noise reduction method and device.
  • Block-based Discrete Cosine Transform (BDCT) coding is widely used in compression, including image and video compression standards such as JPEG and H.264. However, because BDCT ignores the correlation between neighboring blocks, discontinuities may appear at block boundaries.
  • Conventional ways of removing compression-induced blocking artifacts and mosquito noise apply global denoising, for example bilateral filtering with uniform parameters over the whole image. Such uniform filtering still blurs detail-rich regions and lowers image quality, and because hardware resources are limited, a complex algorithm takes too long to run in real time. One purpose of this application is therefore to provide a new technical solution that solves one of these problems.
  • This application proposes a method based on human-eye saliency analysis, which reduces hardware resource consumption to an extent that is difficult for the human eye to perceive, and obtains a detail map through local entropy so that details are preserved as much as possible while denoising.
  • To this end, this application provides an image adaptive noise reduction method including the following steps: (1) dividing the original RGB image containing noise into multiple sub-blocks; (2) performing image space conversion of all the sub-blocks from RGB space to YCbCr space; (3) performing a saliency analysis on each converted sub-block to obtain a salient feature map for each; (4) determining whether the weight threshold of the salient feature map of the currently operated sub-block is greater than a salient standard value; if so, classifying the sub-block into the salient characteristic region, otherwise into the non-salient characteristic region, generating and recording the salient value of the sub-block, and traversing all salient feature maps to obtain the salient and non-salient characteristic regions; (5) computing a detail map for each sub-block of the salient characteristic region through local entropy, performing bilateral filtering on the Y channel of those sub-blocks and adaptively adjusting the filtering result with the detail map to output a first noise-reduced image, adjusting the Y-channel values of those sub-blocks with the detail map to output a second noise-reduced image, fusing the two noise-reduced images into a first image, outputting each sub-block of the non-salient characteristic region as a second image with its original pixel values, and fusing all the first and second images into a fused image; (6) performing inverse image space conversion of the fused image from YCbCr space to RGB space and outputting the final image.
  • This application also provides an image adaptive noise reduction method whose specific steps include: (1) dividing the original RGB image containing noise into multiple sub-blocks; (2) performing image space conversion of all the sub-blocks from RGB space to YCbCr space; (3) performing a saliency analysis on each converted sub-block to obtain a salient feature map for each; (4) performing salient threshold segmentation on all the salient feature maps with a salient standard value to obtain the salient and non-salient characteristic regions; (5) adaptively denoising the pixel values of each sub-block of the salient characteristic region and outputting a first image, outputting each sub-block of the non-salient characteristic region as a second image with its original pixel values, and fusing all the first and second images into a fused image; (6) performing inverse image space conversion of the fused image from YCbCr space to RGB space and outputting the final image.
  • This application further provides an image adaptive noise reduction device, including: an image division module for dividing the original RGB image containing noise into multiple sub-blocks; an image space conversion module for converting all the sub-blocks from RGB space to YCbCr space; a saliency analysis module for performing a saliency analysis on each converted sub-block to obtain a saliency weight map for each; a saliency segmentation module for performing threshold segmentation on all the salient feature maps with a salient standard value to obtain the salient and non-salient characteristic regions; an image output module for adaptively denoising the pixel values of each sub-block of the salient characteristic region and outputting a first image, controlling each sub-block of the non-salient characteristic region to output a second image with its original pixel values, and fusing the first and second images into a fused image; and an image space inverse conversion module for performing inverse image space conversion of the fused image from YCbCr space to RGB space and outputting the final image.
  • The beneficial effects of this application are as follows. The image is processed block by block: based on its salient characteristics, salient regions are adaptively denoised while denoising of non-salient regions is reduced, which improves image display quality without lowering perceived quality and saves algorithm running time and hardware resources. An image detail map is computed through local entropy and the bilateral filtering weight is adaptively adjusted according to the amount of detail, so details are preserved, the blurring of detailed regions caused by filtering is resolved, and sufficient noise reduction is achieved. The block-wise saliency analysis method can also be applied to other noise reduction algorithms and is generally applicable.
  • Fig. 1 is a flowchart of the image adaptive noise reduction method of this application.
  • Fig. 2 shows the processing result of the original image after block division.
  • Fig. 3 is a flowchart of the sub-steps of an embodiment of the saliency analysis of this application.
  • Fig. 4 is a salient feature map obtained after processing one sub-block in this application.
  • Fig. 5 is a schematic diagram of the algorithm of an embodiment for processing the salient characteristic region in this application.
  • Fig. 6 is a structural block diagram of the image adaptive noise reduction device of this application.
  • In the description of this application, unless otherwise clearly specified and limited, the terms "installed", "connected", and "coupled" should be understood in a broad sense: a connection may be a supporting connection, a detachable connection, or an integral connection; it may be mechanical or electrical; it may be direct or indirect through an intermediate medium; and it may be internal communication between two components.
  • Please refer to Figs. 1-5 together, in which Fig. 1 is a flowchart of the image adaptive noise reduction method of this application, Fig. 2 is the processing result of the original image after block division, Fig. 3 is a flowchart of the sub-steps of an embodiment of the saliency analysis of this application, Fig. 4 is a salient feature map obtained after processing one sub-block, and Fig. 5 is a schematic diagram of the algorithm of an embodiment for processing the salient characteristic region.
  • As shown in Fig. 1, the present application provides an image adaptive noise reduction method whose specific steps include: S11: dividing the original RGB image containing noise into multiple sub-blocks; S12: performing image space conversion of all the sub-blocks from RGB space to YCbCr space; S13: performing a saliency analysis on each converted sub-block to obtain a salient feature map for each; S14: performing salient threshold segmentation on all the salient feature maps with a salient standard value to obtain the salient and non-salient characteristic regions; S15: adaptively denoising the pixel values of each sub-block of the salient characteristic region and outputting a first image, outputting each sub-block of the non-salient characteristic region as a second image with its original pixel values, and fusing all the first and second images into a fused image; S16: performing inverse image space conversion of the fused image from YCbCr space to RGB space and outputting the final image. A detailed explanation follows.
  • Regarding step S11: the original RGB image containing noise is divided into multiple sub-blocks.
  • Specifically, the image is divided into blocks of a given size, so that the noisy original RGB image is split into multiple disjoint sub-blocks of the same size, as shown in Fig. 2. The sub-block size can be set according to the actual image size and the available hardware resources; for example, a 768x512 image can be divided into 36 sub-blocks of 96x128 pixels for subsequent processing, but the method is not limited to this embodiment. A sketch of the block division is given below.
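  • As a minimal sketch of this block-division step, the snippet below splits an image array into disjoint, equally sized sub-blocks; the function name `split_into_blocks` and the NumPy-based representation are illustrative assumptions, not part of the patent.

```python
import numpy as np

def split_into_blocks(img: np.ndarray, block_h: int, block_w: int):
    """Split an H x W x C image into disjoint sub-blocks of equal size.

    Assumes the image dimensions are multiples of the block size, as in the
    768x512 example in the text; the block size itself is a free parameter.
    """
    h, w = img.shape[:2]
    assert h % block_h == 0 and w % block_w == 0, "block size must tile the image"
    blocks = []
    for top in range(0, h, block_h):
        for left in range(0, w, block_w):
            blocks.append(((top, left), img[top:top + block_h, left:left + block_w]))
    return blocks
```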
  • Regarding step S12: image space conversion from RGB space to YCbCr space is performed on all the sub-blocks.
  • Specifically, the conversion parameters can be set as a fixed conversion matrix (given as an image in the original publication); a sketch of such a conversion follows.
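  • Because the patent's exact matrix is only reproduced as an image, the sketch below assumes the common full-range BT.601 (JPEG-style) coefficients; an implementation should substitute the matrix actually specified in the patent.

```python
import numpy as np

# Full-range BT.601 (JPEG) coefficients: an assumption standing in for the
# patent's own conversion matrix, which is shown only as an image.
_RGB2YCBCR = np.array([[ 0.299,     0.587,     0.114   ],
                       [-0.168736, -0.331264,  0.5     ],
                       [ 0.5,      -0.418688, -0.081312]])

def rgb_to_ycbcr(rgb: np.ndarray) -> np.ndarray:
    ycbcr = rgb.astype(np.float64) @ _RGB2YCBCR.T
    ycbcr[..., 1:] += 128.0           # offset the chroma channels
    return ycbcr

def ycbcr_to_rgb(ycbcr: np.ndarray) -> np.ndarray:
    rgb = (ycbcr - np.array([0.0, 128.0, 128.0])) @ np.linalg.inv(_RGB2YCBCR).T
    return np.clip(rgb, 0.0, 255.0)
```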
  • Regarding step S13: a saliency analysis is performed on each converted sub-block, and a salient feature map is obtained for each.
  • Specifically, as shown in Fig. 3, step S13 further includes:
  • S31: obtaining the channel values of the three channels Y, Cb, and Cr of each sub-block, denoted Y(x), Cb(x), Cr(x), where x is the center coordinate of the sub-block.
  • S32: from the channel values obtained in step S31, calculating the channel means of the three channels by formulas (1)-(3), denoted Y_ave, Cb_ave, Cr_ave, i.e. Y_ave = (1/N) Σ_x Y(x), Cb_ave = (1/N) Σ_x Cb(x), Cr_ave = (1/N) Σ_x Cr(x), where N is the total number of sub-blocks.
  • S33: calculating, by formula (4), the Euclidean distance between the Y, Cb, Cr channel values of each sub-block and the corresponding channel means, recorded as the saliency weight w: w(x) = ||(Y(x), Cb(x), Cr(x)) - (Y_ave, Cb_ave, Cr_ave)||_2.
  • S34: normalizing each saliency weight by formula (5), w_norm(x) = w(x) / w_Max, to obtain the corresponding normalized weight and hence the salient feature map, where w_Max is the maximum of all the saliency weights and x is the center coordinate of the sub-block.
  • As shown in Fig. 4, the current sub-block is processed into a salient feature map. The salient feature map of the image shows that it extracts the regions the human eye pays more attention to. Because hardware resources are limited, noise reduction can be applied to the strongly attended regions, and resources can be saved by reducing noise reduction in weakly attended regions. A sketch of the saliency computation follows.
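  • The snippet below is a compact sketch of formulas (1)-(5); it treats each sub-block's Y, Cb, Cr "channel value" as the per-channel mean of the block, which is one plausible reading of Y(x), Cb(x), Cr(x), and the helper name `saliency_weights` is illustrative.

```python
import numpy as np

def saliency_weights(blocks):
    """blocks: list of ((top, left), ycbcr_block) pairs.

    Returns one normalized saliency weight per sub-block.
    """
    # Per-block channel values (here: per-channel means), shape N x 3.
    feats = np.array([blk.reshape(-1, 3).mean(axis=0) for _, blk in blocks])
    mean = feats.mean(axis=0)                 # (Y_ave, Cb_ave, Cr_ave), formulas (1)-(3)
    w = np.linalg.norm(feats - mean, axis=1)  # Euclidean distance, formula (4)
    return w / w.max()                        # normalization by w_Max, formula (5)
```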
  • Regarding step S14: salient threshold segmentation is performed on all the salient feature maps with a salient standard value to obtain the salient characteristic region and the non-salient characteristic region.
  • Specifically, all the salient feature maps are segmented with a salient standard value α. Salient threshold segmentation means determining whether the weight threshold of the salient feature map of the currently operated sub-block is greater than the salient standard value, thereby generating and recording the salient value of the sub-block; the salient feature maps of all sub-blocks are traversed to obtain the salient and non-salient characteristic regions.
  • If the weight threshold of the salient feature map of the current sub-block is greater than α, the sub-block is classified into the salient characteristic region and its salient value R(x) is recorded as 1; otherwise it is classified into the non-salient characteristic region and R(x) is recorded as 0. Here α ∈ [0, 1]; its value depends on the specific hardware resources, and α = 0.6 in the described embodiment.
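  • The segmentation itself reduces to a comparison against α; the minimal sketch below uses the α = 0.6 mentioned for the embodiment, with the function name chosen for illustration only.

```python
import numpy as np

def segment_salient(w_norm: np.ndarray, alpha: float = 0.6) -> np.ndarray:
    """R(x) = 1 for salient sub-blocks (normalized weight above alpha), else 0."""
    return (w_norm > alpha).astype(int)
```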
  • Regarding step S15: the pixel values of the sub-blocks of the salient characteristic region are adaptively denoised and output as the first image, the sub-blocks of the non-salient characteristic region are output as the second image with their original pixel values, and all the first and second images are fused to obtain the fused image.
  • Specifically, as shown in Fig. 5, step S15 further includes: S51: computing a detail map for each sub-block of the salient characteristic region through local entropy; S52: performing bilateral filtering on the Y channel of all sub-blocks of the salient characteristic region, adaptively adjusting the filtering result with the detail map, and outputting a first noise-reduced image; S53: adjusting the Y-channel values of all sub-blocks of the salient characteristic region with the detail map and outputting a second noise-reduced image; S54: fusing the first and second noise-reduced images to obtain the first image.
  • In step S51, the local entropy is calculated by formula (6), E = -Σ_i P_i log(P_i), with P_i = Hist[i] / Σ_j Hist[j]. The local entropy indicates the amount of detail: the higher the value, the more texture or detail the region contains. Here x is the center coordinate of the sub-block; P_i is the probability that a pixel in the local window Ω has gray level i, i.e. the count of gray level i divided by the total number of pixels in the window; i is the current pixel gray value; j ranges over the other gray values; Hist[i] is the histogram count of gray value i, that is, the number of pixels with gray level i in the local window Ω. The size of the local window Ω can be set to 5x5 or 7x7. A sketch of this computation follows.
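  • The snippet below is a direct, unoptimized sketch of the local-entropy detail map; it evaluates a per-pixel entropy over a sliding 5x5 window, which is one possible reading of how the detail map is built (the text does not state whether entropy is taken per pixel or once per sub-block).

```python
import numpy as np

def local_entropy(gray: np.ndarray, win: int = 5) -> np.ndarray:
    """Per-pixel local entropy E = -sum_i P_i * log2(P_i) over a win x win window,
    with P_i = Hist[i] / sum_j Hist[j]; `gray` is expected as uint8 (0..255)."""
    pad = win // 2
    padded = np.pad(gray, pad, mode="reflect")
    out = np.zeros(gray.shape, dtype=np.float64)
    for r in range(gray.shape[0]):
        for c in range(gray.shape[1]):
            patch = padded[r:r + win, c:c + win]
            hist = np.bincount(patch.ravel(), minlength=256).astype(np.float64)
            p = hist[hist > 0] / hist.sum()
            out[r, c] = -(p * np.log2(p)).sum()
    return out
```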
  • In step S52, the bilateral filtering is calculated according to formula (7), in which x is the center coordinate of the sub-block, y ranges over the other coordinates of the template window, I(x) and I(y) are the pixel values at those coordinates, N(x) is the neighborhood of pixel x, C is a normalization constant, σ_d is the standard deviation of the geometric (spatial) distance, and σ_r is the standard deviation of the gray-level distance; the two standard deviations respectively control the attenuation rates of the spatial and gray-level terms.
  • In flat regions the pixel differences are small, so the range weight is close to 1 and the spatial weight dominates, which is equivalent to applying Gaussian blurring to the region. In edge regions the pixel differences are large, so the range coefficient decreases; the lower the kernel response, the less the current pixel is affected, and the detail information of the edge is therefore preserved. A numerical sketch of formula (7) follows.
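  • The snippet below is a plain NumPy rendering of formula (7): each output pixel is a normalized sum of its neighbors, weighted by a spatial Gaussian (σ_d) and a gray-level Gaussian (σ_r); the window size and sigma values are illustrative defaults, not taken from the patent.

```python
import numpy as np

def bilateral_filter(y: np.ndarray, win: int = 5,
                     sigma_d: float = 2.0, sigma_r: float = 20.0) -> np.ndarray:
    """Bilateral filtering of a single (Y) channel."""
    pad = win // 2
    y = y.astype(np.float64)
    padded = np.pad(y, pad, mode="reflect")
    ax = np.arange(-pad, pad + 1)
    xx, yy = np.meshgrid(ax, ax)
    spatial = np.exp(-(xx ** 2 + yy ** 2) / (2.0 * sigma_d ** 2))   # geometric term
    out = np.zeros_like(y)
    for r in range(y.shape[0]):
        for c in range(y.shape[1]):
            patch = padded[r:r + win, c:c + win]
            rng = np.exp(-((patch - y[r, c]) ** 2) / (2.0 * sigma_r ** 2))  # gray-level term
            w = spatial * rng
            out[r, c] = (w * patch).sum() / w.sum()  # C is the sum of the weights
    return out
```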
  • In formula (8), which produces the fused output, x is the center coordinate of the sub-block, R(x) is the saliency value of the corresponding sub-block, E(x) is the local entropy of the corresponding sub-block, and Y_in(x) is the channel value of the input Y channel.
  • In other words, the detail map obtained from the local entropy adaptively adjusts the bilateral filtering: for regions with more detail, the output mainly depends on the Y channel of the original image so that detail is not lost; for regions with less detail, the output mainly depends on the filtered image; and for non-salient regions the output is the original pixel value. One plausible reading of this fusion rule is sketched below.
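  • Formula (8) itself is reproduced only as an image in the source, so the sketch below encodes one plausible reading of the described behavior: non-salient sub-blocks pass through unchanged, and within salient sub-blocks a normalized detail weight blends the original and filtered Y values. The entropy normalization and the function name are assumptions, not the patent's exact formula.

```python
import numpy as np

def fuse(y_in: np.ndarray, y_filtered: np.ndarray,
         entropy: np.ndarray, salient: bool) -> np.ndarray:
    """Blend original and bilaterally filtered Y values inside one sub-block."""
    if not salient:                                # R(x) = 0: keep original pixels
        return y_in.astype(np.float64)
    e = entropy / max(float(entropy.max()), 1e-6)  # scale the detail map to [0, 1]
    return e * y_in + (1.0 - e) * y_filtered       # more detail -> closer to the original
```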
  • Regarding step S16: inverse image space conversion from YCbCr space to RGB space is performed on the fused image, and the final image is output.
  • Specifically, the conversion parameters can be set as a fixed inverse conversion matrix (given as an image in the original publication).
  • The block-wise saliency analysis method can also be applied to other noise reduction algorithms and is therefore generally applicable.
  • FIG. 6 is a structural block diagram of the image adaptive noise reduction device of the present application. As shown in Fig. 6, the present application also discloses an image adaptive noise reduction device, including: an image division module 61, an image space conversion module 62, a saliency analysis module 63, a saliency segmentation module 64, an image output module 65, and an image space inverse conversion module 66.
  • The image division module 61 is used to divide the original RGB image containing noise into multiple sub-blocks. Specifically, it divides the image into blocks of a given size, splitting the noisy original RGB image into multiple disjoint sub-blocks of the same size; the sub-block size can be set according to the actual image size and hardware resources.
  • The image space conversion module 62 is configured to perform image space conversion of all the sub-blocks from RGB space to YCbCr space. Specifically, it may perform matrix operations, with the conversion parameters set as a fixed matrix (given as an image in the original publication).
  • The saliency analysis module 63 is configured to perform a saliency analysis on each converted sub-block and obtain a saliency weight map for each. Specifically, it obtains the Y, Cb, Cr channel values of each sub-block, computes the channel means of the three channels from those values, computes the Euclidean distance between each sub-block's Y, Cb, Cr channel values and the corresponding channel means as the saliency weight, and normalizes each saliency weight to obtain the normalized weight and hence the salient feature map.
  • The saliency segmentation module 64 is configured to perform threshold segmentation on all the salient feature maps with a salient standard value to obtain the salient and non-salient characteristic regions. Specifically, it determines whether the weight threshold of the salient feature map of the currently operated sub-block is greater than the salient standard value; if so, the sub-block is classified into the salient characteristic region, otherwise into the non-salient characteristic region, and the salient value of the sub-block is generated and recorded; all salient feature maps are traversed to obtain the two regions.
  • The image output module 65 is configured to adaptively denoise the pixel values of each sub-block of the salient characteristic region and output the first image, to control each sub-block of the non-salient characteristic region to output the second image with its original pixel values, and to fuse the first and second images into the fused image. Specifically, it computes a detail map for each sub-block of the salient characteristic region through local entropy; performs bilateral filtering on the Y channels of all those sub-blocks, adaptively adjusting the filtering result with the detail map to output the first noise-reduced image; adjusts the Y-channel values of those sub-blocks with the detail map to output the second noise-reduced image; and fuses the two noise-reduced images to obtain the first image.
  • The image space inverse conversion module 66 is configured to perform inverse image space conversion of the fused image from YCbCr space to RGB space and output the final image. Specifically, it may perform matrix operations, with the conversion parameters set as a fixed matrix (given as an image in the original publication). A sketch wiring the modules together follows.
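  • Putting the pieces together, the sketch below chains the hypothetical helpers from the previous snippets in the order of modules 61-66; the block size, α, and filter parameters are illustrative only.

```python
import numpy as np

def adaptive_denoise(rgb: np.ndarray, block_h: int = 128, block_w: int = 96,
                     alpha: float = 0.6) -> np.ndarray:
    ycbcr = rgb_to_ycbcr(rgb)                             # module 62
    blocks = split_into_blocks(ycbcr, block_h, block_w)   # module 61
    r = segment_salient(saliency_weights(blocks), alpha)  # modules 63-64
    out = ycbcr.copy()                                    # module 65: per-block output
    for ((top, left), blk), salient in zip(blocks, r):
        y = blk[..., 0]
        if salient:
            e = local_entropy(y.astype(np.uint8))
            y = fuse(y, bilateral_filter(y), e, True)
        out[top:top + block_h, left:left + block_w, 0] = y
    return ycbcr_to_rgb(out)                              # module 66
```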
  • The image adaptive noise reduction method of this application can be applied to a display terminal, which may be a smart phone, a tablet computer, a TV, or another device. Specifically, the display terminal includes a processor and a memory that are electrically connected.
  • The processor is the control center of the display terminal; it connects the various parts of the terminal through interfaces and lines, and by running or loading the application programs stored in the memory and calling the data stored in the memory, it executes the various functions of the terminal and processes data, thereby monitoring the terminal as a whole.
  • In this embodiment, the processor of the display terminal loads the instructions corresponding to the processes of one or more application programs into the memory according to the steps of the image adaptive noise reduction method of this application, and runs the application programs stored in the memory to realize the various functions.
  • An embodiment of this application provides a storage medium in which multiple instructions are stored; the instructions can be loaded by a processor to execute the steps of any image adaptive noise reduction method provided in the embodiments of this application.
  • The storage medium may include a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disk, and so on.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Processing (AREA)
  • Picture Signal Circuits (AREA)
  • Facsimile Image Signal Circuits (AREA)

Abstract

This application discloses an image adaptive noise reduction method and device. The method includes: evenly dividing the original image into multiple sub-blocks; performing color-space conversion on each sub-block; performing saliency analysis to obtain salient feature maps; applying salient threshold segmentation to the salient feature maps with a salient standard value to separate salient and non-salient characteristic regions; adaptively filtering the salient regions and fusing the result with the original image of the non-salient regions; and finally performing inverse color-space conversion on the fused image and outputting the final image. By processing the image block by block and exploiting its salient characteristics, this application reduces denoising in non-salient regions, saving algorithm running time and hardware resources.

Description

Image adaptive noise reduction method and device

Technical Field
This application relates to the field of image processing technology, and in particular to an image adaptive noise reduction method and device.
Background Art
Block-based Discrete Cosine Transform (BDCT) coding is widely used in compression, including image and video compression standards such as JPEG and H.264. However, because BDCT ignores the correlation between neighboring blocks, discontinuities may appear at block boundaries.
Technical Problem
Conventional ways of removing compression-induced blocking artifacts and mosquito noise use global denoising, for example bilateral filtering with uniform parameters over the whole image. Such uniform bilateral filtering still blurs detail-rich regions and lowers image quality. Moreover, because hardware resources are limited, a complex algorithm leads to long processing times and cannot run in real time.
Technical Solution
In view of the deficiencies in the above problems, one purpose of this application is to provide a new technical solution that solves one of them. This application proposes a method based on human-eye saliency analysis, which reduces hardware resource consumption to an extent that is difficult for the human eye to perceive, and obtains a detail map through local entropy so that details are preserved as much as possible while denoising.
To achieve the above purpose, this application provides an image adaptive noise reduction method including the following steps: (1) dividing the original RGB image containing noise into multiple sub-blocks; (2) performing image space conversion of all the sub-blocks from RGB space to YCbCr space; (3) performing a saliency analysis on each converted sub-block to obtain a salient feature map for each; (4) determining whether the weight threshold of the salient feature map of the currently operated sub-block is greater than a salient standard value; if so, classifying the sub-block into the salient characteristic region, otherwise into the non-salient characteristic region, generating and recording the salient value of the sub-block, and traversing all salient feature maps to obtain the salient and non-salient characteristic regions; (5) computing a detail map for each sub-block of the salient characteristic region through local entropy; performing bilateral filtering on the Y channel of all sub-blocks of the salient characteristic region and adaptively adjusting the bilateral filtering result with the detail map to output a first noise-reduced image; adjusting the Y-channel values of all sub-blocks of the salient characteristic region with the detail map to output a second noise-reduced image; fusing the first and second noise-reduced images to obtain a first image; outputting each sub-block of the non-salient characteristic region as a second image with its original pixel values; and fusing all the first and second images to obtain a fused image; (6) performing inverse image space conversion of the fused image from YCbCr space to RGB space and outputting the final image.
To achieve the above purpose, this application also provides an image adaptive noise reduction method whose specific steps include: (1) dividing the original RGB image containing noise into multiple sub-blocks; (2) performing image space conversion of all the sub-blocks from RGB space to YCbCr space; (3) performing a saliency analysis on each converted sub-block to obtain a salient feature map for each; (4) performing salient threshold segmentation on all the salient feature maps with a salient standard value to obtain the salient and non-salient characteristic regions; (5) adaptively denoising the pixel values of each sub-block of the salient characteristic region and outputting a first image, outputting each sub-block of the non-salient characteristic region as a second image with its original pixel values, and fusing all the first and second images to obtain a fused image; (6) performing inverse image space conversion of the fused image from YCbCr space to RGB space and outputting the final image.
To achieve the above purpose, this application further provides an image adaptive noise reduction device, including: an image division module for dividing the original RGB image containing noise into multiple sub-blocks; an image space conversion module for converting all the sub-blocks from RGB space to YCbCr space; a saliency analysis module for performing a saliency analysis on each converted sub-block to obtain a saliency weight map for each; a saliency segmentation module for performing threshold segmentation on all the salient feature maps with a salient standard value to obtain the salient and non-salient characteristic regions; an image output module for adaptively denoising the pixel values of each sub-block of the salient characteristic region and outputting a first image, controlling each sub-block of the non-salient characteristic region to output a second image with its original pixel values, and fusing the first and second images to obtain a fused image; and an image space inverse conversion module for performing inverse image space conversion of the fused image from YCbCr space to RGB space and outputting the final image.
Beneficial Effects
The beneficial effects of this application are as follows. The image is processed block by block: based on its salient characteristics, salient regions are adaptively denoised while denoising of non-salient regions is reduced, which improves image display quality without lowering human perceived quality and saves algorithm running time and hardware resources. An image detail map is computed through local entropy, and the bilateral filtering weight is adaptively adjusted according to the amount of detail, so details are preserved, the blurring of detailed regions caused by filtering is resolved, and sufficient noise reduction is achieved. The block-wise saliency analysis method can also be applied to other noise reduction algorithms and is generally applicable.
Brief Description of the Drawings
To explain the technical solutions in the embodiments of this application more clearly, the drawings needed for describing the embodiments are briefly introduced below. Obviously, the drawings described below show only some embodiments of this application; those skilled in the art can obtain other drawings from them without creative effort.
Fig. 1 is a flowchart of the image adaptive noise reduction method of this application.
Fig. 2 is the processing result of the original image after block division.
Fig. 3 is a flowchart of the sub-steps of an embodiment of the saliency analysis of this application.
Fig. 4 is a salient feature map obtained after processing one sub-block in this application.
Fig. 5 is a schematic diagram of the algorithm of an embodiment for processing the salient characteristic region in this application.
Fig. 6 is a structural block diagram of the image adaptive noise reduction device of this application.
Embodiments of the Invention
The specific structural and functional details disclosed here are merely representative and serve the purpose of describing exemplary embodiments of this application. However, this application can be embodied in many alternative forms and should not be construed as being limited only to the embodiments set forth here.
In the description of this application, it should be understood that the orientations or positional relationships indicated by terms such as "center", "lateral", "upper", "lower", "left", "right", "vertical", "horizontal", "top", "bottom", "inner", and "outer" are based on the orientations or positional relationships shown in the drawings; they are used only to facilitate and simplify the description of this application, and do not indicate or imply that the referenced device or element must have a specific orientation or be constructed and operated in a specific orientation, so they cannot be understood as limiting this application. In addition, the terms "first" and "second" are used only for descriptive purposes and cannot be understood as indicating or implying relative importance or implicitly indicating the number of the technical features referred to; features limited by "first" or "second" may therefore explicitly or implicitly include one or more such features. In the description of this application, unless otherwise stated, "multiple" means two or more. Furthermore, the term "comprising" and any variant thereof are intended to cover a non-exclusive inclusion.
In the description of this application, it should be noted that, unless otherwise clearly specified and limited, the terms "installed", "connected", and "coupled" should be understood in a broad sense; for example, a connection may be a supporting connection, a detachable connection, or an integral connection; it may be mechanical or electrical; it may be direct or indirect through an intermediate medium; and it may be internal communication between two elements. For those of ordinary skill in the art, the specific meanings of the above terms in this application can be understood according to the specific circumstances.
The terminology used here is intended only to describe specific embodiments and is not intended to limit the exemplary embodiments. Unless the context clearly indicates otherwise, the singular forms "a" and "an" used here are also intended to include the plural. It should also be understood that the terms "comprising" and/or "including" used here specify the presence of the stated features, integers, steps, operations, units, and/or components, without excluding the presence or addition of one or more other features, integers, steps, operations, units, components, and/or combinations thereof.
This application is further described below with reference to the drawings and embodiments.
Please refer to Figs. 1-5 together, in which Fig. 1 is a flowchart of the image adaptive noise reduction method of this application, Fig. 2 is the processing result of the original image after block division, Fig. 3 is a flowchart of the sub-steps of an embodiment of the saliency analysis of this application, Fig. 4 is a salient feature map obtained after processing one sub-block in this application, and Fig. 5 is a schematic diagram of the algorithm of an embodiment for processing the salient characteristic region in this application.
As shown in Fig. 1, this application provides an image adaptive noise reduction method whose specific steps include: S11: dividing the original RGB image containing noise into multiple sub-blocks; S12: performing image space conversion of all the sub-blocks from RGB space to YCbCr space; S13: performing a saliency analysis on each converted sub-block to obtain a salient feature map for each; S14: performing salient threshold segmentation on all the salient feature maps with a salient standard value to obtain the salient and non-salient characteristic regions; S15: adaptively denoising the pixel values of each sub-block of the salient characteristic region and outputting a first image, outputting each sub-block of the non-salient characteristic region as a second image with its original pixel values, and fusing all the first and second images to obtain a fused image; S16: performing inverse image space conversion of the fused image from YCbCr space to RGB space and outputting the final image. A detailed explanation is given below.
Regarding step S11: the original RGB image containing noise is divided into multiple sub-blocks.
Specifically, the image is divided into blocks of a given size, splitting the noisy original RGB image into multiple disjoint sub-blocks of the same size, as shown in Fig. 2. The sub-block size can be set according to the actual image size and hardware resources; for example, a 768x512 image can be divided into 36 sub-blocks of 96x128 pixels for subsequent processing, but the method is not limited to this embodiment.
Regarding step S12: image space conversion from RGB space to YCbCr space is performed on all the sub-blocks.
Specifically, the conversion parameters can be set as:
[conversion matrix, shown as an image in the original publication]
Regarding step S13: a saliency analysis is performed on each converted sub-block, and a salient feature map is obtained for each.
Specifically, as shown in Fig. 3, step S13 further includes:
S31: obtaining the channel values of the three channels Y, Cb, and Cr of each sub-block, denoted Y(x), Cb(x), Cr(x), where x is the center coordinate of the sub-block.
S32: from the channel values obtained in step S31, calculating the channel means of the three channels Y, Cb, and Cr by formulas (1)-(3), denoted Y_ave, Cb_ave, Cr_ave:
Y_ave = (1/N) Σ_x Y(x)    (1)
Cb_ave = (1/N) Σ_x Cb(x)    (2)
Cr_ave = (1/N) Σ_x Cr(x)    (3)
where N is the total number of sub-blocks.
S33: calculating by formula (4) the Euclidean distance between the Y, Cb, Cr channel values of each sub-block and the corresponding channel means, recorded as the saliency weight w:
w(x) = ||(Y(x), Cb(x), Cr(x)) - (Y_ave, Cb_ave, Cr_ave)||_2    (4)
S34: normalizing each saliency weight by formula (5) to obtain the corresponding normalized weight and hence the salient feature map:
w_norm(x) = w(x) / w_Max    (5)
where w_Max is the maximum of all the saliency weights and x is the center coordinate of the sub-block.
As shown in Fig. 4, the current sub-block is processed into a salient feature map. The salient feature map of the image shows that it extracts the regions the human eye pays more attention to. Because hardware resources are limited, noise reduction can be applied to the strongly attended regions, and resources can be saved by reducing noise reduction in the weakly attended regions.
Regarding step S14: salient threshold segmentation is performed on all the salient feature maps with a salient standard value to obtain the salient and non-salient characteristic regions.
Specifically, all the salient feature maps are segmented with a salient standard value α. Salient threshold segmentation means determining whether the weight threshold of the salient feature map of the currently operated sub-block is greater than the salient standard value, thereby generating and recording the salient value of the sub-block; the salient feature maps of all sub-blocks are traversed to obtain the salient and non-salient characteristic regions. If the weight threshold of the salient feature map of the current sub-block is greater than α, it is classified into the salient characteristic region and its salient value R(x) is recorded as 1; otherwise it is classified into the non-salient characteristic region and R(x) is recorded as 0, as shown in the following formula:
R(x) = 1 if w_norm(x) > α; R(x) = 0 otherwise
where α ∈ [0, 1]; the value of α depends on the specific hardware resources, and α = 0.6 in this embodiment.
Regarding step S15: the pixel values of the sub-blocks of the salient characteristic region are adaptively denoised and output as the first image, the sub-blocks of the non-salient characteristic region are output as the second image with their original pixel values, and all the first and second images are fused to obtain the fused image.
Specifically, as shown in Fig. 5, step S15 further includes: S51: computing a detail map for each sub-block of the salient characteristic region through local entropy; S52: performing bilateral filtering on the Y channel of all sub-blocks of the salient characteristic region, adaptively adjusting the filtering result with the detail map, and outputting a first noise-reduced image; S53: adjusting the Y-channel values of all sub-blocks of the salient characteristic region with the detail map and outputting a second noise-reduced image; S54: fusing the first and second noise-reduced images to obtain the first image.
Further, in step S51 the local entropy is calculated by formula (6):
E = -Σ_i P_i log(P_i)    (6)
The local entropy indicates the amount of detail: the higher the value, the more texture or detail the region contains, where
P_i = Hist[i] / Σ_j Hist[j]
x is the center coordinate of the sub-block; P_i is the probability that a pixel in the local window Ω has gray level i, i.e. the count of gray level i divided by the total number of pixels in the window; i is the current pixel gray value; j ranges over the other gray values; Hist[i] is the histogram of gray value i, that is, the number of pixels with gray level i in the local window Ω. The size of the local window Ω can be set to 5x5 or 7x7.
Further, in step S52 the bilateral filtering is calculated according to formula (7):
I_filtered(x) = (1/C) Σ_{y∈N(x)} exp(-||x-y||² / (2σ_d²)) · exp(-(I(x)-I(y))² / (2σ_r²)) · I(y)    (7)
where
C = Σ_{y∈N(x)} exp(-||x-y||² / (2σ_d²)) · exp(-(I(x)-I(y))² / (2σ_r²))
x is the center coordinate of the sub-block and y ranges over the other coefficient coordinates of the template window; I(x) and I(y) are the pixel values at those coordinates; N(x) is the neighborhood of pixel x; C is a normalization constant; σ_d is the standard deviation of the geometric distance and σ_r is the standard deviation of the gray-level distance, which respectively control the attenuation rates of the geometric-distance and gray-level-distance terms. In flat regions the pixel differences are small, the corresponding range weight is close to 1, and the spatial weight dominates, which is equivalent to applying Gaussian blurring to the region; in edge regions the pixel differences are large and the range coefficient decreases, so the kernel response drops, the current pixel is less affected, and the edge detail is preserved.
Further, in step S15 the fused image is obtained by formula (8):
[formula (8), shown as an image in the original publication]
where x is the center coordinate of the sub-block, R(x) is the salient value of the corresponding sub-block, E(x) is the local entropy of the corresponding sub-block, the term shown as an image in the original denotes the result of the bilateral filtering operation, and Y_in(x) is the channel value of the input Y channel.
Clearly, the detail map obtained from the local entropy adaptively adjusts the bilateral filtering in the output image: for regions with more detail, to avoid loss of detail, the output mainly depends on the Y channel of the original image; for regions with less detail, the output mainly depends on the filtered image; and for regions without salient characteristics the output is the original pixel value.
Regarding step S16: inverse image space conversion from YCbCr space to RGB space is performed on the fused image, and the final image is output.
Specifically, the conversion parameters can be set as:
[inverse conversion matrix, shown as an image in the original publication]
This application has the following advantages:
1. The image is processed block by block: based on its salient characteristics, salient regions are adaptively denoised while denoising of non-salient regions is reduced, which improves image display quality without lowering human perceived quality and saves algorithm running time and hardware resources.
2. An image detail map is computed through local entropy, and the weight of the bilateral filtering is adaptively adjusted according to the amount of detail, so details are preserved, the blurring of detailed regions caused by filtering is resolved, and sufficient noise reduction is achieved.
3. The block-wise saliency analysis method can be applied to other noise reduction algorithms and is generally applicable.
Please refer to Fig. 6, a structural block diagram of the image adaptive noise reduction device of this application. As shown in Fig. 6, this application also discloses an image adaptive noise reduction device, including: an image division module 61, an image space conversion module 62, a saliency analysis module 63, a saliency segmentation module 64, an image output module 65, and an image space inverse conversion module 66.
The image division module 61 is used to divide the original RGB image containing noise into multiple sub-blocks. Specifically, the image division module 61 divides the image into blocks of a given size, splitting the noisy original RGB image into multiple disjoint sub-blocks of the same size. The sub-block size can be set according to the actual image size and hardware resources.
The image space conversion module 62 is used to perform image space conversion of all the sub-blocks from RGB space to YCbCr space. Specifically, the image space conversion module 62 may perform matrix operations, and the conversion parameters can be set as:
[conversion matrix, shown as an image in the original publication]
The saliency analysis module 63 is used to perform a saliency analysis on each converted sub-block and obtain a saliency weight map for each. Specifically, the saliency analysis module 63 obtains the channel values of the three channels Y, Cb, and Cr of each sub-block; computes the channel means of the three channels from the obtained channel values; computes the Euclidean distance between each sub-block's Y, Cb, Cr channel values and the corresponding channel means, recorded as the saliency weight; and normalizes each saliency weight to obtain the corresponding normalized weight and hence the salient feature map.
The saliency segmentation module 64 is used to perform threshold segmentation on all the salient feature maps with a salient standard value to obtain the salient and non-salient characteristic regions. Specifically, the saliency segmentation module 64 determines whether the weight threshold of the salient feature map of the currently operated sub-block is greater than the salient standard value; if so, the sub-block is classified into the salient characteristic region, otherwise into the non-salient characteristic region, and the salient value of the sub-block is generated and recorded; all salient feature maps are traversed to obtain the salient and non-salient characteristic regions.
The image output module 65 is used to adaptively denoise the pixel values of each sub-block of the salient characteristic region and output the first image, to control each sub-block of the non-salient characteristic region to output the second image with its original pixel values, and to fuse the first and second images to obtain the fused image. Specifically, the image output module 65 is further used to: compute a detail map for each sub-block of the salient characteristic region through local entropy; perform bilateral filtering on the Y channels of all sub-blocks of the salient characteristic region, adaptively adjust the filtering result with the detail map, and output the first noise-reduced image; adjust the Y-channel values of all sub-blocks of the salient characteristic region with the detail map and output the second noise-reduced image; and fuse the first and second noise-reduced images to obtain the first image.
The image space inverse conversion module 66 is used to perform inverse image space conversion of the fused image from YCbCr space to RGB space and output the final image. Specifically, the image space inverse conversion module 66 may perform matrix operations, and the conversion parameters can be set as:
[inverse conversion matrix, shown as an image in the original publication]
The image adaptive noise reduction method of this application can be applied to a display terminal, which may be a smart phone, a tablet computer, a TV, or another device. Specifically, the display terminal includes a processor and a memory that are electrically connected. The processor is the control center of the display terminal; it connects the various parts of the terminal through interfaces and lines, and by running or loading the application programs stored in the memory and calling the data stored in the memory, it executes the various functions of the display terminal and processes data, thereby monitoring the display terminal as a whole.
In this embodiment, the processor of the display terminal loads the instructions corresponding to the processes of one or more application programs into the memory according to the steps of the image adaptive noise reduction method of this application, and runs the application programs stored in the memory to realize the various functions.
Those of ordinary skill in the art can understand that all or part of the steps in the various methods of the above embodiments can be completed through instructions, or by instructions controlling the relevant hardware; the instructions can be stored in a computer-readable storage medium and loaded and executed by a processor. For this purpose, an embodiment of this application provides a storage medium in which multiple instructions are stored; the instructions can be loaded by a processor to execute the steps of any image adaptive noise reduction method provided in the embodiments of this application.
The storage medium may include a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disk, and so on.
In summary, although this application has been disclosed above with preferred embodiments, the preferred embodiments are not intended to limit this application. Those of ordinary skill in the art can make various changes and modifications without departing from the spirit and scope of this application, so the protection scope of this application is defined by the claims.

Claims (17)

  1. An image adaptive noise reduction method, comprising the following steps:
    (1) dividing an original RGB image containing noise into multiple sub-blocks;
    (2) performing image space conversion of all the sub-blocks from RGB space to YCbCr space;
    (3) performing a saliency analysis on each converted sub-block to obtain a salient feature map for each;
    (4) determining whether the weight threshold of the salient feature map of the currently operated sub-block is greater than a salient standard value; if so, classifying the sub-block into a salient characteristic region, otherwise into a non-salient characteristic region; generating and recording the salient value of the sub-block; and traversing all salient feature maps to obtain the salient characteristic region and the non-salient characteristic region;
    (5) computing a detail map for each sub-block of the salient characteristic region through local entropy; performing bilateral filtering on the Y channel of all sub-blocks of the salient characteristic region and adaptively adjusting the bilateral filtering result with the detail map to output a first noise-reduced image; adjusting the Y-channel values of all sub-blocks of the salient characteristic region with the detail map to output a second noise-reduced image; fusing the first and second noise-reduced images to obtain a first image; outputting each sub-block of the non-salient characteristic region as a second image with its original pixel values; and fusing all the first images and the second images to obtain a fused image;
    (6) performing inverse image space conversion of the fused image from YCbCr space to RGB space, and outputting a final image.
  2. The image adaptive noise reduction method of claim 1, wherein step (3) further comprises:
    (31) obtaining the channel values of the three channels Y, Cb, and Cr of each sub-block, denoted Y(x), Cb(x), Cr(x), where x is the center coordinate of the sub-block;
    (32) from the channel values obtained in step (31), calculating the channel means of the three channels Y, Cb, and Cr by formulas (1)-(3), denoted Y_ave, Cb_ave, Cr_ave:
    Y_ave = (1/N) Σ_x Y(x)    (1)
    Cb_ave = (1/N) Σ_x Cb(x)    (2)
    Cr_ave = (1/N) Σ_x Cr(x)    (3)
    and where N is the total number of sub-blocks;
    (33) calculating by formula (4) the Euclidean distance between the Y, Cb, Cr channel values of each sub-block and the corresponding channel means, recorded as the saliency weight w:
    w(x) = ||(Y(x), Cb(x), Cr(x)) - (Y_ave, Cb_ave, Cr_ave)||_2    (4)
    (34) normalizing each saliency weight by formula (5) to obtain the corresponding normalized weight and hence the salient feature map:
    w_norm(x) = w(x) / w_Max    (5)
    and where w_Max is the maximum of all the saliency weights and x is the center coordinate of the sub-block.
  3. The image adaptive noise reduction method of claim 1, wherein in step (51) the local entropy is calculated by formula (6):
    E = -Σ_i P_i log(P_i)    (6)
    and where
    P_i = Hist[i] / Σ_j Hist[j]
    x is the center coordinate of the sub-block; P_i is the probability that a pixel in the local window Ω has gray level i relative to the total number of pixels in the window; i is the current pixel gray value; j ranges over the other gray values; and Hist[i] is the histogram of gray value i.
  4. The image adaptive noise reduction method of claim 1, wherein in step (52) the bilateral filtering is calculated by formula (7):
    I_filtered(x) = (1/C) Σ_{y∈N(x)} exp(-||x-y||² / (2σ_d²)) · exp(-(I(x)-I(y))² / (2σ_r²)) · I(y)    (7)
    and where
    C = Σ_{y∈N(x)} exp(-||x-y||² / (2σ_d²)) · exp(-(I(x)-I(y))² / (2σ_r²))
    x is the center coordinate of the sub-block and y ranges over the other coefficient coordinates of the template window; I(x) and I(y) are the pixel values at those coordinates; N(x) is the neighborhood of pixel x; C is a normalization constant; σ_d is the standard deviation of the geometric distance and σ_r is the standard deviation of the gray-level distance.
  5. The image adaptive noise reduction method of claim 1, wherein step (5) calculates the filtered output result by formula (8):
    [formula (8), shown as an image in the original publication]
    and where x is the center coordinate of the sub-block, R(x) is the salient value of the corresponding sub-block, E(x) is the local entropy of the corresponding sub-block, the term shown as an image in the original denotes the result of the bilateral filtering operation, and Y_in(x) is the channel value of the input Y channel.
  6. The image adaptive noise reduction method of claim 1, wherein the conversion parameters used for the image space conversion in step (1) are set as:
    [conversion matrix, shown as an image in the original publication]
  7. The image adaptive noise reduction method of claim 1, wherein the conversion parameters used for the inverse image space conversion in step (6) are set as:
    [inverse conversion matrix, shown as an image in the original publication]
  8. An image adaptive noise reduction method, comprising the following steps:
    (1) dividing an original RGB image containing noise into multiple sub-blocks;
    (2) performing image space conversion of all the sub-blocks from RGB space to YCbCr space;
    (3) performing a saliency analysis on each converted sub-block to obtain a salient feature map for each;
    (4) performing salient threshold segmentation on all the salient feature maps with a salient standard value to obtain a salient characteristic region and a non-salient characteristic region;
    (5) adaptively denoising the pixel values of each sub-block of the salient characteristic region and outputting a first image, outputting each sub-block of the non-salient characteristic region as a second image with its original pixel values, and fusing all the first images and the second images to obtain a fused image;
    (6) performing inverse image space conversion of the fused image from YCbCr space to RGB space, and outputting a final image.
  9. The image adaptive noise reduction method of claim 8, wherein step (3) further comprises:
    (31) obtaining the channel values of the three channels Y, Cb, and Cr of each sub-block, denoted Y(x), Cb(x), Cr(x), where x is the center coordinate of the sub-block;
    (32) from the channel values obtained in step (31), calculating the channel means of the three channels Y, Cb, and Cr by formulas (1)-(3), denoted Y_ave, Cb_ave, Cr_ave:
    Y_ave = (1/N) Σ_x Y(x)    (1)
    Cb_ave = (1/N) Σ_x Cb(x)    (2)
    Cr_ave = (1/N) Σ_x Cr(x)    (3)
    and where N is the total number of sub-blocks;
    (33) calculating by formula (4) the Euclidean distance between the Y, Cb, Cr channel values of each sub-block and the corresponding channel means, recorded as the saliency weight w:
    w(x) = ||(Y(x), Cb(x), Cr(x)) - (Y_ave, Cb_ave, Cr_ave)||_2    (4)
    (34) normalizing each saliency weight by formula (5) to obtain the corresponding normalized weight and hence the salient feature map:
    w_norm(x) = w(x) / w_Max    (5)
    and where w_Max is the maximum of all the saliency weights and x is the center coordinate of the sub-block.
  10. The image adaptive noise reduction method of claim 8, wherein step (4) further comprises:
    (41) determining whether the weight threshold of the salient feature map of the currently operated sub-block is greater than the salient standard value; if so, classifying the sub-block into the salient characteristic region, otherwise into the non-salient characteristic region, and generating and recording the salient value of the sub-block;
    (42) traversing all salient feature maps to obtain the salient characteristic region and the non-salient characteristic region.
  11. The image adaptive noise reduction method of claim 8, wherein step (5) further comprises:
    (51) computing a detail map for each sub-block of the salient characteristic region through local entropy;
    (52) performing bilateral filtering on the Y channel of all sub-blocks of the salient characteristic region and adaptively adjusting the bilateral filtering result with the detail map to output a first noise-reduced image;
    (53) adjusting the Y-channel values of all sub-blocks of the salient characteristic region with the detail map to output a second noise-reduced image;
    (54) fusing the first noise-reduced image and the second noise-reduced image to obtain the first image.
  12. The image adaptive noise reduction method of claim 11, wherein in step (51) the local entropy is calculated by formula (6):
    E = -Σ_i P_i log(P_i)    (6)
    and where
    P_i = Hist[i] / Σ_j Hist[j]
    x is the center coordinate of the sub-block; P_i is the probability that a pixel in the local window Ω has gray level i relative to the total number of pixels in the window; i is the current pixel gray value; j ranges over the other gray values; and Hist[i] is the histogram of gray value i.
  13. The image adaptive noise reduction method of claim 11, wherein in step (52) the bilateral filtering is calculated by formula (7):
    I_filtered(x) = (1/C) Σ_{y∈N(x)} exp(-||x-y||² / (2σ_d²)) · exp(-(I(x)-I(y))² / (2σ_r²)) · I(y)    (7)
    and where
    C = Σ_{y∈N(x)} exp(-||x-y||² / (2σ_d²)) · exp(-(I(x)-I(y))² / (2σ_r²))
    x is the center coordinate of the sub-block and y ranges over the other coefficient coordinates of the template window; I(x) and I(y) are the pixel values at those coordinates; N(x) is the neighborhood of pixel x; C is a normalization constant; σ_d is the standard deviation of the geometric distance and σ_r is the standard deviation of the gray-level distance.
  14. The image adaptive noise reduction method of claim 8, wherein step (5) calculates the filtered output result by formula (8):
    [formula (8), shown as an image in the original publication]
    and where x is the center coordinate of the sub-block, R(x) is the salient value of the corresponding sub-block, E(x) is the local entropy of the corresponding sub-block, the term shown as an image in the original denotes the result of the bilateral filtering operation, and Y_in(x) is the channel value of the input Y channel.
  15. The image adaptive noise reduction method of claim 8, wherein the conversion parameters used for the image space conversion in step (1) are set as:
    [conversion matrix, shown as an image in the original publication]
  16. The image adaptive noise reduction method of claim 8, wherein the conversion parameters used for the inverse image space conversion in step (6) are set as:
    [inverse conversion matrix, shown as an image in the original publication]
  17. An image adaptive noise reduction device, comprising:
    an image division module for dividing an original RGB image containing noise into multiple sub-blocks;
    an image space conversion module for performing image space conversion of all the sub-blocks from RGB space to YCbCr space;
    a saliency analysis module for performing a saliency analysis on each converted sub-block to obtain a saliency weight map for each;
    a saliency segmentation module for performing threshold segmentation on all the salient feature maps with a salient standard value to obtain a salient characteristic region and a non-salient characteristic region;
    an image output module for adaptively denoising the pixel values of each sub-block of the salient characteristic region and outputting a first image, controlling each sub-block of the non-salient characteristic region to output a second image with its original pixel values, and fusing the first image and the second image to obtain a fused image;
    an image space inverse conversion module for performing inverse image space conversion of the fused image from YCbCr space to RGB space and outputting a final image.
PCT/CN2020/071307 2019-12-25 2020-01-10 图像自适应降噪方法及装置 WO2021128498A1 (zh)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US16/646,054 US11348204B2 (en) 2019-12-25 2020-01-10 Image adaptive noise reduction method and device thereof

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201911352759.5 2019-12-25
CN201911352759.5A CN111161177B (zh) 2019-12-25 2019-12-25 图像自适应降噪方法和装置

Publications (1)

Publication Number Publication Date
WO2021128498A1 true WO2021128498A1 (zh) 2021-07-01

Family

ID=70556545

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/071307 WO2021128498A1 (zh) 2019-12-25 2020-01-10 图像自适应降噪方法及装置

Country Status (3)

Country Link
US (1) US11348204B2 (zh)
CN (1) CN111161177B (zh)
WO (1) WO2021128498A1 (zh)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112435182B (zh) * 2020-11-17 2024-05-10 浙江大华技术股份有限公司 图像降噪方法及装置
CN113763275A (zh) * 2021-09-09 2021-12-07 深圳市文立科技有限公司 一种自适应图像降噪方法、系统及可读存储介质
CN117576139B (zh) * 2024-01-17 2024-04-05 深圳市致佳仪器设备有限公司 一种基于双边滤波的边缘及角点检测方法及系统

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20100104180A1 (en) * 2008-10-28 2010-04-29 Novatek Microelectronics Corp. Image noise reduction method and image processing apparatus using the same
CN103679661A (zh) * 2013-12-25 2014-03-26 北京师范大学 一种基于显著性分析的自适应遥感图像融合方法
CN104751415A (zh) * 2013-12-31 2015-07-01 展讯通信(上海)有限公司 一种图像去噪和增强的方法、装置及图像处理系统
CN109754374A (zh) * 2018-12-20 2019-05-14 深圳市资福医疗技术有限公司 一种去除图像亮度噪声的方法及装置

Family Cites Families (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP5034003B2 (ja) * 2007-06-25 2012-09-26 オリンパス株式会社 画像処理装置
US8457433B2 (en) * 2010-01-28 2013-06-04 Texas Instruments Incorporated Methods and systems for image noise filtering
US7983511B1 (en) * 2010-11-29 2011-07-19 Adobe Systems Incorporated Methods and apparatus for noise reduction in digital images
CN102663714B (zh) * 2012-03-28 2014-06-25 中国人民解放军国防科学技术大学 基于显著性的红外图像强固定模式噪声抑制方法
US9911179B2 (en) * 2014-07-18 2018-03-06 Dolby Laboratories Licensing Corporation Image decontouring in high dynamic range video processing
US9852353B2 (en) * 2014-11-12 2017-12-26 Adobe Systems Incorporated Structure aware image denoising and noise variance estimation
CN106296638A (zh) * 2015-06-04 2017-01-04 欧姆龙株式会社 显著性信息取得装置以及显著性信息取得方法
CN104978718A (zh) * 2015-06-12 2015-10-14 中国科学院深圳先进技术研究院 一种基于图像熵的视频雨滴去除方法及系统
CN104978720A (zh) * 2015-07-01 2015-10-14 深圳先进技术研究院 一种视频图像雨滴去除方法及装置
US9747514B2 (en) * 2015-08-31 2017-08-29 Apple Inc. Noise filtering and image sharpening utilizing common spatial support
US10467496B2 (en) * 2015-08-31 2019-11-05 Apple Inc. Temporal filtering of independent color channels in image data
US9626745B2 (en) * 2015-09-04 2017-04-18 Apple Inc. Temporal multi-band noise reduction
CN105243652B (zh) * 2015-11-19 2019-06-07 Tcl集团股份有限公司 图像降噪的方法及装置
CN105957054B (zh) * 2016-04-20 2019-03-19 北京航空航天大学 一种图像变化检测方法
US10038862B2 (en) * 2016-05-02 2018-07-31 Qualcomm Incorporated Methods and apparatus for automated noise and texture optimization of digital image sensors
CN109389560B (zh) * 2018-09-27 2022-07-01 深圳开阳电子股份有限公司 一种自适应加权滤波图像降噪方法、装置及图像处理设备

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20100104180A1 (en) * 2008-10-28 2010-04-29 Novatek Microelectronics Corp. Image noise reduction method and image processing apparatus using the same
CN103679661A (zh) * 2013-12-25 2014-03-26 北京师范大学 一种基于显著性分析的自适应遥感图像融合方法
CN104751415A (zh) * 2013-12-31 2015-07-01 展讯通信(上海)有限公司 一种图像去噪和增强的方法、装置及图像处理系统
CN109754374A (zh) * 2018-12-20 2019-05-14 深圳市资福医疗技术有限公司 一种去除图像亮度噪声的方法及装置

Also Published As

Publication number Publication date
US11348204B2 (en) 2022-05-31
CN111161177A (zh) 2020-05-15
US20220051368A1 (en) 2022-02-17
CN111161177B (zh) 2023-09-26

Similar Documents

Publication Publication Date Title
WO2021128498A1 (zh) 图像自适应降噪方法及装置
CN108921800B (zh) 基于形状自适应搜索窗口的非局部均值去噪方法
Kim et al. Optimized contrast enhancement for real-time image and video dehazing
CN104156921B (zh) 一种低照度或亮度不均图像的自适应图像增强方法
WO2020125631A1 (zh) 视频压缩方法、装置和计算机可读存储介质
KR101437195B1 (ko) 코딩된 화상 및 영상에서 블록 아티팩트 검출
US9495582B2 (en) Digital makeup
WO2016206087A1 (zh) 一种低照度图像处理方法和装置
Li et al. Visual-salience-based tone mapping for high dynamic range images
JP3465226B2 (ja) 画像濃度変換処理方法
WO2018082185A1 (zh) 图像处理方法和装置
KR102523505B1 (ko) 역 톤 매핑을 위한 방법 및 장치
US8249380B2 (en) Image processor and program
WO2023123927A1 (zh) 图像增强方法、装置、设备和存储介质
WO2022179335A1 (zh) 视频处理方法、装置、电子设备以及存储介质
US20100278423A1 (en) Methods and systems for contrast enhancement
CN109767408B (zh) 图像处理方法、装置、存储介质及计算机设备
US10235741B2 (en) Image correction apparatus and image correction method
WO2020124873A1 (zh) 图像处理方法
CN109255752B (zh) 图像自适应压缩方法、装置、终端及存储介质
Hou et al. Underwater image dehazing and denoising via curvature variation regularization
CN107292834B (zh) 红外图像细节增强方法
CN112150368A (zh) 图像处理方法、装置、电子设备及计算机可读存储介质
Mu et al. Low and non-uniform illumination color image enhancement using weighted guided image filtering
WO2019223428A1 (zh) 有损压缩编码方法、装置和系统级芯片

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20908088

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20908088

Country of ref document: EP

Kind code of ref document: A1