WO2016165112A1 - Contrast adaptive video denoising system - Google Patents

Contrast adaptive video denoising system

Info

Publication number
WO2016165112A1
Authority
WO
WIPO (PCT)
Prior art keywords
contrast
pixel
motion
mean
window
Prior art date
Application number
PCT/CN2015/076783
Other languages
English (en)
French (fr)
Inventor
郭若杉
叶璐
韩睿
汤仁君
罗杨
颜奉丽
汤晓莉
Original Assignee
中国科学院自动化研究所
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 中国科学院自动化研究所 filed Critical 中国科学院自动化研究所
Priority to PCT/CN2015/076783 priority Critical patent/WO2016165112A1/zh
Priority to US15/557,082 priority patent/US10614554B2/en
Publication of WO2016165112A1 publication Critical patent/WO2016165112A1/zh

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 - Image enhancement or restoration
    • G06T5/70 - Denoising; Smoothing
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 - Image enhancement or restoration
    • G06T5/20 - Image enhancement or restoration using local operators
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/20 - Analysis of motion
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00 - Details of television systems
    • H04N5/14 - Picture signal circuitry for video frequency region
    • H04N5/21 - Circuitry for suppressing or minimising disturbance, e.g. moiré or halo
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/20 - Special algorithmic details
    • G06T2207/20172 - Image enhancement details
    • G06T2207/20182 - Noise reduction or smoothing in the temporal domain; Spatio-temporal filtering
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/20 - Special algorithmic details
    • G06T2207/20172 - Image enhancement details
    • G06T2207/20201 - Motion blur correction

Definitions

  • the present invention relates to the field of video processing technologies, and in particular, to the technical field of time domain noise reduction for video.
  • CMOS: complementary metal-oxide-semiconductor (image sensor)
  • CCD: charge-coupled device (image sensor)
  • Because imaging devices (CMOS and CCD sensors) are affected by noise during acquisition, video often contains random noise, which is especially severe under low illumination, so video denoising technology is needed to remove it.
  • In addition, with mobile internet and increasingly diverse video sources, various video sources, including internet video captured with handheld devices, need to be played and displayed on display terminals such as televisions. Because of their limited sensor area, the cameras of handheld devices produce poorer image quality and more severe noise than the large-area sensors of professional camera equipment, so video denoising technology has become especially important.
  • Video noise reduction techniques include spatial noise reduction and temporal noise reduction.
  • Spatial noise reduction uses simple spatial filters such as mean filtering and median filtering, which often blur details.
  • Temporal noise reduction preserves details better and is therefore more widely adopted by industry.
  • The traditional temporal noise reduction method is shown in FIG. 1: the inter-frame difference is computed from the current input frame and the previous filtered frame, and motion detection is performed by comparing the inter-frame difference with a threshold, i.e., pixels whose inter-frame difference is greater than the threshold are motion pixels and pixels whose inter-frame difference is smaller than the threshold are still pixels.
  • The motion detection result then guides the temporal filtering of the current input frame and the previous filtered frame: for a still region, multi-frame weighted temporal filtering is performed to achieve denoising; for a motion region, no temporal filtering is performed.
  • Motion detection generally produces two types of errors.
  • The first type is missed detection, i.e., a motion pixel is judged to be a still pixel.
  • This kind of detection error applies multi-frame weighted temporal filtering to motion regions as well, resulting in moving-target trailing or inter-frame blur distortion.
  • The second type is false alarm: still pixels are misclassified as motion pixels. Such detection errors prevent still regions from being temporally filtered, so the noise present in still regions cannot be removed. If the motion detection threshold is high, missed-detection errors easily occur; if the threshold is low, false alarms easily occur.
  • When the noise satisfies a white Gaussian distribution, such threshold-based methods guarantee that more than 95% of still pixels are not detected as motion pixels, i.e., the false-alarm error rate is below 5%, but the missed-detection error rate cannot be controlled.
  • For low-contrast motion video (i.e., video in which the brightness difference between the moving target and the background is small), this way of choosing the threshold causes a large number of missed-detection errors, i.e., motion regions are not detected and are therefore filtered in the temporal domain, producing moving-target trailing and inter-frame blur, which is subjectively more harmful to image quality than noise that is not removed.
  • To control missed-detection errors in low-contrast regions so that these regions do not suffer from moving-target trailing and inter-frame blur, the present invention proposes a contrast adaptive video denoising system, thereby achieving a better denoising effect and ensuring the clarity of the video.
  • A contrast adaptive video denoising system includes a frame store, an inter-frame difference feature calculation module, a motion detection module, a motion adaptive temporal filtering module, and further includes a contrast calculation module and a low-contrast region detection module.
  • The contrast calculation module calculates and outputs the local contrast C of the current input frame I according to the current input frame I.
  • The low-contrast region detection module calculates and outputs the low-contrast region confidence R_LC according to the local contrast C of the current input frame I.
  • The motion detection module calculates and outputs the motion probability R_Motion of each pixel according to the low-contrast region confidence R_LC and the inter-frame difference output by the inter-frame difference feature calculation module.
  • The contrast calculation module includes a horizontal gradient calculation unit, a gradient threshold calculation unit, a transition-band detection unit, a left-mean calculation unit, a right-mean calculation unit, and an absolute-difference calculation unit.
  • The horizontal gradient calculation unit transforms the current input frame into the horizontal gradient image G.
  • The gradient threshold calculation unit calculates and outputs the gradient threshold Gt according to the horizontal gradient image G.
  • The transition-band detection unit calculates the non-transition-band flag α of the pixel to be detected according to the horizontal gradient image G and the gradient threshold Gt, and divides the local window around the pixel to be detected into a left window and a right window.
  • The left-mean calculation unit calculates the gray-level mean left_mean of the non-transition-band pixels in the left window according to the current input frame and the non-transition-band flag α; the right-mean calculation unit calculates the gray-level mean right_mean of the non-transition-band pixels in the right window according to the current input frame and the non-transition-band flag α; the absolute-difference calculation unit takes the absolute value of the difference between left_mean and right_mean as the local contrast C of the current input frame I.
  • By providing local contrast calculation and contrast-adaptive motion detection, the present invention can adaptively determine the motion detection parameters according to the contrast, thereby achieving the following beneficial effects: (1) for low-contrast motion video or low-contrast moving-object regions in a video, the occurrence of missed-detection errors can be effectively controlled, avoiding moving-object trailing distortion under low contrast; (2) for high-contrast motion video or high-contrast regions in a video, the occurrence of false-alarm errors can be effectively controlled, ensuring a good denoising effect for such video or regions.
  • FIG. 1 is a schematic diagram of a conventional video temporal noise reduction system;
  • FIG. 2A is a schematic diagram of the moving object in the motion-pixel statistical analysis of the embodiment;
  • FIG. 2B is a schematic diagram of the image of the moving object at time t in the motion-pixel statistical analysis of the embodiment;
  • FIG. 3A is MAE feature distribution curve 1 of still pixels and motion pixels;
  • FIG. 3B is MAE feature distribution curve 2 of still pixels and motion pixels;
  • FIG. 4 is a schematic diagram of the contrast adaptive video denoising system of the embodiment;
  • FIG. 5A is a schematic diagram of the contrast calculation module of the embodiment;
  • FIG. 5B is a plot of column coordinate j versus pixel gray value for the pixels on a horizontal line crossing the boundary between the moving object and the background;
  • FIG. 5C is a schematic diagram of the contrast calculation method of the embodiment;
  • FIG. 6A is MAE feature distribution curve 1 of still pixels and motion pixels;
  • FIG. 6B is MAE feature distribution curve 2 of still pixels and motion pixels;
  • FIG. 7A is schematic diagram 1 of the relationship curve between R_LC and X;
  • FIG. 7B is schematic diagram 2 of the relationship curve between R_LC and X;
  • FIG. 7C is schematic diagram 3 of the relationship curve between R_LC and X;
  • FIG. 8 is a schematic diagram of the soft-threshold motion detection curve.
  • The motion detection methods adopted in the traditional techniques consider only the statistical distribution of still pixels, so the occurrence of missed-detection errors cannot be controlled.
  • To control missed-detection errors, the present invention analyzes the statistical distribution of motion pixels as follows, from which the influence of the brightness difference between the moving object and the background (i.e., the contrast) on motion detection can be seen quantitatively.
  • As shown in FIG. 2A and FIG. 2B, the moving object is the circle in FIG. 2A.
  • Assume the gray value of the background is B, the gray difference between the moving object and the background, i.e., the contrast, is C, and the noise is n.
  • In the presence of noise, the gray value of the moving object is C+B+n, and the surrounding white area is the background with gray value B+n.
  • The noise n follows a zero-mean Gaussian distribution with variance σ_g².
  • FIG. 2A is the video frame at time t-1 and FIG. 2B is the video frame at time t; the moving object has moved between the two frames.
  • Let the image gray value at time t be g_t and the image gray value at time t-1 be g_{t-1}.
  • In the motion region, i.e., region A3 in FIG. 2B, the inter-frame pixel difference d = g_t - g_{t-1} follows a Gaussian distribution with mean -C and variance 2σ_g².
  • For motion detection, the local mean of the absolute inter-frame pixel difference y, known as the Mean Absolute Error (MAE), is generally used as the detection feature; the MAE feature is calculated as shown in Equation (8).
  • y_1, ..., y_k are the absolute inter-frame pixel differences of k locally adjacent pixels; the MAE feature m has the same mean as y and a variance equal to 1/k of the variance of y.
  • When the noise level is the same (σ_g = 2) and the contrast C is 8, the MAE features of motion pixels and still pixels have essentially no overlapping region and are well separable.
  • When the contrast C drops to 4, the MAE feature of motion pixels already overlaps considerably with that of still pixels.
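  • The overlap behaviour described above can be checked numerically. The following sketch is an illustrative Monte Carlo simulation only (the window size k = 5, the sample count, and the use of a plain 2x-noise threshold are assumptions made for the example, not values fixed by this document); it draws inter-frame differences for still and moving pixels at σ_g = 2 and compares the resulting MAE features at C = 8 and C = 4.

```python
import numpy as np

def mae_samples(contrast, sigma, k=5, trials=100_000, seed=0):
    """Simulate the MAE feature m = mean(|d_1|, ..., |d_k|) for moving pixels
    (inter-frame difference d ~ N(-contrast, 2*sigma^2)) and for still pixels
    (d ~ N(0, 2*sigma^2))."""
    rng = np.random.default_rng(seed)
    d_move = rng.normal(-contrast, np.sqrt(2.0) * sigma, size=(trials, k))
    d_still = rng.normal(0.0, np.sqrt(2.0) * sigma, size=(trials, k))
    return np.abs(d_move).mean(axis=1), np.abs(d_still).mean(axis=1)

for C in (8, 4):                                   # the cases of FIG. 3A and FIG. 3B
    m_move, m_still = mae_samples(C, sigma=2.0)
    thr = 2 * 2.0                                  # conventional 2x noise level
    print(f"C={C}: fraction of moving pixels below the 2*sigma threshold = "
          f"{(m_move < thr).mean():.3f}")
```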
  • Let the contrast C and the noise level σ_g be related by C = x·σ_g, where x is the contrast-to-noise ratio. Extensive data analysis shows that when x > 3, i.e., the contrast is more than 3 times the noise level, the overlap between the MAE features of motion pixels and still pixels is small and they are well separable; when x < 3, the overlapping region gradually grows as x decreases and the separability drops.
  • 2. Contrast adaptive video denoising system
  • The contrast adaptive video denoising system of this embodiment is shown in FIG. 4.
  • The system comprises a frame store, an inter-frame difference feature calculation module, a contrast calculation module, a low-contrast region detection module, a motion detection module, and a motion adaptive temporal filtering module; the frame store buffers the filtering results.
  • The inter-frame difference feature calculation module calculates and outputs the inter-frame difference feature from the current input frame of the video and the previous filtered frame in the frame store.
  • The contrast calculation module calculates the local contrast of the current input frame from the current input frame.
  • The low-contrast region detection module calculates and outputs the low-contrast region confidence from the local contrast of the current input frame; the motion detection module calculates the motion probability of each pixel from the low-contrast region confidence and the inter-frame difference output by the inter-frame difference feature calculation module; the motion adaptive temporal filtering module performs motion adaptive temporal filtering using the current input frame of the video, the previous filtered frame in the frame store, and the motion probability of each pixel, and finally stores the current filtered frame into the frame store.
  • The inter-frame difference feature calculation module calculates the inter-frame difference feature between the current input frame and the previous filtered frame in the frame store; many inter-frame difference features can be used, such as the simple difference, the absolute difference, the sum of absolute differences (SAD), and the mean absolute error (MAE).
  • The inter-frame difference feature used in this embodiment is the MAE feature, as defined by Equation (8).
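  • As a concrete illustration of Equation (8), the MAE feature for a whole frame can be obtained by box-averaging the absolute frame difference. The sketch below is not the patented implementation; the grayscale float input and the 1x5 horizontal averaging window are assumptions made for the example.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def mae_feature(curr_frame: np.ndarray, prev_filtered: np.ndarray,
                window=(1, 5)) -> np.ndarray:
    """MAE inter-frame difference feature m(i, j): the local mean of the
    absolute difference between the current input frame and the previous
    filtered frame, as in Equation (8)."""
    abs_diff = np.abs(curr_frame.astype(np.float32)
                      - prev_filtered.astype(np.float32))
    # Box average over the k = window[0] * window[1] neighbouring pixels.
    return uniform_filter(abs_diff, size=window, mode="nearest")
```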
  • The contrast calculation module includes a horizontal gradient calculation unit, a gradient threshold calculation unit, a transition-band detection unit, a left-mean calculation unit, a right-mean calculation unit, and an absolute-difference calculation unit.
  • The horizontal gradient calculation unit transforms the current input frame into the horizontal gradient image G; the gradient threshold calculation unit calculates and outputs the gradient threshold Gt from the horizontal gradient image G; the transition-band detection unit calculates and outputs, from the horizontal gradient image G and the gradient threshold Gt, the transition-band flag α of the pixel to be detected, and divides the local window around the pixel to be detected into a left window and a right window.
  • The left-mean calculation unit calculates the gray-level mean left_mean of the non-transition-band pixels in the left window from the current input frame and the transition-band flag α.
  • The right-mean calculation unit calculates and outputs the gray-level mean right_mean of the non-transition-band pixels in the right window from the current input frame and the transition-band flag α.
  • The absolute-difference calculation unit takes the absolute value of the difference between the gray-level mean left_mean of the non-transition-band pixels in the left window and the gray-level mean right_mean of the non-transition-band pixels in the right window as the local contrast C of the current input frame.
  • FIG. 5B shows, for the pixels on a horizontal line crossing the boundary between the moving object and the background, the correspondence between the column coordinate j (j denotes the horizontal coordinate of a pixel) and the pixel gray value: the gray value of the moving object is V1, the gray value of the background is V2, and there is a certain transition band between the moving object and the background.
  • The contrast calculation window of the pixel (i, j) to be detected in the current input frame I is (2N+1)x1; the pixel coordinates in the window are denoted (i, j+n), the pixels with -N ≤ n ≤ 0 belong to the left window,
  • and the pixels with 1 ≤ n ≤ N belong to the right window. The mean of the pixels in the left window is used to estimate V1, and the mean of the pixels in the right window is used to estimate V2. Since both the left and right windows contain part of the transition band,
  • the gray values of these transition-band pixels would distort the estimates of V1 and V2, so removing the transition-band pixels before computing the means gives values closer to the correct V1 and V2.
  • The specific steps for calculating the local contrast C of the current input frame I in the contrast calculation module are as follows. Step 11: calculate the horizontal gradient image G.
  • The horizontal gradient can be calculated by convolving the image with a gradient operator.
  • This embodiment uses a 3x3 Sobel gradient operator.
  • Step 12: calculate the gradient threshold Gt.
  • The gradient threshold Gt at the position (i, j) of the point to be detected is calculated as follows: take a local (2N+1)x1 window around the pixel (i, j) to be detected, and calculate the gradient maximum max_grad of the local window and the gradient threshold Gt, as shown in Equations (14) and (15).
  • max_grad(i,j) = max_{j-N ≤ n ≤ j+N} G(i,n)      (14)
  • Gt(i,j) = W * max_grad(i,j)      (15)
  • W is the proportionality coefficient relating the gradient threshold to the gradient maximum; W can be taken as 0.7.
  • Step 13: use the horizontal gradient image G and the gradient threshold Gt to perform transition-band detection, and calculate the non-transition-band flag α of the (2N+1)x1 local window of the pixel (i, j) to be detected, as in Equation (16): if the gradient G of a pixel in the local window is smaller than the gradient threshold Gt,
  • the pixel is a non-transition-band pixel.
  • Step 14: use the current input frame I and the non-transition-band flag α to calculate the gray-level mean left_mean of the non-transition-band pixels in the left window.
  • left_sum in Equation (17) is the sum of the gray values of the non-transition-band pixels in the left window,
  • left_count in Equation (18) is the number of non-transition-band pixels in the left window,
  • and Equation (19) calculates the gray-level mean of the non-transition-band pixels.
  • Step 15: use the image I and the non-transition-band flag α to calculate the gray-level mean of the non-transition-band pixels in the right window.
  • The pixels with coordinates (i, j+n) and 1 ≤ n ≤ N are the pixels of the right window.
  • The gray-level mean right_mean of the non-transition-band pixels in the right window is calculated as shown in Equations (20), (21), and (22); right_sum in Equation (20) is the sum of the gray values of the non-transition-band pixels in the right window,
  • right_count in Equation (21) is the number of non-transition-band pixels in the right window, and Equation (22) calculates the gray-level mean of the non-transition-band pixels in the right window.
  • Step 16: calculate the contrast C from the gray-level mean left_mean of the non-transition-band pixels in the left window and the gray-level mean right_mean of the non-transition-band pixels in the right window, as shown in Equation (23).
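  • The six steps above can be summarized in a per-pixel sketch. The code below is only an illustrative implementation under stated assumptions: the window half-width N, the use of the absolute horizontal Sobel response as the gradient, and leaving the contrast at 0 when a sub-window has no non-transition-band pixel are choices made for the example, not requirements of the patent.

```python
import numpy as np
from scipy.ndimage import convolve

SOBEL_X = np.array([[-1, 0, 1],
                    [-2, 0, 2],
                    [-1, 0, 1]], dtype=np.float32)

def local_contrast(frame: np.ndarray, N: int = 4, W: float = 0.7) -> np.ndarray:
    """Local contrast C(i, j) per Steps 11-16: horizontal Sobel gradient,
    per-pixel gradient threshold Gt = W * max_grad (Equations (14)-(15)),
    non-transition-band mask (Equation (16)), and the absolute difference of
    the left/right non-transition-band means (Equations (17)-(23))."""
    img = frame.astype(np.float32)
    G = np.abs(convolve(img, SOBEL_X, mode="nearest"))     # Step 11
    rows, cols = img.shape
    C = np.zeros_like(img)
    for i in range(rows):
        for j in range(N, cols - N):
            win_g = G[i, j - N:j + N + 1]                  # (2N+1)x1 window
            win_i = img[i, j - N:j + N + 1]
            Gt = W * win_g.max()                           # Step 12
            alpha = win_g < Gt                             # Step 13
            left_a, left_v = alpha[:N + 1], win_i[:N + 1]      # -N <= n <= 0
            right_a, right_v = alpha[N + 1:], win_i[N + 1:]    # 1 <= n <= N
            if left_a.any() and right_a.any():
                left_mean = left_v[left_a].mean()          # Step 14
                right_mean = right_v[right_a].mean()       # Step 15
                C[i, j] = abs(right_mean - left_mean)      # Step 16
    return C
```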
  • The low-contrast region detection module takes as input the contrast C calculated by the contrast calculation module and calculates the low-contrast region confidence R_LC.
  • The purpose of low-contrast region detection is that traditional motion detection methods produce missed-detection errors in low-contrast regions, resulting in moving-target trailing distortion.
  • When the contrast is high, the MAE features of motion pixels and still pixels have essentially no overlapping region, and the conventional noise-adaptive motion detection
  • method of taking 2 or 3 times the noise level as the motion detection threshold can be used without causing detection errors.
  • When the contrast C drops to 4,
  • the MAE feature of motion pixels already overlaps considerably with that of still pixels; if the conventional threshold of twice the noise level is used for detection, a large number of missed-detection errors will result.
  • This patent provides a method for detecting low-contrast regions; once they are detected, the contrast-adaptive motion detection of this patent can be used to avoid missed-detection errors.
  • The same contrast can affect motion detection differently under low noise and under high noise, and a noise-level-adaptive detection method removes this influence of the noise (compare FIG. 6A and FIG. 6B).
  • Step 21: calculate the contrast-to-noise ratio X, as shown in Equation (25).
  • Step 22: calculate the low-contrast region confidence R_LC from the contrast-to-noise ratio X.
  • FIG. 7A and FIG. 7B show two curves for calculating R_LC. Set the critical threshold X1 between the low-contrast and high-contrast regions and the low-confidence attenuation threshold X2, where X2 < X1. When X(i,j) ≤ X2, the confidence R_LC is 1; when X2 < X(i,j) < X1,
  • the confidence R_LC decreases monotonically from 1 to 0 as the contrast-to-noise ratio X(i,j) increases; and when X1 ≤ X(i,j), the confidence R_LC is 0.
  • FIG. 7C shows another curve for calculating R_LC that excludes the influence of smooth regions. Specifically, thresholds X1, X2, X3, and X4 are set, where X4 < X3 < X2 < X1; when X(i,j) ≤ X4 the confidence R_LC is 0, when X4 < X(i,j) < X3 it increases monotonically from 0 to 1 as X increases, when X3 ≤ X(i,j) ≤ X2 it is 1, when X2 < X(i,j) < X1 it decreases monotonically from 1 to 0 as X increases, and when X1 ≤ X(i,j) it is 0.
  • The motion detection module takes as input the inter-frame difference feature m from the inter-frame difference feature calculation and the low-contrast region confidence R_LC from the low-contrast region detection, performs contrast-adaptive motion detection, and outputs a motion probability.
  • In regions where the low-contrast confidence is high, there is a large overlap between the MAE features of motion pixels and still pixels.
  • In this case, the moving-target trailing distortion caused by missed-detection errors is more severe than the incompletely removed noise
  • caused by false-alarm errors, so in low-contrast regions this patent uses the low-contrast region confidence to adjust the motion detection parameters in order to control the occurrence of missed-detection errors.
  • Let the output of motion detection be the motion probability R_Motion.
  • FIG. 8 shows one motion detection method, where T1 and T2 are the soft thresholds for motion detection.
  • When m(i,j) < T1, the motion probability R_Motion of the pixel is 0.
  • When T1 ≤ m(i,j) ≤ T2, the motion probability R_Motion of the pixel increases monotonically from 0 to 1 as the inter-frame difference feature m increases.
  • When T2 < m(i,j), the motion probability R_Motion of the pixel is 1.
  • The low-contrast region confidence is used to adjust the motion detection parameters: in low-contrast regions, the parameters are adjusted to make it easier for pixels to be detected as motion pixels, thereby reducing the missed-detection rate. If the motion detection method of FIG. 8 is adopted, the adjustment is to reduce the motion detection thresholds, as shown in Equations (26) and (27).
  • T1Preset and T2Preset are preset motion detection parameters, which can be set according to traditional motion detection methods.
  • In low-contrast regions the thresholds are lowered, so the occurrence of missed-detection errors can be controlled.
  • The motion adaptive temporal filtering module takes as input the motion probability R_Motion calculated by the motion detection module, together with the current input frame image and the previous filtered frame in the frame store, and uses the
  • motion probability to guide the weighted filtering of the current image and the previous filtered frame.
  • Pixels with a large motion probability are not temporally weighted-filtered, and pixels with a small motion probability are temporally weighted-filtered. Therefore, while still regions are denoised, no moving-target trailing or temporal blur occurs in motion regions.
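  • A minimal sketch of this motion-adaptive blend is given below. The linear fade of the blending weight with R_Motion and the fixed weight given to the previous filtered frame for fully still pixels are assumptions of the example; the document only requires that pixels with a high motion probability are left unfiltered and pixels with a low motion probability are temporally weighted-filtered.

```python
import numpy as np

def motion_adaptive_temporal_filter(curr_frame, prev_filtered, r_motion,
                                    still_weight=0.5):
    """Blend the current frame with the previous filtered frame.
    r_motion is in [0, 1]: 1 = motion (keep the current pixel unchanged),
    0 = still (strongest temporal averaging)."""
    curr = np.asarray(curr_frame, dtype=np.float32)
    prev = np.asarray(prev_filtered, dtype=np.float32)
    # Weight given to the previous filtered frame: 0 for motion pixels,
    # up to `still_weight` for still pixels.
    w_prev = still_weight * (1.0 - np.asarray(r_motion, dtype=np.float32))
    return (1.0 - w_prev) * curr + w_prev * prev
```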

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Picture Signal Circuits (AREA)
  • Image Analysis (AREA)

Abstract

The present invention discloses a contrast adaptive video denoising system, comprising a frame store for buffering the filtering results, an inter-frame difference feature calculation module for calculating the inter-frame difference feature between the current input frame of the video and the previous filtered frame in the frame store, and a contrast calculation module for calculating the local contrast of the current input frame, which is then fed into a low-contrast region detection module; the calculated low-contrast region confidence and the inter-frame difference feature are fed together into a motion detection module, which calculates the motion probability of each pixel. A motion adaptive temporal filtering module performs motion adaptive temporal filtering using the current input frame of the video, the previous filtered frame in the frame store, and the motion probability of each pixel, and finally outputs the current filtered frame, which is stored into the frame store. The system solves the motion trailing and blur problems that traditional video denoising systems produce when processing low-contrast motion video.

Description

Contrast adaptive video denoising system

Technical Field

The present invention relates to the field of video processing technology, and in particular to the field of temporal noise reduction for video.

Background Art

Since imaging devices (CMOS and CCD sensors) are often affected by noise during acquisition, video often contains random noise, which is especially severe under low illumination, so video denoising technology is needed to remove the noise. In addition, with mobile internet and the increasing diversity of video sources, display terminals such as televisions need to play and display various video sources, including internet video captured with handheld devices. Because of their limited sensor area, the cameras of handheld mobile devices produce poorer image quality and more severe noise than the large-area sensors of professional camera equipment, so video denoising technology becomes especially important.

Video noise reduction techniques include spatial noise reduction and temporal noise reduction. Spatial noise reduction uses simple spatial filters such as mean filtering and median filtering, which often blur details. Temporal noise reduction preserves details better and is therefore more widely adopted by industry. The traditional temporal noise reduction method is shown in FIG. 1: the inter-frame difference is computed from the current input frame and the previous filtered frame, and motion detection is performed by comparing the inter-frame difference with a threshold, i.e., pixels whose inter-frame difference is greater than the threshold are motion pixels and pixels whose inter-frame difference is smaller than the threshold are still pixels; the motion detection result then guides the temporal filtering of the current input frame and the previous filtered frame: for still regions, multi-frame weighted temporal filtering is performed to achieve denoising, and for motion regions no temporal filtering is performed.

Motion detection generally produces two types of errors. The first type is missed detection, i.e., a motion pixel is judged to be a still pixel; this kind of detection error applies multi-frame weighted temporal filtering to motion regions as well, causing moving-target trailing or inter-frame blur distortion. The second type is false alarm, i.e., a still pixel is misclassified as a motion pixel; this kind of detection error prevents still regions from being temporally filtered, so the noise in still regions cannot be removed. If the motion detection threshold is high, missed-detection errors easily occur; if the threshold is low, false alarms easily occur.

Traditional motion detection methods, such as those proposed in patents US7903179B2 and US6061100 and in patent US 2006/0139494A1, often use a pre-specified global threshold or a noise-level-adaptive global threshold for motion detection. For example, US6061100 uses twice the noise level as the motion detection threshold: if the inter-frame difference is smaller than twice the noise level, the pixel is a still pixel; otherwise it is a motion pixel. Such motion detection methods consider only the statistical distribution of the features of still pixels. When the noise follows a white Gaussian distribution, this guarantees that more than 95% of still pixels are not detected as motion pixels, i.e., the false-alarm error rate is below 5%, but the missed-detection error rate cannot be controlled. For low-contrast motion video (i.e., video in which the brightness difference between the moving target and the background is small), this way of choosing the threshold causes a large number of missed-detection errors, i.e., motion regions are not detected, so that temporal filtering produces moving-target trailing and inter-frame blur, which is subjectively more harmful to image quality than noise that is not removed.

How to control missed-detection errors in low-contrast regions, so that low-contrast regions do not suffer from moving-target trailing and inter-frame blur, is the problem that needs to be solved.
Summary of the Invention

To solve the missed-detection errors in low-contrast regions, so that low-contrast regions do not suffer from moving-target trailing and inter-frame blur, the present invention proposes a contrast adaptive video denoising system, thereby achieving a better denoising effect and ensuring the clarity of the video.

To achieve the above purpose, the contrast adaptive video denoising system proposed by the present invention comprises a frame store, an inter-frame difference feature calculation module, a motion detection module, and a motion adaptive temporal filtering module, and further comprises a contrast calculation module and a low-contrast region detection module. The contrast calculation module calculates and outputs the local contrast C of the current input frame I according to the current input frame I; the low-contrast region detection module calculates and outputs the low-contrast region confidence R_LC according to the local contrast C of the current input frame I; the motion detection module calculates and outputs the motion probability R_Motion of each pixel according to the low-contrast region confidence R_LC and the inter-frame difference output by the inter-frame difference feature calculation module.

The contrast calculation module includes a horizontal gradient calculation unit, a gradient threshold calculation unit, a transition-band detection unit, a left-mean calculation unit, a right-mean calculation unit, and an absolute-difference calculation unit. The horizontal gradient calculation unit transforms the current input frame into the horizontal gradient image G; the gradient threshold calculation unit calculates and outputs the gradient threshold Gt according to the horizontal gradient image G; the transition-band detection unit calculates and outputs the transition-band flag α of the pixel to be detected according to the horizontal gradient image G and the gradient threshold Gt, and divides the local window around the pixel to be detected into a left window and a right window; the left-mean calculation unit calculates and outputs the gray-level mean left_mean of the non-transition-band pixels in the left window according to the current input frame and the non-transition-band flag α; the right-mean calculation unit calculates and outputs the gray-level mean right_mean of the non-transition-band pixels in the right window according to the current input frame and the transition-band flag α; the absolute-difference calculation unit calculates and outputs the absolute value of the difference between left_mean and right_mean as the local contrast C of the current input frame I.

By providing local contrast calculation and a contrast-adaptive motion detection system, the present invention can adaptively determine the motion detection parameters according to the contrast, thereby achieving the following beneficial effects:

(1) For low-contrast motion video or low-contrast moving-object regions in a video, the occurrence of missed-detection errors can be effectively controlled, thereby avoiding the trailing distortion of moving objects under low contrast.

(2) For high-contrast motion video or high-contrast regions in a video, the occurrence of false-alarm errors can be effectively controlled, thereby ensuring a good denoising effect for such video or regions.
Brief Description of the Drawings

FIG. 1 is a schematic diagram of a traditional video temporal noise reduction system;
FIG. 2A is a schematic diagram of the moving object in the motion-pixel statistical analysis of this embodiment;
FIG. 2B is a schematic diagram of the image of the moving object at time t in the motion-pixel statistical analysis of this embodiment;
FIG. 3A is MAE feature distribution curve 1 of still pixels and motion pixels;
FIG. 3B is MAE feature distribution curve 2 of still pixels and motion pixels;
FIG. 4 is a schematic diagram of the contrast adaptive video denoising system of this embodiment;
FIG. 5A is a schematic diagram of the contrast calculation module of this embodiment;
FIG. 5B is a plot of column coordinate j versus pixel gray value for the pixels on a horizontal line crossing the boundary between the moving object and the background;
FIG. 5C is a schematic diagram of the contrast calculation method of this embodiment;
FIG. 6A is MAE feature distribution curve 1 of still pixels and motion pixels;
FIG. 6B is MAE feature distribution curve 2 of still pixels and motion pixels;
FIG. 7A is schematic diagram 1 of the relationship curve between R_LC and X;
FIG. 7B is schematic diagram 2 of the relationship curve between R_LC and X;
FIG. 7C is schematic diagram 3 of the relationship curve between R_LC and X;
FIG. 8 is a schematic diagram of the soft-threshold motion detection curve.
Detailed Description of the Embodiments

To make the objectives, technical solutions, and advantages of the present invention clearer, the present invention is further described in detail below with reference to specific embodiments and the accompanying drawings.

1. Statistical distribution analysis of motion pixels and still pixels

In traditional techniques such as patents US 2006/0139494A1 and US6061100, the motion detection methods adopted consider only the statistical distribution of still pixels and therefore cannot control the occurrence of missed-detection errors. To control missed-detection errors, the present invention analyzes the statistical distribution of motion pixels as follows, from which the influence of the brightness difference between the moving object and the background (i.e., the contrast) on motion detection can be seen quantitatively.
As shown in FIG. 2A and FIG. 2B, the moving object is the circle in FIG. 2A. Assume the gray value of the background is B, the gray difference between the moving object and the background, i.e., the contrast, is C, and the noise is n. In the presence of noise, the gray value of the moving object is C+B+n, the surrounding white area is the background with gray value B+n, and the noise n follows a zero-mean Gaussian distribution with variance σ_g², i.e., n ~ N(0, σ_g²).
FIG. 2A is the video frame at time t-1, and FIG. 2B is the video frame at time t; the moving object has moved.

Let the image gray value at time t be g_t and the image gray value at time t-1 be g_{t-1}.

In the motion region, i.e., region A3 in FIG. 2B, the distribution of the inter-frame pixel difference d is derived as follows:

g_{t-1} = C + B + n      (1)

g_t = B + n      (2)

d = g_t - g_{t-1}      (3)

Then d follows the distribution

d ~ N(-C, 2σ_g²)      (4)
Let y = |d|; then the absolute inter-frame pixel difference y follows the folded-normal distribution

f(y) = 1/(2σ_g·√π) · [exp(-(y-C)²/(4σ_g²)) + exp(-(y+C)²/(4σ_g²))],  y ≥ 0      (5)

The mean of y is

E[y] = (2σ_g/√π)·exp(-C²/(4σ_g²)) + C·erf(C/(2σ_g))      (6)

The variance of y is

Var[y] = C² + 2σ_g² - (E[y])²      (7)
For motion detection, the local mean of the absolute inter-frame pixel difference y, known as the Mean Absolute Error (MAE), is usually used as the motion detection feature. The MAE feature is calculated as shown in Equation (8):

m(i,j) = (1/k)·(y_1 + y_2 + ... + y_k)      (8)

where y_1, ..., y_k are the absolute inter-frame pixel differences y of k locally adjacent pixels. The distribution of m has the same mean as y and a variance equal to 1/k of the variance of y, i.e.

E[m] = E[y]      (9)

Var[m] = Var[y]/k      (10)

For pixels in still regions (such as region A2 of the image at time t in FIG. 2B), the MAE feature follows the distribution given by Equations (9) and (10) in the special case C = 0:

E[m] = 2σ_g/√π      (11)

Var[m] = 2σ_g²·(1 - 2/π)/k      (12)
From the MAE feature distribution of motion pixels determined by Equations (9) and (10), and the MAE feature distribution of still pixels determined by Equations (11) and (12), the distribution curves for C = 8, σ_g = 2 can be obtained as shown in FIG. 3A, and the distribution curves for C = 4, σ_g = 2 as shown in FIG. 3B.

It can be seen from FIG. 3A and FIG. 3B that, for the same noise level (σ_g = 2), when the contrast C is 8 the MAE features of motion pixels and still pixels have essentially no overlapping region and are well separable, whereas when the contrast C drops to 4 the MAE feature of motion pixels already overlaps considerably with that of still pixels.

Let the relationship between the contrast C and the noise level σ_g be

C = x·σ_g      (13)

where x is the contrast-to-noise ratio. Extensive data analysis shows that when x > 3, i.e., the contrast is more than 3 times the noise level, the overlapping region of the MAE features of motion pixels and still pixels is small and they are well separable; when x < 3, the overlapping region gradually grows as x decreases and the separability drops.

2. Contrast adaptive video denoising system
From the above analysis it can be seen that when the contrast-to-noise ratio x is small, threshold-based motion detection will make detection errors no matter how the threshold is chosen: if the threshold is high, missed detections occur, and if the threshold is low, false alarms occur. Because the moving-target trailing distortion caused by missed detections is visually more objectionable than the residual noise caused by false alarms, under low contrast motion detection should, for a better visual result, try above all to control the occurrence of missed detections. To this end, this patent provides a contrast adaptive video denoising system to solve the above problem.

2. Description of the contrast adaptive video denoising system

The contrast adaptive video denoising system of this embodiment is shown in FIG. 4. The system comprises a frame store, an inter-frame difference feature calculation module, a contrast calculation module, a low-contrast region detection module, a motion detection module, and a motion adaptive temporal filtering module. The frame store buffers the filtering results; the inter-frame difference feature calculation module calculates and outputs the inter-frame difference feature from the current input frame of the video and the previous filtered frame in the frame store; the contrast calculation module calculates and outputs the local contrast of the current input frame from the current input frame; the low-contrast region detection module calculates and outputs the low-contrast region confidence from the local contrast of the current input frame; the motion detection module calculates and outputs the motion probability of each pixel from the low-contrast region confidence and the inter-frame difference output by the inter-frame difference feature calculation module; and the motion adaptive temporal filtering module performs motion adaptive temporal filtering using the current input frame of the video, the previous filtered frame in the frame store, and the motion probability of each pixel, and finally outputs the current filtered frame, which is stored into the frame store.
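The following skeleton is an illustrative sketch of this data flow, not the patented implementation. The helper functions mae_feature, local_contrast, low_contrast_confidence, motion_probability, and motion_adaptive_temporal_filter are hypothetical names standing for the per-module computations described in this document (sketches of each are given alongside the corresponding module descriptions), and passing the very first frame through unfiltered is only a choice made for the example.

```python
import numpy as np

def denoise_video(frames, sigma_g):
    """Contrast adaptive temporal denoising loop over grayscale frames,
    following the FIG. 4 data flow; sigma_g is the noise level."""
    prev_filtered = None                                   # frame store
    for frame in frames:
        frame = np.asarray(frame, dtype=np.float32)
        if prev_filtered is None:
            prev_filtered = frame                          # nothing to filter against yet
            yield frame
            continue
        m = mae_feature(frame, prev_filtered)              # inter-frame difference feature
        C = local_contrast(frame)                          # contrast calculation module
        r_lc = low_contrast_confidence(C, sigma_g)         # low-contrast region detection
        r_motion = motion_probability(m, r_lc)             # contrast-adaptive motion detection
        out = motion_adaptive_temporal_filter(frame, prev_filtered, r_motion)
        prev_filtered = out                                # store the current filtered frame
        yield out
```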
The inter-frame difference feature calculation module calculates the inter-frame difference feature between the current input frame of the video and the previous filtered frame in the frame store. Many inter-frame difference features can be used, such as the simple difference, the absolute difference, the sum of absolute differences SAD (Sum of Absolute Difference), and the mean absolute difference MAE (Mean Absolute Error). The inter-frame difference feature used in this embodiment is the MAE feature, as defined in Equation (8).
The contrast calculation module is shown in FIG. 5A. It includes a horizontal gradient calculation unit, a gradient threshold calculation unit, a transition-band detection unit, a left-mean calculation unit, a right-mean calculation unit, and an absolute-difference calculation unit.

The horizontal gradient calculation unit transforms the current input frame into the horizontal gradient image G; the gradient threshold calculation unit calculates and outputs the gradient threshold Gt from the horizontal gradient image G; the transition-band detection unit calculates and outputs the transition-band flag α of the pixel to be detected from the horizontal gradient image G and the gradient threshold Gt, and divides the local window around the pixel to be detected into a left window and a right window; the left-mean calculation unit calculates and outputs the gray-level mean left_mean of the non-transition-band pixels in the left window from the current input frame and the transition-band flag α; the right-mean calculation unit calculates and outputs the gray-level mean right_mean of the non-transition-band pixels in the right window from the current input frame and the transition-band flag α; the absolute-difference calculation unit calculates and outputs the absolute value of the difference between left_mean and right_mean as the local contrast C of the current input frame.

To facilitate understanding of the contrast calculation method of this patent, the method corresponding to this module is explained as follows. FIG. 5B shows, for the pixels on a horizontal line crossing the boundary between the moving object and the background, the correspondence between the column coordinate j (j denotes the horizontal coordinate of a pixel) and the pixel gray value. The gray value of the moving object is V1, the gray value of the background is V2, and there is a certain transition band between the moving object and the background. The contrast between the moving object and the background is then C = V2 - V1.

To compute C, V1 and V2 must be estimated, as shown in FIG. 5C. Let the contrast calculation window of the pixel (i, j) to be detected in the current input frame I be (2N+1)x1, and denote the pixel coordinates in the window as (i, j+n); the pixels with -N ≤ n ≤ 0 are the pixels of the left window and the pixels with 1 ≤ n ≤ N are the pixels of the right window. The mean of the pixels in the left window is used to estimate V1 and the mean of the pixels in the right window is used to estimate V2. Since both the left and right windows contain part of the transition band, the gray values of these transition-band pixels would affect the correct estimation of V1 and V2, so removing the transition-band pixels before computing the means gives values closer to the correct V1 and V2. The specific steps for calculating the local contrast C of the current input frame I in the contrast calculation module are as follows.

Step 11: compute the horizontal gradient image G.
The horizontal gradient can be computed by convolving the image with a gradient operator; this embodiment uses the 3x3 Sobel gradient operator.

Step 12: compute the gradient threshold Gt.

The gradient threshold Gt at the position (i, j) of the point to be detected is computed as follows: take a local (2N+1)x1 window around the pixel (i, j) to be detected, and compute the gradient maximum max_grad of the local window and the gradient threshold Gt, as shown in Equations (14) and (15).

max_grad(i,j) = max_{j-N ≤ n ≤ j+N} G(i,n)      (14)

Gt(i,j) = W * max_grad(i,j)      (15)

where W is the proportionality coefficient relating the gradient threshold to the gradient maximum; W can be taken as 0.7.

Step 13: use the horizontal gradient image G and the gradient threshold Gt to perform transition-band detection, and compute the non-transition-band flag α of the (2N+1)x1 local window of the pixel (i, j) to be detected:

α_n(i,j) = 1 if G(i,j+n) < Gt(i,j), and α_n(i,j) = 0 otherwise      (16)

That is, when the gradient G of a pixel in the local window is smaller than the gradient threshold Gt, the pixel is a non-transition-band pixel.
Step 14: use the current input frame I and the non-transition-band flag α to compute the gray-level mean left_mean of the non-transition-band pixels in the left window.

In the (2N+1)x1 local window around the pixel (i, j) to be detected, the pixels with coordinates (i, j+n) and -N ≤ n ≤ 0 are the pixels of the left window. The gray-level mean left_mean of the non-transition-band pixels in the left window is computed by Equations (17), (18), and (19):

left_sum(i,j) = Σ_{n=-N..0} α_n(i,j) · I(i,j+n)      (17)

left_count(i,j) = Σ_{n=-N..0} α_n(i,j)      (18)

left_mean(i,j) = left_sum(i,j) / left_count(i,j)      (19)

left_sum in Equation (17) is the sum of the gray values of the non-transition-band pixels in the left window, left_count in Equation (18) is the number of non-transition-band pixels in the left window, and Equation (19) computes the gray-level mean of the non-transition-band pixels.

Step 15: use the image I and the non-transition-band flag α to compute the gray-level mean of the non-transition-band pixels in the right window.

In the (2N+1)x1 local window around the pixel (i, j) to be detected, the pixels with coordinates (i, j+n) and 1 ≤ n ≤ N are the pixels of the right window. The gray-level mean right_mean of the non-transition-band pixels in the right window is computed as shown in Equations (20), (21), and (22); right_sum in Equation (20) is the sum of the gray values of the non-transition-band pixels in the right window, right_count in Equation (21) is the number of non-transition-band pixels in the right window, and Equation (22) computes the gray-level mean of the non-transition-band pixels in the right window.

right_sum(i,j) = Σ_{n=1..N} α_n(i,j) · I(i,j+n)      (20)

right_count(i,j) = Σ_{n=1..N} α_n(i,j)      (21)

right_mean(i,j) = right_sum(i,j) / right_count(i,j)      (22)

Step 16: compute the contrast C from the gray-level mean left_mean of the non-transition-band pixels in the left window and the gray-level mean right_mean of the non-transition-band pixels in the right window, as shown in Equation (23):

C(i,j) = |right_mean(i,j) - left_mean(i,j)|      (23)
The low-contrast region detection module takes as input the contrast C computed by the contrast calculation module and computes the low-contrast region confidence R_LC. The purpose of low-contrast region detection is the following: traditional motion detection methods produce missed-detection errors in low-contrast regions, leading to moving-target trailing distortion. It can be seen from FIG. 3A and FIG. 3B that, for the same noise level (σ_g = 2), when the contrast C is 8 the MAE features of motion pixels and still pixels have essentially no overlapping region, and the traditional noise-adaptive motion detection method of taking 2 or 3 times the noise level as the motion detection threshold can be used without introducing detection errors; whereas when the contrast C drops to 4, the MAE feature of motion pixels already overlaps considerably with that of still pixels, and using the traditional threshold of twice the noise level for detection would cause a large number of missed-detection errors. This patent provides a method for detecting low-contrast regions; once they are detected, the contrast-adaptive motion detection of this patent can be used to avoid missed-detection errors.

The low-contrast region detection method adopted in this embodiment uses not only the contrast but also the noise level for detection, so it is a noise-adaptive low-contrast region detection method. The same contrast can affect motion detection differently under low noise and under high noise, and the advantage of a noise-level-adaptive detection method is that the influence of the noise can be eliminated. As shown in FIG. 6A and FIG. 6B, when the contrast is C = 4 and the noise level is 1, there is no overlapping region between still pixels and motion pixels; but at the same contrast, when the noise level is 2, as shown in FIG. 6B, there is an overlapping region between still pixels and motion pixels, and using the traditional threshold of twice the noise level for detection would cause a large number of missed-detection errors.

The specific calculation of the noise-adaptive low-contrast region detection is as follows.

Step 21: compute the contrast-to-noise ratio X, as shown in Equation (25):

X(i,j) = C(i,j) / σ_g      (25)
Step 22: compute the low-contrast region confidence R_LC from the contrast-to-noise ratio X. FIG. 7A and FIG. 7B show two curves for computing R_LC. Set the critical threshold X1 between the low-contrast region and the high-contrast region and the low-confidence attenuation threshold X2, where X2 < X1. When X(i,j) ≤ X2, the confidence R_LC is 1; when X2 < X(i,j) < X1, the confidence R_LC decreases monotonically from 1 to 0 as the contrast-to-noise ratio X(i,j) increases; and when X1 ≤ X(i,j), the confidence R_LC is 0.

FIG. 7C shows another curve for computing R_LC which also excludes the influence of smooth regions. Specifically, thresholds X1, X2, X3, and X4 are set, where X4 < X3 < X2 < X1. When X(i,j) ≤ X4 the confidence R_LC is 0; when X4 < X(i,j) < X3 the confidence R_LC increases monotonically from 0 to 1 as the contrast-to-noise ratio X increases; when X3 ≤ X(i,j) ≤ X2 the confidence R_LC is 1; when X2 < X(i,j) < X1 the confidence R_LC decreases monotonically from 1 to 0 as the contrast-to-noise ratio X increases; and when X1 ≤ X(i,j) the confidence is 0.
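An illustrative implementation of the two confidence curves is sketched below; the piecewise-linear segments are realized with np.interp, and the particular threshold values used in any call are assumptions of the caller rather than values prescribed here.

```python
import numpy as np

def low_contrast_confidence(C, sigma_g, X1=3.0, X2=2.0, X3=None, X4=None):
    """Low-contrast region confidence R_LC from the contrast-to-noise ratio
    X = C / sigma_g (Equation (25)).  With only X1 > X2 given, this is the
    curve of FIG. 7A/7B; when X4 < X3 < X2 < X1 are all given, it is the
    trapezoidal curve of FIG. 7C that also suppresses smooth regions."""
    X = np.asarray(C, dtype=np.float32) / float(sigma_g)
    if X3 is None or X4 is None:
        # FIG. 7A/7B: 1 up to X2, linear decay to 0 at X1, 0 beyond X1.
        return np.interp(X, [X2, X1], [1.0, 0.0])
    # FIG. 7C: 0 below X4, ramp up to 1 at X3, flat until X2, decay to 0 at X1.
    return np.interp(X, [X4, X3, X2, X1], [0.0, 1.0, 1.0, 0.0])
```

For instance, a call such as low_contrast_confidence(C, sigma_g, X1=3.0, X2=2.0) would mirror the observation of section 1 that separability degrades once the contrast-to-noise ratio falls below about 3; the exact thresholds remain a design choice.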
The motion detection module takes as input the inter-frame difference feature m computed by the inter-frame difference feature calculation and the low-contrast region confidence R_LC from the low-contrast region detection, performs contrast-adaptive motion detection, and outputs the motion probability. In regions where the low-contrast confidence is high, there is a large overlapping region between motion pixels and still pixels; in this case, the moving-target trailing distortion caused by missed-detection errors is more severe than the incompletely removed noise caused by false-alarm errors, so in low-contrast regions this patent uses the low-contrast region confidence to adjust the motion detection parameters in order to control the occurrence of missed-detection errors.

Let the output of motion detection be the motion probability R_Motion. FIG. 8 shows one motion detection method, where T1 and T2 are the soft thresholds for motion detection: when m(i,j) < T1, the motion probability R_Motion of the pixel is 0; when T1 ≤ m(i,j) ≤ T2, the motion probability R_Motion of the pixel increases monotonically from 0 to 1 as the inter-frame difference feature m increases; and when T2 < m(i,j), the motion probability R_Motion of the pixel is 1.

The low-contrast region confidence is used to adjust the motion detection parameters: in low-contrast regions, the parameters are adjusted to make it easier for pixels to be detected as motion pixels, thereby reducing the missed-detection rate. If the motion detection method of FIG. 8 is adopted, the adjustment is to reduce the motion detection thresholds, as shown in Equations (26) and (27):

T1(i,j) = (1 - α·R_LC(i,j)) * T1Preset      (26)

T2(i,j) = (1 - β·R_LC(i,j)) * T2Preset      (27)

T1Preset and T2Preset are the preset motion detection parameters, which can be set according to traditional motion detection methods; α and β are parameters set in advance and can be taken as α = 0.5, β = 0.5. In high-contrast regions the low-contrast confidence is R_LC(i,j) = 0, so T1(i,j) = T1Preset and T2(i,j) = T2Preset, and the method degenerates to the traditional motion detection method, ensuring that high-contrast regions have neither false-alarm errors nor missed-detection errors and thus guaranteeing the denoising effect. In low-contrast regions the low-contrast confidence is R_LC(i,j) = 1, so T1(i,j) = (1-α)*T1Preset and T2(i,j) = (1-β)*T2Preset; the thresholds are lowered, so the occurrence of missed-detection errors can be controlled.
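A sketch of the contrast-adaptive soft-threshold detection follows. The linear ramp between T1 and T2 and the default preset values in the signature are assumptions made for the example; only the threshold scaling of Equations (26) and (27) and the 0-to-1 soft decision of FIG. 8 are taken from the text above.

```python
import numpy as np

def motion_probability(m, r_lc, t1_preset=4.0, t2_preset=8.0,
                       alpha=0.5, beta=0.5):
    """Soft-threshold motion detection (FIG. 8) with contrast-adaptive
    thresholds (Equations (26)-(27)).  Returns R_Motion in [0, 1]:
    0 below T1 (still), rising to 1 above T2 (motion)."""
    r_lc = np.asarray(r_lc, dtype=np.float32)
    t1 = (1.0 - alpha * r_lc) * t1_preset      # lowered in low-contrast regions
    t2 = (1.0 - beta * r_lc) * t2_preset
    ramp = (np.asarray(m, dtype=np.float32) - t1) / np.maximum(t2 - t1, 1e-6)
    return np.clip(ramp, 0.0, 1.0)
```

In a high-contrast region (R_LC = 0) the thresholds stay at their presets and the detector degenerates to conventional soft-threshold motion detection, as stated above.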
The motion adaptive temporal filtering module takes as input the motion probability R_Motion computed by the motion detection module, together with the current input frame image and the previous filtered frame in the frame store, and uses the motion probability to guide the weighted filtering of the current image and the previous filtered frame. Pixels with a large motion probability are not temporally weighted-filtered, and pixels with a small motion probability are temporally weighted-filtered. In this way, still regions are denoised while no moving-target trailing or temporal blur is produced in motion regions.

Claims (11)

  1. A contrast adaptive video denoising system, comprising a frame store, an inter-frame difference feature calculation module, a motion detection module, and a motion adaptive temporal filtering module, characterized by further comprising a contrast calculation module and a low-contrast region detection module; the contrast calculation module calculates and outputs the local contrast C of the current input frame I according to the current input frame I; the low-contrast region detection module calculates and outputs the low-contrast region confidence R_LC according to the local contrast C of the current input frame I; and the motion detection module calculates and outputs the motion probability R_Motion of each pixel according to the low-contrast region confidence R_LC and the inter-frame difference output by the inter-frame difference feature calculation module.
  2. The contrast adaptive video denoising system according to claim 1, characterized in that the contrast calculation module comprises a horizontal gradient calculation unit, a gradient threshold calculation unit, a transition-band detection unit, a left-mean calculation unit, a right-mean calculation unit, and an absolute-difference calculation unit;
    the horizontal gradient calculation unit transforms the current input frame into the horizontal gradient image G; the gradient threshold calculation unit calculates and outputs the gradient threshold Gt according to the horizontal gradient image G; the transition-band detection unit calculates and outputs the transition-band flag α of the pixel to be detected according to the horizontal gradient image G and the gradient threshold Gt, and divides the local window around the pixel to be detected into a left window and a right window; the left-mean calculation unit calculates and outputs the gray-level mean left_mean of the non-transition-band pixels in the left window according to the current input frame and the transition-band flag α; the right-mean calculation unit calculates and outputs the gray-level mean right_mean of the non-transition-band pixels in the right window according to the current input frame and the transition-band flag α; and the absolute-difference calculation unit calculates and outputs the absolute value of the difference between left_mean and right_mean as the local contrast C of the current input frame I.
  3. The contrast adaptive video denoising system according to claim 2, characterized in that the horizontal gradient calculation unit computes the horizontal gradient image G using a gradient operator and image convolution.
  4. The contrast adaptive video denoising system according to claim 3, characterized in that the gradient threshold Gt is calculated as follows: take a local (2N+1)x1 window around the pixel (i,j) to be detected and compute the gradient maximum max_grad of the local window by
    max_grad(i,j) = max_{j-N ≤ n ≤ j+N} G(i,n)
    and then compute the gradient threshold Gt by
    Gt(i,j) = W * max_grad(i,j)
    where W is the proportionality coefficient relating the gradient threshold to the gradient maximum.
  5. The contrast adaptive video denoising system according to claim 4, characterized in that the non-transition-band flag α of the (2N+1)x1 local window of the pixel (i,j) to be detected is computed by
    α_n(i,j) = 1 if G(i,j+n) < Gt(i,j), and α_n(i,j) = 0 otherwise.
  6. The contrast adaptive video denoising system according to claim 5, characterized in that the gray-level mean left_mean of the non-transition-band pixels in the left window and the gray-level mean right_mean of the non-transition-band pixels in the right window are calculated as follows:
    in the (2N+1)x1 local window of the pixel (i,j) to be detected in the current input frame I, the pixel coordinates in the window are denoted (i, j+n); the pixels with -N ≤ n ≤ 0 are the pixels of the left window and the pixels with 1 ≤ n ≤ N are the pixels of the right window;
    the gray-level mean left_mean of the non-transition-band pixels in the left window is calculated by:
    left_sum(i,j) = Σ_{n=-N..0} α_n(i,j) · I(i,j+n)
    left_count(i,j) = Σ_{n=-N..0} α_n(i,j)
    left_mean(i,j) = left_sum(i,j) / left_count(i,j)
    where α_n is the non-transition-band flag of the n-th pixel, left_sum is the sum of the gray values of the non-transition-band pixels in the left window, and left_count is the number of non-transition-band pixels in the left window;
    the gray-level mean right_mean of the non-transition-band pixels in the right window is calculated by:
    right_sum(i,j) = Σ_{n=1..N} α_n(i,j) · I(i,j+n)
    right_count(i,j) = Σ_{n=1..N} α_n(i,j)
    right_mean(i,j) = right_sum(i,j) / right_count(i,j)
    where right_sum is the sum of the gray values of the non-transition-band pixels in the right window and right_count is the number of non-transition-band pixels in the right window.
  7. The contrast adaptive video denoising system according to claim 6, characterized in that the low-contrast region detection module calculates the low-contrast region confidence R_LC from the contrast-to-noise ratio X, which is calculated by
    X(i,j) = C(i,j) / σ_g
    where C(i,j) is the contrast of the pixel and σ_g is the noise level.
  8. The contrast adaptive video denoising system according to claim 7, characterized in that the method for calculating the low-contrast region confidence R_LC from the contrast-to-noise ratio X is: set the critical threshold X1 between the low-contrast region and the high-contrast region and the low-confidence attenuation threshold X2, where X2 < X1; when X(i,j) ≤ X2 the confidence is 1; when X2 < X(i,j) < X1 the confidence decreases monotonically from 1 to 0 as the contrast-to-noise ratio X(i,j) increases; and when X1 ≤ X(i,j) the confidence is 0.
  9. The contrast adaptive video denoising system according to claim 8, characterized in that the method for calculating the low-contrast region confidence R_LC from the contrast-to-noise ratio X is: set thresholds X1, X2, X3, and X4, where X4 < X3 < X2 < X1; when X(i,j) ≤ X4 the confidence is 0; when X4 < X(i,j) < X3 the confidence increases monotonically from 0 to 1 as the contrast-to-noise ratio X increases; when X3 ≤ X(i,j) ≤ X2 the confidence is 1; when X2 < X(i,j) < X1 the confidence decreases monotonically from 1 to 0 as the contrast-to-noise ratio X increases; and when X1 ≤ X(i,j) the confidence is 0.
  10. The contrast adaptive video denoising system according to any one of claims 1 to 9, characterized in that the motion detection module calculates and outputs the motion probability of each pixel using a soft-threshold motion detection method, comprising the following steps:
    step 11: set the soft thresholds T1 and T2 for motion detection:
    T1(i,j) = (1 - α·R_LC(i,j)) * T1Preset
    T2(i,j) = (1 - β·R_LC(i,j)) * T2Preset
    where α and β are preset fixed parameters, and T1Preset and T2Preset are preset motion detection parameters;
    step 12: calculate the motion probability of each pixel from the inter-frame difference feature m: when m(i,j) < T1 the motion probability of the pixel is 0; when T1 ≤ m(i,j) ≤ T2 the motion probability of the pixel increases monotonically from 0 to 1 as the inter-frame difference feature m increases; and when T2 < m(i,j) the motion probability of the pixel is 1.
  11. The contrast adaptive video denoising system according to claim 10, characterized in that the motion adaptive temporal filtering module filters the current input frame using a motion adaptive temporal filtering method, specifically: set a motion probability threshold Q; and use the motion probability R_Motion to guide the temporal weighted filtering of the current input frame and the previous filtered frame in the frame store: when R_Motion ≤ Q, temporal weighted filtering is performed, and when Q < R_Motion, no temporal weighted filtering is performed.
PCT/CN2015/076783 2015-04-16 2015-04-16 Contrast adaptive video denoising system WO2016165112A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
PCT/CN2015/076783 WO2016165112A1 (zh) 2015-04-16 2015-04-16 Contrast adaptive video denoising system
US15/557,082 US10614554B2 (en) 2015-04-16 2015-04-16 Contrast adaptive video denoising system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2015/076783 WO2016165112A1 (zh) 2015-04-16 2015-04-16 Contrast adaptive video denoising system

Publications (1)

Publication Number Publication Date
WO2016165112A1 true WO2016165112A1 (zh) 2016-10-20

Family

ID=57126169

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2015/076783 WO2016165112A1 (zh) 2015-04-16 2015-04-16 Contrast adaptive video denoising system

Country Status (2)

Country Link
US (1) US10614554B2 (zh)
WO (1) WO2016165112A1 (zh)

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2017132600A1 (en) * 2016-01-29 2017-08-03 Intuitive Surgical Operations, Inc. Light level adaptive filter and method
CN111179211A (zh) * 2018-10-23 2020-05-19 中国石油化工股份有限公司 原油管道巡检用无人机红外视频的管线发热诊断方法
CN110445953B (zh) * 2019-08-02 2020-11-06 浙江大华技术股份有限公司 降低动态条纹噪声的方法、装置、电子设备及存储装置
CN113011433B (zh) * 2019-12-20 2023-10-13 杭州海康威视数字技术股份有限公司 一种滤波参数调整方法及装置
CN111289848B (zh) * 2020-01-13 2023-04-07 甘肃省安全生产科学研究院有限公司 一种应用在基于安全生产的智能型热局放仪的复合数据滤波方法
CN117173059B (zh) * 2023-11-03 2024-01-19 奥谱天成(厦门)光电有限公司 用于近红外水分仪的异常点和噪声剔除方法及装置

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8582032B2 (en) * 2006-09-07 2013-11-12 Texas Instruments Incorporated Motion detection for interlaced video
WO2009118978A1 (ja) * 2008-03-24 2009-10-01 パナソニック株式会社 ノイズ検出方法及びそのノイズ検出方法を用いた映像処理方法
TWI488494B (zh) * 2011-04-28 2015-06-11 Altek Corp 多畫面的影像降噪方法

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6061100A (en) * 1997-09-30 2000-05-09 The University Of British Columbia Noise reduction for video signals
US20060139494A1 (en) * 2004-12-29 2006-06-29 Samsung Electronics Co., Ltd. Method of temporal noise reduction in video sequences
CN102132554A (zh) * 2008-06-20 2011-07-20 惠普开发有限公司 用于高效视频处理的方法和系统
CN103024248A (zh) * 2013-01-05 2013-04-03 上海富瀚微电子有限公司 运动自适应的视频图像降噪方法及其装置
CN104767913A (zh) * 2015-04-16 2015-07-08 中国科学院自动化研究所 一种对比度自适应的视频去噪系统

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111754432A (zh) * 2020-06-22 2020-10-09 成都国科微电子有限公司 一种帧间差分运动检测方法及装置
CN111754432B (zh) * 2020-06-22 2023-12-29 成都国科微电子有限公司 一种帧间差分运动检测方法及装置
CN114648469A (zh) * 2022-05-24 2022-06-21 上海齐感电子信息科技有限公司 视频图像去噪方法及其系统、设备和存储介质
CN116523765A (zh) * 2023-03-13 2023-08-01 湖南兴芯微电子科技有限公司 一种实时视频图像降噪方法、装置及存储器
CN116523765B (zh) * 2023-03-13 2023-09-05 湖南兴芯微电子科技有限公司 一种实时视频图像降噪方法、装置及存储器

Also Published As

Publication number Publication date
US20180061014A1 (en) 2018-03-01
US10614554B2 (en) 2020-04-07

Similar Documents

Publication Publication Date Title
WO2016165112A1 (zh) 一种对比度自适应的视频去噪系统
US9454805B2 (en) Method and apparatus for reducing noise of image
JP7256902B2 (ja) ビデオノイズ除去方法、装置及びコンピュータ読み取り可能な記憶媒体
US9202263B2 (en) System and method for spatio video image enhancement
Kim et al. A novel approach for denoising and enhancement of extremely low-light video
US8254454B2 (en) Apparatus and method for reducing temporal noise
CN104767913B (zh) 一种对比度自适应的视频去噪系统
KR102106537B1 (ko) 하이 다이나믹 레인지 영상 생성 방법 및, 그에 따른 장치, 그에 따른 시스템
US20100111438A1 (en) Anisotropic diffusion method and apparatus based on direction of edge
JP6254938B2 (ja) 画像ノイズ除去装置、および画像ノイズ除去方法
JP2004080252A (ja) 映像表示装置及びその方法
CN106791279B (zh) 基于遮挡检测的运动补偿方法及系统
WO2014069103A1 (ja) 画像処理装置
US20150332443A1 (en) Image processing device, monitoring camera, and image processing method
KR20050007106A (ko) 링잉 아티팩트 적응 감소 방법 및 장치
CN108270945B (zh) 一种运动补偿去噪方法及装置
US20130084005A1 (en) Image processing device and method for processing image
US20090316049A1 (en) Image processing apparatus, image processing method and program
CN105809633A (zh) 去除颜色噪声的方法及装置
WO2016165116A1 (zh) 一种基于噪声相关性的视频去噪系统
KR101558532B1 (ko) 영상의 잡음 제거장치
CN111383182B (zh) 图像去噪方法、装置及计算机可读存储介质
WO2017088391A1 (zh) 视频去噪与细节增强方法及装置
CN112435183A (zh) 一种图像降噪方法和装置以及存储介质
KR101582800B1 (ko) 적응적으로 컬러 영상 내의 에지를 검출하는 방법, 장치 및 컴퓨터 판독 가능한 기록 매체

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 15888817

Country of ref document: EP

Kind code of ref document: A1

WWE Wipo information: entry into national phase

Ref document number: 15557082

Country of ref document: US

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 15888817

Country of ref document: EP

Kind code of ref document: A1