CN108174056A - A joint time-space domain low-light video noise reduction method - Google Patents

A joint time-space domain low-light video noise reduction method

Info

Publication number
CN108174056A
CN108174056A CN201611115487.3A
Authority
CN
China
Prior art keywords
window
pixel
filtering
noise reduction
low
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201611115487.3A
Other languages
Chinese (zh)
Inventor
富容国
冯澍
罗浩
杨柳
沈天宇
吕进
王焜
韦方
韦一方
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanjing University of Science and Technology
Original Assignee
Nanjing University of Science and Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanjing University of Science and Technology filed Critical Nanjing University of Science and Technology
Priority to CN201611115487.3A priority Critical patent/CN108174056A/en
Publication of CN108174056A publication Critical patent/CN108174056A/en
Pending legal-status Critical Current

Links

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00 Details of television systems
    • H04N5/14 Picture signal circuitry for video frequency region
    • H04N5/21 Circuitry for suppressing or minimising disturbance, e.g. moiré or halo
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00 Details of television systems
    • H04N5/14 Picture signal circuitry for video frequency region
    • H04N5/144 Movement detection

Abstract

The invention discloses a joint time-space domain low-light video noise reduction method. The method includes: performing box-filter processing on the original low-light video; performing motion detection on adjacent frames of the processed video sequence, determining the corresponding filter coefficients, and performing noise reduction on the original low-light video with a three-dimensional-coefficient temporal recursive filtering algorithm; and performing image enhancement with an improved bilateral filtering method to obtain the denoised low-light video. The present invention strengthens the noise reduction effect as much as possible while avoiding smearing; the time-space domain hybrid filtering overcomes defects of low-light video, such as poor contrast and low signal-to-noise ratio caused by environmental and device factors, and improves the visual effect for the human eye.

Description

A joint time-space domain low-light video noise reduction method
Technical field
The invention belongs to the technical field of video processing, and particularly relates to a joint time-space domain low-light video noise reduction method.
Background technology
Under low-light conditions, because the illumination is low and the detector sensitivity is limited, the signal-to-noise ratio of the video images acquired by the system is relatively low, which impairs observation by the human eye and may even make it impossible to obtain an effective image of the target scene. The noise of low-light video mainly consists of Gaussian-distributed white noise generated by the CCD and quantum noise introduced by the image intensifier.
Existing low-light video noise reduction algorithms are broadly divided into two classes, spatial filtering and temporal filtering. Temporal filtering exploits the inter-frame correlation of the video and performs noise reduction on the basis of motion detection or motion compensation. Spatial filtering exploits the correlation between pixel neighborhoods in a two-dimensional image, for example mean filtering and Wiener filtering.
When temporal filtering, such as a temporal recursive filtering algorithm, is used alone, the noise reduction for scenes with moving targets is not obvious, matching errors on moving objects easily occur, and ghosting appears. When spatial filtering, such as median filtering or mean filtering, is used alone, the randomness of the noise at the same position across frames easily causes flicker between adjacent frames after filtering.
Summary of the invention
The purpose of the present invention is to provide a joint time-space domain low-light video noise reduction method.
The technical solution that achieves the object of the invention is as follows:
A joint time-space domain low-light video noise reduction method includes the following steps:
Step 1, perform box-filter processing on the original low-light video;
Step 2, perform motion detection on adjacent frames of the processed video sequence, determine the corresponding filter coefficients, and perform noise reduction on the original low-light video with a three-dimensional-coefficient temporal recursive filtering algorithm;
Step 3, perform image enhancement with an improved bilateral filtering method to obtain the denoised low-light video.
Compared with the prior art, the remarkable advantages of the invention are:
(1) When each ΣA(i, j) is computed with the box filter of the present invention, only two operations are needed, which simplifies the pixel-by-pixel summation; after the box-filter pre-processing, whenever the filtering algorithm needs the sum of the pixels in a window, the corresponding position of array B can be read directly, so the summation whose original complexity is O(9) is reduced to O(1), improving the running speed of the algorithm;
(2) On the basis of temporal filtering, the present invention performs spatial filtering on the video with an improved bilateral filtering method, which overcomes defects of low-light video, such as poor contrast and low signal-to-noise ratio caused by environmental and device factors, and improves the visual effect for the human eye.
Description of the drawings:
Fig. 1 is the flow chart of the joint low-light video noise reduction algorithm of the present invention based on three-dimensional-coefficient temporal recursive filtering and improved bilateral filtering.
Fig. 2 is a schematic diagram of the box-filter algorithm.
Fig. 3(a) and Fig. 3(b) are respectively a frame of the original video of a static building and the same frame after filtering.
Fig. 4(a) and Fig. 4(b) are respectively a frame of the original video of a panning shot under 0.11 lux illumination and the same frame after filtering.
Fig. 5(a) and Fig. 5(b) are respectively a frame of the original video of a static portrait under 0.02 lux illumination and the same frame after filtering.
Fig. 6(a) and Fig. 6(b) are respectively a frame of the original video of a moving portrait under 0.02 lux illumination and the same frame after filtering.
Specific embodiment
With reference to Fig. 1, the joint time-space domain low-light video noise reduction method of the invention includes the following steps:
Step 1, perform box-filter processing on the original low-light video.
Given a sliding window size, the pixel values in each window are summed quickly: an array B of the same size as the original image A is created, and the value of each element of B is the sum of the pixels in the neighborhood of the corresponding position of the original image A:
B(i, j) = ΣA(i, j), (i, j) ∈ M    (1)
where M is the set of all pixels in the window centered on (i, j).
Taking a 3 × 3 window as an example, this step specifically includes:
Step 1-1, the filter window size is 3 × 3, and the length of the intermediate array mid equals the number of pixel columns of the original image;
Step 1-2, the filter window starts from the upper-left corner and slides pixel by pixel, from left to right and from top to bottom. Each time it moves to a new position, with the window center pixel at (i, j), each column of rows i-1, i and i+1 is summed in turn and the results are stored in the intermediate array mid; mid[i, j-1], mid[i, j] and mid[i, j+1] are then summed to obtain ΣA(i, j), where A(i, j) is the pixel value at coordinate (i, j) of the original image, and the result is stored at position (i, j) of array B;
Step 1-3, when the window translates one pixel to the right, the window center becomes (i, j+1); subtracting mid[i, j-1] from the previous sum and adding mid[i, j+2] gives the sum of the new window:
ΣA(i, j+1) = ΣA(i, j) - mid[i, j-1] + mid[i, j+2]    (2)
Step 1-4, when the window reaches the end of a row and jumps to the next row, mid is updated: for each mid[i+1, j], A(i+2, j) is added and A(i-1, j) is subtracted, after which the computation of the new row starts.
The above process is shown in Fig. 2: each dot represents a pixel of the original image, the square frame represents the selected 3 × 3 filter window, and the row of square blocks represents the intermediate array, whose length equals the number of pixel columns of the original image.
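To make the sliding-window bookkeeping of Steps 1-1 to 1-4 concrete, the following is a minimal numpy sketch of the box-filter pre-processing. It is an illustration, not the patent's implementation: border pixels are skipped, and the intermediate column-sum array mid is recomputed for each row rather than updated incrementally as in Step 1-4.

```python
import numpy as np

def box_filter_sum(A, radius=1):
    """Sliding-window pixel sums B(i, j) for a (2*radius+1)^2 window.

    Sketch of the box-filter pre-processing: column sums are kept in an
    intermediate array `mid`, so each window sum is obtained from O(1)
    updates (formula (2)) instead of re-summing all 9 pixels. Borders are
    left at 0 for simplicity (border handling is not specified above).
    """
    A = np.asarray(A, dtype=np.int64)
    rows, cols = A.shape
    B = np.zeros_like(A)
    k = 2 * radius + 1
    for i in range(radius, rows - radius):
        # Column sums over rows i-radius .. i+radius (the array `mid`).
        mid = A[i - radius:i + radius + 1, :].sum(axis=0)
        # The first window of the row is summed directly ...
        s = mid[:k].sum()
        B[i, radius] = s
        # ... later windows reuse the previous sum: drop the leftmost column
        # sum and add the new rightmost one, as in formula (2).
        for j in range(radius + 1, cols - radius):
            s = s - mid[j - radius - 1] + mid[j + radius]
            B[i, j] = s
    return B
```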
Step 2, perform motion detection on adjacent frames of the processed video sequence, determine the corresponding filter coefficients, and perform noise reduction on the original low-light video with a three-dimensional-coefficient temporal recursive filtering algorithm. The detailed process is:
Step 2-1, let d be the difference between the pixel sums of the corresponding windows of the previous and current frames; while the recursive noise reduction runs, d = |WN - WN-1| is computed;
The filter coefficient K is set as a piecewise linear function of d:
where d1 and d2 are respectively the first and second difference thresholds, K1 and K2 are respectively the first and second filter coefficient thresholds, and K1 > K2.
Step 2-2, the improved temporal recursive filtering formula is as follows:
WN(i, j) = XN(i, j) + KN(i, j)·(WN-1(i, j) - XN(i, j))    (4)
In the formula, WN(i, j) is the window centered on point (i, j) in the filtered output image of the current frame; WN-1(i, j) is the window at the corresponding position of the filtered output image of the previous frame; XN(i, j) is the window at the corresponding position of the current input image; KN(i, j) is the filter coefficient of the window centered on point (i, j), with K ∈ (0, 1).
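As an illustration of Step 2, the following sketch applies formula (4) per pixel, with d taken as the per-pixel difference of the 3 × 3 window sums of the previous output and the current input. Because formula (3) is not reproduced in this text, the piecewise linear K(d) is assumed to ramp linearly from K1 down to K2 between the thresholds d1 and d2; the threshold and coefficient values below are illustrative only, not taken from the patent.

```python
import numpy as np

def window_sums(img, radius=1):
    """Per-pixel (2*radius+1)^2 window sums via an integral image; equivalent
    in effect to the box-filter pre-processing of Step 1 (edge-padded)."""
    p = np.pad(np.asarray(img, dtype=np.float64), radius, mode="edge")
    c = np.cumsum(np.cumsum(p, axis=0), axis=1)
    c = np.pad(c, ((1, 0), (1, 0)))
    k = 2 * radius + 1
    return c[k:, k:] - c[:-k, k:] - c[k:, :-k] + c[:-k, :-k]

def temporal_recursive_step(prev_out, cur_in, d1=30.0, d2=300.0, k1=0.8, k2=0.1):
    """One frame of the motion-adaptive temporal recursive filter, formula (4):
    W_N = X_N + K(d) * (W_{N-1} - X_N).

    K(d) is an assumed linear ramp between (d1, K1) and (d2, K2) with K1 > K2,
    so static regions (small d) are averaged strongly over time while moving
    regions (large d) follow the current frame, avoiding smear."""
    prev_out = np.asarray(prev_out, dtype=np.float64)
    cur_in = np.asarray(cur_in, dtype=np.float64)
    d = np.abs(window_sums(cur_in) - window_sums(prev_out))
    k = np.clip(k1 - (k1 - k2) * (d - d1) / (d2 - d1), k2, k1)
    return cur_in + k * (prev_out - cur_in)
```

Over a sequence, the filtered output of each frame is fed back as WN-1 for the next frame, which is what makes the filter recursive.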
Step 3, perform image enhancement with an improved bilateral filtering method to obtain the denoised low-light video. Based on the similarity of the pixel gray levels within the window, a penalty function is added to the gray-level similarity factor. The detailed process is:
Step 3-1, compute the spatial proximity factor:
where ωs(p, q) is the spatial proximity factor, σs is a filtering parameter, (x, y) is the coordinate of the filter window center pixel, and (p, q) is the coordinate of another pixel in the window;
Step 3-2, compute the improved gray-level similarity factor.
Based on the similarity of the pixel gray levels within the window, a penalty function is added to the gray-level similarity factor:
where ωr(p, q) is the gray-level similarity factor, σr is a filtering parameter, (x, y) is the coordinate of the filter window center pixel, (p, q) is the coordinate of another pixel in the window, and τ(x, y) is the penalty function. The rules for setting τ(x, y) are as follows:
1) Judge the similarity between each pixel in the filter window and the gray value of the center point: if the absolute value of the difference between the pixel value and the center pixel value is less than σr/3, I(p, q) is judged similar to I(x, y) and keeps its original value; otherwise I(p, q) is set to 0;
2) Set the penalty function according to the number of similar pixels in the window: if the number of window pixels set to 0 is less than 1/3 of the number of window pixels, set τ(x, y) = 0; otherwise set it according to rule 3);
3) Introduce the variables min, max and mean, representing respectively the minimum, maximum and mean pixel value in the filter window. Let a = I(p, q) - mean; if a > 0, τ(x, y) = max - I(p, q); if a < 0, τ(x, y) = min - I(p, q); if a = 0, τ(x, y) = 0;
Step 3-3, compute the filtered image after the improved bilateral filtering:
where the left-hand term is the image obtained after filtering; Mx,y is the set of spatial neighborhood pixels of radius r centered on (x, y); I(p, q) is the pixel value at coordinate (p, q) in Mx,y; ωs(p, q) and ωr(p, q) are respectively the spatial proximity factor and the gray-level similarity factor.
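The following sketch filters one window according to Steps 3-1 to 3-3. The closed-form formulas for ωs, ωr and the output are not reproduced in this text, so standard Gaussian kernels are assumed, with the penalty τ applied as an offset inside the gray-level similarity kernel; this is one plausible reading of "a penalty function is added to the gray-level similarity factor", not necessarily the patent's exact expression.

```python
import numpy as np

def improved_bilateral_pixel(window, sigma_s=2.0, sigma_r=25.0):
    """Improved bilateral filtering of the center pixel of a square window.

    Assumed kernel forms (the patent's formulas are not reproduced above):
      w_s = exp(-((p-x)^2 + (q-y)^2) / (2*sigma_s^2))
      w_r = exp(-((I(p,q) + tau - I(x,y))^2) / (2*sigma_r^2))
    together with the dissimilar-pixel zeroing and the penalty tau of
    rules 1)-3).
    """
    window = np.asarray(window, dtype=np.float64)
    r = window.shape[0] // 2
    center = window[r, r]

    # Rule 1): pixels differing from the center by at least sigma_r/3 are
    # treated as dissimilar and replaced by 0.
    similar = np.abs(window - center) < sigma_r / 3.0
    vals = np.where(similar, window, 0.0)

    # Rules 2) and 3): the penalty is used only when at least 1/3 of the
    # window pixels were zeroed out; it pushes I(p, q) toward the window
    # max or min depending on the sign of I(p, q) - mean.
    if np.count_nonzero(~similar) < window.size / 3.0:
        tau = np.zeros_like(window)
    else:
        a = window - window.mean()
        tau = np.where(a > 0, window.max() - window,
                       np.where(a < 0, window.min() - window, 0.0))

    # Spatial proximity and penalized gray-level similarity factors.
    yy, xx = np.mgrid[-r:r + 1, -r:r + 1]
    w_s = np.exp(-(xx ** 2 + yy ** 2) / (2.0 * sigma_s ** 2))
    w_r = np.exp(-((vals + tau - center) ** 2) / (2.0 * sigma_r ** 2))

    weights = w_s * w_r
    return float((weights * vals).sum() / weights.sum())
```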
The present invention is described in detail below with reference to the drawings and a specific embodiment.
Embodiment
Low-light videos of multiple experimental scenes were shot, and experimental verification was carried out on the video images captured under the various conditions.
Fig. 3(a) and Fig. 3(b) are respectively a frame of the original video of a static building and the same frame after filtering; Fig. 4(a) and Fig. 4(b) are respectively a frame of the original video of a panning shot under 0.11 lux illumination and the same frame after filtering; Fig. 5(a) and Fig. 5(b) are respectively a frame of the original video of a static portrait under 0.02 lux illumination and the same frame after filtering; Fig. 6(a) and Fig. 6(b) are respectively a frame of the original video of a moving portrait under 0.02 lux illumination and the same frame after filtering.
It can be seen from Fig. 3 and Fig. 5 that, for static targets, the joint time-space domain algorithm proposed by the present invention reduces noise very well and enhances information such as the edge details of the image.
It can be seen from Fig. 4 and Fig. 6 that, for moving objects, the proposed algorithm also completes the noise reduction well, without introducing motion blur.
In view of the foregoing, the joint time-space domain low-light video noise reduction algorithm proposed by the present invention overcomes the shortcoming of single-domain filtering that the edge features of the video are blurred while the noise is filtered out. Compared with purely temporal or purely spatial algorithms, it achieves better noise reduction, provides good video image quality, effectively suppresses video image noise, and well preserves detail information such as the edges and texture of the image.
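For orientation, a minimal end-to-end driver chaining the three steps over a frame sequence might look as follows. It reuses the sketch functions given above (temporal_recursive_step and improved_bilateral_pixel); the window radius and filter parameters are illustrative assumptions, not values from the patent.

```python
import numpy as np

def denoise_low_light_video(frames, sigma_s=2.0, sigma_r=25.0, radius=2):
    """Steps 1-3 over a list of grayscale frames (2-D numpy arrays).

    Step 1 (box-filter window sums) happens inside temporal_recursive_step;
    Step 2 is the motion-adaptive temporal recursion; Step 3 applies the
    improved bilateral filter pixel by pixel as spatial enhancement.
    """
    out_frames = []
    prev_out = np.asarray(frames[0], dtype=np.float64)
    for frame in frames:
        cur = np.asarray(frame, dtype=np.float64)
        temporal = temporal_recursive_step(prev_out, cur)
        padded = np.pad(temporal, radius, mode="edge")
        spatial = np.empty_like(temporal)
        for i in range(temporal.shape[0]):
            for j in range(temporal.shape[1]):
                win = padded[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
                spatial[i, j] = improved_bilateral_pixel(win, sigma_s, sigma_r)
        prev_out = temporal  # the recursion feeds back the temporal output
        out_frames.append(spatial)
    return out_frames
```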

Claims (5)

  1. A joint time-space domain low-light video noise reduction method, characterized in that it includes the following steps:
    Step 1, perform box-filter processing on the original low-light video;
    Step 2, perform motion detection on adjacent frames of the processed video sequence, determine the corresponding filter coefficients, and perform noise reduction on the original low-light video with a three-dimensional-coefficient temporal recursive filtering algorithm;
    Step 3, perform image enhancement with an improved bilateral filtering method to obtain the denoised low-light video.
  2. The joint time-space domain low-light video noise reduction method according to claim 1, characterized in that step 1 is specifically:
    Given a sliding window size, the pixel values in each window are summed quickly: an array B of the same size as the original image A is created, and the value of each element of B is the sum of the pixels in the neighborhood of the corresponding position of the original image A:
    B(i, j) = ΣA(i, j), (i, j) ∈ M    (1)
    where M is the set of all pixels in the window centered on (i, j).
  3. The joint time-space domain low-light video noise reduction method according to claim 1 or 2, characterized in that step 1 specifically comprises:
    Step 1-1, the filter window size is set to 3 × 3, and the length of the intermediate array mid equals the number of pixel columns of the original image;
    Step 1-2, the filter window starts from the upper-left corner and slides pixel by pixel, from left to right and from top to bottom; each time it moves to a new position, with the window center pixel at (i, j), each column of rows i-1, i and i+1 is summed in turn and the results are stored in the intermediate array mid; mid[i, j-1], mid[i, j] and mid[i, j+1] are then summed to obtain ΣA(i, j), where A(i, j) is the pixel value at coordinate (i, j) of the original image, and the result is stored at position (i, j) of array B;
    Step 1-3, when the window translates one pixel to the right, the window center becomes (i, j+1); subtracting mid[i, j-1] from the previous sum and adding mid[i, j+2] gives the sum of the new window:
    ΣA(i, j+1) = ΣA(i, j) - mid[i, j-1] + mid[i, j+2]    (2)
    Step 1-4, when the window reaches the end of a row and jumps to the next row, mid is updated: for each mid[i+1, j], A(i+2, j) is added and A(i-1, j) is subtracted, after which the computation of the new row starts.
  4. The joint time-space domain low-light video noise reduction method according to claim 1, characterized in that in step 2, motion detection is performed on adjacent frames of the box-filtered video sequence and the corresponding filter coefficients are determined on the basis of the motion detection; the detailed process is:
    Step 2-1, let d be the difference between the pixel sums of the corresponding windows of the previous and current frames; while the recursive noise reduction runs, d = |WN - WN-1| is computed;
    The filter coefficient K is set as a piecewise linear function of d:
    where d1 and d2 are respectively the first and second difference thresholds, K1 and K2 are respectively the first and second filter coefficient thresholds, and K1 > K2.
    Step 2-2, the improved temporal recursive filtering formula is as follows:
    WN(i, j) = XN(i, j) + KN(i, j)·(WN-1(i, j) - XN(i, j))    (4)
    In the formula, WN(i, j) is the window centered on point (i, j) in the filtered output image of the current frame; WN-1(i, j) is the window at the corresponding position of the filtered output image of the previous frame; XN(i, j) is the window at the corresponding position of the current input image; KN(i, j) is the filter coefficient of the window centered on point (i, j), with K ∈ (0, 1).
  5. The joint time-space domain low-light video noise reduction method according to claim 1, characterized in that in step 3, based on the similarity of the pixel gray levels within the window, a penalty function is added to the gray-level similarity factor; the detailed process is:
    Step 3-1, compute the spatial proximity factor:
    where ωs(p, q) is the spatial proximity factor, σs is a filtering parameter, (x, y) is the coordinate of the filter window center pixel, and (p, q) is the coordinate of another pixel in the window;
    Step 3-2, compute the improved gray-level similarity factor.
    Based on the similarity of the pixel gray levels within the window, a penalty function is added to the gray-level similarity factor:
    where ωr(p, q) is the gray-level similarity factor, σr is a filtering parameter, (x, y) is the coordinate of the filter window center pixel, (p, q) is the coordinate of another pixel in the window, and τ(x, y) is the penalty function; the rules for setting τ(x, y) are as follows:
    1) Judge the similarity between each pixel in the filter window and the gray value of the center point: if the absolute value of the difference between the pixel value and the center pixel value is less than σr/3, I(p, q) is judged similar to I(x, y) and keeps its original value; otherwise I(p, q) is set to 0;
    2) Set the penalty function according to the number of similar pixels in the window: if the number of window pixels set to 0 is less than 1/3 of the number of window pixels, set τ(x, y) = 0; otherwise set it according to rule 3);
    3) Introduce the variables min, max and mean, representing respectively the minimum, maximum and mean pixel value in the filter window; let a = I(p, q) - mean; if a > 0, τ(x, y) = max - I(p, q); if a < 0, τ(x, y) = min - I(p, q); if a = 0, τ(x, y) = 0;
    Step 3-3, compute the filtered image after the improved bilateral filtering:
    where the left-hand term is the image obtained after filtering; Mx,y is the set of spatial neighborhood pixels of radius r centered on (x, y); I(p, q) is the pixel value at coordinate (p, q) in Mx,y; ωs(p, q) and ωr(p, q) are respectively the spatial proximity factor and the gray-level similarity factor.
CN201611115487.3A 2016-12-07 2016-12-07 A joint time-space domain low-light video noise reduction method Pending CN108174056A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201611115487.3A CN108174056A (en) 2016-12-07 2016-12-07 A joint time-space domain low-light video noise reduction method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201611115487.3A CN108174056A (en) 2016-12-07 2016-12-07 A joint time-space domain low-light video noise reduction method

Publications (1)

Publication Number Publication Date
CN108174056A true CN108174056A (en) 2018-06-15

Family

ID=62526183

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201611115487.3A Pending CN108174056A (en) A joint time-space domain low-light video noise reduction method

Country Status (1)

Country Link
CN (1) CN108174056A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110944176A (en) * 2019-12-05 2020-03-31 浙江大华技术股份有限公司 Image frame noise reduction method and computer storage medium
CN113012061A (en) * 2021-02-20 2021-06-22 百果园技术(新加坡)有限公司 Noise reduction processing method and device and electronic equipment

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102014240A (en) * 2010-12-01 2011-04-13 深圳市蓝韵实业有限公司 Real-time medical video image denoising method
CN102769722A (en) * 2012-07-20 2012-11-07 上海富瀚微电子有限公司 Time-space domain hybrid video noise reduction device and method
CN103024248A (en) * 2013-01-05 2013-04-03 上海富瀚微电子有限公司 Motion-adaptive video image denoising method and device
CN103533214A (en) * 2013-10-01 2014-01-22 中国人民解放军国防科学技术大学 Video real-time denoising method based on kalman filtering and bilateral filtering
WO2015172235A1 (en) * 2014-05-15 2015-11-19 Tandemlaunch Technologies Inc. Time-space methods and systems for the reduction of video noise

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102014240A (en) * 2010-12-01 2011-04-13 深圳市蓝韵实业有限公司 Real-time medical video image denoising method
CN102769722A (en) * 2012-07-20 2012-11-07 上海富瀚微电子有限公司 Time-space domain hybrid video noise reduction device and method
CN103024248A (en) * 2013-01-05 2013-04-03 上海富瀚微电子有限公司 Motion-adaptive video image denoising method and device
CN103533214A (en) * 2013-10-01 2014-01-22 中国人民解放军国防科学技术大学 Video real-time denoising method based on kalman filtering and bilateral filtering
WO2015172235A1 (en) * 2014-05-15 2015-11-19 Tandemlaunch Technologies Inc. Time-space methods and systems for the reduction of video noise

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
XINGYUN-LIU: "integral image (积分图) and boxfliter", CSDN Forum *
张海荣, 檀结庆: "Improved bilateral filtering algorithm" (改进的双边滤波算法), Journal of Hefei University of Technology *
韩义勇 et al.: "Research on the influence of temporal recursive filtering on the viewing range of low-light-level imaging" (时域递归滤波对微光成像视距影响的研究), Applied Optics *

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110944176A (en) * 2019-12-05 2020-03-31 浙江大华技术股份有限公司 Image frame noise reduction method and computer storage medium
CN110944176B (en) * 2019-12-05 2022-03-22 浙江大华技术股份有限公司 Image frame noise reduction method and computer storage medium
CN113012061A (en) * 2021-02-20 2021-06-22 百果园技术(新加坡)有限公司 Noise reduction processing method and device and electronic equipment

Similar Documents

Publication Publication Date Title
CN112288658B (en) Underwater image enhancement method based on multi-residual joint learning
US9615039B2 (en) Systems and methods for reducing noise in video streams
CN107945125B (en) Fuzzy image processing method integrating frequency spectrum estimation method and convolutional neural network
US9202263B2 (en) System and method for spatio video image enhancement
WO2016206087A1 (en) Low-illumination image processing method and device
Lv et al. Real-time dehazing for image and video
Kim et al. A novel approach for denoising and enhancement of extremely low-light video
CN107085833B (en) Remote sensing images filtering method based on the equal intermediate value fusion of gradient inverse self-adaptive switch
CN112785637B (en) Light field depth estimation method based on dynamic fusion network
CN110544213A (en) Image defogging method based on global and local feature fusion
CN105913404A (en) Low-illumination imaging method based on frame accumulation
RU2419880C2 (en) Method and apparatus for calculating and filtering disparity map based on stereo images
Kim et al. Temporally x real-time video dehazing
Yu et al. Image and video dehazing using view-based cluster segmentation
CN106327450A (en) Method for enhancing low-light video image based on space-time accumulation and image degradation model
CN106657948A (en) low illumination level Bayer image enhancing method and enhancing device
CN108174056A (en) A joint time-space domain low-light video noise reduction method
CN115965537A (en) Video image denoising method and device and computer storage medium
Hu et al. A low illumination video enhancement algorithm based on the atmospheric physical model
CN105229998B (en) Image processing apparatus, method and computer-readable medium
Toka et al. A fast method of fog and haze removal
CN110136085B (en) Image noise reduction method and device
CN115471413A (en) Image processing method and device, computer readable storage medium and electronic device
CN110599431B (en) Time domain filtering method applied to infrared camera
CN114897720A (en) Image enhancement device and method thereof

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication
RJ01 Rejection of invention patent application after publication

Application publication date: 20180615