WO2021017809A1 - Video denoising method and device, and computer-readable storage medium - Google Patents

Video denoising method and device, and computer-readable storage medium

Info

Publication number
WO2021017809A1
WO2021017809A1 PCT/CN2020/101806 CN2020101806W WO2021017809A1 WO 2021017809 A1 WO2021017809 A1 WO 2021017809A1 CN 2020101806 W CN2020101806 W CN 2020101806W WO 2021017809 A1 WO2021017809 A1 WO 2021017809A1
Authority
WO
WIPO (PCT)
Prior art keywords
video frame
sub
variance
current video
noise
Prior art date
Application number
PCT/CN2020/101806
Other languages
English (en)
French (fr)
Inventor
艾吉松
徐科
孔德辉
王宁
刘欣
游晶
朱方
Original Assignee
深圳市中兴微电子技术有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 深圳市中兴微电子技术有限公司 filed Critical 深圳市中兴微电子技术有限公司
Priority to KR1020217034597A priority Critical patent/KR102605747B1/ko
Priority to US17/624,237 priority patent/US20220351335A1/en
Priority to JP2021564231A priority patent/JP7256902B2/ja
Priority to EP20848408.9A priority patent/EP3944603A4/en
Publication of WO2021017809A1 publication Critical patent/WO2021017809A1/zh

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/70Denoising; Smoothing
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00Details of television systems
    • H04N5/14Picture signal circuitry for video frequency region
    • H04N5/21Circuitry for suppressing or minimising disturbance, e.g. moiré or halo
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/20Analysis of motion
    • G06T7/246Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • G06T7/248Analysis of motion using feature-based methods, e.g. the tracking of corners or segments involving reference images or patches
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/80Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N21/83Generation or processing of protective or descriptive data associated with content; Content structuring
    • H04N21/845Structuring of content, e.g. decomposing content into time segments
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00Details of television systems
    • H04N5/14Picture signal circuitry for video frequency region
    • H04N5/144Movement detection
    • H04N5/145Movement estimation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10016Video; Image sequence
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20021Dividing image into blocks, subimages or windows
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20076Probabilistic image processing

Definitions

  • This application relates to the field of video processing technology, for example, to a video denoising method, device, and computer-readable storage medium.
  • Image denoising has always been an important direction in the field of image processing. With the popularity of mobile-phone photography, photography has changed dramatically, from professional digital single-lens reflex (SLR) cameras at the beginning to the simpler point-and-shoot cameras of smartphones. Limited by aperture and sensor size, a smartphone produces more noise than an SLR, so the captured image or video has lower resolving power than the original scene. This not only degrades the visual effect; for images or videos from which moving targets need to be acquired or recognized, it also reduces the accuracy of the acquisition or recognition. A better denoising algorithm is therefore needed to improve image quality.
  • Adaptive denoising algorithms estimate the noise intensity and then dynamically adjust the denoising-related parameters, so as to leave no residual noise while preserving image details as much as possible. However, adaptive denoising algorithms suffer from low accuracy when estimating the noise intensity of the current frame.
  • Noise estimation algorithms mainly fall into the following two categories:
  • The first category performs noise intensity estimation on the current image frame alone. The steps are as follows: 1) divide the image or video frame to be estimated into sub-image blocks of equal size; 2) calculate the variance of each sub-image block; 3) from the per-block variance values, select a certain proportion of the smaller variances to estimate the noise intensity, and thus obtain the noise intensity of the current image frame. For images rich in detail this algorithm has a relatively large error and tends to treat detail as noise.
  • The second category performs noise intensity estimation on the current frame and the previous frame. The steps are as follows: 1) divide the current frame and the previous frame of the video to be estimated into equal-sized, one-to-one corresponding sub-image blocks; 2) compute the difference between each pair of corresponding sub-image blocks to obtain the variance value of each sub-image block; 3) from the per-block variance values, select a certain proportion of the smaller variances to estimate the noise intensity, and thus obtain the noise intensity of the current image frame. This algorithm is prone to misjudgment when the brightness changes between consecutive frames or when there is large-scale motion between them.
  • When the noise intensity of an image frame is misestimated, unreasonable denoising parameters cause flicker: one frame is clear and the next is blurred, or one frame has residual noise and the next has none.
  • Algorithms with good video denoising performance, such as video block-matching and 3D filtering (VBM3D) and video block-matching and 4D filtering (VBM4D), have high time complexity and a high hardware resource cost. In addition, many denoising algorithms do not consider the influence of brightness on noise and apply a uniform denoising strength to all pixels in a frame, which does not match the characteristics of Gaussian noise.
  • the embodiments of the present invention provide a video denoising method and device, and a computer-readable storage medium, which can improve the accuracy of noise intensity estimation.
  • An embodiment of the present invention provides a video denoising method, including: dividing each video frame in an input video frame sequence into sub-image blocks, and calculating the block variance of each sub-image block; calculating the average variance of all sub-image blocks in the current video frame according to the calculated block variances, determining the noise intensity of the current video frame according to the calculated average variance, and selecting a filter strength and a noise characteristic curve matching the noise intensity; and filtering the current video frame according to the filter strength and the noise characteristic curve.
  • An embodiment of the present invention also provides a computer-readable storage medium storing one or more programs, which can be executed by one or more processors to implement the video denoising method described above.
  • An embodiment of the present invention also provides a video denoising device, including a processor and a memory that are connected by electrical coupling, the processor being configured to execute a program stored in the memory to implement the video denoising method described above.
  • the embodiment of the present invention also provides a video denoising device, including a noise statistics module, a noise estimation module, and a video denoising module;
  • the noise statistics module is configured to divide each video frame in the input video frame sequence into sub-image blocks, and calculate the block variance of each sub-image block;
  • the noise estimation module is configured to calculate the average variance of all sub-image blocks in the current video frame according to the calculated block variances, determine the noise intensity of the current video frame according to the calculated average variance, and select a filter strength and a noise characteristic curve that match the noise intensity;
  • the video denoising module is configured to filter the current video frame according to the filtering strength and the noise characteristic curve.
  • FIG. 1 is a schematic flowchart of an exemplary video denoising method provided by an embodiment of the present invention
  • FIG. 2 is a schematic diagram of the principle of smoothing noise intensity by a first-in first-out queue provided by an embodiment of the present invention
  • FIG. 3 is a schematic flowchart of an exemplary video denoising process provided by an embodiment of the present invention.
  • FIG. 4 is a schematic flowchart of an exemplary spatial denoising process provided by an embodiment of the present invention;
  • FIG. 5 is a schematic diagram of a calculation principle of a motion vector based on motion compensation according to an embodiment of the present invention
  • FIG. 6 is a schematic diagram of a mapping relationship between motion intensity and mixing coefficient provided by an embodiment of the present invention.
  • FIG. 7 is a schematic diagram of an exemplary structure of a video denoising device provided by an embodiment of the present invention.
  • an embodiment of the present invention provides a video denoising method, which includes the following steps:
  • Step 101 Perform sub-image block division on each video frame in the input video frame sequence, and calculate the block variance of each sub-image block.
  • the calculating the block variance of each sub-image block includes:
  • Calculate the spatial variance of each sub-image block; calculate the temporal variance between the sub-image block in the current video frame and the sub-image block at the corresponding position in the previous video frame of the current video frame; and select the smaller of the spatial variance and the temporal variance as the block variance of the sub-image block.
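  • As an illustration of this step, the sketch below (Python with NumPy, assuming single-channel frames) computes per-block variances as the smaller of the spatial variance and the variance of the block difference against the previous frame; the block size of 16 and the function name are placeholders chosen for the example, not values fixed by the method.

```python
import numpy as np

def block_variances(cur, prev, block=16):
    """Per-block variance of the current frame: the smaller of the spatial
    variance and the temporal variance of the difference against the
    co-located block in the previous frame."""
    h, w = cur.shape
    out = []
    for y in range(0, h - block + 1, block):
        for x in range(0, w - block + 1, block):
            c = cur[y:y + block, x:x + block].astype(np.float32)
            p = prev[y:y + block, x:x + block].astype(np.float32)
            var_s = c.var()          # spatial variance of the block
            var_t = (c - p).var()    # temporal variance of the block difference
            out.append(min(var_s, var_t))
    return np.array(out)
```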
  • Step 102 Calculate the average variance of all sub-image blocks in the current video frame according to the calculated block variances, determine the noise intensity of the current video frame according to the calculated average variance, and select a filter strength and a noise characteristic curve that match the noise intensity.
  • the calculating the average variance of all sub-image blocks in the current video frame according to the calculated block variance includes: sorting the block variances of all sub-image blocks in the current video frame from smallest to largest; accumulating the block variances of the first n sorted sub-image blocks, and taking the ratio of the accumulated variance sum to n as the average variance of all sub-image blocks in the current video frame, where n is a natural number greater than 1.
  • the first n sub-image blocks may be the first N% of all sub-image blocks in the current video frame, for example the first 10% of the sorted sub-image blocks.
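  • A minimal sketch of this statistic, assuming the 10% ratio mentioned above (the ratio is a tunable parameter of the example, not a fixed value of the method):

```python
import numpy as np

def average_of_smallest(block_vars, ratio=0.10):
    """Average the smallest `ratio` fraction of the block variances."""
    v = np.sort(np.asarray(block_vars, dtype=np.float64))
    n = max(1, int(len(v) * ratio))
    return float(v[:n].mean())
```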
  • the determining the noise intensity of the current video frame according to the calculated average variance includes:
  • If the calculated average variance is less than the preset variance value, record the noise intensity of the current video frame as 0; if the calculated average variance is greater than or equal to the preset variance value, use the calculated average variance as the noise intensity of the current video frame.
  • the method, before selecting the filter strength and noise characteristic curve matching the noise intensity, further includes: calculating the average of the noise intensities of the current video frame and the m video frames preceding it, where m is a natural number greater than 1; and using the calculated average noise intensity as the smoothed noise intensity of the current video frame.
  • the filtering strength includes spatial filtering strength and temporal filtering strength.
  • the sub-image blocks are sorted by variance from smallest to largest, the variances of the first (second preset value) sub-image blocks are accumulated, and the average variance of all sub-image blocks is calculated from the accumulated variance sum and the second preset value. If the average variance of all sub-image blocks is less than the third preset value, 0 is written into the first-in first-out (FIFO) queue; otherwise the average variance is written into the FIFO. As shown in Figure 2, the depth of the FIFO can be 16, i.e., the noise intensity data of the most recent 16 frames are stored.
  • All the data in the FIFO are summed and averaged (average value) to obtain the smoothed noise level of the current video frame, and then, according to the magnitude of the noise intensity, the spatial denoise strength, the temporal denoise strength, and the corresponding noise curve that match the noise intensity are selected.
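  • The FIFO smoothing described above might be sketched as follows; the depth of 16 follows the text, while the zero threshold is an assumed placeholder value. The lookup from the smoothed level to a (spatial strength, temporal strength, noise curve) triple is configuration dependent and is not shown.

```python
from collections import deque

class NoiseSmoother:
    """Keep the per-frame noise estimates of the most recent frames in a FIFO
    and report their mean as the smoothed noise intensity."""
    def __init__(self, depth=16, zero_threshold=2.0):
        self.fifo = deque(maxlen=depth)       # oldest entry drops out automatically
        self.zero_threshold = zero_threshold  # assumed "third preset value"

    def update(self, avg_variance):
        # Record 0 when the average variance is below the preset threshold,
        # otherwise record the average variance itself, then return the mean.
        self.fifo.append(0.0 if avg_variance < self.zero_threshold else float(avg_variance))
        return sum(self.fifo) / len(self.fifo)
```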
  • Step 103 Filter the current video frame according to the filter strength and noise characteristic curve.
  • the step 103 includes: performing spatial filtering on the current video frame according to the spatial filter strength and the noise characteristic curve; estimating the motion intensity and motion vector of each sub-image block of the current video frame according to the current video frame and the previous video frame of the current video frame; obtaining the weight of each pixel in the current video frame from the estimated motion intensity, obtaining the position of the pixel in the previous video frame that participates in temporal filtering from the estimated motion vector, and performing weighted-average filtering between each pixel of the spatially filtered current video frame and the pixel in the previous video frame pointed to by the motion vector corresponding to that pixel, to obtain the filtered pixel.
  • the spatial filtering algorithm is the BM3D denoising algorithm, and in the Wiener filtering operation of the BM3D algorithm the Wiener coefficients are scaled proportionally according to the brightness value of the pixel and the noise characteristic curve.
  • the step 103, as shown in Figure 3, includes five related operations: spatial denoise, motion estimation, motion detection, mixing coefficient mapping (motion2α) and blending. The inputs include the current video frame f_in(n) to be filtered, the filtered previous video frame f_out(n-1), and the spatial filter strength coefficient, temporal filter strength coefficient, noise characteristic curve and noise intensity output in step 102.
  • Spatial denoising operation: the spatial denoising operation of this application can use the BM3D denoising algorithm, but this application has improved the algorithm so that it better matches the characteristics of the noise introduced by the video capture terminal: in the Wiener filter operation, the Wiener coefficients are scaled by a certain proportion according to the brightness value of the pixel and its noise characteristic curve. The spatial denoising operation does not have to be the BM3D algorithm; filtering algorithms such as guided filtering and bilateral filtering are also possible, but their results are slightly worse than BM3D.
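  • The luminance-dependent scaling can be illustrated in isolation from the rest of BM3D. The sketch below applies empirical Wiener shrinkage to one transformed group of similar blocks, with the noise level looked up from a noise curve evaluated at the group's mean brightness; this is one possible reading of "scaling the Wiener coefficients according to brightness and the noise curve", and the curve values are illustrative assumptions, not data from the patent.

```python
import numpy as np

def wiener_shrink(group_dct, basic_dct, mean_luma, noise_curve):
    """Empirical Wiener shrinkage for one group of transform coefficients,
    with sigma taken from a brightness-dependent noise curve instead of a
    single global value (a sketch, not the full BM3D pipeline)."""
    sigma = np.interp(mean_luma, noise_curve[:, 0], noise_curve[:, 1])
    energy = basic_dct ** 2                 # energy of the first-pass (basic) estimate
    w = energy / (energy + sigma ** 2)      # Wiener coefficients, scaled via sigma(luma)
    return w * group_dct

# Example (luma, sigma) control points - purely illustrative values.
curve = np.array([[0.0, 2.0], [64.0, 4.0], [128.0, 6.0], [255.0, 9.0]])
```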
  • Motion estimation operation: the current video frame f_in(n) to be filtered is divided into blocks according to the preset value, i.e., the image of the current video frame is divided into sub-image blocks, which may overlap. Then, for each sub-image block, a minimum mean squared error (MSE) operation is performed over all sub-image blocks within a certain search range centered on the corresponding position in the filtered previous video frame f_out(n-1). The sub-image block with the smallest MSE value is set as the best matching block of the current sub-image block, and the motion vector is set to the coordinates of the best matching block in the previous frame minus the coordinates of the current sub-image block, as shown in Figure 5.
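  • A sketch of this block search under assumed parameters (block size 16, search radius 8); it returns both the motion vector of the minimum-MSE match and the SAD against that match, which the motion detection operation described next uses as the motion intensity.

```python
import numpy as np

def match_block(cur, prev_filtered, y, x, block=16, radius=8):
    """Search a (2*radius+1)^2 window of the filtered previous frame for the
    best match of the block of `cur` at (y, x).  Returns the motion vector
    (dy, dx) of the minimum-MSE match and the SAD against it."""
    h, w = prev_filtered.shape
    ref = cur[y:y + block, x:x + block].astype(np.float32)
    best_mse, best_mv, best_sad = np.inf, (0, 0), 0.0
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            yy, xx = y + dy, x + dx
            if yy < 0 or xx < 0 or yy + block > h or xx + block > w:
                continue  # candidate block falls outside the previous frame
            cand = prev_filtered[yy:yy + block, xx:xx + block].astype(np.float32)
            diff = ref - cand
            mse = float((diff ** 2).mean())
            if mse < best_mse:
                # motion vector = best-match coordinates minus current block coordinates
                best_mse, best_mv, best_sad = mse, (dy, dx), float(np.abs(diff).sum())
    return best_mv, best_sad
```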
  • Motion detection operation: from the motion estimation operation, each sub-image block in the current video frame to be filtered has a best matching block in the previous video frame. A sum of absolute differences (Sum of Absolute Difference, SAD) operation is performed between each sub-image block and its corresponding best matching block, and the SAD value of each sub-image block is regarded as its motion intensity value, where (i, j) are the two-dimensional coordinates of the pixel to be filtered, 0≤i≤M, 0≤j≤N.
  • the mixing coefficient ⁇ can be obtained by mapping as shown in Figure 6, where the abscissa is the exercise intensity value, and the ordinate is the mixing coefficient value.
  • Base motion Base_motion
  • blend_slope blend_slope
  • Top_motion top motion
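  • The piecewise-linear mapping of Figure 6 might be sketched as follows; Base_motion, blend_slope and Top_motion are the preset values named above, while the numeric defaults, the maximum alpha and the exact clamping behaviour are assumptions of this sketch.

```python
def motion_to_alpha(motion, base_motion=100.0, blend_slope=0.002,
                    top_motion=400.0, alpha_max=0.8):
    """Map a block's motion intensity (SAD) to the mixing coefficient alpha.
    blend_slope is used as the magnitude of the negative slope: alpha stays at
    alpha_max below base_motion, decreases linearly up to top_motion, and is
    clamped to its value at top_motion beyond that."""
    if motion <= base_motion:
        return alpha_max
    alpha_min = max(0.0, alpha_max - blend_slope * (top_motion - base_motion))
    alpha = alpha_max - blend_slope * (motion - base_motion)
    return max(alpha_min, min(alpha_max, alpha))
```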
  • the mixing coefficient ⁇ obtained in the mixing coefficient mapping operation the motion vector obtained in the motion estimation operation and the image after spatial filtering obtained in the spatial denoising operation, the final output image can be obtained by weighted average ,Calculated as follows:
  • f_out(n,i,j) f_in_spa(n,i,j)*(1- ⁇ )+f_out(n-1,i+mvi,j+mvj) ⁇
  • (i, j) is the two-dimensional coordinates of the pixel to be filtered
  • (mvi, mvj) is the motion vector of the pixel to be filtered
  • n is the n-th video frame in the video frame sequence
  • f_in_spa(n,i,j) is The pixel to be filtered of the n-th video frame after spatial filtering
  • f_out(n-1, i+mvi, j+mvj) is the pixel of the n-1th video frame after the filtering.
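  • The weighted average above translates directly into code; the sketch below applies it per pixel, assuming the per-block motion vectors and mixing coefficients have already been expanded to per-pixel maps.

```python
import numpy as np

def blend(f_in_spa, f_out_prev, alpha_map, mv_y, mv_x):
    """Temporal blend: mix each spatially filtered pixel with the
    motion-compensated pixel of the previous filtered frame using the
    per-pixel mixing coefficient alpha."""
    h, w = f_in_spa.shape
    ys, xs = np.mgrid[0:h, 0:w]
    # Motion-compensated coordinates into the previous filtered frame, clipped to the image.
    py = np.clip(ys + mv_y, 0, h - 1)
    px = np.clip(xs + mv_x, 0, w - 1)
    return f_in_spa * (1.0 - alpha_map) + f_out_prev[py, px] * alpha_map
```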
  • the video denoising method provided by the embodiment of the present invention divides each video frame in the input video frame sequence into sub-image blocks and calculates the block variance of each sub-image block; calculates the average variance of all sub-image blocks in the current video frame from the calculated block variances, determines the noise intensity of the current video frame from the calculated average variance, and selects a filter strength and a noise characteristic curve matching the noise intensity; and filters the current video frame according to the filter strength and the noise characteristic curve. This effectively improves the accuracy of the noise intensity estimation. By selecting a matching filter strength and noise characteristic curve according to the estimated noise intensity, noise can be removed effectively while preventing the loss of image detail caused by an excessive denoising strength, thereby achieving better overall denoising performance.
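  • Putting the preceding sketches together, a per-frame driver could look like the following; it reuses the hypothetical helpers sketched above (block_variances, average_of_smallest, NoiseSmoother) and leaves the actual spatial plus temporal filtering as a caller-supplied function, since the full BM3D and blending pipeline depends on implementation choices not fixed here.

```python
import numpy as np

def denoise_sequence(frames, spatial_temporal_filter):
    """Drive the scheme frame by frame: noise statistics, noise estimation
    with FIFO smoothing, then filtering with parameters matched to the
    smoothed noise level."""
    smoother = NoiseSmoother(depth=16)
    prev_in = frames[0]
    prev_out = frames[0].astype(np.float32)
    outputs = [prev_out]
    for cur in frames[1:]:
        variances = block_variances(cur, prev_in)              # noise statistics
        avg_var = average_of_smallest(variances, ratio=0.10)   # noise estimation
        noise_level = smoother.update(avg_var)                 # FIFO smoothing
        prev_out = spatial_temporal_filter(cur, prev_out, noise_level)
        outputs.append(prev_out)
        prev_in = cur
    return outputs
```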
  • the embodiment of the present invention also provides a computer-readable storage medium storing one or more programs, which can be executed by one or more processors to implement the following operations:
  • dividing each video frame in the input video frame sequence into sub-image blocks, and calculating the block variance of each sub-image block; calculating the average variance of all sub-image blocks in the current video frame according to the calculated block variances, determining the noise intensity of the current video frame according to the calculated average variance, and selecting a filter strength and a noise characteristic curve matching the noise intensity; and filtering the current video frame according to the filter strength and the noise characteristic curve.
  • the calculating the block variance of each sub-image block includes:
  • Calculate the spatial variance of each sub-image block; calculate the temporal variance between the sub-image block in the current video frame and the sub-image block at the corresponding position in the previous frame of the current video frame; and select the smaller of the spatial variance and the temporal variance as the block variance of the sub-image block.
  • the calculating the average variance of all sub-image blocks in the current video frame according to the calculated block variance includes: sorting the block variances of all sub-image blocks in the current video frame from smallest to largest; accumulating the block variances of the first n sorted sub-image blocks, and taking the ratio of the accumulated variance sum to n as the average variance of all sub-image blocks in the current video frame, where n is a natural number greater than 1.
  • the determining the noise intensity of the current video frame according to the calculated average variance includes:
  • If the calculated average variance is less than the preset variance value, record the noise intensity of the current video frame as 0; if the calculated average variance is greater than or equal to the preset variance value, use the calculated average variance as the noise intensity of the current video frame.
  • the operations, before selecting the filter strength and noise characteristic curve matching the noise intensity, further include: calculating the average of the noise intensities of the current video frame and the m video frames preceding it, where m is a natural number greater than 1; and using the calculated average noise intensity as the smoothed noise intensity of the current video frame.
  • the filtering strength includes spatial filtering strength and temporal filtering strength.
  • the spatial filtering algorithm is a block-matching and three-dimensional filtering (BM3D) denoising algorithm, and in the Wiener filtering operation of the BM3D algorithm the Wiener coefficients are scaled proportionally according to the brightness value of the pixel and the noise characteristic curve.
  • An embodiment of the present invention also provides a video denoising device, including a processor and a memory, wherein: the processor is used to execute a program stored in the memory to implement the following operations:
  • dividing each video frame in the input video frame sequence into sub-image blocks, and calculating the block variance of each sub-image block; calculating the average variance of all sub-image blocks in the current video frame according to the calculated block variances, determining the noise intensity of the current video frame according to the calculated average variance, and selecting a filter strength and a noise characteristic curve matching the noise intensity; and filtering the current video frame according to the filter strength and the noise characteristic curve.
  • the calculating the block variance of each sub-image block includes:
  • Calculate the spatial variance of each sub-image block; calculate the temporal variance between the sub-image block in the current video frame and the sub-image block at the corresponding position in the previous frame of the current video frame; and select the smaller of the spatial variance and the temporal variance as the block variance of the sub-image block.
  • the calculating the average variance of all sub-image blocks in the current video frame according to the calculated block variance includes: sorting the block variances of all sub-image blocks in the current video frame from smallest to largest; accumulating the block variances of the first n sorted sub-image blocks, and taking the ratio of the accumulated variance sum to n as the average variance of all sub-image blocks in the current video frame, where n is a natural number greater than 1.
  • the determining the noise intensity of the current video frame according to the calculated average variance includes:
  • If the calculated average variance is less than the preset variance value, record the noise intensity of the current video frame as 0; if the calculated average variance is greater than or equal to the preset variance value, use the calculated average variance as the noise intensity of the current video frame.
  • the operations, before selecting the filter strength and noise characteristic curve matching the noise intensity, further include: calculating the average of the noise intensities of the current video frame and the m video frames preceding it, where m is a natural number greater than 1; and using the calculated average noise intensity as the smoothed noise intensity of the current video frame.
  • the filtering strength includes spatial filtering strength and temporal filtering strength.
  • the spatial filtering algorithm is a block-matching and three-dimensional filtering (BM3D) denoising algorithm, and in the Wiener filtering operation of the BM3D algorithm the Wiener coefficients are scaled proportionally according to the brightness value of the pixel and the noise characteristic curve.
  • an embodiment of the present invention also provides a video denoising device, including a noise statistics module 701, a noise estimation module 702, and a video denoising module 703, wherein:
  • the noise statistics module 701 is configured to divide each video frame in the input video frame sequence into sub-image blocks, calculate the block variance of each sub-image block, and output the calculated block variance of the sub-image blocks to the noise estimation module 702 .
  • the noise estimation module 702 is configured to calculate the average variance of all sub-image blocks in the current video frame according to the calculated block variances, determine the noise intensity of the current video frame according to the calculated average variance, and select a filter strength and a noise characteristic curve that match the noise intensity.
  • the video denoising module 703 is configured to filter the current video frame according to the filtering strength and the noise characteristic curve.
  • the calculating the block variance of each sub-image block includes:
  • Calculate the spatial variance of each sub-image block; calculate the temporal variance between the sub-image block in the current video frame and the sub-image block at the corresponding position in the previous frame of the current video frame; and select the smaller of the spatial variance and the temporal variance as the block variance of the sub-image block.
  • the outputting the calculated block variance of the sub-image block to the noise estimation module 702 includes: sorting the block variances of all sub-image blocks in the current video frame from smallest to largest; accumulating the block variances of the first n sorted sub-image blocks, and outputting the accumulated variance sum and n to the noise estimation module 702, where n is a natural number greater than 1.
  • In this embodiment, the noisy current video frame f_in(n) and its previous video frame f_in(n-1) are input to the noise statistics module 701. The noise statistics module 701 divides f_in(n) and f_in(n-1) into sub-image blocks of the same size according to the first preset value and calculates the spatial variance δs of each sub-image block in f_in(n). The pixel values of each sub-image block in f_in(n) are subtracted from the pixel values of the sub-image block at the corresponding position in f_in(n-1) to obtain the temporal variance δt of each sub-image block in f_in(n). The final variance of each sub-image block in f_in(n) is the smaller of its spatial variance δs and temporal variance δt. After the variance of each sub-image block is obtained, the sub-image blocks are sorted by variance from smallest to largest, the variances of the first (second preset value) sub-image blocks are accumulated, and the accumulated variance sum and the second preset value are output.
  • the determining the noise intensity of the current video frame according to the calculated average variance includes:
  • If the calculated average variance is less than the preset variance value, record the noise intensity of the current video frame as 0; if the calculated average variance is greater than or equal to the preset variance value, use the calculated average variance as the noise intensity of the current video frame.
  • the noise estimation module 702 is further configured to, before selecting the filter strength and noise characteristic curve matching the noise intensity: calculate the average of the noise intensities of the current video frame and the m video frames preceding it, where m is a natural number greater than 1; and use the calculated average noise intensity as the smoothed noise intensity of the current video frame.
  • the filtering strength includes spatial filtering strength and temporal filtering strength.
  • As shown in Figure 2, the noise estimation module 702 receives the variance sum and the number of sub-image blocks output by the noise statistics module 701 and calculates the average variance of the sub-image blocks. If the average variance of the sub-image blocks is less than the third preset value, 0 is written into the FIFO; otherwise the average variance of the sub-image blocks is written into the FIFO.
  • the depth of the FIFO can be 16, i.e., the noise intensity data of the most recent 16 frames are stored. All the data in the FIFO are summed and averaged to obtain the smoothed noise intensity of the current video frame, and then, according to the magnitude of the noise intensity, the spatial filter strength, the temporal filter strength and the corresponding noise characteristic curve that match the noise intensity are selected.
  • the video denoising module 703 is configured to:
  • perform spatial filtering on the current video frame according to the spatial filter strength and the noise characteristic curve; estimate the motion intensity and motion vector of each sub-image block of the current video frame according to the current video frame and the previous video frame; obtain the weight of each pixel in the current video frame from the estimated motion intensity, obtain the position of the pixel in the previous video frame that participates in temporal filtering from the estimated motion vector, and perform weighted-average filtering between each pixel of the spatially filtered current video frame and the pixel in the previous video frame pointed to by the motion vector corresponding to that pixel, to obtain the filtered pixel.
  • the spatial filtering algorithm may be the BM3D denoising algorithm, and in the Wiener filtering operation of the BM3D denoising algorithm the Wiener coefficients are scaled proportionally according to the brightness value of the pixel and the noise characteristic curve.
  • In this embodiment, the inputs of the video denoising module 703 are: the current video frame to be filtered f_in(n), the filtered previous video frame f_out(n-1), and the spatial filter strength coefficient, temporal filter strength coefficient, noise characteristic curve and noise intensity output by the noise estimation module 702.
  • the video denoising module 703 includes five submodules: a spatial denoising submodule, a motion estimation submodule, a motion detection submodule, a mixing coefficient mapping submodule and a mixing submodule.
  • the spatial denoising submodule performs spatial filtering on f_in(n) according to the spatial filter strength and noise characteristic curve transmitted from the noise estimation module 702, to obtain the spatially filtered image f_in_spa(n); the motion estimation submodule calculates the motion vector value of each sub-image block in f_in(n) from the two input frames; the motion detection submodule performs block-based motion detection on all sub-image blocks in the current video frame f_in(n) to obtain the motion intensity of each sub-image block; and the spatially filtered image f_in_spa(n), the motion vectors output by the motion estimation submodule and the motion intensity information output by the motion detection submodule are output to the temporal filter (comprising the mixing coefficient mapping submodule and the mixing submodule) for temporal filtering.
  • the temporal filter first obtains the weight of each pixel participating in temporal filtering from the motion intensity information and obtains the position of the pixel participating in temporal filtering in the previous video frame from the motion vector information, and then the pixels of the current video frame and the pixels in the previous video frame pointed to by the motion vectors are subjected to weighted-average filtering to obtain the final filtered pixels.
  • the working principles of the submodules are as follows:
  • Spatial denoising submodule: as shown in Figure 4, the spatial denoising submodule of this application uses the BM3D denoising algorithm, improved so that it better matches the characteristics of the noise introduced by the video capture terminal: in the Wiener filter operation, the Wiener coefficients are scaled by a certain proportion according to the brightness value of the pixel and its noise characteristic curve.
  • the spatial denoising submodule of this application does not necessarily have to use the BM3D algorithm; filtering algorithms such as guided filtering and bilateral filtering are all possible, but their results are slightly worse.
  • Motion estimation submodule: the current video frame f_in(n) to be filtered is divided into blocks according to the preset value, i.e., the image of f_in(n) is divided into sub-image blocks, which may overlap. Then, for each sub-image block, an MSE operation is performed over all sub-image blocks within a certain search range centered on the corresponding position in the filtered previous video frame f_out(n-1). The sub-image block with the smallest MSE value is set as the best matching block of the current sub-image block, and the motion vector is set to the coordinates of the best matching block in the previous frame minus the coordinates of the current sub-image block, as shown in Figure 5.
  • Motion detection submodule: according to the motion estimation submodule, each sub-image block in the current video frame to be filtered has a best matching block in the previous video frame. A SAD operation is performed between each sub-image block and its corresponding best matching block, and the SAD value of each sub-image block is regarded as its motion intensity value, where (i, j) are the two-dimensional coordinates of the pixel to be filtered, 0≤i≤M, 0≤j≤N.
  • the mixing coefficient ⁇ can be obtained by mapping according to the exercise intensity value calculated by the motion detection sub-module.
  • the abscissa is the exercise intensity value and the ordinate is the mixing coefficient value.
  • Base_motion, blend_slope, Top_motion are three preset values. The corresponding mapping relationship can be determined through these three preset values. The three preset values must ensure that the slope of the line segment is negative, that is, the more the movement Stronger, the smaller the mixing coefficient, otherwise it will cause motion blur and even smear.
  • Mixing submodule: according to the mixing coefficient α obtained by the mixing coefficient mapping submodule, the motion vector obtained by the motion estimation submodule, and the spatially filtered image obtained by the spatial denoising submodule, the final output image is obtained by weighted averaging, calculated as follows:
  • f_out(n,i,j) = f_in_spa(n,i,j)*(1-α) + f_out(n-1,i+mvi,j+mvj)*α
  • where (i, j) are the two-dimensional coordinates of the pixel to be filtered, (mvi, mvj) is the motion vector of the pixel to be filtered, n denotes the n-th video frame in the video frame sequence, f_in_spa(n,i,j) is the pixel to be filtered of the n-th video frame after spatial filtering, and f_out(n-1, i+mvi, j+mvj) is the pixel of the filtered (n-1)-th video frame.
  • the video denoising method and device and computer-readable storage medium provided by the embodiments of the present invention combine an image noise estimation method with a video image denoising method for joint denoising, solving the problems in the related art that image noise estimation is inaccurate and that denoising performance and image quality cannot both be obtained.
  • the video denoising solution proposed in the embodiments of the present invention includes three modules: a noise statistics module 701, a noise estimation module 702 and a video denoising module 703.
  • the noise statistics module 701 divides the input video frame sequence into blocks and collects statistics related to the noise intensity of the current video frame; based on the noise intensity information (mainly block variance information) computed by the noise statistics module 701, and after certain preprocessing, the noise estimation module 702 selects the denoising-related parameters adjusted in real time (including the spatial denoising strength, temporal denoising strength, noise characteristic curve and noise intensity) and sends them to the video denoising module 703; and the video denoising module 703 performs video denoising according to the denoising-related parameters issued in real time by the noise estimation module 702.
  • Adopting the video denoising solution of this application has the following advantages: (1) the noise intensity is estimated according to the statistical characteristics of the noise, in two ways: one based on two consecutive video frames and the other a noise estimate for the current video frame alone; the two algorithms verify each other, so the accuracy is higher.
  • (2) the noise intensity information calculated by this application is averaged over (m+1) frames, so the resulting noise intensity is smoother and does not jump sharply, avoiding the flicker in which one frame is clear and the next is blurred, or one frame has residual noise and the next has none.
  • (3) this application combines the spatial BM3D denoising algorithm, temporal motion compensation and motion intensity detection; the algorithm performs better while its complexity is not too high, striking a good balance between effect and complexity.
  • (4) based on the principle that the noise variance is proportional to brightness, this application introduces a noise characteristic curve and dynamically adjusts the denoising strength according to the noise brightness, thereby achieving a better denoising effect.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Picture Signal Circuits (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)
  • Image Processing (AREA)
  • Image Analysis (AREA)

Abstract

Disclosed herein are a video denoising method and device, and a computer-readable storage medium. The video denoising method includes: dividing each video frame in an input video frame sequence into sub-image blocks, and calculating the block variance of each sub-image block; calculating the average variance of all sub-image blocks in the current video frame according to the calculated block variances, determining the noise intensity of the current video frame according to the calculated average variance, and selecting a filter strength and a noise characteristic curve matching the noise intensity; and filtering the current video frame according to the filter strength and the noise characteristic curve.

Description

视频去噪方法、装置及计算机可读存储介质
本申请要求在2019年07月29日提交中国专利局、申请号为201910691413.1的中国专利申请的优先权,该申请的全部内容通过引用结合在本申请中。
技术领域
本申请涉及视频处理技术领域,例如涉及一种视频去噪方法、装置及计算机可读存储介质。
背景技术
图像去噪一直是图像处理领域非常重要的一个方向,随着手机拍照的流行,摄影技术已经发生了翻天覆地的变化,从最开始的专业数码单镜反光相机,变成了更简单的智能手机上的傻瓜相机。由于光圈和传感器的大小的限制,智能手机会比单反产生更多的噪声,导致接收到的图像或视频与原始的图像或视频相比分辨力降低,不仅影响视觉效果,对于需要从中获取或识别运动目标的图像或视频,更是影响了获取或识别工作的准确性,因此需要更好的去噪算法来实现图像质量的提升。自适应去噪算法都是通过对噪声强度的估计,然后动态调节去噪相关的参数,从而达到既没有噪声残留又尽可能地保留图像细节的效果,但是,自适应去噪算法存在对当前帧图像噪声强度估计准确率低的问题。
噪声估计算法主要有以下两类:
第一类:针对当前图像帧进行噪声强度估计。
步骤如下:1)将待估计图像或者视频帧图像划分成大小一致的子图像块;2)对得到的子图像块分别进行方差计算,得到每个子图像块的方差值;3)根据每个子图像块的方差值,选取一定比例的较小的方差进行噪声强度估计,进而得到当前图像帧的噪声强度,这种算法对于细节比较丰富的图像,误差比较大,容易将细节当做噪声。
第二类:针对当前帧与前一帧进行噪声强度估计。
步骤如下:1)将待估计视频的当前帧图像与前一帧图像,划分成大小一致的一一对应的子图像块;2)对得到的一一对应的子图像块分别进行差值计算,得到每个子图像块的方差值;3)根据每个子图像块的方差值,选取一定比例的较小的方差进行噪声强度估计,进而得到当前图像帧的噪声强度,这种算法在视频前后帧亮度有变化或者前后帧有大规模运动的时候,容易发生误判。
当对图像帧噪声强度估计出现偏差时,不合理的去噪参数会导致图像一帧 清晰,一帧模糊,或者一帧有噪声残留,一帧没有噪声残留的闪烁现象。
而对视频去噪效果比较优秀的算法,比如视频块匹配和三维过滤(Video Block-Matching and 3D filtering,VBM3D),视频块匹配和四维过滤(Video Block-Matching and 4D filtering,VBM4D)等,时间复杂度比较高,硬件资源代价比较高。此外,很多去噪算法没有考虑亮度对噪声的影响,对一帧内的所有的像素点采用统一的去噪强度,这样的处理其实并不符合高斯噪声的特性。
发明内容
本发明实施例提供了一种视频去噪方法和装置、计算机可读存储介质,能够提高噪声强度估计的准确率。
本发明实施例提供了一种视频去噪方法,包括:
对输入视频帧序列中的每一视频帧进行子图像块划分,并计算每个子图像块的块方差;
根据计算出的块方差计算当前视频帧中所有子图像块的平均方差,根据计算出的平均方差确定当前视频帧的噪声强度,选择与所述噪声强度相匹配的滤波强度及噪声特征曲线;
根据所述滤波强度及噪声特征曲线,对当前视频帧进行滤波。
本发明实施例还提供了一种计算机可读存储介质,所述计算机可读存储介质存储有一个或者多个程序,所述一个或者多个程序可被一个或者多个处理器执行,以实现以上所述的视频去噪方法。
本发明实施例还提供了一种视频去噪装置,包括处理器及存储器,所述处理器及存储器通过电耦合进行连接,所述处理器设置为执行存储器中存储的程序,以实现以上所述的视频去噪方法。
本发明实施例还提供了一种视频去噪装置,包括噪声统计模块、噪声估计模块和视频去噪模块;
噪声统计模块,设置为对输入视频帧序列中的每一视频帧进行子图像块划分,并计算每个子图像块的块方差;
噪声估计模块,设置为根据计算出的块方差计算当前视频帧中所有子图像块的平均方差,根据计算出的平均方差确定当前视频帧的噪声强度,选择与所述噪声强度相匹配的滤波强度及噪声特征曲线;
视频去噪模块,设置为根据所述滤波强度及噪声特征曲线,对当前视频帧进行滤波。
附图说明
图1为本发明实施例提供的一种视频去噪方法的示例性流程示意图;
图2为本发明实施例提供的一种先入先出队列对噪声强度进行平滑的原理示意图;
图3为本发明实施例提供的一种视频去噪过程的示例性流程示意图;
图4为本发明实施例提供的一种空域去噪过程的示例性流程示意图;
图5为本发明实施例提供的一种基于运动补偿的运动矢量的计算原理示意图;
图6为本发明实施例提供的一种运动强度与混合系数的映射关系示意图;
图7为本发明实施例提供的一种视频去噪装置的示例性结构示意图。
具体实施方式
下文中将结合附图对本发明实施例进行说明。
在附图的流程图示出的步骤可以在诸如一组计算机可执行指令的计算机系统中执行。并且,虽然在流程图中示出了逻辑顺序,但是在一些情况下,可以以不同于此处的顺序执行所示出或描述的步骤。
如图1所示,本发明实施例提供了一种视频去噪方法,包括如下步骤:
步骤101:对输入视频帧序列中的每一视频帧进行子图像块划分,并计算每个子图像块的块方差。
在一种示例性实施例中,所述计算每个子图像块的块方差,包括:
计算所述每个子图像块的空域方差;计算所述当前视频帧中所述子图像块与所述当前视频帧的前一帧视频帧对应位置的子图像块之间的时域方差;选择空域方差和时域方差中的较小值作为所述子图像块的块方差。
在该实施例中,输入带有噪声的视频帧f_in(n)与其前一帧视频帧f_in(n-1),根据第一预设值将f_in(n)与f_in(n-1)划分为大小相同的子图像块,针对f_in(n)中的每个子图像块计算得到他们的空域方差δs,f_in(n)中的每个子图像块的像素值减去f_in(n-1)中对应位置的子图像块的像素值,得到f_in(n)中每个子图像块的时域方差δt,f_in(n)中每个子图像块最后的方差为该子图像块的空域方差δs与时域方差δt中的较小值。
步骤102:根据计算出的块方差计算当前视频帧中所有子图像块的平均方差,根据计算出的平均方差确定当前视频帧的噪声强度,选择与所述噪声强度 相匹配的滤波强度及噪声特征曲线。
在一种示例性实施例中,所述根据计算出的块方差计算当前视频帧中所有子图像块的平均方差,包括:
将当前视频帧中所有子图像块的块方差从小到大排序;对排序后的前n个子图像块的块方差进行累加,将累加的块方差和与n的比值作为当前视频帧中所有子图像块的平均方差,其中,n为大于1的自然数。
在该实施例中,所述前n个子图像块可以为当前视频帧中所有子图像块中的前N%个子图像块。例如,可以设置为排序后的子图像块中的前10%个子图像块。
在一种示例性实施例中,所述根据计算出的平均方差确定当前视频帧的噪声强度,包括:
如果计算出的平均方差小于预设方差值,记录所述当前视频帧的噪声强度为0;如果计算出的平均方差大于或等于预设方差值,将计算出的平均方差作为当前视频帧的噪声强度。
在一种示例性实施例中,所述在选择与所述噪声强度相匹配的滤波强度及噪声特征曲线之前,所述方法还包括:
计算当前视频帧及其之前的m帧视频帧的噪声强度的平均值,其中,m为大于1的自然数;将计算出的噪声强度的平均值作为当前视频帧平滑后的噪声强度。
在一种示例性实施例中,所述滤波强度包括空域滤波强度和时域滤波强度。
在该实施例中,得到f_in(n)中每个子图像块的方差值之后,把多个子图像块按照方差值从小到大排序,把第二预设值个子图像块的方差进行累加,根据累加的方差和以及第二预设值的大小计算所有子图像块的平均方差,如果所有子图像块的平均方差小于第三预设值,则把0写入FIFO中,否则把子图像块的平均方差写入先入先出(First Input First Output,FIFO)队列中,如图2所示,FIFO的深度可以为16,即存储最近16帧的噪声强度数据。把FIFO中所有的数据求和平均后(Average value),得到当前视频帧的平滑后的噪声强度(noise level),然后根据噪声强度的大小,选择与噪声强度相匹配的空域滤波强度(Spatial denoise strength),时域滤波强度(temporal denoise strength),以及对应噪声特征曲线(noise curve)。
步骤103:根据所述滤波强度及噪声特征曲线,对当前视频帧进行滤波。
在一种示例性实施例中,所述步骤103包括:
根据空域滤波强度及噪声特征曲线,对当前视频帧进行空域滤波;根据所述当前视频帧及所述当前视频帧的前一帧视频帧估算所述当前视频帧的每个子图像块的运动强度及运动矢量;根据估算出的运动强度得到当前视频帧中每个像素点的权重,根据估算出的运动矢量得到前一帧视频帧中参与时域滤波的像素点的位置,将空域滤波后的当前视频帧中的像素点与所述像素点对应的运动矢量指向的前一帧视频帧中的像素点进行加权平均滤波,得到滤波后的像素点。
在一种示例性实施例中,所述空域滤波的算法为BM3D去噪声算法,且在BM3D去噪声算法的维纳滤波操作中对维纳系数根据像素点的亮度值及噪声特征曲线,进行相应比例的缩放操作。
在该实施例中,如图3所示,所述步骤103包含五个相关操作:空域去噪(Spatial denoise)、运动估计(Motion estimate)、运动检测(Moition detector)、混合系数映射(motion2α)及混合(blending),输入包括当前待滤波的视频帧f_in(n)、滤波后的前一帧视频帧f_out(n-1)以及步骤102输出的空域滤波强度系数、时域滤波强度系数、噪声特征曲线以及噪声强度。
空域去噪操作:如图4所示,本申请的空域去噪操作可以采用BM3D的去噪声算法,但是本申请对该算法进行了改进,使得其算法更符合视频采集端引入的噪声的特性,本申请在维纳滤波(Wiener filter)的操作中对维纳系数根据像素点的亮度值及其噪声特征曲线,进行一定比例的缩放操作。本申请的空域去噪操作不一定非得是BM3D算法,引导滤波、双边滤波等滤波算法也都可以,只是处理出来效果比BM3D算法稍微差一些。
运动估计操作:把当前待滤波的视频帧f_in(n),按照预设值进行分块操作,把当前视频帧的图像划分成一块块子图像块,子图像块可以重叠,然后针对每一个子图像块,对滤波后的前一帧视频帧f_out(n-1)中对应位置为中心的一定搜索范围内的所有子图像块进行最小均方误差(Minimum Squared Error,MSE)操作,将求得的最小的MSE值对应的子图像块,设置成当前视频帧中当前子图像块对应的最佳匹配块,运动矢量设置为最佳匹配块在上一帧图像中的坐标减去当前子图像块的坐标值,如图5所示。
运动检测操作:根据前述运动估计操作中,当前待滤波的视频帧中的每一个子图像块在前一帧视频帧中有一个与之相匹配的最佳匹配块,将每一个子图像块与它对应的最佳匹配块做绝对差的总和(Sum of Absolute Difference,SAD)操作。
Figure PCTCN2020101806-appb-000001
将每个子图像块的SAD值当做它的运动强度值其中,(i,j)为待滤波像素点的二维坐标,0≤i≤M,0≤j≤N。
混合系数映射操作:根据上述运动检测操作求出的运动强度值,根据图6所示进行映射,就可以得到混合系数α,图6中横坐标为运动强度值,纵坐标为混合系数值,其中,基础运动(Base_motion)、混合坡度(blend_slope)、顶部运动(Top_motion)为三个预设值,通过这三个预设值就可以把对应的映射关系给确定下来,这三个预设值要保证线段的斜率为负值,即运动越强,混合系数越小,不然会导致运动模糊,甚至拖影的产生。
混合操作:根据混合系数映射操作中得到的混合系数α、运动估计操作中得到的运动矢量以及空域去噪操作中得到的空域滤波之后的图像,最后的输出图像就可以通过加权平均的方式来获得,计算公式如下:
f_out(n,i,j)=f_in_spa(n,i,j)*(1-α)+f_out(n-1,i+mvi,j+mvj)α
(i,j)为待滤波像素点的二维坐标,(mvi,mvj)为待滤波像素点的运动矢量,n为视频帧序列中第n帧视频帧,f_in_spa(n,i,j)为空域滤波后的第n帧视频帧的待滤波像素点,f_out(n-1,i+mvi,j+mvj)为滤波后的第n-1帧视频帧的像素点。
与相关技术相比,本发明实施例提供的视频去噪方法,包括对输入视频帧序列中的每一视频帧进行子图像块划分,并计算每个子图像块的块方差;根据计算出的块方差计算当前视频帧中所有子图像块的平均方差,根据计算出的平均方差确定当前视频帧的噪声强度,选择与所述噪声强度相匹配的滤波强度及噪声特征曲线;根据所述滤波强度及噪声特征曲线,对当前视频帧进行滤波,有效地提高了噪声强度估计的准确率,根据预测出的噪声强度选择相匹配的滤波强度及噪声特征曲线,能有效地去除噪声,又能防止因为去噪强度过大导致图像细节丢失的问题,从而达到整体去噪效果较优的性能。
本发明实施例还提供了一种计算机可读存储介质,所述计算机可读存储介质存储有一个或者多个程序,所述一个或者多个程序可被一个或者多个处理器执行,以实现如下操作:
对输入视频帧序列中的每一视频帧进行子图像块划分,并计算每个子图像块的块方差;根据计算出的块方差计算当前视频帧中所有子图像块的平均方差,根据计算出的平均方差确定当前视频帧的噪声强度,选择与所述噪声强度相匹配的滤波强度及噪声特征曲线;根据所述滤波强度及噪声特征曲线,对当前视频帧进行滤波。
在一种示例性实施例中,所述计算每个子图像块的块方差,包括:
计算每个子图像块的空域方差;计算所述当前视频帧中所述子图像块与所 述当前视频帧的前一帧对应位置的子图像块之间的时域方差;选择空域方差和时域方差中的较小值作为所述子图像块的块方差。
在一种示例性实施例中,所述根据计算出的块方差计算当前视频帧中所有子图像块的平均方差,包括:
将所述当前视频帧中所有子图像块的块方差从小到大排序;对排序后的前n个子图像块的块方差进行累加,将所述累加的块方差和与n的比值作为所述当前视频帧中所有子图像块的平均方差,其中,n为大于1的自然数。
在一种示例性实施例中,所述根据计算出的平均方差确定当前视频帧的噪声强度,包括:
如果所述计算出的平均方差小于预设方差值,记录所述当前视频帧的噪声强度为0;如果所述计算出的平均方差大于或等于预设方差值,将计算出的平均方差作为所述当前视频帧的噪声强度。
在一种示例性实施例中,所述在选择与所述噪声强度相匹配的滤波强度及噪声特征曲线之前,所述操作还包括:
计算所述当前视频帧及其之前的m帧视频帧的噪声强度的平均值,其中,m为大于1的自然数;将计算出的噪声强度的平均值作为所述当前视频帧平滑后的噪声强度。
在一种示例性实施例中,所述滤波强度包括空域滤波强度和时域滤波强度。
所述根据所述滤波强度及噪声特征曲线,对当前视频帧进行滤波,包括:
根据空域滤波强度及所述噪声特征曲线,对所述当前视频帧进行空域滤波;根据所述当前视频帧及所述当前视频帧的前一帧视频帧估算当前视频帧的每个子图像块的运动强度及运动矢量;根据估算出的运动强度得到当前视频帧中每个像素点的权重,根据估算出的运动矢量得到前一帧视频帧中参与时域滤波的像素点的位置,将空域滤波后的当前视频帧中的像素点与所述像素点对应的运动矢量指向的前一帧视频帧中的像素点进行加权平均滤波,得到滤波后的像素点。
在一种示例性实施例中,所述空域滤波的算法为块匹配和三维过滤BM3D去噪声算法,且在BM3D去噪声算法的维纳滤波操作中对维纳系数根据像素点的亮度值及噪声特征曲线,进行相应比例的缩放操作。
本发明实施例还提供了一种视频去噪装置,包括处理器及存储器,其中:所述处理器用于执行存储器中存储的程序,以实现如下操作:
对输入视频帧序列中的每一视频帧进行子图像块划分,并计算每个子图像块的块方差;根据计算出的块方差计算当前视频帧中所有子图像块的平均方差,根据计算出的平均方差确定当前视频帧的噪声强度,选择与所述噪声强度相匹配的滤波强度及噪声特征曲线;根据所述滤波强度及噪声特征曲线,对当前视频帧进行滤波。
在一种示例性实施例中,所述计算每个子图像块的块方差,包括:
计算所述每个子图像块的空域方差;计算所述当前视频帧中所述子图像块与所述当前视频帧的前一帧对应位置的子图像块之间的时域方差;选择空域方差和时域方差中的较小值作为所述子图像块的块方差。
在一种示例性实施例中,所述根据计算出的块方差计算当前视频帧中所有子图像块的平均方差,包括:
将所述当前视频帧中所有子图像块的块方差从小到大排序;对排序后的前n个子图像块的块方差进行累加,将所述累加的块方差和与n的比值作为所述当前视频帧中所有子图像块的平均方差,其中,n为大于1的自然数。
在一种示例性实施例中,所述根据计算出的平均方差确定当前视频帧的噪声强度,包括:
如果所述计算出的平均方差小于预设方差值,记录所述当前视频帧的噪声强度为0;如果所述计算出的平均方差大于或等于预设方差值,将计算出的平均方差作为所述当前视频帧的噪声强度。
在一种示例性实施例中,所述在选择与所述噪声强度相匹配的滤波强度及噪声特征曲线之前,所述操作还包括:
计算所述当前视频帧及其之前的m帧视频帧的噪声强度的平均值,其中,m为大于1的自然数;将计算出的噪声强度的平均值作为所述当前视频帧平滑后的噪声强度。
在一种示例性实施例中,所述滤波强度包括空域滤波强度和时域滤波强度。
所述根据所述滤波强度及噪声特征曲线,对当前视频帧进行滤波,包括:
根据空域滤波强度及所述噪声特征曲线,对所述当前视频帧进行空域滤波;根据所述当前视频帧及所述当前视频帧的前一帧视频帧估算当前视频帧的每个子图像块的运动强度及运动矢量;根据估算出的运动强度得到当前视频帧中每个像素点的权重,根据估算出的运动矢量得到前一帧视频帧中参与时域滤波的像素点的位置,将空域滤波后的当前视频帧中的像素点与所述像素点对应的运动矢量指向的前一帧视频帧中的像素点进行加权平均滤波,得到滤波后的像素 点。
在一种示例性实施例中,所述空域滤波的算法为块匹配和三维过滤BM3D去噪声算法,且在BM3D去噪声算法的维纳滤波操作中对维纳系数根据像素点的亮度值及噪声特征曲线,进行相应比例的缩放操作。
如图7所示,本发明实施例还提供了一种视频去噪装置,包括噪声统计模块701、噪声估计模块702和视频去噪模块703,其中:
噪声统计模块701,设置为对输入视频帧序列中的每一视频帧进行子图像块划分,并计算每个子图像块的块方差,将计算出的子图像块的块方差输出至噪声估计模块702。
噪声估计模块702,设置为根据计算出的块方差计算当前视频帧中所有子图像块的平均方差,根据计算出的平均方差确定当前视频帧的噪声强度,选择与所述噪声强度相匹配的滤波强度及噪声特征曲线。
视频去噪模块703,设置为根据所述滤波强度及噪声特征曲线,对当前视频帧进行滤波。
在一种示例性实施例中,所述计算每个子图像块的块方差,包括:
计算每个子图像块的空域方差;计算所述当前视频帧中所述子图像块与所述当前视频帧的前一帧对应位置的子图像块之间的时域方差;选择空域方差和时域方差中的较小值作为所述子图像块的块方差。
在一种示例性实施例中,所述将计算出的子图像块的块方差输出至噪声估计模块702,包括:
将当前视频帧中所有子图像块的块方差从小到大排序;对排序后的子图像块中前n个子图像块的块方差进行累加,把累加的块方差和与n输出至噪声估计模块702,其中,n为大于1的自然数。
在该实施例中,带有噪声的当前视频帧f_in(n)与其前一帧视频帧f_in(n-1)输入至噪声统计模块701,噪声统计模块701根据第一预设值将f_in(n)与f_in(n-1)划分为大小相同的子图像块,针对f_in(n)中的每个子图像块计算得到他们的空域的方差δs,f_in(n)中的每个子图像块的像素值减去f_in(n-1)中对应位置的子图像块像素值,得到f_in(n)中每个块的时域方差δt,f_in(n)中每个子图像块最后的方差为该子图像块的空域的方差δs与时域的方差δt中最小值,得到f_in(n)中每个子图像块的方差值之后,把多个子图像块按照方差值从小到大排序,根据第二预设值,把第二预设值个子图像块的方差进行累加,然后把累加的方差 和,以及第二预设值的大小输出。
在一种示例性实施例中,所述根据计算出的平均方差确定当前视频帧的噪声强度,包括:
如果计算出的平均方差小于预设方差值,记录所述当前视频帧的噪声强度为0;如果计算出的平均方差大于或等于预设方差值,将计算出的平均方差作为当前视频帧的噪声强度。
在一种示例性实施例中,所述在选择与所述噪声强度相匹配的滤波强度及噪声特征曲线之前,所述噪声估计模块702还设置为:
计算当前视频帧及其之前的m帧视频帧的噪声强度的平均值,其中,m为大于1的自然数;将计算出的噪声强度的平均值作为当前视频帧平滑后的噪声强度。
在一种示例性实施例中,所述滤波强度包括空域滤波强度和时域滤波强度。
如图2所示,噪声估计模块702接收到噪声统计模块701输出的方差和与子图像块数,计算每个子图像块的平均方差,如果子图像块的平均方差小于第三预设值,则把0写入FIFO中,否则把子图像块的平均方差写入FIFO中,FIFO的深度可以为16,即存储最近16帧的噪声强度数据;把FIFO中所有数据的求和平均后,得到当前视频帧的平滑后的噪声强度,然后根据噪声强度的大小,选择与噪声强度相匹配的空域滤波强度、时域滤波强度以及对应的噪声特征曲线。
在一种示例性实施例中,所述视频去噪模块703是设置为:
根据空域滤波强度及噪声特征曲线,对当前视频帧进行空域滤波;根据当前视频帧及其前一帧视频帧估算当前视频帧的每个子图像块的运动强度及运动矢量;根据估算出的运动强度得到当前视频帧中每个像素点的权重,根据估算出的运动矢量得到前一帧视频帧中参与时域滤波的像素点的位置,将空域滤波后的当前视频帧中的像素点与所述像素点对应的运动矢量指向的前一帧视频帧中的像素点进行加权平均滤波,得到滤波后的像素点。
在一种示例性实施例中,所述空域滤波的算法可以为BM3D去噪声算法,且在BM3D去噪声算法的维纳滤波操作中对维纳系数根据像素点的亮度值及噪声特征曲线,进行相应比例的缩放操作。
在该实施例中,如图3所示,视频去噪模块703的输入为:当前待滤波的视频帧f_in(n),滤波后的前一帧视频帧f_out(n-1),以及噪声估计模块702输出的空域滤波强度系数、时域滤波强度系数、噪声特征曲线以及噪声强度。视频去噪模块703中包含五个子模块:空域去噪子模块,运动估计子模块,运动检 测子模块,混合系数映射子模块及混合子模块。其中,空域去噪子模块根据噪声估计模块702传过来的空域滤波强度以及噪声特征曲线,对f_in(n)进行空域滤波得到空域滤波后的图像f_in_spa(n);运动估计子模块根据输入的两帧图像,计算出f_in(n)中每个子图像块的运动矢量值;运动检测子模块基于块的方式对当前视频帧f_in(n)中所有的子图像块进行运动检测,得到每个子图像块的运动强度;空域滤波之后的图像f_in_spa(n),运动估计子模块输出的运动矢量,运动检测子模块输出的运动强度信息,输出给时域滤波器(包括混合系数映射子模块及混合子模块)进行时域滤波,时域滤波器先根据运动强度信息得到参与时域滤波的每个像素点的权重,根据运动矢量信息得到前一帧视频帧中参与时域滤波的像素点的位置,然后当前视频帧的像素点与前一帧视频帧中运动矢量指向的像素点进行加权平均滤波得到最后滤波的像素点。多个模块的工作原理如下:
空域去噪子模块:如图4所示,本申请的空域去噪子模块采用的是BM3D的去噪声算法,并对其算法进行了改进,使得其算法更符合视频采集端引入的噪声的特性,本申请在维纳滤波(Wiener filter)的操作中对维纳系数根据像素点的亮度值及其噪声特征曲线,进行一定比例的缩放操作,本申请的空域去噪子模块不一定非得使用BM3D算法,引导滤波、双边滤波等滤波算法都可以,只是处理出来效果稍微差一些。
运动估计子模块:把当前待滤波的视频帧f_in(n),按照预设值进行分块操作,把f_in(n)的图像划分成一块块子图像块,子图像块可以重叠,然后针对每一个子图像块,对滤波后的前一帧视频帧f_out(n-1)中对应位置为中心的一定搜索范围内的所有子图像块进行MSE操作,将求得的最小的MSE值对应的子图像块,设置成当前视频帧中当前子图像块对应的最佳匹配块,运动矢量设置为最佳匹配块在上一帧图像中的坐标减去当前子图像块的坐标值,如图5所示。
运动检测子模块:根据运动估计子模块计算出的当前待滤波的视频帧中的每一个子图像块在前一帧视频帧中有一个与之相匹配的最佳匹配块,将每一个子图像块与它对应的最佳匹配块做SAD操作。
Figure PCTCN2020101806-appb-000002
将每个子图像块的SAD值当做它的运动强度值,其中,(i,j)为待滤波像素点的二维坐标,0≤i≤M,0≤j≤N。
混合系数映射子模块:如图6所示,根据运动检测子模块计算出的运动强度值进行映射,就可以得到混合系数α,图6中横坐标为运动强度值,纵坐标为混合系数值,其中,Base_motion,blend_slope,Top_motion为三个预设值,通过这三个预设值就可以把对应的映射关系给确定下来,这三个预设值要保证 线段的斜率为负值,即运动越强,混合系数越小,不然会导致运动模糊,甚至拖影的产生。
混合子模块:根据混合系数映射子模块得到的混合系数α,运动估计子模块得到的运动矢量,以及空域去噪子模块得到的空域滤波之后的图像,最后的输出图像就可以通过加权平均的方式来获得,计算公式如下:
f_out(n,i,j)=f_in_spa(n,i,j)*(1-α)+f_out(n-1,i+mvi,j+mvj)α
(i,j)为待滤波像素点的二维坐标,(mvi,mvj)为待滤波像素点的运动矢量,n为视频帧序列中第n帧视频帧,f_in_spa(n,i,j)为空域滤波后的第n帧视频帧的待滤波像素点,f_out(n-1,i+mvi,j+mvj)为滤波后的第n-1帧视频帧的像素点。
本发明实施例提供的视频去噪方法和装置、计算机可读存储介质,通过将图像的噪声估计方法与视频图像去噪方法相结合联合去噪,解决了相关技术中对图像的噪声估计不准确,以及去噪性能与图像质量不能兼得的问题。本发明实施例提出的视频去噪方案包括三个模块:噪声统计模块701,噪声估计模块702以及视频去噪模块703。噪声统计模块701根据输入视频帧序列进行分块划分,统计出当前视频帧的噪声强度相关的信息;噪声估计模块702根据噪声统计模块701统计出来的噪声强度信息(主要是块方差信息),经过一定的预处理之后,选择实时调节的去噪相关的参数(包括空域去噪强度、时域去噪强度、噪声特性曲线以及噪声强度)并下发到视频去噪模块703;视频去噪模块703根据噪声估计模块702实时下发的去噪相关的参数,进行视频去噪。
采用本申请所述的视频去噪方案,具有如下优点:
(1)根据噪声统计特性来进行噪声强度估计,噪声强度估计有两种方式,一种是基于视频前后两帧的估计,另一种是针对当前视频帧的噪声估计,两种算法相互验证,准确率更高。
(2)本申请计算出的噪声强度信息,做了一个(m+1)帧的平均,得到的噪声强度更平滑,不会出现噪声强度跳跃很大,导致的图像一帧清晰,一帧模糊,或者一帧有噪声残留,一帧没有噪声残留的闪烁现象。
(3)本申请采用的是空域BM3D去噪算法、时域运动补偿以及运动强度检测相结合的算法,算法效果更好,复杂度又不算太高,在效果与复杂度之间取了一个比较好的平衡点。
(4)根据噪声方差跟亮度成正比的原理,本申请引入噪声特征曲线,根据噪声亮度动态调节去噪强度,从而达到了更好的去噪效果。

Claims (10)

  1. A video denoising method, comprising:
    dividing each video frame in an input video frame sequence into sub-image blocks, and calculating the block variance of each sub-image block;
    calculating the average variance of all sub-image blocks in a current video frame according to the calculated block variances, determining the noise intensity of the current video frame according to the calculated average variance, and selecting a filter strength and a noise characteristic curve matching the noise intensity; and
    filtering the current video frame according to the filter strength and the noise characteristic curve.
  2. The method according to claim 1, wherein calculating the block variance of each sub-image block comprises:
    calculating the spatial variance of each sub-image block;
    calculating the temporal variance between the sub-image block in the current video frame and the sub-image block at the corresponding position in the previous video frame of the current video frame; and
    selecting the smaller of the spatial variance and the temporal variance as the block variance of the sub-image block.
  3. The method according to claim 1, wherein calculating the average variance of all sub-image blocks in the current video frame according to the calculated block variances comprises:
    sorting the block variances of all sub-image blocks in the current video frame from smallest to largest; and
    accumulating the block variances of the first n sorted sub-image blocks, and taking the ratio of the accumulated variance sum to n as the average variance of all sub-image blocks in the current video frame, wherein n is a natural number greater than 1.
  4. The method according to claim 1, wherein determining the noise intensity of the current video frame according to the calculated average variance comprises:
    in a case where the calculated average variance is less than a preset variance value, determining the noise intensity of the current video frame to be 0; and
    in a case where the calculated average variance is greater than or equal to the preset variance value, using the calculated average variance as the noise intensity of the current video frame.
  5. The method according to claim 1, further comprising, after determining the noise intensity of the current video frame according to the calculated average variance and before selecting the filter strength and noise characteristic curve matching the noise intensity:
    calculating an average of the noise intensities of the current video frame and m video frames preceding the current video frame, wherein m is a natural number greater than 1; and
    using the calculated average noise intensity as the smoothed noise intensity of the current video frame.
  6. The method according to claim 1, wherein the filter strength comprises a spatial filter strength and a temporal filter strength;
    and filtering the current video frame according to the filter strength and the noise characteristic curve comprises:
    performing spatial filtering on the current video frame according to the spatial filter strength and the noise characteristic curve;
    estimating a motion intensity and a motion vector of each sub-image block of the current video frame according to the current video frame and the previous video frame of the current video frame; and
    obtaining a weight of each pixel in the current video frame according to the estimated motion intensity, obtaining the position of the pixel in the previous video frame participating in temporal filtering according to the estimated motion vector, and performing weighted-average filtering between a pixel of the spatially filtered current video frame and the pixel in the previous video frame pointed to by the motion vector corresponding to that pixel, to obtain the filtered pixel.
  7. The method according to claim 6, wherein the spatial filtering algorithm is a block-matching and three-dimensional filtering (BM3D) denoising algorithm, and in a Wiener filtering operation of the BM3D denoising algorithm, Wiener coefficients are scaled proportionally according to the brightness value of a pixel and the noise characteristic curve.
  8. A computer-readable storage medium storing at least one program, wherein the at least one program is executable by at least one processor to implement the video denoising method according to any one of claims 1 to 7.
  9. A video denoising device, comprising a processor and a memory, wherein the processor and the memory are connected by electrical coupling, and the processor is configured to execute a program stored in the memory to implement the video denoising method according to any one of claims 1 to 7.
  10. A video denoising device, comprising a noise statistics module, a noise estimation module and a video denoising module, wherein:
    the noise statistics module is configured to divide each video frame in an input video frame sequence into sub-image blocks and calculate the block variance of each sub-image block;
    the noise estimation module is configured to calculate the average variance of all sub-image blocks in the current video frame according to the calculated block variances, determine the noise intensity of the current video frame according to the calculated average variance, and select a filter strength and a noise characteristic curve matching the noise intensity; and
    the video denoising module is configured to filter the current video frame according to the filter strength and the noise characteristic curve.
PCT/CN2020/101806 2019-07-29 2020-07-14 视频去噪方法、装置及计算机可读存储介质 WO2021017809A1 (zh)

Priority Applications (4)

Application Number Priority Date Filing Date Title
KR1020217034597A KR102605747B1 (ko) 2019-07-29 2020-07-14 비디오 노이즈 제거 방법, 장치 및 컴퓨터 판독 가능 저장 매체
US17/624,237 US20220351335A1 (en) 2019-07-29 2020-07-14 Video denoising method and device, and computer readable storage medium
JP2021564231A JP7256902B2 (ja) 2019-07-29 2020-07-14 ビデオノイズ除去方法、装置及びコンピュータ読み取り可能な記憶媒体
EP20848408.9A EP3944603A4 (en) 2019-07-29 2020-07-14 METHOD AND APPARATUS FOR DE-NOISED VIDEO, AND COMPUTER READABLE INFORMATION MEDIA

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201910691413.1 2019-07-29
CN201910691413.1A CN112311962B (zh) 2019-07-29 2019-07-29 一种视频去噪方法和装置、计算机可读存储介质

Publications (1)

Publication Number Publication Date
WO2021017809A1 true WO2021017809A1 (zh) 2021-02-04

Family

ID=74228194

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/101806 WO2021017809A1 (zh) 2019-07-29 2020-07-14 视频去噪方法、装置及计算机可读存储介质

Country Status (6)

Country Link
US (1) US20220351335A1 (zh)
EP (1) EP3944603A4 (zh)
JP (1) JP7256902B2 (zh)
KR (1) KR102605747B1 (zh)
CN (1) CN112311962B (zh)
WO (1) WO2021017809A1 (zh)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113438386A (zh) * 2021-05-20 2021-09-24 珠海全志科技股份有限公司 一种应用于视频处理的动静判定方法及装置
CN114742727A (zh) * 2022-03-31 2022-07-12 南通电博士自动化设备有限公司 一种基于图像平滑的噪声处理方法及系统

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20230059035A1 (en) * 2021-08-23 2023-02-23 Netflix, Inc. Efficient encoding of film grain noise
CN114648469B (zh) * 2022-05-24 2022-09-27 上海齐感电子信息科技有限公司 视频图像去噪方法及其系统、设备和存储介质
CN116016807B (zh) * 2022-12-30 2024-04-19 广东中星电子有限公司 一种视频处理方法、系统、可存储介质和电子设备
CN116523765B (zh) * 2023-03-13 2023-09-05 湖南兴芯微电子科技有限公司 一种实时视频图像降噪方法、装置及存储器
CN116342891B (zh) * 2023-05-24 2023-08-15 济南科汛智能科技有限公司 一种适用于自闭症儿童结构化教学监控数据管理系统
CN116634284B (zh) * 2023-07-20 2023-10-13 清华大学 Raw域视频去噪方法、装置、电子设备及存储介质
CN117615146A (zh) * 2023-11-13 2024-02-27 书行科技(北京)有限公司 视频处理方法及装置、电子设备及计算机可读存储介质

Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20050094889A1 (en) * 2003-10-30 2005-05-05 Samsung Electronics Co., Ltd. Global and local statistics controlled noise reduction system
CN101489034A (zh) * 2008-12-19 2009-07-22 四川虹微技术有限公司 一种视频图像噪声估计与去除方法
CN102118546A (zh) * 2011-03-22 2011-07-06 上海富瀚微电子有限公司 一种视频图像噪声估计算法的快速实现方法
CN102164278A (zh) * 2011-02-15 2011-08-24 杭州海康威视软件有限公司 用于去除i帧闪烁的视频编码方法及其装置
CN102238316A (zh) * 2010-04-29 2011-11-09 北京科迪讯通科技有限公司 一种3d数字视频图像的自适应实时降噪方案
CN102436646A (zh) * 2011-11-07 2012-05-02 天津大学 基于压缩感知的ccd噪声估计方法
CN102769722A (zh) * 2012-07-20 2012-11-07 上海富瀚微电子有限公司 时域与空域结合的视频降噪装置及方法
CN103414845A (zh) * 2013-07-24 2013-11-27 中国航天科工集团第三研究院第八三五七研究所 一种自适应的视频图像降噪方法及降噪系统
CN103491282A (zh) * 2013-09-23 2014-01-01 华为技术有限公司 图像消噪方法与装置
CN104021533A (zh) * 2014-06-24 2014-09-03 浙江宇视科技有限公司 一种实时图像降噪方法及装置
CN104134191A (zh) * 2014-07-11 2014-11-05 三星电子(中国)研发中心 图像去噪方法及其装置

Family Cites Families (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7983501B2 (en) * 2007-03-29 2011-07-19 Intel Corporation Noise detection and estimation techniques for picture enhancement
US20080316364A1 (en) * 2007-06-25 2008-12-25 The Hong Kong University Of Science And Technology Rate distortion optimization for video denoising
US8149336B2 (en) * 2008-05-07 2012-04-03 Honeywell International Inc. Method for digital noise reduction in low light video
CN104680483B (zh) * 2013-11-25 2016-03-02 浙江大华技术股份有限公司 图像的噪声估计方法、视频图像去噪方法及装置
US9123103B2 (en) * 2013-12-26 2015-09-01 Mediatek Inc. Method and apparatus for image denoising with three-dimensional block-matching
US20170178309A1 (en) * 2014-05-15 2017-06-22 Wrnch Inc. Methods and systems for the estimation of different types of noise in image and video signals
WO2016185708A1 (ja) 2015-05-18 2016-11-24 日本電気株式会社 画像処理装置、画像処理方法、および、記憶媒体
CN105208376B (zh) * 2015-08-28 2017-09-12 青岛中星微电子有限公司 一种数字降噪方法和装置
EP3154021A1 (en) * 2015-10-09 2017-04-12 Thomson Licensing Method and apparatus for de-noising an image using video epitome
CN107645621A (zh) * 2016-07-20 2018-01-30 阿里巴巴集团控股有限公司 一种视频处理的方法和设备
CN107016650B (zh) * 2017-02-27 2020-12-29 苏州科达科技股份有限公司 视频图像3d降噪方法及装置
US10674045B2 (en) * 2017-05-31 2020-06-02 Google Llc Mutual noise estimation for videos
CN109859126B (zh) * 2019-01-17 2021-02-02 浙江大华技术股份有限公司 一种视频降噪方法、装置、电子设备及存储介质
TWI703864B (zh) * 2019-07-04 2020-09-01 瑞昱半導體股份有限公司 基於雜訊比的去雜訊方法
CA3194402A1 (en) * 2020-10-07 2022-04-14 David Taylor A line clearance system
US20230377104A1 (en) * 2022-05-20 2023-11-23 GE Precision Healthcare LLC System and methods for filtering medical images

Patent Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20050094889A1 (en) * 2003-10-30 2005-05-05 Samsung Electronics Co., Ltd. Global and local statistics controlled noise reduction system
CN101489034A (zh) * 2008-12-19 2009-07-22 四川虹微技术有限公司 一种视频图像噪声估计与去除方法
CN102238316A (zh) * 2010-04-29 2011-11-09 北京科迪讯通科技有限公司 一种3d数字视频图像的自适应实时降噪方案
CN102164278A (zh) * 2011-02-15 2011-08-24 杭州海康威视软件有限公司 用于去除i帧闪烁的视频编码方法及其装置
CN102118546A (zh) * 2011-03-22 2011-07-06 上海富瀚微电子有限公司 一种视频图像噪声估计算法的快速实现方法
CN102436646A (zh) * 2011-11-07 2012-05-02 天津大学 基于压缩感知的ccd噪声估计方法
CN102769722A (zh) * 2012-07-20 2012-11-07 上海富瀚微电子有限公司 时域与空域结合的视频降噪装置及方法
CN103414845A (zh) * 2013-07-24 2013-11-27 中国航天科工集团第三研究院第八三五七研究所 一种自适应的视频图像降噪方法及降噪系统
CN103491282A (zh) * 2013-09-23 2014-01-01 华为技术有限公司 图像消噪方法与装置
CN104021533A (zh) * 2014-06-24 2014-09-03 浙江宇视科技有限公司 一种实时图像降噪方法及装置
CN104134191A (zh) * 2014-07-11 2014-11-05 三星电子(中国)研发中心 图像去噪方法及其装置

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See also references of EP3944603A4 *

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113438386A (zh) * 2021-05-20 2021-09-24 珠海全志科技股份有限公司 一种应用于视频处理的动静判定方法及装置
CN113438386B (zh) * 2021-05-20 2023-02-17 珠海全志科技股份有限公司 一种应用于视频处理的动静判定方法及装置
CN114742727A (zh) * 2022-03-31 2022-07-12 南通电博士自动化设备有限公司 一种基于图像平滑的噪声处理方法及系统
CN114742727B (zh) * 2022-03-31 2023-05-05 南通电博士自动化设备有限公司 一种基于图像平滑的噪声处理方法及系统

Also Published As

Publication number Publication date
US20220351335A1 (en) 2022-11-03
EP3944603A4 (en) 2022-06-01
JP7256902B2 (ja) 2023-04-12
KR102605747B1 (ko) 2023-11-23
CN112311962B (zh) 2023-11-24
KR20210141697A (ko) 2021-11-23
JP2022542334A (ja) 2022-10-03
EP3944603A1 (en) 2022-01-26
CN112311962A (zh) 2021-02-02

Similar Documents

Publication Publication Date Title
WO2021017809A1 (zh) 视频去噪方法、装置及计算机可读存储介质
US9202263B2 (en) System and method for spatio video image enhancement
US9615039B2 (en) Systems and methods for reducing noise in video streams
EP3099044B1 (en) Multi-frame noise reduction method and terminal
US8233062B2 (en) Image processing apparatus, image processing method, and imaging apparatus
KR102182695B1 (ko) 영상 잡음 제거 장치 및 방법
EP2164040B1 (en) System and method for high quality image and video upscaling
KR20040098162A (ko) 프레임 레이트 변환시의 프레임 보간 방법 및 그 장치
CN106412441B (zh) 一种视频防抖控制方法以及终端
CN110418065B (zh) 高动态范围图像运动补偿方法、装置及电子设备
Jin et al. Quaternion-based impulse noise removal from color video sequences
TWI536319B (zh) 去雜訊方法以及影像系統
CN106791279B (zh) 基于遮挡检测的运动补偿方法及系统
WO2021232963A1 (zh) 视频去噪方法、装置、移动终端和存储介质
KR20150035315A (ko) 하이 다이나믹 레인지 영상 생성 방법 및, 그에 따른 장치, 그에 따른 시스템
TW201525940A (zh) 影像雜訊估測的方法與裝置
KR101517233B1 (ko) 움직임 추정을 이용한 잡음 제거장치
CN108632501B (zh) 视频防抖方法及装置、移动终端
JP6570304B2 (ja) 映像処理装置、映像処理方法およびプログラム
WO2012172728A1 (ja) 画像処理システム
JP3948616B2 (ja) 画像のマッチング装置
JP2021044652A (ja) 動きベクトル検出装置及び動きベクトル検出方法
PASHIKANTI Contemporary ACE Algorithm on Mobile Media Visual Quality Encoder-Integrated Demising & Deploring Stabilization
CN117544735A (zh) 一种视频降噪方法、装置、设备、存储介质及产品
JP2015133532A (ja) 撮像装置及び画像処理方法

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20848408

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 20217034597

Country of ref document: KR

Kind code of ref document: A

ENP Entry into the national phase

Ref document number: 2021564231

Country of ref document: JP

Kind code of ref document: A

ENP Entry into the national phase

Ref document number: 2020848408

Country of ref document: EP

Effective date: 20211020

NENP Non-entry into the national phase

Ref country code: DE