US12094085B2 - Video denoising method and apparatus, terminal, and storage medium - Google Patents
- Publication number
- US12094085B2
- Authority
- US
- United States
- Prior art keywords
- image
- pixel
- pixels
- denoised
- target image
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active, expires
Classifications
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/20—Image enhancement or restoration using local operators
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N5/00—Details of television systems
- H04N5/222—Studio circuitry; Studio devices; Studio equipment
- H04N5/262—Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects ; Cameras specially adapted for the electronic generation of special effects
- H04N5/268—Signal distribution or switching
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/50—Image enhancement or restoration using two or more images, e.g. averaging or subtraction
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/70—Denoising; Smoothing
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/20—Analysis of motion
- G06T7/254—Analysis of motion involving subtraction of images
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/20—Analysis of motion
- G06T7/277—Analysis of motion involving stochastic approaches, e.g. using Kalman filters
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N7/00—Television systems
- H04N7/14—Systems for two-way working
- H04N7/15—Conference systems
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N9/00—Details of colour television systems
- H04N9/64—Circuits for processing colour signals
- H04N9/70—Circuits for processing colour signals for colour killing
- H04N9/71—Circuits for processing colour signals for colour killing combined with colour gain control
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10016—Video; Image sequence
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20024—Filtering details
- G06T2207/20028—Bilateral filtering
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20172—Image enhancement details
- G06T2207/20182—Noise reduction or smoothing in the temporal domain; Spatio-temporal filtering
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20212—Image combination
- G06T2207/20221—Image fusion; Image merging
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N5/00—Details of television systems
- H04N5/14—Picture signal circuitry for video frequency region
- H04N5/21—Circuitry for suppressing or minimising disturbance, e.g. moiré or halo
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N9/00—Details of colour television systems
- H04N9/64—Circuits for processing colour signals
- H04N9/646—Circuits for processing colour signals for image enhancement, e.g. vertical detail restoration, cross-colour elimination, contour correction, chrominance trapping filters
Definitions
- the present disclosure relates to the field of multimedia technologies, and in particular, to a video denoising method and apparatus, a terminal, and a storage medium.
- neighborhood pixels often include pixels that have already completed the filtering process, so that a subsequent pixel depends on the pixels that have been processed; the filtering process is therefore a serial process, which results in a slow algorithm computing speed.
- a video denoising method and apparatus, a terminal, and a storage medium are provided according to embodiments of the present disclosure.
- One aspect of the present disclosure provides a video denoising method executed by a terminal.
- the method includes: performing spatial filtering on pixels of a target image in a video to be processed to obtain a first image, the spatial filtering being used for eliminating dependencies between the pixels of the target image; performing, according to a frame difference between the first image and a first denoised image, temporal filtering on the pixels of the target image in parallel to obtain a second image, the first denoised image being a denoised image that corresponds to a preceding frame of the target image; predicting first gain coefficients corresponding to pixels of the second image in a second denoised image according to second gain coefficients corresponding to the pixels of the target image in the first denoised image; and fusing the first image and the second image according to the first gain coefficients to obtain the second denoised image that corresponds to the target image.
- a video denoising apparatus including: a spatial filtering module configured to perform spatial filtering on pixels of a target image in a video to be processed to obtain a first image, the spatial filtering being used for eliminating dependencies between the pixels of the target image; a temporal filtering module configured to perform, according to a frame difference between the first image and a first denoised image, temporal filtering on the pixels of the target image in parallel to obtain a second image, the first denoised image being a denoised image that corresponds to the previous frame of the target image; and a fusing module configured to predict first gain coefficients corresponding to pixels of the second image in a second denoised image according to second gain coefficients corresponding to the pixels of the target image in the first denoised image; and fuse the first image and the second image according to the first gain coefficients to obtain the second denoised image that corresponds to the target image.
- a spatial filtering module configured to perform spatial filtering on pixels of a target image in a video to be processed to obtain a first image
- Another aspect of the present disclosure provides a non-transitory storage medium that stores computer-readable instructions.
- the computer-readable instructions, when executed by one or more processors, cause the one or more processors to perform: performing spatial filtering on pixels of a target image in a video to be processed to obtain a first image, the spatial filtering being used for eliminating dependencies between the pixels of the target image; performing, according to a frame difference between the first image and a first denoised image, temporal filtering on the pixels of the target image in parallel to obtain a second image, the first denoised image being a denoised image that corresponds to a preceding frame of the target image; predicting first gain coefficients corresponding to pixels of the second image in a second denoised image according to second gain coefficients corresponding to the pixels of the target image in the first denoised image; and fusing the first image and the second image according to the first gain coefficients to obtain the second denoised image that corresponds to the target image.
- the terminal includes a memory and a processor, computer-readable instructions being stored in the memory, and the computer-readable instructions, when executed by the processor, causing the processor to execute the operations of the video denoising method.
- FIG. 1 is a video image captured by a low-performance camera configured in a notebook computer according to an embodiment of the present disclosure.
- FIG. 3 is a structural block diagram of a video denoising system according to an embodiment of the present disclosure.
- FIG. 4 is a flowchart of a video denoising method according to an embodiment of the present disclosure.
- FIG. 5 is a schematic diagram of image filtering before removing pixel dependency according to an embodiment of the present disclosure.
- FIG. 6 is a schematic diagram of image filtering after removing pixel dependency according to an embodiment of the present disclosure.
- FIG. 8 is a schematic diagram of a comparison before and after denoising according to an embodiment of the present disclosure.
- FIG. 9 is a key schematic flowchart of a video denoising method according to an embodiment of the present disclosure.
- FIG. 10 is a schematic flowchart of an algorithm of a video denoising method according to an embodiment of the present disclosure.
- FIG. 11 is a block diagram of a video denoising apparatus according to an embodiment of the present disclosure.
- FIG. 12 is a structural block diagram of a terminal according to an embodiment of the present disclosure.
- the embodiments of the present disclosure mainly relate to a scenario of performing denoising on a video, and take performing denoising on a remote conference video as an example for description.
- Remote video conferencing is an important part of various functions of collaborative office products, has very strict requirements on captured videos, and usually requires the use of high-definition cameras for video capturing.
- when a camera with poor performance is used for video capturing, the captured video generally contains noise. If this noise is not processed, the experience of the video conference will be poor.
- FIG. 1 shows a video image captured by a low-performance camera configured in a notebook computer. As can be seen from FIG. 1, the captured video image contains substantial noise.
- the embodiment of the present disclosure may also be applied to perform denoising on a video captured by a mobile phone camera during a video call, or perform denoising on a video captured by a monitoring device, etc., which is not limited in the embodiment of the present disclosure.
- a video denoising method will be described in brief below.
- in order to enable videos captured by cameras to meet the requirements of remote video conferences, it is usually necessary to perform denoising on the captured videos.
- these methods usually implement video denoising by running a video denoising related algorithm through a central processing unit (CPU) of a terminal.
- since the collaborative office products include not only the function of remote video conferencing but also other functions such as process approval and project management, if the remote video conferencing function occupies most of the CPU resources, the other functions of the collaborative office products cannot be used normally; alternatively, the collaborative office products would require high CPU processing capability and could not be used in most scenarios.
- the video denoising method provided by the embodiments of the present disclosure removes dependency between pixels in an image to meet requirements of parallel computing.
- a Graphics Processing Unit (GPU) has a stronger parallel computing capability than a CPU. Therefore, the video denoising method according to the embodiments of the present disclosure calls Metal (an image processing interface provided by Apple) or DirectX (an image processing interface provided by Microsoft) to run the parallel processing of the pixels on the GPU instead of the CPU.
- the video denoising method provided by the embodiments of the present disclosure can achieve fast video denoising with a very low CPU occupancy rate.
- FIG. 2 is a schematic flowchart of a video conference according to an embodiment of the present disclosure.
- a video image captured by a camera is displayed locally after undergoing denoising and other operations, for example, on a screen of a notebook computer.
- An encoder encodes the denoised video image, and transmits it to a remote end through a network.
- a decoder at the remote end decodes the video image and displays the decoded video image at the remote end.
- the remote end may also be a notebook computer.
- FIG. 3 is a structural block diagram of a video denoising system 300 according to an embodiment of the present disclosure.
- the video denoising system 300 may be configured to implement video denoising, and includes a terminal 310 and a video service platform 320 .
- the terminal 310 may be connected to the video service platform 320 through a wireless network or a wired network.
- the terminal 310 may be at least one of a smartphone, a video camera, a desktop computer, a tablet computer, a Moving Picture Experts Group Audio Layer IV (MP4) player, and a laptop portable computer.
- An application that supports remote video conferencing is installed and run on the terminal 310 .
- the terminal 310 may be a terminal used by a user, and an account of the user is logged in to an application running on the terminal.
- the video service platform 320 includes at least one of a single server, a plurality of servers, and a cloud computing platform.
- the video service platform 320 is configured to provide background services for remote video conferences, such as user management and video stream forwarding.
- the video service platform 320 includes: an access server, a data management server, a user management server, and a database.
- the access server is configured to provide an access service for the terminal 310 .
- the data management server is configured to forward a video stream uploaded by the terminal, and the like.
- the servers may provide the same service in a load-balancing manner, or provide the same service in the manner of a main server and a mirror server, which is not limited in the embodiment of the present disclosure.
- the database is configured to store account information of the user.
- the account information is data information that has been authorized by the user for capturing.
- the terminal 310 may generally refer to one of a plurality of terminals, and this embodiment only uses the local terminal 310 and two remote terminals 310 as examples. A person skilled in the art may learn that there may be more or fewer terminals. For example, there may be only one remote terminal, or dozens, hundreds, or more remote terminals. The number and types of the terminals 310 are not limited in the embodiment of the present disclosure.
- FIG. 4 is a flowchart of a video denoising method according to an embodiment of the present disclosure. As shown in FIG. 4 , the method includes the following steps:
- Step 401: A terminal performs spatial filtering on pixels of a target image in a video to be processed to obtain a first image, the spatial filtering being used for eliminating dependencies between the pixels of the target image.
- the terminal may implement spatial filtering on the pixels of the target image based on a first filter, that is, input the target image into the first filter, and an output of the first filter is the first image after the spatial filtering.
- the first filter may be an improved bilateral filter, and the first filter can process the pixels of the target image in parallel.
- the first filter is described below:
- a bilateral filtering algorithm is a non-linear edge-preserving filtering algorithm, and is a compromise method that combines the spatial proximity of an image and the similarity of pixel values.
- the bilateral filtering algorithm considers both spatial information and gray-scale similarity to achieve the purpose of edge preservation and denoising, and has the characteristics of being simple, non-iterative, and local.
- the edge preservation and denoising refers to replacing the original pixel value of the currently processed pixel with a weighted average of the neighborhood pixels of that pixel.
- the entire image is usually scanned by using a filter template first from left to right, then from top to bottom (or first from top to bottom, then from left to right).
- when spatial filtering is performed on a pixel, it is often achieved by linear or non-linear processing of the neighborhood pixels of the currently processed pixel.
- the pixels in the neighborhood of the pixel often include pixels that have already completed the spatial filtering process, so that the subsequent pixel depends on the filtered pixels; such dependency causes the spatial filtering of the entire image to become a serial process.
- the removal of pixel dependency refers to elimination of the dependency between pixels.
- the principle of the first filter may be seen in Formula (1) and Formula (2).
- Î(p) indicates the pixel value of the currently processed pixel in the image after spatial filtering;
- I(p) indicates the original pixel value of the currently processed pixel in the image;
- I(q) indicates the pixel value of a neighborhood pixel of the currently processed pixel in the image;
- p indicates the coordinates of the currently processed pixel in the image;
- q indicates the coordinates of the neighborhood pixel of the currently processed pixel in the image;
- ω(p, q) indicates a weight related to the positions of the pixels;
- g(·) indicates a Gaussian function;
- σs and σr indicate the variances of the Gaussian functions, respectively.
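Formulas (1) and (2) themselves are not legibly reproduced in this text. Based on the term definitions above, they correspond to the standard bilateral-filter form (a reconstruction; Ω denotes the neighborhood of p and W_p the normalizing factor):

```latex
\hat{I}(p) = \frac{1}{W_p} \sum_{q \in \Omega} \omega(p, q)\, I(q),
\qquad W_p = \sum_{q \in \Omega} \omega(p, q) \tag{1}
```

```latex
\omega(p, q) = g_{\sigma_s}\!\left(\lVert p - q \rVert\right)\,
               g_{\sigma_r}\!\left(\lvert I(p) - I(q) \rvert\right) \tag{2}
```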
- before the pixel dependency is removed, I(q) corresponding to a neighborhood pixel whose processing order precedes the currently processed pixel is its pixel value after the spatial filtering, and
- I(q) corresponding to a neighborhood pixel whose processing order follows the currently processed pixel is the original pixel value of that neighborhood pixel.
- the neighborhood pixels of the currently processed pixel refer to the pixels within the neighborhood of the currently processed pixel.
- depending on the size chosen for the neighborhood of the pixel, the number of neighborhood pixels differs.
- the neighborhood of a pixel may be the 4-neighborhood, that is, the neighborhoods above, below, to the left, and to the right of the pixel.
- in this case, the neighborhood pixels of a pixel are the four pixels adjacent to it on the top, bottom, left, and right.
- alternatively, the neighborhood of a pixel may be the 8-neighborhood, that is, the neighborhoods above, above-left, above-right, below, below-left, below-right, to the left, and to the right of the pixel.
- in this case, the neighborhood pixels of the pixel are the eight pixels surrounding it.
- the neighborhood of the pixel may also be selected in other ways.
- a currently processed pixel is a central pixel, and the central pixel corresponds to 12 neighborhood pixels.
- the neighborhood pixels located on the left and above the central pixel are pixels that have been processed.
- the neighborhood pixels located on the right and below the central pixel are unprocessed pixels.
- the above spatial filtering process is a serial process and takes a long time compared with a parallel process. Therefore, in the embodiment of the present disclosure, a first improvement is made to the above process: the above bilateral filter is improved and the dependency between pixels is removed, thus obtaining the above first filter.
- the first filter is also based on the bilateral filtering algorithm. The difference is that, when a pixel of the target image is filtered by the above Formulas (1) and (2), the pixel values of the neighborhood pixels of the pixel, that is, the values of I(q), all use the original pixel values of the image; pixel values after filtering are not used. In this way, each pixel no longer depends on the pixels whose processing orders precede it, and the impact of those previously processed pixels on the filtered value of the current pixel is removed.
- FIG. 6 shows a schematic diagram of image filtering after the pixel dependency is removed according to an embodiment of the present disclosure.
- a currently processed pixel is a central pixel, and the central pixel corresponds to 12 neighborhood pixels.
- These 12 neighborhood pixels are all unprocessed pixels, that is, pixel values of the neighborhood pixels are all initial pixel values.
- the terminal can also call the image processing interface of the graphics processing unit, acquire pixels of the target image in the video to be processed in parallel through the interface, and perform spatial filtering on the pixels acquired in parallel. This implements parallel spatial filtering on the pixels of the target image, accelerates the entire spatial filtering process, saves CPU resources, and reduces the CPU occupancy rate.
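As a CPU illustration of this data independence (not the Metal/DirectX implementation; the function name and parameter defaults are illustrative), the dependency-free spatial filtering can be sketched in NumPy, where every neighborhood value I(q) is read from the original image:

```python
import numpy as np

def bilateral_dependency_free(img, radius=2, sigma_s=2.0, sigma_r=25.0):
    """Bilateral filtering in which all neighborhood pixel values I(q)
    are read from the ORIGINAL image, so every output pixel can be
    computed independently (and hence in parallel on a GPU)."""
    img = img.astype(np.float64)
    h, w = img.shape
    padded = np.pad(img, radius, mode="edge")
    num = np.zeros_like(img)
    den = np.zeros_like(img)
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            # Neighborhood values shifted by (dy, dx), taken from the original image.
            shifted = padded[radius + dy:radius + dy + h, radius + dx:radius + dx + w]
            w_s = np.exp(-(dx * dx + dy * dy) / (2.0 * sigma_s ** 2))     # spatial weight
            w_r = np.exp(-((shifted - img) ** 2) / (2.0 * sigma_r ** 2))  # range weight
            num += w_s * w_r * shifted
            den += w_s * w_r
    return num / den
```

Because no filtered output value feeds back into another pixel's computation, the per-pixel work maps directly onto a GPU kernel.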
- Step 403: The terminal determines a frame difference between the first image and the first denoised image.
- Step 404: The terminal performs temporal filtering on the pixels of the target image in parallel according to the frame difference between the first image and the first denoised image to obtain a second image.
- P_k indicates a variance that the pixel needs to use in the next frame of image.
- the video denoising method provided in the embodiment of the present disclosure optimizes Formula (4), and introduces a frame difference when calculating the variance, thus obtaining Formula (8).
- P_k^- = P_{k-1} + Δ²Q (8)
- ⁇ indicates a frame difference between the first image and the first denoised image.
- the video denoising method provided in the embodiment of the present disclosure adds Formula (9) and Formula (10), and optimizes Formula (5) to obtain Formula (11).
- R_k = 1 + R_{k-1}(1 + K_{k-1})^{-1} (9)
- this step can be implemented through the following sub-step 4041 to sub-step 4043 .
- the terminal can perform temporal filtering on the pixels of the target image in parallel. In sub-steps 4041 to 4044, an arbitrary pixel of the target image is taken as an example; the other pixels are processed in the same way.
- the second image is obtained.
- Step 405: The terminal predicts first gain coefficients corresponding to pixels of the second image in the second denoised image according to second gain coefficients corresponding to the pixels of the target image in the first denoised image. For details, reference can be made to sub-steps 4041 to 4044.
- Step 4041: The terminal determines a second variance of the pixel according to the first variance corresponding to the pixel in the first denoised image, the frame difference between the first image and the first denoised image, and a variance offset coefficient.
- the first variance corresponding to the pixel in the first denoised image is P_{k-1};
- the frame difference between the first image and the first denoised image is Δ;
- the variance offset coefficient is Q;
- the second variance P_k^- of the pixel can be calculated according to the above Formula (8).
- Step 4042: The terminal acquires a second gain coefficient and a second gain offset coefficient corresponding to the pixel in the first denoised image, and determines a first gain offset coefficient corresponding to the pixel according to the second gain coefficient and the second gain offset coefficient.
- the second gain coefficient corresponding to the pixel in the first denoised image is K_{k-1};
- the second gain offset coefficient corresponding to the pixel in the first denoised image is R_{k-1};
- the first gain offset coefficient R_k corresponding to the pixel can be calculated according to Formula (9).
- Step 4043: The terminal determines a motion compensation coefficient corresponding to the pixel according to the frame difference.
- the frame difference is Δ;
- the motion compensation coefficient U_k corresponding to the pixel can be calculated according to Formula (10).
- Step 4044: The terminal determines the first gain coefficient corresponding to the pixel according to the second variance, the first gain offset coefficient corresponding to the pixel, and the motion compensation coefficient.
- the second variance P_k^-, the first gain offset coefficient R_k, and the motion compensation coefficient U_k obtained from the above sub-steps 4041 to 4043 are used to calculate the first gain coefficient K_k corresponding to the pixel.
- the terminal may also determine a third variance P_k that the pixel needs to use in the next frame of image according to Formula (7) and the second variance P_k^-.
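Putting sub-steps 4041 to 4044 together for a single pixel: Formulas (8) and (9) appear above, but Formulas (7), (10), and (11) are not reproduced in this text, so the motion-compensation form `motion_comp` and the final gain combination below are assumptions (a plain Kalman-style gain is used as a stand-in):

```python
def motion_comp(delta):
    # Hypothetical stand-in for Formula (10): a factor that grows the
    # effective measurement noise when motion (frame difference) is large,
    # pushing the fusion toward the spatially filtered value.
    return 1.0 + delta * delta

def kalman_gain_step(P_prev, K_prev, R_prev, delta, Q=0.01):
    """One per-pixel gain update following sub-steps 4041-4044."""
    P_minus = P_prev + delta * delta * Q   # Formula (8): second variance
    R = 1.0 + R_prev / (1.0 + K_prev)      # Formula (9): first gain offset coefficient
    U = motion_comp(delta)                 # Formula (10): motion compensation (assumed form)
    K = P_minus / (P_minus + R * U)        # stand-in for Formula (11): first gain coefficient
    P_next = (1.0 - K) * P_minus           # third variance; standard Kalman form assumed for Formula (7)
    return K, R, P_next
```

Because the update for each pixel reads only that pixel's previous-frame state, every pixel's gain can be computed in parallel, as the method requires.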
- Step 406: The terminal fuses the first image and the second image according to the first gain coefficients to obtain a second denoised image that corresponds to the target image and has undergone denoising.
- the fused image obtained from fusing the first image and the second image is used as the second denoised image.
- the terminal also obtains the first gain coefficients corresponding to the pixels of the second image in the process of performing temporal filtering on the pixels of the target image to obtain the second image.
- the terminal may use the product of the first pixel value of the pixel and the difference between the first gain coefficient corresponding to the pixel and a preset value as a first fusion value, and use the product of the first gain coefficient corresponding to the pixel and the second pixel value of the pixel as a second fusion value.
- the first pixel value is a pixel value of the pixel after the temporal filtering
- the second pixel value is a pixel value of the pixel after the spatial filtering.
- the terminal sums the first fusion value and the second fusion value to obtain the denoised pixel value corresponding to the pixel.
- x̂_k indicates the denoised pixel value corresponding to the pixel.
- the terminal may use the first gain coefficients as weighting coefficients for fusing the first image and the second image. Specifically, differences between the first gain coefficients corresponding to the pixels of the second image in the second denoised image and a preset value 1 are used as fusion weights of the pixels of the second image; the first gain coefficients corresponding to the pixels of the second image in the second denoised image are used as fusion weights of the pixels of the first image, and the pixel values of the first image and the second image are weighted fused to obtain the second denoised image.
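The weighted fusion described above can be sketched as follows (the preset value is taken to be 1, as stated above; the function and array names are illustrative):

```python
import numpy as np

def fuse_images(second_img, first_img, K):
    """Fuse the temporally filtered second image (weight 1 - K) with the
    spatially filtered first image (weight K), where K holds the
    per-pixel first gain coefficients."""
    return (1.0 - K) * second_img + K * first_img
```

The fusion is a pure element-wise operation, so it, too, parallelizes trivially across pixels.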
- FIG. 8 shows a schematic diagram of a comparison before and after denoising according to an embodiment of the present disclosure.
- FIG. 8 includes a target image before denoising and a target image after denoising.
- the noise in the target image after denoising is significantly reduced compared with the target image before denoising; that is, the video denoising method provided in the embodiment of the present disclosure effectively realizes denoising of the target image.
- the above steps 401 to 405 are optional implementations of the video denoising method provided in the embodiment of the present disclosure, and the corresponding video denoising method may not be performed in the order of the above steps 401 to 405 , or alternatively, a third filter may also be set.
- the third filter has the same structure as the first filter.
- the third filter, the first filter, and the second filter can process the pixels in the target image in parallel by calling the image processing interface of the GPU, thus achieving denoising of the target image.
- FIG. 9 shows a key schematic flowchart of a video denoising method according to an embodiment of the present disclosure.
- three parts, i.e., input, denoising, and output, are included in the drawing, and the input is the target image f_C and the first denoised image f_D^L.
- a first filter and a third filter are indicated by the image denoising filters F1 and F2, respectively.
- a second filter is indicated by a Kalman filter Fk.
- Parallel acceleration is performed through an image processing interface of a GPU.
- when a terminal performs denoising on the target image, it processes the target image f_C through the image denoising filter F1 to obtain a first image f_F1^C; calculates a frame difference f_D between the first denoised image f_D^L and the first image f_F1^C; inputs the frame difference f_D and the target image f_C into the Kalman filter Fk; and fuses the output of the Kalman filter Fk, i.e., the second image, with the output of the image denoising filter F2 to obtain a second denoised image f_D^C that corresponds to the target image and has undergone denoising.
- the second denoised image may also be stored in the Kalman filter to participate in subsequent image operations.
- FIG. 10 is a schematic diagram of an algorithm flow of a video denoising method according to an embodiment of the present disclosure.
- Performing spatial filtering on the target image includes: f_F1^C ← F1(f_C), f_F2^C ← F2(f_C).
- the arrow indicates assignment.
- Performing temporal filtering on the target image includes: f_D^C ← (f_C, f_F1^C, f_F2^C).
- Performing temporal filtering on any pixel of the target image includes: Δ ← f_D^L − f_F1^C, calculating the frame difference; R_k ← 1 + R_{k-1}(1 + K_{k-1})^{-1}, calculating the gain offset coefficient; x_k^- ← x̂_{k-1}, using the corresponding denoised pixel value in the first denoised image as a predicted pixel value of the pixel in the target image; P_k^- ← P_{k-1} + Δ²Q, calculating the second variance;
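The overall flow of FIG. 9 and FIG. 10 can be sketched with the filters passed in as callables; `F1`, `F2`, and `kalman` are stand-ins for the components described above, and their signatures are assumptions:

```python
def denoise_frame(f_c, f_d_prev, F1, F2, kalman):
    """One frame of the pipeline: spatial filtering, frame difference,
    temporal (Kalman) filtering, and gain-weighted fusion.

    f_c is the target image and f_d_prev the first denoised image
    (the denoised preceding frame). kalman(f_c, delta) is assumed to
    return the second image and the per-pixel first gain coefficients K.
    """
    f1 = F1(f_c)                    # first image f_F1^C (spatial filtering)
    delta = f_d_prev - f1           # frame difference f_D
    second, K = kalman(f_c, delta)  # second image + first gain coefficients
    spatial = F2(f_c)               # output of the second spatial filter F2
    return (1.0 - K) * second + K * spatial  # second denoised image f_D^C
```

The returned image would then be stored as the "first denoised image" for the next frame, matching the feedback path into the Kalman filter shown in FIG. 9.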
- the dependency between pixels is removed when spatial filtering is performed on the image, so that the GPU can compute the pixels in parallel; when temporal filtering is performed on the image, the problem of pixel dependency does not exist either, so the GPU can likewise compute the pixels in parallel. The entire video denoising can therefore be processed in parallel.
- since the complex denoising process is migrated to the GPU for implementation, the CPU occupancy rate of the computer becomes very low.
- the video denoising method provided in the embodiments of the present disclosure has a fourth improvement: the format of the input image is set to the YCbCr (YUV) format, and when denoising is performed on the image, the first filter and the second filter respectively perform spatial filtering and temporal filtering on the brightness component of the target image; that is, the denoising is performed only on the Y channel, which represents the brightness detail information.
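A sketch of this Y-channel-only processing (the H×W×3 array layout with Y in channel 0 is an assumption about the buffer format; the function name is illustrative):

```python
import numpy as np

def denoise_luma_only(ycbcr, denoise_fn):
    """Apply a denoising function to the Y (luma) channel only, leaving
    the Cb and Cr channels untouched. `ycbcr` is assumed to be an
    H x W x 3 array with Y stored in channel 0."""
    out = np.array(ycbcr, dtype=np.float64, copy=True)
    out[..., 0] = denoise_fn(out[..., 0])  # denoise only the brightness component
    return out
```

Restricting the filters to one channel cuts the per-frame work roughly to a third, which is consistent with the goal of fast denoising at low CPU cost.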
- To verify this, a comparison experiment is also conducted in the present disclosure. In the comparison experiment, two notebook computers of different models are used for comparison. The comparison results can be seen in Table 1.
- By performing, on the pixels of the target image, the spatial filtering that removes the pixel dependency, and performing the temporal filtering on the pixels of the target image in parallel according to the frame difference between the first image obtained by the spatial filtering and the first denoised image, the video denoising is converted from a serial process into a parallel process, and the denoising process is accelerated.
- FIG. 11 is a block diagram of a video denoising apparatus according to an embodiment of the present disclosure.
- The apparatus is configured to perform the operations of the above video denoising method.
- the apparatus includes: a spatial filtering module 1101 , a temporal filtering module 1102 , and a fusing module 1103 .
- the modules included in the video denoising apparatus may be implemented in whole or in part by software, hardware, or a combination thereof.
- the spatial filtering module 1101 is configured to perform spatial filtering on pixels of a target image in a video to be processed to obtain a first image, the spatial filtering being used for eliminating dependencies between the pixels of the target image.
- the temporal filtering module 1102 is configured to perform, according to a frame difference between the first image and a first denoised image, temporal filtering on the pixels of the target image in parallel to obtain a second image, the first denoised image being an image that corresponds to a preceding frame of the target image and has undergone denoising.
- the fusing module 1103 is configured to predict first gain coefficients corresponding to pixels of the second image in a second denoised image according to second gain coefficients corresponding to the pixels of the target image in the first denoised image; and fuse the first image and the second image according to the first gain coefficients to obtain the second denoised image that corresponds to the target image and has undergone denoising.
- the spatial filtering module 1101 is further configured to, for all the pixels of the target image in the video to be processed, acquire initial pixel values of neighborhood pixels of each pixel; and perform spatial filtering on the pixels according to the initial pixel values of the neighborhood pixels.
- the video denoising apparatus further includes: an interface calling module configured to call an image processing interface of a graphics processing unit; and a parallel acquisition module configured to acquire the pixels of the target image in the video to be processed in parallel through the image processing interface; and filter, through the image processing interface, the pixels acquired in parallel.
- the temporal filtering module 1102 is further configured to acquire each pixel of the target image in parallel; for any pixel of the target image, determine a second variance of the pixel according to a first variance corresponding to the pixel in the first denoised image, the frame difference between the first image and the first denoised image, and a variance offset coefficient; determine a first gain coefficient corresponding to the pixel according to the second variance, a first gain offset coefficient corresponding to the pixel, and a motion compensation coefficient; determine, according to the first gain coefficient, an initial pixel value of the pixel, and a denoised pixel value corresponding to the pixel in the first denoised image, a first pixel value of the pixel after the temporal filtering; and obtain the second image according to the first pixel value of each pixel of the target image after the temporal filtering.
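The per-pixel steps above are purely elementwise, so they can be expressed over whole frames at once; NumPy arrays stand in here for one-GPU-thread-per-pixel execution. Q and the motion compensation coefficient U are assumed given:

```python
import numpy as np

def temporal_filter_frame(z, x_hat_prev, P_prev, R_prev, K_prev, delta, Q, U):
    """Vectorized temporal filtering over a whole (H, W) frame.

    Every operation below is elementwise, so no pixel depends on any
    other pixel's result -- the computation maps directly onto one GPU
    thread per pixel."""
    x_pred = x_hat_prev                    # prediction from previous denoised frame
    P_pred = P_prev + delta ** 2 * Q       # second variance
    R_k = 1.0 + R_prev / (1.0 + K_prev)    # gain offset coefficient
    K_k = P_pred / (P_pred + R_k * U)      # first gain coefficient
    x_k = x_pred + K_k * (z - x_pred)      # first pixel values after temporal filtering
    P_k = (1.0 - K_k) * P_pred             # variance carried to the next frame
    return x_k, K_k, P_k, R_k
```

A usage example: starting from a zero prediction with unit variance, each observed pixel is pulled halfway toward the observation when the effective measurement noise R_k·U equals P_k^−.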
- the video denoising apparatus further includes: a first determination module configured to determine the motion compensation coefficient according to the frame difference.
- the video denoising apparatus further includes: an acquisition module configured to acquire a second gain coefficient and a second gain offset coefficient corresponding to the pixel in the first denoised image; and a second determination module configured to determine the first gain offset coefficient corresponding to the pixel according to the second gain coefficient and the second gain offset coefficient.
- the temporal filtering module 1102 is further configured to, for any pixel of the second image, use a product of a difference between the first gain coefficient corresponding to the pixel and a preset value and the first pixel value of the pixel as a first fusion value; use a product of the first gain coefficient corresponding to the pixel and a second pixel value of the pixel as a second fusion value, the second pixel value being a pixel value of the pixel after the spatial filtering; and sum the first fusion value and the second fusion value to obtain the denoised pixel value corresponding to the pixel.
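A minimal sketch of the fusion step described above, taking the "preset value" to be 1 so that the result matches the form (1 − K)·x + K·z:

```python
def fuse_pixel(K1, x_temporal, z_spatial):
    """Fuse the temporally filtered value x with the spatially filtered
    value z using the first gain coefficient K1 (preset value taken as 1)."""
    first = (1.0 - K1) * x_temporal   # first fusion value
    second = K1 * z_spatial           # second fusion value
    return first + second             # denoised pixel value
```

At K1 = 0 the denoised pixel is purely the temporal result; at K1 = 1 it is purely the spatial result, so the gain coefficient acts as a per-pixel blend weight.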
- the spatial filtering and the temporal filtering are respectively performed on brightness components of the pixels.
- By performing, on the pixels of the target image, the spatial filtering that removes the pixel dependency, and performing the temporal filtering on the pixels of the target image in parallel according to the frame difference between the first image obtained by the spatial filtering and the first denoised image, the video denoising is converted from a serial process into a parallel process, and the denoising is accelerated.
- When the apparatus provided in the above embodiment runs an application, the division into the above functional modules is merely used as an example for description. In practical applications, the above functions may be allocated to different functional modules as required; that is, the internal structure of the apparatus is divided into different functional modules to complete all or part of the functions described above.
- the apparatus provided in the above embodiment belongs to the same concept as that of the method embodiment, and a specific implementation process thereof is detailed in the method embodiment.
- The term "unit" in this disclosure may refer to a software unit, a hardware unit, or a combination thereof. A software unit (e.g., a computer program) may be developed using a computer programming language. A hardware unit may be implemented using processing circuitry and/or memory (e.g., a processor, or processors and memory). Each unit can be part of an overall unit that includes the functionalities of the unit.
- FIG. 12 is a structural block diagram of a terminal 1200 according to an embodiment of the present disclosure.
- The terminal 1200 may be a smartphone, a tablet computer, a Moving Picture Experts Group Audio Layer III (MP3) player, an MP4 player, a notebook computer, or a desktop computer.
- the terminal 1200 may also be referred to as other names such as user equipment, a portable terminal, a laptop terminal, or a desktop terminal.
- the terminal 1200 includes a processor 1201 and a memory 1202 .
- the processor 1201 may include one or more processing cores, for example, a 4-core processor or an 8-core processor.
- The processor 1201 may be implemented in at least one hardware form of a digital signal processor (DSP), a field-programmable gate array (FPGA), or a programmable logic array (PLA).
- the processor 1201 may alternatively include a main processor and a coprocessor.
- the main processor is a processor that is configured to process data in an awake state and also referred to as a CPU, and the coprocessor is a low-power processor configured to process data in an idle state.
- the processor 1201 may be integrated with a GPU.
- the GPU is configured to render and draw content that needs to be displayed on a display.
- the processor 1201 may further include an artificial intelligence (AI) processor.
- the AI processor is configured to process computing operations related to machine learning.
- the memory 1202 may include one or more computer-readable storage media that may be non-transitory.
- The memory 1202 may further include a high-speed random access memory (RAM) and a non-volatile memory, such as one or more magnetic disk storage devices and a flash storage device.
- a non-transitory computer-readable storage medium in the memory 1202 is configured to store at least one instruction, the at least one instruction being configured to be executed by the processor 1201 to implement the method provided in the video denoising method embodiments of the present disclosure.
- the terminal 1200 may optionally include a peripheral interface 1203 and at least one peripheral.
- the processor 1201 , the memory 1202 , and the peripheral interface 1203 may be connected by using a bus or a signal cable.
- Each peripheral may be connected to the peripheral interface 1203 by using a bus, a signal cable, or a circuit board.
- the peripheral includes: at least one of a radio frequency (RF) circuit 1204 , a touch display screen 1205 , a camera assembly 1206 , an audio circuit 1207 , a positioning component 1208 , and a power supply 1209 .
- the peripheral interface 1203 may be configured to connect at least one peripheral related to input/output (I/O) to the processor 1201 and the memory 1202 .
- the processor 1201 , the memory 1202 , and the peripheral interface 1203 are integrated on the same chip or circuit board.
- Alternatively, any one or two of the processor 1201, the memory 1202, and the peripheral interface 1203 may be implemented on an independent chip or circuit board. This is not limited in this embodiment.
- the RF circuit 1204 is configured to receive and transmit an RF signal, also referred to as an electromagnetic signal.
- The RF circuit 1204 communicates with a communication network and other communication devices by using the electromagnetic signal.
- the RF circuit 1204 converts an electric signal into an electromagnetic signal for transmission, or converts a received electromagnetic signal into an electric signal.
- the RF circuit 1204 includes: an antenna system, an RF transceiver, one or more amplifiers, a tuner, an oscillator, a digital signal processor, a codec chip set, a subscriber identity module card, and the like.
- the RF circuit 1204 may communicate with another terminal by using at least one wireless communication protocol.
- the wireless communication protocol includes, but is not limited to: a metropolitan area network, generations of mobile communications networks (2G, 3G, 4G, and 5G), a wireless local area network and/or a wireless fidelity (Wi-Fi) network.
- The RF circuit 1204 may further include a circuit related to near field communication (NFC), which is not limited in the present disclosure.
- the display screen 1205 is configured to display a user interface (UI).
- the UI may include a graph, text, an icon, a video, and any combination thereof.
- the display screen 1205 is further capable of collecting touch signals on or above a surface of the display screen 1205 .
- the touch signal may be inputted, as a control signal, to the processor 1201 for processing.
- the display screen 1205 may be further configured to provide a virtual button and/or a virtual keyboard, which is also referred to as a soft button and/or a soft keyboard.
- the display screen 1205 may be a flexible display screen disposed on a curved surface or a folded surface of the terminal 1200 .
- The display screen 1205 may further be set to have a non-rectangular irregular shape, that is, a special-shaped screen.
- the display screen 1205 may be manufactured by using a material such as a liquid crystal display (LCD), an organic light-emitting diode (OLED), or the like.
- the camera assembly 1206 is configured to capture an image or a video.
- the camera assembly 1206 includes a front-facing camera and a rear-facing camera.
- Generally, the front-facing camera is disposed on the front panel of the terminal, and the rear-facing camera is disposed on the back surface of the terminal.
- there are at least two rear cameras which are respectively any of a main camera, a depth-of-field camera, a wide-angle camera, and a telephoto camera, to implement background blur through fusion of the main camera and the depth-of-field camera, panoramic photographing and virtual reality (VR) photographing through fusion of the main camera and the wide-angle camera, or other fusion photographing functions.
- the camera assembly 1206 may further include a flash.
- the flash may be a single color temperature flash or a double color temperature flash.
- the double color temperature flash is a combination of a warm light flash and a cold light flash, and may be used for light compensation under different color temperatures.
- the audio circuit 1207 may include a microphone and a speaker.
- the microphone is configured to acquire sound waves of users and surroundings, and convert the sound waves into electrical signals and input the signals to the processor 1201 for processing, or input the signals to the RF circuit 1204 to implement voice communication.
- the microphone may further be an array microphone or an omni-directional acquisition type microphone.
- the speaker is configured to convert electric signals from the processor 1201 or the RF circuit 1204 into sound waves.
- the speaker may be a conventional thin-film speaker or a piezoelectric ceramic speaker.
- When the speaker is the piezoelectric ceramic speaker, the speaker can not only convert an electrical signal into sound waves audible to a human being, but also convert an electrical signal into sound waves inaudible to the human being for ranging and other purposes.
- the audio circuit 1207 may further include an earphone jack.
- the positioning component 1208 is configured to position a current geographic location of the terminal 1200 for implementing navigation or a location-based service (LBS).
- the positioning component 1208 may be a positioning component based on the Global Positioning System (GPS) of the United States, the BeiDou system of China, the GLONASS System of Russia, or the GALILEO System of the European Union.
- the power supply 1209 is configured to supply power to components in the terminal 1200 .
- the power supply 1209 may be an alternating current, a direct current, a primary battery, or a rechargeable battery.
- the rechargeable battery may be a wired rechargeable battery or a wireless rechargeable battery.
- the rechargeable battery may be further configured to support a fast charge technology.
- the terminal 1200 may also include one or more sensors 1210 .
- the one or more sensors 1210 include, but are not limited to: an acceleration sensor 1211 , a gyroscope sensor 1212 , a pressure sensor 1213 , a fingerprint sensor 1214 , an optical sensor 1215 , and a proximity sensor 1216 .
- the fingerprint sensor 1214 is configured to collect a fingerprint of a user.
- the processor 1201 identifies an identity of the user according to the fingerprint collected by the fingerprint sensor 1214 , or the fingerprint sensor 1214 identifies an identity of the user according to the collected fingerprint.
- the processor 1201 authorizes the user to perform related sensitive operations.
- the sensitive operations include: unlocking a screen, viewing encrypted information, downloading software, paying, changing a setting, and the like.
- the fingerprint sensor 1214 may be disposed on a front surface, a back surface, or a side surface of the terminal 1200 . When a physical button or a vendor logo is disposed on the terminal 1200 , the fingerprint sensor 1214 may be integrated with the physical button or the vendor logo.
- the optical sensor 1215 is configured to acquire ambient light intensity.
- the processor 1201 may control display luminance of the display screen 1205 according to the ambient light intensity collected by the optical sensor 1215 . Specifically, when the ambient light intensity is relatively high, the display luminance of the display screen 1205 is increased, and when the ambient light intensity is relatively low, the display luminance of the touch display screen 1205 is reduced.
- the processor 1201 may further dynamically adjust a camera parameter of the camera assembly 1206 according to the ambient light intensity acquired by the optical sensor 1215 .
- the proximity sensor 1216 also referred to as a distance sensor, is usually disposed on a front panel of the terminal 1200 .
- the proximity sensor 1216 is configured to collect a distance between a user and the front surface of the terminal 1200 .
- When the proximity sensor 1216 detects that the distance between the user and the front surface of the terminal 1200 gradually decreases, the display screen 1205 is controlled by the processor 1201 to switch from a screen-on state to a screen-off state. When the proximity sensor 1216 detects that the distance gradually increases, the display screen 1205 is controlled by the processor 1201 to switch from the screen-off state to the screen-on state.
- The structure shown in FIG. 12 does not constitute a limitation on the terminal 1200; the terminal may include more or fewer components than those shown in the figure, some components may be combined, or a different component deployment may be used.
- An embodiment of the present disclosure further provides a computer-readable storage medium storing computer-readable instructions, the computer-readable instructions, when executed by a processor, causing the processor to perform the steps in the foregoing video denoising method.
- the steps of the video denoising method here may be the steps of the video denoising methods in the above embodiments.
- the program may be stored in a non-volatile computer-readable storage medium. When the program is executed, the procedures of the foregoing method embodiments may be implemented.
- References to the memory, the storage, the database, or other medium used in the embodiments provided in the present disclosure may all include a non-volatile or a volatile memory.
- the non-volatile memory may include a read-only memory (ROM), a programmable ROM (PROM), an electrically programmable ROM (EPROM), an electrically erasable programmable ROM (EEPROM) or a flash memory.
- the volatile memory may include a RAM or an external high-speed cache.
- the RAM is available in a plurality of forms, such as a static RAM (SRAM), a dynamic RAM (DRAM), a synchronous DRAM (SDRAM), a double data rate SDRAM (DDR-SDRAM), an enhanced SDRAM (ESDRAM), a synchronous link (Synchlink) DRAM (SLDRAM), a RAM bus (Rambus) direct RAM (RDRAM), a direct Rambus dynamic RAM (DRDRAM), and a Rambus dynamic RAM (RDRAM).
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Image Processing (AREA)
- Picture Signal Circuits (AREA)
Abstract
Description
x_k^− = x̂_{k−1}  (3)
P_k^− = P_{k−1} + Q  (4)
K_k = P_k^−(P_k^− + R)^{−1}  (5)
x_k = x_k^− + K_k(z_k − x_k^−)  (6)
P_k^− = P_{k−1} + Δ²Q  (8)
R_k = 1 + R_{k−1}(1 + K_{k−1})^{−1}  (9)
K_k = P_k^−(P_k^− + R_k U_k)^{−1}  (11)
x̂_k = (1 − K_k)x_k + K_k z_k  (12)
calculating the motion compensation coefficient; K_k ← P_k^−(P_k^− + R_k U_k)^{−1}, calculating the first gain coefficient; x_k ← x_k^− + K_k(z_k − x_k^−), calculating the pixel value of the pixel after the temporal filtering; x̂_k ← (1 − K_k)x_k + K_k z_k, calculating the denoised pixel value; P_k ← (1 − K_k)P_k^−, calculating the variance to be used in the next frame of image, and returning x̂_k.
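The full step list above can be sketched as a single per-pixel routine. The formula for the motion compensation coefficient U_k is not given in this excerpt, so it is passed in as a parameter, and Q is an assumed process-noise constant:

```python
def temporal_denoise_pixel(z_k, x_hat_prev, P_prev, R_prev, K_prev, delta, Q, U_k):
    """One per-pixel temporal-filtering step, following the listed steps."""
    R_k = 1.0 + R_prev / (1.0 + K_prev)      # gain offset coefficient, eq. (9)
    x_pred = x_hat_prev                      # predicted pixel value, eq. (3)
    P_pred = P_prev + delta ** 2 * Q         # second variance, eq. (8)
    K_k = P_pred / (P_pred + R_k * U_k)      # first gain coefficient, eq. (11)
    x_k = x_pred + K_k * (z_k - x_pred)      # value after temporal filtering, eq. (6)
    x_hat_k = (1.0 - K_k) * x_k + K_k * z_k  # denoised pixel value, eq. (12)
    P_k = (1.0 - K_k) * P_pred               # variance for the next frame
    return x_hat_k, (P_k, R_k, K_k)
```

Because the routine uses only the pixel's own state from the previous frame, every pixel of a frame can run this step simultaneously.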
TABLE 1
| Model | CPU occupancy rate when decoupling and GPU parallel computing are not used | CPU occupancy rate when decoupling and GPU parallel computing are used |
|---|---|---|
| Notebook computer 1 | 12.34% | 0.24% |
| Notebook computer 2 | 8.71% | 0.76% |
Claims (20)
Applications Claiming Priority (3)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN201911288617.7 | 2019-12-12 | ||
| CN201911288617.7A CN110933334B (en) | 2019-12-12 | 2019-12-12 | Video noise reduction method, device, terminal and storage medium |
| PCT/CN2020/095359 WO2021114592A1 (en) | 2019-12-12 | 2020-06-10 | Video denoising method, device, terminal, and storage medium |
Related Parent Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| PCT/CN2020/095359 Continuation WO2021114592A1 (en) | 2019-12-12 | 2020-06-10 | Video denoising method, device, terminal, and storage medium |
Publications (2)
| Publication Number | Publication Date |
|---|---|
| US20220130023A1 US20220130023A1 (en) | 2022-04-28 |
| US12094085B2 true US12094085B2 (en) | 2024-09-17 |
Family
ID=69863644
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| US17/572,604 Active 2041-08-21 US12094085B2 (en) | 2019-12-12 | 2022-01-10 | Video denoising method and apparatus, terminal, and storage medium |
Country Status (4)
| Country | Link |
|---|---|
| US (1) | US12094085B2 (en) |
| EP (1) | EP3993396B1 (en) |
| CN (1) | CN110933334B (en) |
| WO (1) | WO2021114592A1 (en) |
Families Citing this family (9)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN110933334B (en) | 2019-12-12 | 2021-08-03 | 腾讯科技(深圳)有限公司 | Video noise reduction method, device, terminal and storage medium |
| WO2021217392A1 (en) * | 2020-04-28 | 2021-11-04 | 深圳市大疆创新科技有限公司 | Infrared image denoising method and apparatus, and device |
| CN113362260B (en) * | 2021-07-21 | 2025-01-07 | Oppo广东移动通信有限公司 | Image optimization method and device, storage medium and electronic device |
| CN115330628B (en) * | 2022-08-18 | 2023-09-12 | 盐城众拓视觉创意有限公司 | Video frame-by-frame denoising method based on image processing |
| AU2023327801A1 (en) * | 2022-08-26 | 2025-03-20 | Cuvos Pty Ltd | A signal processing system |
| CN117876243A (en) * | 2022-09-30 | 2024-04-12 | 深圳市中兴微电子技术有限公司 | Video noise reduction method, electronic device and computer readable storage medium |
| CN116228589B (en) * | 2023-03-22 | 2023-08-29 | 新创碳谷集团有限公司 | Method, equipment and storage medium for eliminating noise points of visual inspection camera |
| CN116777775B (en) * | 2023-06-14 | 2025-10-10 | 杭州微影软件有限公司 | Image processing method, device and electronic equipment for infrared images |
| CN118365554B (en) * | 2024-06-19 | 2024-08-23 | 深圳市超像素智能科技有限公司 | Video noise reduction method, device, electronic equipment and computer readable storage medium |
Citations (18)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| KR20100036601A (en) | 2008-09-30 | 2010-04-08 | 엘지전자 주식회사 | Apparatus and method for removing noise-image |
| US20100309377A1 (en) | 2009-06-05 | 2010-12-09 | Schoenblum Joel W | Consolidating prior temporally-matched frames in 3d-based video denoising |
| CN101964863A (en) | 2010-05-07 | 2011-02-02 | 镇江唐桥微电子有限公司 | Self-adaptive time-space domain video image denoising method |
| CN102497497A (en) | 2011-12-05 | 2012-06-13 | 四川九洲电器集团有限责任公司 | Method for dynamically adjusting threshold in image denoising algorithm |
| CN102769722A (en) | 2012-07-20 | 2012-11-07 | 上海富瀚微电子有限公司 | Time-space domain hybrid video noise reduction device and method |
| CN103369209A (en) | 2013-07-31 | 2013-10-23 | 上海通途半导体科技有限公司 | Video noise reduction device and video noise reduction method |
| CN103533214A (en) | 2013-10-01 | 2014-01-22 | 中国人民解放军国防科学技术大学 | Video real-time denoising method based on kalman filtering and bilateral filtering |
| US20140240512A1 (en) * | 2009-03-02 | 2014-08-28 | Flir Systems, Inc. | Time spaced infrared image enhancement |
| US20140267762A1 (en) * | 2013-03-15 | 2014-09-18 | Pelican Imaging Corporation | Extended color processing on pelican array cameras |
| CN104735300A (en) | 2015-03-31 | 2015-06-24 | 中国科学院自动化研究所 | Video denoising device and method based on weight filtering |
| US20170287190A1 (en) * | 2014-12-31 | 2017-10-05 | Flir Systems, Inc. | Image enhancement with fusion |
| CN107979712A (en) | 2016-10-20 | 2018-05-01 | 上海富瀚微电子股份有限公司 | A kind of vedio noise reduction method and device |
| CN108174056A (en) | 2016-12-07 | 2018-06-15 | 南京理工大学 | A low-light video noise reduction method based on joint spatio-temporal domain |
| US20180220129A1 (en) | 2017-01-30 | 2018-08-02 | Intel Corporation | Motion, coding, and application aware temporal and spatial filtering for video pre-processing |
| CN109410124A (en) | 2016-12-27 | 2019-03-01 | 深圳开阳电子股份有限公司 | A kind of noise-reduction method and device of video image |
| CN109743473A (en) | 2019-01-11 | 2019-05-10 | 珠海全志科技股份有限公司 | Video image 3 D noise-reduction method, computer installation and computer readable storage medium |
| CN110933334A (en) | 2019-12-12 | 2020-03-27 | 腾讯科技(深圳)有限公司 | Video noise reduction method, device, terminal and storage medium |
| US20220222795A1 (en) * | 2019-05-31 | 2022-07-14 | Hangzhou Hikvision Digital Technology Co., Ltd. | Apparatus for image fusion and method for image fusion |
Family Cites Families (1)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US7110455B2 (en) * | 2001-08-14 | 2006-09-19 | General Instrument Corporation | Noise reduction pre-processor for digital video using previously generated motion vectors and adaptive spatial filtering |
-
2019
- 2019-12-12 CN CN201911288617.7A patent/CN110933334B/en active Active
-
2020
- 2020-06-10 WO PCT/CN2020/095359 patent/WO2021114592A1/en not_active Ceased
- 2020-06-10 EP EP20898101.9A patent/EP3993396B1/en active Active
-
2022
- 2022-01-10 US US17/572,604 patent/US12094085B2/en active Active
Patent Citations (18)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| KR20100036601A (en) | 2008-09-30 | 2010-04-08 | 엘지전자 주식회사 | Apparatus and method for removing noise-image |
| US20140240512A1 (en) * | 2009-03-02 | 2014-08-28 | Flir Systems, Inc. | Time spaced infrared image enhancement |
| US20100309377A1 (en) | 2009-06-05 | 2010-12-09 | Schoenblum Joel W | Consolidating prior temporally-matched frames in 3d-based video denoising |
| CN101964863A (en) | 2010-05-07 | 2011-02-02 | 镇江唐桥微电子有限公司 | Self-adaptive time-space domain video image denoising method |
| CN102497497A (en) | 2011-12-05 | 2012-06-13 | 四川九洲电器集团有限责任公司 | Method for dynamically adjusting threshold in image denoising algorithm |
| CN102769722A (en) | 2012-07-20 | 2012-11-07 | 上海富瀚微电子有限公司 | Time-space domain hybrid video noise reduction device and method |
| US20140267762A1 (en) * | 2013-03-15 | 2014-09-18 | Pelican Imaging Corporation | Extended color processing on pelican array cameras |
| CN103369209A (en) | 2013-07-31 | 2013-10-23 | 上海通途半导体科技有限公司 | Video noise reduction device and video noise reduction method |
| CN103533214A (en) | 2013-10-01 | 2014-01-22 | 中国人民解放军国防科学技术大学 | Video real-time denoising method based on kalman filtering and bilateral filtering |
| US20170287190A1 (en) * | 2014-12-31 | 2017-10-05 | Flir Systems, Inc. | Image enhancement with fusion |
| CN104735300A (en) | 2015-03-31 | 2015-06-24 | 中国科学院自动化研究所 | Video denoising device and method based on weight filtering |
| CN107979712A (en) | 2016-10-20 | 2018-05-01 | 上海富瀚微电子股份有限公司 | A kind of vedio noise reduction method and device |
| CN108174056A (en) | 2016-12-07 | 2018-06-15 | 南京理工大学 | A low-light video noise reduction method based on joint spatio-temporal domain |
| CN109410124A (en) | 2016-12-27 | 2019-03-01 | 深圳开阳电子股份有限公司 | A kind of noise-reduction method and device of video image |
| US20180220129A1 (en) | 2017-01-30 | 2018-08-02 | Intel Corporation | Motion, coding, and application aware temporal and spatial filtering for video pre-processing |
| CN109743473A (en) | 2019-01-11 | 2019-05-10 | 珠海全志科技股份有限公司 | Video image 3 D noise-reduction method, computer installation and computer readable storage medium |
| US20220222795A1 (en) * | 2019-05-31 | 2022-07-14 | Hangzhou Hikvision Digital Technology Co., Ltd. | Apparatus for image fusion and method for image fusion |
| CN110933334A (en) | 2019-12-12 | 2020-03-27 | 腾讯科技(深圳)有限公司 | Video noise reduction method, device, terminal and storage medium |
Non-Patent Citations (6)
| Title |
|---|
| Anonymous: "May 25, 2010 1 Image Filtering", May 25, 2010 (May 25, 2010), Retrieved from the Internet: URL:https://www.cs.auckland.ac.nz/courses/compsci373slc/PatricesLectures/Image Filtering_2up.pdf [retrieved on Nov. 8, 2017]. 8 pages. |
| Chenglin Zuo et al., "Video Denoising Based on a Spatiotemporal Kalman-Bilateral Mixture Model," The Scientific World Journal, vol. 2013, Jan. 1, 2013 (Jan. 1, 2013), pp. 1-10. 11 pages. |
| Ergio G Pfleger S et al., "Real-time video denoising on multicores and GPUs with Kalman-based and Bilateral filters fusion," Journal of Real-Time Image Processing, Springer, DE, vol. 16, No. 5, Feb. 8, 2017 (Feb. 8, 2017), pp. 1629-1642. 14 pages. |
| The European Patent Office (EPO) The Extended European Search Report for 20898101.9 Jul. 29, 2022 11 pages. |
| The State Intellectual Property Office of the People's Republic of China (SIPO) Office Action 1 for for 201911288617.7 Feb. 9, 2021 11 Pages (including translation). |
| The World Intellectual Property Organization (WIPO) International Search Report for PCT/CN2020/095359 Sep. 2, 2020 5 Pages (including translation). |
Also Published As
| Publication number | Publication date |
|---|---|
| US20220130023A1 (en) | 2022-04-28 |
| EP3993396B1 (en) | 2025-10-15 |
| WO2021114592A1 (en) | 2021-06-17 |
| EP3993396A4 (en) | 2022-08-31 |
| CN110933334A (en) | 2020-03-27 |
| EP3993396A1 (en) | 2022-05-04 |
| CN110933334B (en) | 2021-08-03 |
| EP3993396C0 (en) | 2025-10-15 |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| US12094085B2 (en) | Video denoising method and apparatus, terminal, and storage medium | |
| EP3828769B1 (en) | Image processing method and apparatus, terminal and computer-readable storage medium | |
| WO2021008456A1 (en) | Image processing method and apparatus, electronic device, and storage medium | |
| US9692959B2 (en) | Image processing apparatus and method | |
| EP3965060A1 (en) | Method for determining motion information of image feature point, task execution method, and device | |
| US20220044026A1 (en) | Method for generating clipping template, and electronic device | |
| US10863077B2 (en) | Image photographing method, apparatus, and terminal | |
| US20230095250A1 (en) | Method for recommending multimedia resource and electronic device | |
| CN110110787A (en) | Location acquiring method, device, computer equipment and the storage medium of target | |
| KR20190014638A (en) | Electronic device and method for controlling of the same | |
| CN107169939A (en) | Image processing method and related product | |
| CN111277893B (en) | Video processing method and device, readable medium and electronic equipment | |
| CN110248197B (en) | Speech enhancement method and device | |
| EP4000700A1 (en) | Camera shot movement control method, device, apparatus, and storage medium | |
| CN112581358A (en) | Training method of image processing model, image processing method and device | |
| WO2022033272A1 (en) | Image processing method and electronic device | |
| CN108449541A (en) | A kind of panoramic picture image pickup method and mobile terminal | |
| CN110807769B (en) | Image display control method and device | |
| CN110443752A (en) | An image processing method and mobile terminal | |
| CN111369456B (en) | Image denoising method and device, electronic device and storage medium | |
| CN111860064B (en) | Video-based target detection methods, devices, equipment and storage media | |
| CN107730443B (en) | Image processing method and device and user equipment | |
| CN113658283B (en) | Image processing method, device, electronic equipment and storage medium | |
| WO2018219274A1 (en) | Method and apparatus for denoising processing, storage medium and terminal | |
| EP4621721A1 (en) | Face image generation method and apparatus, device, and storage medium |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| FEPP | Fee payment procedure |
Free format text: ENTITY STATUS SET TO UNDISCOUNTED (ORIGINAL EVENT CODE: BIG.); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY |
|
| STPP | Information on status: patent application and granting procedure in general |
Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION |
|
| AS | Assignment |
Owner name: TENCENT TECHNOLOGY (SHENZHEN) COMPANY LIMITED, CHINA Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:LI, BENCHAO;LI, FENG;LIU, YI;AND OTHERS;SIGNING DATES FROM 20190102 TO 20211208;REEL/FRAME:059449/0234 |
|
| STPP | Information on status: patent application and granting procedure in general |
Free format text: NOTICE OF ALLOWANCE MAILED -- APPLICATION RECEIVED IN OFFICE OF PUBLICATIONS |
|
| STCF | Information on status: patent grant |
Free format text: PATENTED CASE |