WO2021114868A1 - Noise reduction method, terminal and storage medium - Google Patents

Noise reduction method, terminal and storage medium

Info

Publication number
WO2021114868A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
noise
value
noise reduction
pixel
Prior art date
Application number
PCT/CN2020/121645
Other languages
English (en)
French (fr)
Inventor
权威
马元蛟
罗俊
Original Assignee
Oppo广东移动通信有限公司
Priority date
Filing date
Publication date
Application filed by Oppo广东移动通信有限公司
Publication of WO2021114868A1

Classifications

    • G06T5/70
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/30 Determination of transform parameters for the alignment of images, i.e. image registration
    • G06T7/33 Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10016 Video; Image sequence

Definitions

  • the embodiments of the present application relate to the field of image processing technologies, and in particular, to a noise reduction method, a terminal, and a storage medium.
  • Multi-frame video noise reduction is realized based on multi-frame noise reduction technology: after multiple frames of photos or images are collected, the pixels whose noise characteristics differ across the frames are found, and a comparatively clean, pure image is obtained after weighted synthesis. In order to ensure that the overall brightness of the fused image does not change, alpha fusion is often used, in which the selection of weights directly determines the noise reduction effect.
  • the embodiments of the present application provide a noise reduction method, a terminal, and a storage medium, which can effectively improve the fusion noise reduction effect of multiple frames of video images and improve the video image quality when performing noise reduction processing.
  • an embodiment of the present application provides a noise reduction method, and the method includes:
  • the previous frame of image is the image in the video to be noise-reduced that immediately precedes the current image;
  • the next frame of image is the image in the video to be noise-reduced that immediately follows the current image.
  • an embodiment of the present application provides a terminal, the terminal includes: an acquisition part, a calculation part, a determination part, and a noise reduction part,
  • the acquiring part is configured to perform registration processing on the current image by using the first noise-reduction image of the previous frame of image to obtain a registered image; wherein the previous frame of image is the image in the video to be noise-reduced that immediately precedes the current image;
  • the calculation part is configured to calculate the noise estimation value of the current image
  • the determining part is configured to determine the fusion weight according to the first noise value of the previous frame image and the noise estimated value
  • the obtaining part is further configured to obtain a second noise reduction image and a second noise value corresponding to the current image by using the fusion weight;
  • the noise reduction part is configured to continue to perform noise reduction processing on the next frame of image according to the second noise-reduction image and the second noise value, until every frame of the video to be noise-reduced has been traversed; wherein the next frame of image is the image in the video to be denoised that immediately follows the current image.
  • an embodiment of the present application provides a terminal.
  • the terminal includes a processor and a memory storing instructions executable by the processor, and when the instructions are executed by the processor, the above-mentioned noise reduction method is implemented.
  • an embodiment of the present application provides a computer-readable storage medium with a program stored thereon and applied to a terminal.
  • when the program is executed by a processor, the above-mentioned noise reduction method is implemented.
  • The embodiments of the application provide a noise reduction method, a terminal, and a storage medium.
  • The terminal uses the first noise-reduction image of the previous frame of image to perform registration processing on the current image to obtain a registered image, where the previous frame of image is the image in the video to be denoised that immediately precedes the current image; calculates the noise estimation value of the current image and determines the fusion weight according to the first noise value of the previous frame of image and the noise estimation value; uses the fusion weight to obtain the second noise-reduction image and the second noise value corresponding to the current image; and continues to perform noise reduction processing on the next frame of image according to the second noise-reduction image and the second noise value, until every frame of the video to be denoised has been traversed, where the next frame of image is the image in the video to be denoised that immediately follows the current image.
  • In this way, when the terminal performs noise reduction processing on the video to be denoised, it can use the noise-reduced image and noise value obtained from denoising the previous frame of image, combined with the noise estimation value of the current image, to determine the fusion weight, use the fusion weight to complete the fusion noise reduction processing of the current image, and then continue to denoise the next frame of image using the noise-reduced image and noise value of the current image, until every frame of the video to be denoised has been traversed.
  • Because the fusion weight is determined based on the noise value corresponding to the previous frame of image and the noise value corresponding to the current image, the fusion noise reduction effect for multi-frame video images can be effectively improved when noise reduction processing is performed, which improves the video image quality.
  • Figure 1 is a first schematic diagram of the implementation process of the noise reduction method;
  • Figure 2 is a schematic diagram of the composition of a video to be noise-reduced;
  • Figure 3 is a schematic diagram of pixel points;
  • Figure 4 is a second schematic diagram of the implementation process of the noise reduction method;
  • Figure 5 is a third schematic diagram of the implementation process of the noise reduction method;
  • Figure 6 is a structural block diagram of fusion noise reduction processing;
  • Figure 7 is a schematic diagram of fusion noise reduction processing;
  • Figure 8 is a first schematic diagram of the terminal structure;
  • Figure 9 is a second schematic diagram of the terminal structure.
  • Multi-frame video noise reduction is a common method of video noise reduction.
  • Multi-frame video noise reduction can specifically include the following process: image registration technology is used to align the image content, and then the pixels of the video frames are fused. Because the noise in an image is mostly a random signal, the noise intensity can be reduced by multi-frame fusion.
  • Multi-frame video noise reduction is based on multi-frame noise reduction technology. The so-called multi-frame noise reduction technology collects multiple frames of photos or images, finds the pixels whose noise characteristics differ across the frames, and obtains a cleaner, purer image after weighted synthesis.
  • When the terminal is shooting, it calculates and filters the number and positions of noise points across multiple frames, and replaces each noisy position with the content of frames that have no noise at that position. After repeated weighting and replacement, a very clean image is obtained. In fact, the final image is composed of multiple frames of images, so on some occasions the ghosting of some objects can be vaguely seen; as long as this does not affect the main body of the image, it can be ignored.
  • In order to ensure that the overall brightness of the fused image does not change, alpha fusion is often used. This method is a weighted summation method.
  • The fusion weights are often selected as fixed parameters, or chosen according to the difference between the fused pixels. The selection of weights directly determines the noise reduction effect; for example, if the weight of the previous frame image is larger, the noise reduction effect of the image will be stronger.
  • the use of alpha fusion for noise reduction processing is prone to ghosting, which reduces the effect of video noise reduction.
  • In contrast, this application uses the noise estimation value estimated by a preset noise model to determine the fusion weight more accurately, so that fusion noise reduction can follow the principle of giving a higher weight to the pixel with lower noise, which improves the video noise reduction effect and the purity of the picture.
  • In this way, when the terminal performs noise reduction processing on the video to be denoised, it can use the noise-reduced image and noise value obtained from denoising the previous frame of image, combined with the noise estimation value of the current image, to determine the fusion weight, use the fusion weight to complete the fusion noise reduction processing of the current image, and then continue to denoise the next frame of image using the noise-reduced image and noise value of the current image, until every frame of the video to be denoised has been traversed.
  • Because the fusion weight is determined based on the noise value corresponding to the previous frame of image and the noise value corresponding to the current image, the fusion noise reduction effect for multi-frame video images can be effectively improved when noise reduction processing is performed, which improves the video image quality.
  • FIG. 1 is a schematic diagram of the implementation process of the noise reduction method. As shown in FIG. 1, in the embodiment of the present application, the method for the terminal to perform noise reduction may include the following steps:
  • Step 101 Perform registration processing on the current image by using the first denoised image of the previous frame of image to obtain a registered image; where the previous frame of image is the previous image in the video to be denoised and continuous with the current image .
  • When the terminal performs noise reduction processing on the current image in the video to be noise-reduced, it can first use the first noise-reduction image of the previous frame of image to perform registration processing on the current image, and then the registered image can be obtained.
  • the terminal can be any device with communication and storage functions, such as: a tablet computer, mobile phone, e-reader, remote control, personal computer (PC), notebook computer, in-vehicle equipment, Internet TV, wearable device, and other equipment.
  • the terminal may be equipped with a photographing device, so that the photographing device may be used to collect and obtain the video to be noise-reduced.
  • the photographing device may be an image sensor.
  • Specifically, the photographing device may be a charge-coupled device (CCD) or complementary metal oxide semiconductor (CMOS) sensor configured in the terminal.
  • the video to be noise-reduced may also be received or acquired by the terminal.
  • Specifically, the terminal may receive the video to be noise-reduced from another device, download the video to be noise-reduced from a server, or read the video to be noise-reduced from local storage.
  • the video to be noise-reduced may include consecutive multiple frames of images.
  • the multi-frame image may be an image sequence arranged along the time axis.
  • FIG. 2 is a schematic diagram of the composition of the video to be noise-reduced.
  • As shown in FIG. 2, the video to be noise-reduced may include an image sequence of image 1, image 2, image 3, image 4, ..., image (n-1), image n arranged at times t1, t2, t3, t4, ..., t(n-1), tn respectively, where n is an integer greater than 5. The previous frame of image is the image in the video to be denoised that immediately precedes the current image; for example, if the current image is image 3, the previous frame of image is image 2 and the next frame of image is image 4.
  • When the terminal performs noise reduction processing on the video to be denoised, it can denoise the image sequence in the order of the time axis of the video, that is, perform noise reduction processing on each frame of image sequentially according to image 1, image 2, image 3, image 4, image 5, image 6, and so on. Therefore, when the terminal denoises the current image of the video to be noise-reduced, all images before the current image have already completed noise reduction processing.
  • Since the terminal denoises the image sequence in the order of the time axis of the video to be denoised, before denoising the current image it can obtain the noise-reduced image corresponding to the previous frame of image, that is, the first noise-reduction image, and can also obtain the noise value corresponding to the previous frame of image, that is, the first noise value, so that it can further perform noise reduction processing on the current image based on the first noise-reduction image and the first noise value of the previous frame of image.
  • Before using the first noise-reduction image corresponding to the previous frame of image to perform registration processing on the current image and obtaining the registered image, that is, before step 101, the terminal can first read the first noise-reduction image and the first noise value of the previous frame of image.
  • When the terminal uses the first noise-reduction image of the previous frame of image to perform registration processing on the current image, it can use image registration technology to register the current image with the first noise-reduction image of the previous frame of image. Specifically, the terminal can perform registration processing according to a variety of image registration methods, such as the optical flow method, the block search method, and the feature matching method.
  • The purpose of registering the current image against the first noise-reduction image is to align the image content of the two. That is, in this application, the image content of the registered image obtained by the terminal corresponds to the image content of the first noise-reduction image.
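  • As an illustration of the optical flow option mentioned above, the sketch below warps the current frame onto the first noise-reduction image using OpenCV's Farnebäck dense optical flow. It is only a minimal sketch under assumed frame formats, not the patent's implementation; any of the other registration methods (block search, feature matching) could be substituted.

```python
# Hedged sketch: dense optical-flow registration of the current frame against the
# previous frame's noise-reduced result (one of the registration options named above).
import cv2
import numpy as np

def register_to_reference(reference, current):
    """Warp `current` so its content aligns with `reference` (both HxWx3 uint8 frames)."""
    ref_gray = cv2.cvtColor(reference, cv2.COLOR_BGR2GRAY)
    cur_gray = cv2.cvtColor(current, cv2.COLOR_BGR2GRAY)
    # Flow from the reference frame to the current frame (typical Farnebäck parameters).
    flow = cv2.calcOpticalFlowFarneback(ref_gray, cur_gray, None, 0.5, 3, 15, 3, 5, 1.2, 0)
    h, w = ref_gray.shape
    grid_x, grid_y = np.meshgrid(np.arange(w), np.arange(h))
    map_x = (grid_x + flow[..., 0]).astype(np.float32)
    map_y = (grid_y + flow[..., 1]).astype(np.float32)
    # Sample the current frame at the flow-displaced positions -> registered image.
    return cv2.remap(current, map_x, map_y, cv2.INTER_LINEAR, borderMode=cv2.BORDER_REPLICATE)
```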
  • Step 102 Calculate the noise estimation value of the current image, and determine the fusion weight according to the first noise value and the noise estimation value of the previous frame of image.
  • After the terminal uses the first noise-reduction image of the previous frame of image to perform registration processing on the current image and obtains the registered image, it can first calculate the noise estimation value of the current image, and then further determine the fusion weight according to the first noise value of the previous frame of image and the noise estimation value.
  • the first noise value of the previous frame of image may be used to estimate the noise of the first denoised image. That is to say, the first noise value represents the noise intensity of the first noise-reduced image, that is, represents the noise intensity of the previous frame of image after noise reduction processing.
  • When the terminal calculates the noise estimation value of the current image, it may use a preset noise model to estimate the noise of the current image.
  • the preset noise model corresponds to the camera that shoots the video to be denoised. Therefore, the preset noise model can be used to determine the relationship between the noise value and the signal of the camera. That is to say, in this application, for a fixed photographing device, a preset noise model can be obtained or calculated correspondingly.
  • Specifically, when calculating the noise estimation value of the current image, the terminal may first determine the signal strength corresponding to the current image, and then input the signal strength into the preset noise model to output the noise estimation value.
  • Further, the terminal may continue to calculate the fusion weight according to the first noise value of the previous frame of image and the noise estimation value, where the fusion weight is used to perform fusion noise reduction processing on the current image.
  • When the terminal determines the fusion weight according to the first noise value of the previous frame of image and the noise estimation value, it may first use the first noise value and the noise estimation value to calculate a fusion parameter, and then further determine the fusion weight according to the fusion parameter.
  • When calculating the fusion weight, the terminal also needs to compare the pixel values of the first noise-reduction image and the registered image, and then calculate the fusion weight according to the comparison result.
  • Fig. 3 is a schematic diagram of pixel points. As shown in Fig. 3, the terminal may determine the pixel point A corresponding to the first pixel coordinate in the first noise-reduction image as the first pixel, and at the same time determine the pixel point B corresponding to the first pixel coordinate in the registered image as the second pixel.
  • Since the terminal first uses the first noise-reduction image to perform image registration processing on the current image, the registered image and the first noise-reduction image have their image content aligned. Therefore, for the same pixel coordinate, the terminal can determine the two corresponding pixels, one in the first noise-reduction image and one in the registered image.
  • After determining the first pixel and the second pixel from the first noise-reduction image and the registered image respectively, the terminal may continue to obtain the first pixel value of the first pixel and the second pixel value of the second pixel, so that the pixel values of the first noise-reduction image and the registered image can be further compared according to the first pixel value and the second pixel value.
  • When the terminal compares the pixel values of the first noise-reduction image and the registered image based on the first pixel value and the second pixel value, it can calculate the difference parameter between the two pixel values, where the difference parameter may be the absolute value of the difference between the first pixel value and the second pixel value.
  • That is to say, before the terminal determines the fusion weight according to the fusion parameter, it needs to obtain, based on the first pixel coordinate, the first pixel value of the first pixel from the first noise-reduction image and the second pixel value of the second pixel from the registered image. When the fusion weight is determined, the difference parameter between the first pixel value and the second pixel value can be calculated, and the difference parameter and the fusion parameter are then input into the preset weight model to obtain the fusion weight.
  • The preset weight model may be a specific function stored in the terminal that characterizes the relationship between the fusion weight and the pixel difference.
  • The preset weight model needs to follow the principle that the smaller the pixel difference, the greater the fusion weight, that is, the difference parameter and the fusion weight are inversely related.
  • The fusion weight is a value greater than or equal to 0 and less than or equal to 1.
  • Step 103 Use the fusion weight to obtain a second noise reduction image and a second noise value corresponding to the current image.
  • After the terminal determines the fusion weight according to the first noise value of the previous frame of image and the noise estimation value, it can use the fusion weight to obtain the second noise-reduction image and the second noise value corresponding to the current image.
  • the second noise reduction image may be an image obtained after performing fusion noise reduction processing on the current image, and the second noise value may be used to estimate the noise of the second noise reduction image.
  • the second noise value represents the noise intensity of the second noise-reduced image, that is, represents the noise intensity of the current image after the noise reduction processing is performed.
  • When the terminal uses the fusion weight to obtain the second noise-reduction image corresponding to the current image, it can specifically perform fusion noise reduction processing on the registered image according to the fusion weight, so as to obtain the second noise-reduction image.
  • When the terminal performs fusion noise reduction processing on the registered image according to the fusion weight to obtain the second noise-reduction image, it may first determine the fused pixel value corresponding to the first pixel coordinate according to the fusion weight, the first pixel value, and the second pixel value, then traverse the other pixel coordinates of the registered image and the first noise-reduction image besides the first pixel coordinate to obtain the other fused pixel values corresponding to those coordinates, and then generate the second noise-reduction image based on the fused pixel value and the other fused pixel values.
  • That is to say, the terminal may first, for one pixel coordinate, use the pixel value difference parameter of the two corresponding pixels at that coordinate in the first noise-reduction image and the registered image to fuse them and obtain the fused pixel value for that coordinate; then use the same fusion method on the corresponding pixels at every other pixel coordinate of the first noise-reduction image and the registered image to obtain the other fused pixel values; and finally use all the fused pixel values corresponding to all pixel coordinates to generate the fused noise-reduced image corresponding to the current image, that is, the second noise-reduction image, thereby completing the noise reduction processing of the current image.
  • The terminal first uses the first noise-reduction image to perform registration processing on the current image, so that the first noise-reduction image and the registered image have mutually aligned image content. Therefore, the second noise-reduction image obtained from the first noise-reduction image and the registered image also has its image content aligned with those two images.
  • the terminal uses the fusion weight to obtain the second noise value corresponding to the current image, and specifically may generate the second noise value according to the fusion weight, the first noise value, and the noise estimation value. That is, in this application, the noise estimation value is used to determine the noise intensity of the current image, and the second noise value is used to determine the noise intensity of the second noise-reduced image after the current image is de-noised. Therefore, after the fusion noise reduction of the current image is completed, the second noise value can be selected as the noise reduction result and input into the noise reduction processing flow of the next frame of image. At the same time, it is also necessary to use the second noise reduction image as the noise reduction result to perform the noise reduction processing of the next frame image.
  • Step 104 Continue to perform noise reduction processing on the next frame of image according to the second noise reduction image and the second noise value, until each frame of the video to be denoised is traversed; wherein, the next frame of image is the video to be denoised The next image in and continuous with the current image.
  • After the terminal obtains the second noise-reduction image and the second noise value corresponding to the current image by using the fusion weight, that is, after completing the fusion noise reduction of the current image according to the above steps 101 to 103, it can continue to perform noise reduction processing on the next frame of image according to the second noise-reduction image and the second noise value, until every frame of the video to be denoised has been traversed, thereby completing the noise reduction processing of the video to be denoised.
  • Specifically, after the terminal completes the fusion noise reduction processing on the current image and obtains the second noise-reduction image and the second noise value, it can continue to use the second noise-reduction image to register the next frame of image and obtain its registered image; then calculate the noise estimation value of the next frame of image and determine the fusion weight according to the second noise value and that noise estimation value; and finally use the fusion weight to obtain the third noise-reduction image and the third noise value corresponding to the next frame of image, completing the fusion noise reduction processing of the next frame of image. The third noise-reduction image and the third noise value are then used as the noise reduction result of the next frame of image and fed into the next round of fusion noise reduction processing, until the noise reduction processing of the video to be noise-reduced is completed.
  • After the terminal traverses every frame of the video to be noise-reduced according to the method of step 101 to step 104, it can obtain the noise-reduced images corresponding to all the frames, and can then follow the order of the time axis to generate the noise-reduced video from all of the noise-reduced images.
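  • To make the per-frame iteration of step 101 to step 104 concrete, the sketch below strings the stages together. The helper names (estimate_noise, fuse_frames, register_to_reference) and the simple carrying of a per-frame noise value are assumptions for illustration only, not the patent's implementation.

```python
# Hedged sketch of the step 101-104 loop: each frame is registered against the previous
# denoised result, fused with a noise-derived weight, and the result is carried forward.
def denoise_video(frames, estimate_noise, fuse_frames, register_to_reference):
    """frames: list of float arrays ordered along the time axis; the three callables
    are assumed helpers for noise estimation, weighted fusion, and registration."""
    denoised_frames = []
    prev_denoised = None      # first noise-reduction image of the previous frame
    prev_noise = None         # first noise value of the previous frame
    for current in frames:
        if prev_denoised is None:
            # First frame (step 106): use it directly as its own denoised result and
            # take its noise value from the preset noise model.
            denoised, noise = current.copy(), estimate_noise(current)
        else:
            registered = register_to_reference(prev_denoised, current)   # step 101
            noise_est = estimate_noise(current)                          # step 102
            denoised, noise = fuse_frames(prev_denoised, registered,     # steps 102-103
                                          prev_noise, noise_est)
        denoised_frames.append(denoised)             # store the second noise-reduction image
        prev_denoised, prev_noise = denoised, noise  # carry forward for the next frame (step 104)
    return denoised_frames
```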
  • Figure 4 is a second schematic diagram of the implementation process of the noise reduction method. As shown in Figure 4, after the terminal uses the fusion weight to obtain the second noise-reduction image and the second noise value corresponding to the current image, that is, after step 103, the method for the terminal to perform noise reduction processing may also include the following steps:
  • Step 105 Store the second noise reduction image, and update the first noise value using the second noise value.
  • After the terminal completes the fusion noise reduction processing for a frame of image and obtains its noise-reduction image and noise value, it can store the noise-reduction image for subsequent generation of the noise-reduced video; at the same time, the terminal can directly use the new noise value to update the noise value corresponding to the previous frame. For example, after the terminal completes the noise reduction processing on the current image and obtains the second noise-reduction image and the second noise value, it can store the second noise-reduction image and, at the same time, use the second noise value to update the first noise value.
  • FIG. 5 is a third schematic diagram of the implementation process of the noise reduction method. As shown in FIG. 5, before the terminal continues to perform noise reduction processing on the next frame of image according to the second noise-reduction image and the second noise value until every frame of the video to be denoised has been traversed, that is, before step 104, the method for the terminal to perform noise reduction processing may further include the following steps:
  • Step 106 If the current image is the first frame image of the video to be noise-reduced, determine the second noise value according to the preset noise model, and determine the current image as the second noise-reduced image.
  • That is to say, if the current image is the first frame image of the video to be noise-reduced, the terminal can directly use it as the second noise-reduction image, determine the second noise value according to the preset noise model, and then input the second noise-reduction image and the second noise value into the noise reduction processing of the next frame of image, so as to complete the noise reduction processing of the second frame of image in the video to be noise-reduced.
  • In other words, when the terminal performs noise reduction processing on the video to be denoised, it can use a preset noise model to estimate the noise of the current image, determine the fusion weight for fusion noise reduction according to the noise estimation value and the first noise value of the previous frame of image, and use the fusion weight to perform fusion noise reduction on the registered image obtained by registering the current image against the first noise-reduction image of the previous frame of image, thereby obtaining the second noise-reduction image and the second noise value of the current image. Further, the terminal can continue to use the second noise-reduction image and the second noise value to perform fusion noise reduction on the next frame of image, until the noise reduction processing of the video to be denoised is completed.
  • Because the noise estimation value estimated by the preset noise model allows the fusion weight to be determined more accurately, fusion noise reduction can follow the principle of giving a higher weight to the pixel with lower noise, which improves the video noise reduction effect and the purity of the picture.
  • The terminal uses the first noise-reduction image of the previous frame of image to perform registration processing on the current image to obtain a registered image, where the previous frame of image is the image in the video to be denoised that immediately precedes the current image; calculates the noise estimation value of the current image and determines the fusion weight according to the first noise value of the previous frame of image and the noise estimation value; uses the fusion weight to obtain the second noise-reduction image and the second noise value corresponding to the current image; and continues to perform noise reduction processing on the next frame of image according to the second noise-reduction image and the second noise value, until every frame of the video to be denoised has been traversed, where the next frame of image is the image in the video to be denoised that immediately follows the current image.
  • In this way, when the terminal performs noise reduction processing on the video to be denoised, it can use the noise-reduced image and noise value obtained from denoising the previous frame of image, combined with the noise estimation value of the current image, to determine the fusion weight, use the fusion weight to complete the fusion noise reduction processing of the current image, and then continue to denoise the next frame of image using the noise-reduced image and noise value of the current image, until every frame of the video to be denoised has been traversed.
  • Because the fusion weight is determined based on the noise value corresponding to the previous frame of image and the noise value corresponding to the current image, the fusion noise reduction effect for multi-frame video images can be effectively improved when noise reduction processing is performed, which improves the video image quality.
  • the terminal may use a preset noise model to perform noise estimation on the current image, so as to calculate the noise estimation value of the current image.
  • the preset noise model corresponds to the shooting device that shoots the video to be noise-reduced.
  • That is to say, for a fixed shooting device, a corresponding preset noise model can be obtained.
  • The preset noise model conforms to a normal distribution and can be expressed as formula (1):
  • X is the strength of the real signal.
  • the noise can be mainly divided into two types: read noise and shot noise.
  • the model parameter σ_read corresponding to the read noise and the model parameter σ_shot corresponding to the shot noise can both be measured in practice:
  • gd is the digital gain of the camera signal
  • ga is the analog gain of the camera signal
  • the preset noise model can estimate the noise of the current image, and the preset noise model can be calculated by shooting a chart in a dark room, or directly obtained from the manufacturer of the shooting device.
  • In this application, the terminal can use the standard deviation of the signal to characterize the noise intensity. Therefore, for a pixel in the current image whose signal intensity is X, the terminal can input the signal intensity X into the preset noise model, as in the above formula (1), and calculate the standard deviation corresponding to that pixel, so that the noise estimation value of the current image can be further obtained. That is to say, because the preset noise model can be used to determine the relationship between the noise value and the signal of the camera that shoots the video to be noise-reduced, the terminal can first determine the signal strength corresponding to the current image and then calculate the noise estimation value based on the preset noise model.
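  • The exact expression of formula (1) is not reproduced in the text above, so the sketch below assumes the common read-plus-shot noise model, in which the variance grows linearly with the signal and is scaled by the analog gain ga and the digital gain gd. The parameter names follow the description, but the precise combination and the default values are assumptions.

```python
# Hedged sketch of a preset noise model: per-pixel noise standard deviation as a function
# of signal strength X, assuming the common "shot + read" form (variance linear in signal).
import numpy as np

def noise_std(X, sigma_shot, sigma_read, ga, gd):
    """X: real signal strength per pixel; ga/gd: analog/digital gain of the camera signal."""
    total_gain = ga * gd
    variance = (sigma_shot ** 2) * total_gain * X + (sigma_read ** 2) * (total_gain ** 2)
    return np.sqrt(variance)

def estimate_noise(current_image, sigma_shot=0.01, sigma_read=2.0, ga=1.0, gd=1.0):
    """Noise estimation value of the current image (step 102): feed each pixel's signal
    strength through the model to get its estimated standard deviation. Defaults are illustrative."""
    return noise_std(current_image.astype(np.float64), sigma_shot, sigma_read, ga, gd)
```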
  • When the terminal fuses the first noise-reduction image and the registered image, it may first determine the fused pixel value corresponding to the first pixel coordinate according to the fusion weight, the first pixel value, and the second pixel value. Specifically, for the first pixel coordinate, the terminal can fuse pixel A and pixel B based on the pixel value P_A and the pixel value P_B, so as to obtain the fused pixel value P as the alpha-fusion weighted sum:
  • P = w·P_A + (1 - w)·P_B
  • where P_A and P_B are the pixel values of pixel A and pixel B respectively, and w is the fusion weight.
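  • A vectorized sketch of this per-pixel fusion over the whole frame is given below. It assumes the alpha-fusion form above, with the weight w applied to the pixel of the first noise-reduction image, which is an interpretation for illustration rather than a verbatim reproduction of the patent's formula.

```python
# Hedged sketch: alpha-fuse the first noise-reduction image (pixel values P_A) with the
# registered current image (pixel values P_B) using a per-pixel weight map w in [0, 1].
import numpy as np

def fuse_images(first_denoised, registered, w):
    """All inputs are arrays of the same shape; weights are applied per pixel."""
    first_denoised = np.asarray(first_denoised, dtype=np.float64)
    registered = np.asarray(registered, dtype=np.float64)
    # P = w * P_A + (1 - w) * P_B, evaluated at every pixel coordinate at once.
    return w * first_denoised + (1.0 - w) * registered
```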
  • In this application, the fusion weight can be determined from the difference parameter of the pixel values of the two fused pixels. Specifically, when calculating the fusion weight, the terminal needs to compare the pixel values of the first noise-reduction image and the registered image.
  • For the first pixel coordinate, the absolute value of the difference between the pixel values of pixel A and pixel B can be used as the difference parameter, i.e. d = |P_A - P_B|.
  • the terminal may input the difference parameter between the first pixel value and the second pixel value and the fusion parameter into the preset weight model to calculate the fusion weight.
  • the preset weight model may be a specific function stored in the terminal that represents the fusion weight and the pixel difference.
  • Specifically, the terminal can calculate the fusion weight w through the preset weight model, which takes the difference parameter and the fusion parameter as inputs:
  • where σ is the fusion parameter and d is the difference parameter between the first pixel value and the second pixel value. The terminal can adjust the relationship between d and w through σ, and w must lie in the range [0, 1].
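  • The explicit form of the preset weight model is not reproduced in the text above. A common choice that satisfies the stated constraints (w lies in [0, 1], w shrinks as d grows, and σ controls how fast) is a Gaussian falloff, sketched below purely as an illustrative assumption.

```python
# Hedged sketch of a preset weight model: maps the pixel difference d and the fusion
# parameter sigma to a weight in [0, 1] that decreases as d grows. The Gaussian form is
# an assumption; the description only states the monotonicity and range constraints.
import numpy as np

def fusion_weight(d, sigma):
    sigma = np.maximum(sigma, 1e-6)            # guard against division by zero
    w = np.exp(-(d ** 2) / (2.0 * sigma ** 2))
    return np.clip(w, 0.0, 1.0)                # w must lie in [0, 1]
```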
  • the fusion parameter may be obtained by calculation of the first noise value and the noise estimation value.
  • Specifically, the terminal may use the following formula to calculate the fusion parameter σ:
  • where S_A is the standard deviation of the signal at pixel A, that is, the noise value corresponding to the first pixel in the first noise-reduction image, and S_B is the standard deviation of the signal at pixel B, that is, the noise value corresponding to the second pixel in the registered image.
  • That is to say, the fusion parameter σ can be obtained from the first noise value of the first noise-reduction image and the noise estimation value of the registered image.
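  • The exact expression for the fusion parameter is likewise not reproduced above. A natural way to combine the two per-pixel standard deviations S_A and S_B, used here purely as an assumption, is the root sum of squares.

```python
# Hedged sketch of the fusion parameter: combine the noise value of the first noise-reduction
# image (S_A) with the noise estimate of the registered image (S_B). Root-sum-of-squares is
# an assumption; the description does not spell the formula out here.
import numpy as np

def fusion_parameter(S_A, S_B):
    return np.sqrt(np.square(S_A) + np.square(S_B))
```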
  • the second noise value may be calculated according to the fusion weight, the first noise value, and the noise estimation value.
  • the standard deviation of the pixels may be used to characterize the noise value S of the pixels in the second noise-reduced image after the current image is fused and noise-reduced, and the specific calculation method is as follows:
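  • The specific expression is not reproduced in the text above; the sketch below assumes independent noise on the two fused inputs and standard variance propagation through the weighted sum, which may differ from the patent's exact formula.

```python
# Hedged sketch: noise value S of the fused pixel P = w*P_A + (1 - w)*P_B, assuming the
# two inputs carry independent noise so their variances add with squared weights.
import numpy as np

def fused_noise_std(w, S_A, S_B):
    """S_A: first noise value (previous denoised frame); S_B: noise estimate (registered image)."""
    return np.sqrt((w ** 2) * np.square(S_A) + ((1.0 - w) ** 2) * np.square(S_B))
```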
  • In other words, the terminal can first, for one pixel coordinate, use the pixel value difference parameter of the two corresponding pixels at that coordinate in the first noise-reduction image and the registered image to fuse them and obtain the fused pixel value for that coordinate; then apply the same fusion method to the corresponding pixels at every other pixel coordinate of the first noise-reduction image and the registered image to obtain the other fused pixel values; and finally use all the fused pixel values corresponding to all pixel coordinates to generate the fused noise-reduced image corresponding to the current image, that is, the second noise-reduction image, completing the noise reduction processing of the current image.
  • The terminal uses the first noise-reduction image of the previous frame of image to perform registration processing on the current image to obtain a registered image, where the previous frame of image is the image in the video to be denoised that immediately precedes the current image; calculates the noise estimation value of the current image and determines the fusion weight according to the first noise value of the previous frame of image and the noise estimation value; uses the fusion weight to obtain the second noise-reduction image and the second noise value corresponding to the current image; and continues to perform noise reduction processing on the next frame of image according to the second noise-reduction image and the second noise value, until every frame of the video to be denoised has been traversed, where the next frame of image is the image in the video to be denoised that immediately follows the current image.
  • In this way, when the terminal performs noise reduction processing on the video to be denoised, it can use the noise-reduced image and noise value obtained from denoising the previous frame of image, combined with the noise estimation value of the current image, to determine the fusion weight, use the fusion weight to complete the fusion noise reduction processing of the current image, and then continue to denoise the next frame of image using the noise-reduced image and noise value of the current image, until every frame of the video to be denoised has been traversed.
  • Because the fusion weight is determined based on the noise value corresponding to the previous frame of image and the noise value corresponding to the current image, the fusion noise reduction effect for multi-frame video images can be effectively improved when noise reduction processing is performed, which improves the video image quality.
  • FIG. 6 is a structural block diagram of fusion noise reduction processing.
  • As shown in FIG. 6, when the terminal performs noise reduction on the current image in the video to be denoised, it can first obtain the noise reduction result of the previous frame of image, including the first noise-reduction image and the first noise value corresponding to the previous frame of image, and then use the first noise-reduction image to perform image registration processing on the current image (step 201) to obtain the registered image corresponding to the current image.
  • The terminal can perform noise estimation on the current image based on the preset noise model (step 202) to obtain the noise estimation value, and combine the noise estimation value with the first noise value of the first noise-reduction image to further calculate the fusion weight.
  • Specifically, the noise estimation value and the first noise value can be used to calculate the fusion parameter (step 203). At the same time, the first pixel and the second pixel corresponding to the same pixel coordinate can be determined in the first noise-reduction image and the registered image respectively, and their pixel values compared (step 204) to obtain the difference parameter of the pixel values, so that the difference parameter and the fusion parameter can be input into the preset weight model for weight calculation (step 205) to obtain the fusion weight.
  • After the terminal calculates the fusion weight, on the one hand it can perform fusion noise reduction processing on the registered image according to the fusion weight (step 206) to obtain the second noise-reduction image; on the other hand, it can perform noise calculation according to the fusion weight, the first noise value, and the noise estimation value (step 207) to generate the second noise value. In this way, the noise reduction result of the current image, that is, the second noise-reduction image and the second noise value, can be obtained.
  • After the terminal completes the fusion noise reduction processing of the current image and obtains the second noise-reduction image and the second noise value, it can continue to use the second noise-reduction image and the second noise value to complete the fusion noise reduction processing of the next frame of image. After every frame of the video to be denoised has been traversed, the noise-reduced images corresponding to all the frames are obtained, and the noise-reduced video can then be generated from all of these noise-reduced images in the order of the time axis.
  • FIG. 7 is a schematic diagram of fusion noise reduction processing.
  • As shown in FIG. 7, in the noise reduction method proposed in this application, the multiple frames of images collected by the camera can be arranged along the time axis to obtain the video to be noise-reduced. For the current image, the noise reduction result of the previous frame of image is selected, that is, the first noise-reduction image and the first noise value; the current image is registered against the first noise-reduction image to obtain the registered image, and then the first noise value and the first noise-reduction image are used to perform fusion noise reduction on the registered image (step 302), so that the second noise-reduction image and the second noise value of the current image are obtained.
  • Further, the terminal may store the second noise-reduction image and at the same time use the second noise value to update the first noise value.
  • The second noise-reduction image and the second noise value are then used as the input of the noise reduction processing of the next frame of image. Iterating in this way until the entire video to be denoised has been processed yields the corresponding noise-reduced video.
  • In other words, when the terminal performs noise reduction processing on the video to be denoised, it can use the preset noise model to estimate the noise of the current image, determine the fusion weight for fusion noise reduction according to the noise estimation value and the first noise value of the previous frame of image, and use the fusion weight to perform fusion noise reduction on the registered image obtained based on the first noise-reduction image of the previous frame of image, so as to obtain the second noise-reduction image and the second noise value of the current image. Further, the terminal may continue to use the second noise-reduction image and the second noise value to perform fusion noise reduction on the next frame of image, until the noise reduction processing of the video to be noise-reduced is completed.
  • Because the noise estimation value estimated by the preset noise model allows the fusion weight to be determined more accurately, fusion noise reduction can follow the principle of giving a higher weight to the pixel with lower noise, which improves the video noise reduction effect and the purity of the picture.
  • The terminal uses the first noise-reduction image of the previous frame of image to perform registration processing on the current image to obtain a registered image, where the previous frame of image is the image in the video to be denoised that immediately precedes the current image; calculates the noise estimation value of the current image and determines the fusion weight according to the first noise value of the previous frame of image and the noise estimation value; uses the fusion weight to obtain the second noise-reduction image and the second noise value corresponding to the current image; and continues to perform noise reduction processing on the next frame of image according to the second noise-reduction image and the second noise value, until every frame of the video to be denoised has been traversed, where the next frame of image is the image in the video to be denoised that immediately follows the current image.
  • In this way, when the terminal performs noise reduction processing on the video to be denoised, it can use the noise-reduced image and noise value obtained from denoising the previous frame of image, combined with the noise estimation value of the current image, to determine the fusion weight, use the fusion weight to complete the fusion noise reduction processing of the current image, and then continue to denoise the next frame of image using the noise-reduced image and noise value of the current image, until every frame of the video to be denoised has been traversed.
  • Because the fusion weight is determined based on the noise value corresponding to the previous frame of image and the noise value corresponding to the current image, the fusion noise reduction effect for multi-frame video images can be effectively improved when noise reduction processing is performed, which improves the video image quality.
  • FIG. 8 is a schematic diagram of the composition structure of a terminal.
  • As shown in FIG. 8, the terminal 10 proposed in this embodiment of the present application may include an acquiring part 11, a calculating part 12, a determining part 13, a noise reduction part 14, a storage part 15, and an update part 16.
  • the acquiring part 11 is configured to perform registration processing on the current image by using the first noise-reduction image of the previous frame of image to obtain a registered image; wherein the previous frame of image is the image in the video to be noise-reduced that immediately precedes the current image;
  • the calculation part 12 is configured to calculate the noise estimation value of the current image
  • the determining part 13 is configured to determine the fusion weight according to the first noise value of the previous frame image and the noise estimated value;
  • the obtaining part 11 is further configured to obtain a second noise reduction image and a second noise value corresponding to the current image by using the fusion weight;
  • the noise reduction part 14 is configured to continue to perform noise reduction processing on the next frame of image according to the second noise-reduction image and the second noise value, until every frame of the video to be denoised has been traversed; wherein the next frame of image is the image in the video to be noise-reduced that immediately follows the current image.
  • the calculation part 12 is specifically configured to determine the signal strength corresponding to the current image; input the signal strength into a preset noise model, and output the noise estimation value;
  • the preset noise model is used to determine the relationship between the noise value and the signal of the photographing device that photographed the video to be noise-reduced.
  • the determining part 13 is specifically configured to calculate a fusion parameter based on the first noise value and the noise estimation value; and determine the fusion weight according to the fusion parameter.
  • Further, the determining part 13 is further configured to, before determining the fusion weight according to the fusion parameter, determine a first pixel in the first noise-reduction image and determine a second pixel in the registered image; wherein the first pixel and the second pixel are two pixels corresponding to the first pixel coordinate.
  • the obtaining part 11 is further configured to obtain the first pixel value of the first pixel and obtain the second pixel value of the second pixel.
  • the determining part 13 is further specifically configured to calculate a difference parameter between the first pixel value and the second pixel value, and to input the difference parameter and the fusion parameter into a preset weight model and output the fusion weight.
  • the acquisition part 11 is specifically configured to perform fusion noise reduction processing on the registered image according to the fusion weight to obtain the second noise reduction image.
  • Further, the acquiring part 11 is further specifically configured to determine the fused pixel value corresponding to the first pixel coordinate according to the fusion weight, the first pixel value, and the second pixel value; traverse the other pixel coordinates in the registered image and the first noise-reduction image to obtain the other fused pixel values corresponding to the other pixel coordinates; and generate the second noise-reduction image based on the fused pixel value and the other fused pixel values.
  • the acquiring part 11 is further specifically configured to generate the second noise value according to the fusion weight, the first noise value, and the noise estimation value.
  • the storage part 15 is configured to, after the fusion weight is used to obtain the second noise-reduction image and the second noise value corresponding to the current image, store the second noise-reduction image.
  • the update part 16 is configured to, after the fusion weight is used to obtain the second noise-reduction image and the second noise value corresponding to the current image, use the second noise value to update the first noise value.
  • Further, the acquiring part 11 is further configured to, before the registration processing is performed on the current image by using the first noise-reduction image corresponding to the previous frame of image and the registered image is obtained, read the first noise-reduction image and the first noise value.
  • Further, the determining part 13 is further configured to, before the noise reduction processing continues on the next frame of image according to the second noise-reduction image and the second noise value until every frame of the video to be denoised has been traversed, and if the current image is the first frame image of the video to be noise-reduced, determine the second noise value according to the preset noise model and determine the current image as the second noise-reduction image.
  • Fig. 9 is a second schematic diagram of the composition structure of the terminal.
  • As shown in FIG. 9, the terminal 10 proposed in the embodiment of the present application may further include a processor 17 and a memory 18 storing instructions executable by the processor 17. Further, the terminal 10 may also include a communication interface 19 and a bus 110 for connecting the processor 17, the memory 18, and the communication interface 19.
  • The aforementioned processor 17 may be at least one of an Application Specific Integrated Circuit (ASIC), a Digital Signal Processor (DSP), a Digital Signal Processing Device (DSPD), a Programmable Logic Device (PLD), a Field Programmable Gate Array (FPGA), a Central Processing Unit (CPU), a controller, a microcontroller, and a microprocessor. It is understandable that, for different devices, the electronic device used to implement the above processor functions may also be something else, which is not specifically limited in the embodiments of the present application.
  • Further, the terminal 10 may include a memory 18 connected to the processor 17, where the memory 18 is used to store executable program code including computer operation instructions. The memory 18 may include a high-speed RAM memory, and may also include a non-volatile memory, for example, at least two disk memories.
  • the bus 110 is used to connect the communication interface 19, the processor 17 and the memory 18, and to communicate with each other among these devices.
  • the memory 18 is used to store instructions and data.
  • the above-mentioned processor 17 is configured to perform registration processing on the current image by using the first noise-reduction image of the previous frame of image to obtain the registered image; wherein, the previous frame The image is the previous image that is continuous with the current image in the video to be denoised; the noise estimation value of the current image is calculated and determined according to the first noise value of the previous frame image and the noise estimation value Fusion weight; use the fusion weight to obtain the second denoised image and the second noise value corresponding to the current image; continue to denoise the next frame of image according to the second denoised image and the second noise value The processing is performed until each frame of the image in the video to be denoised is traversed; wherein, the next frame of image is the next image in the video to be denoised that is continuous with the current image.
  • In practical applications, the aforementioned memory 18 may be a volatile memory, such as a Random-Access Memory (RAM); or a non-volatile memory, such as a Read-Only Memory (ROM), a flash memory, a Hard Disk Drive (HDD), or a Solid-State Drive (SSD); or a combination of the above types of memory, and it provides instructions and data to the processor 17.
  • the functional modules in this embodiment may be integrated into one processing unit, or each unit may exist alone physically, or two or more units may be integrated into one unit.
  • the above-mentioned integrated unit can be implemented either in the form of hardware or in the form of a software function module. If the integrated unit is implemented in the form of a software function module and is not sold or used as an independent product, it may be stored in a computer-readable storage medium. Based on this understanding, the technical solution of this embodiment, in essence, or the part that contributes to the prior art, or all or part of the technical solution, can be embodied in the form of a software product; the computer software product is stored in a storage medium and includes several instructions for enabling a computer device (which may be a personal computer, a server, or a network device) or a processor to execute all or part of the steps of the method of this embodiment.
  • the aforementioned storage media include: a USB flash drive, a removable hard disk, a Read Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, an optical disk, and other media that can store program code.
  • An embodiment of the present application proposes a terminal that uses the first noise-reduction image of the previous frame of image to perform registration processing on the current image to obtain the registered image, where the previous frame of image is the previous image in the video to be denoised that is continuous with the current image; calculates the noise estimate of the current image and determines the fusion weight according to the first noise value of the previous frame of image and the noise estimate; uses the fusion weight to obtain the second denoised image and the second noise value corresponding to the current image; and continues to perform noise reduction on the next frame of image according to the second denoised image and the second noise value, until every frame of the video to be denoised has been traversed, where the next frame of image is the next image in the video to be denoised that is continuous with the current image.
  • That is to say, when the terminal performs noise reduction on the video to be denoised, it can use the denoised image and noise value obtained from denoising the previous frame of image, combined with the noise estimate of the current image, to determine the fusion weight; it can then use the fusion weight to complete the fusion denoising of the current image, and continue to use the denoised image and noise value of the current image to denoise the next frame of image, until every frame in the video to be denoised has been traversed.
  • In other words, the fusion weight is determined jointly from the noise value corresponding to the previous frame of image and the noise value corresponding to the current image; therefore, when noise reduction is performed, the fusion denoising effect for multi-frame video images can be effectively improved and the video quality enhanced.
  • the embodiment of the present application provides a computer-readable storage medium on which a program is stored, and when the program is executed by a processor, the above-mentioned noise reduction method is implemented.
  • the program instructions corresponding to a noise reduction method in this embodiment can be stored on storage media such as optical disks, hard disks, USB flash drives, etc.
  • Those skilled in the art should understand that the embodiments of this application can be provided as a method, a system, or a computer program product. Therefore, this application may take the form of a hardware embodiment, a software embodiment, or an embodiment combining software and hardware aspects. Moreover, this application may take the form of a computer program product implemented on one or more computer-usable storage media (including but not limited to disk storage, optical storage, and the like) containing computer-usable program code.
  • These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing equipment to work in a specific manner, so that the instructions stored in the computer-readable memory produce an article of manufacture including an instruction device, and the instruction device implements the functions specified in one or more flows of the schematic flowchart and/or one or more blocks of the block diagram.
  • These computer program instructions may also be loaded onto a computer or other programmable data processing equipment, so that a series of operation steps are executed on the computer or other programmable equipment to produce computer-implemented processing; the instructions executed on the computer or other programmable equipment thus provide steps for implementing the functions specified in one or more flows of the schematic flowchart and/or one or more blocks of the block diagram.

Abstract

本申请实施例公开了一种降噪方法、终端及存储介质,该降噪方法包括:利用上一帧图像的第一降噪图像对当前图像进行配准处理,获得配准后图像;其中,上一帧图像为待降噪视频中的、与当前图像连续的前一个图像;计算当前图像的噪声估计值,并根据上一帧图像的第一噪声值和噪声估计值确定融合权重;利用融合权重获得当前图像对应的第二降噪图像和第二噪声值;继续根据第二降噪图像和第二噪声值对下一帧图像进行降噪处理,直到遍历完待降噪视频中的每一帧图像;其中,下一帧图像为待降噪视频中的、与当前图像连续的后一个图像。

Description

降噪方法、终端及存储介质
本申请基于申请号为201911253681.1、申请日为2019年12月09日、申请名称为“降噪方法、终端及存储介质”的在先中国专利申请提出,并要求该在先中国专利申请的优先权,该在先中国专利申请的全部内容在此引入本申请作为参考。
技术领域
本申请实施例涉及图像处理技术领域,尤其涉及一种降噪方法、终端及存储介质。
背景技术
多帧视频降噪是基于多帧降噪技术实现的,就是在采集多帧照片或者影像之后,在不同的帧数下找到不同的带有噪点性质的像素点,通过加权合成后得到一张较为干净、纯净的图像。为了保证融合后的图像的整体亮度不发生变化,多采用α融合的方式,其中,权重的选取可以直接决定降噪后的效果。
但是,对于有局部运动物体的场景,采用α融合的方式进行降噪处理容易出现鬼影,降低了视频降噪的效果。
发明内容
本申请实施例提供了一种降噪方法、终端及存储介质,在进行降噪处理时,可以有效提高多帧视频图像的融合降噪效果,提升视频画质。
本申请实施例的技术方案是这样实现的:
第一方面,本申请实施例提供了一种降噪方法,所述方法包括:
利用上一帧图像的第一降噪图像对当前图像进行配准处理,获得配准后图像;其中,所述上一帧图像为待降噪视频中的、与所述当前图像连续的前一个图像;
计算所述当前图像的噪声估计值,并根据所述上一帧图像的第一噪声值和所述噪声估计值确定融合权重;
利用所述融合权重获得所述当前图像对应的第二降噪图像和第二噪声值;
继续根据所述第二降噪图像和所述第二噪声值对下一帧图像进行降噪处理,直到遍历完所述待降噪视频中的每一帧图像;其中,所述下一帧图像为所述待降噪视频中的、与所述当前图像连续的后一个图像。
第二方面,本申请实施例提供了一种终端,所述终端包括:获取部分,计算部分,确定部分,降噪部分,
所述获取部分,配置为利用上一帧图像的第一降噪图像对当前图像进行配准处理,获得配准后图像;其中,所述上一帧图像为待降噪视频中的、与所述当前图像连续的前一个图像;
所述计算部分,配置为计算所述当前图像的噪声估计值;
所述确定部分,配置为根据所述上一帧图像的第一噪声值和所述噪声估计值确定融合权重;
所述获取部分,还配置为利用所述融合权重获得所述当前图像对应的第二降噪图像和第二噪声值;
所述降噪部分,配置为继续根据所述第二降噪图像和所述第二噪声值对下一帧图像进行降噪处理,直到遍历完所述待降噪视频中的每一帧图像;其中,所述下一帧图像为所述待降噪视频中的、与所述当前图像连续的后一个图像。
第三方面,本申请实施例提供了一种终端,所述终端包括处理器、存储有所述处理器可执行指令的存储器,当所述指令被所述处理器执行时,实现如上所述的降噪方法。
第四方面,本申请实施例提供了一种计算机可读存储介质,其上存储有程序,应用于终端中,所述程序被处理器执行时,实现如上所述的降噪方法。
本申请实施例提供了一种降噪方法、终端及存储介质,终端利用上一帧图像的第一降噪图像对当前图像进行配准处理,获得配准后图像;其中,上一帧图像为待降噪视频中的、与当前图像连续的前一个图像;计算当前图像的噪声估计值,并根据上一帧图像的第一噪声值和噪声估计值确定融合权重;利用融合权重获得当前图像对应的第二降噪图像和第二噪声值;继续根据第二降噪图像和第二噪声值对下一帧图像进行降噪处理,直到遍历完待降噪视频中的每一帧图像;其中,下一帧图像为待降噪视频中的、与当前图像连续的后一个图像。也就是说,终端在对待降噪视频进行降噪处理时,可以利用上一帧图像降噪后的降噪图像和噪声值,结合当前图像的噪声估计值,确定出融合权重,从而可以利用融合权重完成对当前图像的融合降噪处理,并可以继续利用当前图像降噪后的降噪图像和噪声值,继续对下一帧图像进行降噪,直到遍历完待降噪视频中的每一帧图像。也就是说,本申请提出的降噪方法,融合权重是基于前一帧图像对应的噪声值和当前图像对应的噪声值共同确定的,因此在进行降噪处理时,可以有效提高多帧视频图像的融合降噪效果,提升视频画质。
附图说明
图1为降噪方法的实现流程示意图一;
图2为待降噪视频的组成示意图;
图3为像素点的示意图;
图4为降噪方法的实现流程示意图二;
图5为降噪方法的实现流程示意图三;
图6为融合降噪处理的结构框图;
图7为融合降噪处理的示意图;
图8为终端的组成结构示意图一;
图9为终端的组成结构示意图二。
具体实施方式
下面将结合本申请实施例中的附图,对本申请实施例中的技术方案进行清楚、完整地描述。可以理解的是,此处所描述的具体实施例仅仅用于解释相关申请,而非对该申请的限定。另外还需要说明的是,为了便于描述,附图中仅示出了与有关申请相关的部分。
在视频领域中,当环境亮度较差时,采集到的影像会伴随较多的噪点,如果通过亮度提升对原视频进行亮度增强处理,那么影像中的噪点也会相应的增强,严重影响视频的视觉效果。一般的图像降噪方式,仅针对单帧图像,且计算复杂度较高,应用于视频图像时降噪效果较差。多帧视频降噪是一种视频降噪的常见手法,目前,多帧视频降噪具体可以包括以下过程:利用图像配准技术对齐图像内容,然后将视频的像素进行融合,由于图像的噪声信号多为随机信号,可以用多帧融合的方式降低噪声强度。
可以理解的是,多帧视频降噪是基于多帧降噪技术实现的,所谓多帧降噪技术,就是在采集多帧照片或者影像之后,在不同的帧数下找到不同的带有噪点性质的像素点,通过加权合成后得到一张较为干净、纯净的图像。通俗地说,就是终端在拍摄的时候,会进行多个帧数的噪点数量和位置的计算和筛选,将有噪点的地方用没有噪点的帧数替换位置,经过反复加权、替换,就得到一张很干净的图像,其实最终成像的图像是由多个帧数的影像合成的,所以在某些场合我们隐约可以看到部分物体的重影,当然只要不对图像的主体产生影响,它就可以忽略。
然而,为了保证融合后的图像的整体亮度不发生变化,多采用α融合的方式,该方法为一种加权求和的方式,融合权重的选择往往采用固定的参数,或者根据融合像素的差值进行选取,具体地,权重的选取可以直接决定降噪后的效果。例如,如果前一帧图像的权重较大,则图像的降噪效果便较好。但是,对于有局部运动物体的场景,采用α融合的方式进行降噪处理容易出现鬼影,降低了视频降噪的效果。
为了克服上述缺陷,本申请利用预设噪声模型估计的噪声估计值可以更加精确的确定出融合权重,从而可以基于噪声较低的像素采取较高权重的原则进行融合降噪,提高了视频降噪效果和画面的纯净度。也就是说, 终端在对待降噪视频进行降噪处理时,可以利用上一帧图像降噪后的降噪图像和噪声值,结合当前图像的噪声估计值,确定出融合权重,从而可以利用融合权重完成对当前图像的融合降噪处理,并可以继续利用当前图像降噪后的降噪图像和噪声值,继续对下一帧图像进行降噪,直到遍历完待降噪视频中的每一帧图像。也就是说,本申请提出的降噪方法,融合权重是基于前一帧图像对应的噪声值和当前图像对应的噪声值共同确定的,因此在进行降噪处理时,可以有效提高多帧视频图像的融合降噪效果,提升视频画质。
下面将结合本申请实施例中的附图,对本申请实施例中的技术方案进行清楚、完整地描述。
本申请一实施例提供了一种降噪方法,该降噪方法应用于终端中,图1为降噪方法的实现流程示意图一,如图1所示,在本申请的实施例中,终端进行降噪的方法可以包括以下步骤:
步骤101、利用上一帧图像的第一降噪图像对当前图像进行配准处理,获得配准后图像;其中,上一帧图像为待降噪视频中的、与当前图像连续的前一个图像。
在本申请的实施例中,终端在对待降噪视频中的当前图像进行降噪处理时,可以先利用上一帧图像的第一降噪图像对当前图像进行配准处理,便可以获得配准后图像。
需要说明的是,在本申请的实施例中,终端可以为任何具备通信和存储功能的设备,例如:平板电脑、手机、电子阅读器、遥控器、个人计算机(Personal Computer,PC)、笔记本电脑、车载设备、网络电视、可穿戴设备等设备。
可以理解的是,在本申请的实施例中,终端可以配置有拍摄装置,从而可以利用拍摄装置采集获得待降噪视频。其中,拍摄装置可以为图像传感器。示例性的,拍摄装置可以为终端配置的电荷耦合器件(Charge-coupled Device,CCD)或者互补金属氧化物半导体(Complementary Metal Oxide Semiconductor,CMOS)。
进一步地,在本申请的实施例中,待降噪视频还可以为终端接收或获取的,具体地,终端既可以从其他设备接收待降噪视频,也可以从服务器中下载待降噪视频,还可以从本地存储器中读取待降噪视频。
需要说明的是,在本申请的实施例中,待降噪视频中可以包括有连续多帧图像。其中,该多帧图像可以为沿时间轴排列的图像序列。
具体地,在本申请的实施例中,图2为待降噪视频的组成示意图,如图2所示,待降噪视频中可以包括沿时间t1、t2、t3、t4……t(n-1)、tn分别排列的图像1、图像2、图像3、图像4……图像(n-1)、图像n的图像序列,其中,n为大于5的整数,上一帧图像可以为待降噪视频中的、与当前图像连续的前一个图像,例如,当前图像为图像3,则上一帧图像为图像2, 下一帧图像为图像4。
进一步地,在本申请的实施例中,终端在对待降噪视频进行降噪处理时,可以按照待降噪视频中的时间轴顺序依次对图像序列进行降噪,也就是说,终端按照图像1、图像2、图像3、图像4、图像5、图像6的顺序依次对每一帧图像进行降噪处理。因此,当终端对待降噪视频的当前图像进行降噪时,当前图像之前的所有前序图像均已完成了降噪处理。
可以理解的是,在本申请的实施例中,由于终端是按照待降噪视频中的时间轴顺序依次对图像序列进行降噪的,因此,终端在对当前图像进行降噪之前,可以先获得上一帧图像对应的降噪后的图像,即第一降噪图像,还可以获得上一帧图像对应的噪声值,即第一噪声值,从而便可以根据上一帧图像的第一降噪图像和第一噪声值进一步对当前图像进行降噪处理。也就是说,在利用上一帧图像对应的第一降噪图像对当前图像进行配准处理,获得配准后图像之前,即步骤101之前,终端可以先读取上一帧图像的第一降噪图像和第一噪声值。
需要说明的是,在本申请的实施例中,终端在利用上一帧图像的第一降噪图像对当前图像进行配准处理时,可以利用目前的图像配准技术配准当前图像和上一帧图像的第一降噪图像。其中,终端可以按照多种图像配准方法进行配准处理,比如光流法、块搜索法以及特征匹配法等,此处采用效果较好的图像配准方案即可,不限定具体的配准方式。
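As an illustration of this registration step, the sketch below (Python, using OpenCV and NumPy) warps the current frame so that its content lines up with the previous frame's denoised image, using dense optical flow — just one of the registration options listed above. The function name `register_current`, the Farneback parameters and the 8-bit BGR frame layout are all assumptions made for the example, not requirements of this application.

```python
import cv2
import numpy as np

def register_current(prev_denoised: np.ndarray, current: np.ndarray) -> np.ndarray:
    """Warp the current frame so it lines up with the previous denoised frame.

    Dense optical flow is only one of the registration choices mentioned in the
    text (optical flow, block search, feature matching); any scheme that aligns
    the image content would do. Frames are assumed to be 8-bit BGR images.
    """
    prev_gray = cv2.cvtColor(prev_denoised, cv2.COLOR_BGR2GRAY)
    curr_gray = cv2.cvtColor(current, cv2.COLOR_BGR2GRAY)
    # Flow such that prev_gray(y, x) ~ curr_gray(y + flow_y, x + flow_x).
    flow = cv2.calcOpticalFlowFarneback(prev_gray, curr_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    h, w = prev_gray.shape
    grid_x, grid_y = np.meshgrid(np.arange(w), np.arange(h))
    map_x = (grid_x + flow[..., 0]).astype(np.float32)
    map_y = (grid_y + flow[..., 1]).astype(np.float32)
    # Resample the current frame at the displaced positions so its content
    # corresponds pixel-for-pixel to the previous denoised frame.
    return cv2.remap(current, map_x, map_y, cv2.INTER_LINEAR)
```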
可以理解的是,在本申请的实施例中,终端对当前图像和第一降噪图像进行配准处理的目的是对齐当前图像和第一降噪图像的图像内容。也就是说,在本申请中,终端配准获得的配准后图像的图像内容,是与第一降噪图像的图像内容相对应的。
步骤102、计算当前图像的噪声估计值,并根据上一帧图像的第一噪声值和噪声估计值确定融合权重。
在本申请的实施例中,终端在利用上一帧图像的第一降噪图像对当前图像进行配准处理,获得配准后图像之后,可以先计算当前图像的噪声估计值,然后根据上一帧图像的第一噪声值和噪声估计值,进一步确定融合权重。
需要说明的是,在本申请的实施例中,上一帧图像的第一噪声值可以用于对第一降噪图像的噪声进行估计。也就是说,第一噪声值表征第一降噪图像的噪声强度,即表征对上一帧图像进行降噪处理后的噪声强度。
进一步地,在本申请的实施例中,终端计算当前图像的噪声估计值时,可以利用预设噪声模型对当前图像进行噪声估计。其中,预设噪声模型是与拍摄待降噪视频的拍摄装置相对应的,因此,预设噪声模型可以用于对噪声值与拍摄装置的信号之间的关系进行确定。也就是说,在本申请中,对于固定的拍摄装置,可以对应获得或计算预设噪声模型。
需要说明的是,在本申请的实施例中,终端在计算当前图像的噪声估 计值时,可以先确定当前图像对应的信号强度,然后将信号强度输入至预设噪声模型中,从而便可以输出噪声估计值。
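For concreteness, the sketch below shows how a per-pixel noise estimate could be read out of a signal-dependent noise model whose variance is affine in the signal, which is the form given later in formula (1). The function name and the numeric values in the usage example are illustrative assumptions only.

```python
import numpy as np

def estimate_noise_std(signal: np.ndarray, lam_read: float, lam_shot: float) -> np.ndarray:
    """Per-pixel noise estimate (standard deviation) for the current frame.

    `signal` holds the pixel intensities (the "signal strength"); lam_read and
    lam_shot are the calibrated read-noise and shot-noise parameters of the
    capture device, so that variance = lam_read + lam_shot * signal.
    """
    variance = lam_read + lam_shot * signal.astype(np.float64)
    return np.sqrt(np.maximum(variance, 0.0))

# Usage example (illustrative numbers): brighter pixels get larger estimates.
sigma = estimate_noise_std(np.array([10.0, 100.0]), lam_read=2.0, lam_shot=0.05)
```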
进一步地,在本申请的实施例中,终端在计算当前图像的噪声估计值之后,可以继续根据上一帧图像的第一噪声值和噪声估计值对融合权重进行计算,其中,融合权重可以用于对当前图像进行融合降噪处理。
具体地,在本申请的实施例中,终端在根据上一帧图像的第一噪声值和噪声估计值确定融合权重时,可以先利用第一噪声值和噪声估计值计算获得融合参数,然后再进一步根据融合参数进行融合权重的确定。
进一步地,在本申请的实施例中,终端在计算融合权重时,还需要对第一降噪图像和配准后图像进行像素值的比较,然后根据比较后获得的比较结果进行融合权重的计算。
具体地,在本申请中,终端在对第一降噪图像和配准后图像进行像素值的比较时,可以先选择相同的像素坐标,即第一像素坐标,分别在第一降噪图像和配准后图像中,确定出第一像素坐标上的第一像素和第二像素。图3为像素点的示意图,如图3所示,终端可以将第一降噪图像中的第一像素坐标对应的像素点A确定为第一像素,同时,终端可以将配准后图像中的第一像素坐标对应的像素点B确定为第二像素。
可以理解的是,在本申请的实施例中,由于终端先利用第一降噪图像对当前图像进行了图像配准处理,从而使配准后图像和第一降噪图像完成了图像内容的对齐,因此,对于相同的像素坐标,终端可以分别在第一降噪图像和配准后图像中确定出互相对应的两个不同像素。
进一步地,在本申请的实施例中,终端在分别从第一降噪图像和配准后图像中确定出第一像素和第二像素之后,便可以继续获取第一像素的第一像素值,和第二像素的第二像素值,从而可以进一步根据第一像素值和第二像素值实现对第一降噪图像和配准后图像进行像素值的比较。
需要说明的是,在本申请的实施例中,终端在基于第一像素值和第二像素值对第一降噪图像和配准后图像进行像素值的比较时,可以计算两个像素值之间的差值参数,其中,差值参数可以为第一像素值和第二像素值之间的差值的绝对值。
也就是说,在本申请的实施例中,终端在根据融合参数确定融合权重之前,需要先基于第一像素坐标,从第一降噪图像和配准后图像中获取第一像素的第一像素值和第二像素的第二像素值,然后在进行融合权重的确定时,便可以计算第一像素值和第二像素值之间的差值参数,接着将差值参数和融合参数输入至预设权重模型中,从而便可以获得融合权重。
可以理解的是,在本申请的实施例中,预设权重模型可以为终端存储的、表征融合权重与像素差值的具体地函数。其中,对于预设权重模型,需要遵循像素差值越小,融合权重越大的原则,即差值参数和融合权重之间呈反比。
需要说明的是,在本申请的实施例中,融合权重为大于或者等于0且小于或者等于1的自然数。
步骤103、利用融合权重获得当前图像对应的第二降噪图像和第二噪声值。
在本申请的实施例中,终端在根据上一帧图像的第一噪声值和噪声估计值确定融合权重之后,便可以利用融合权重,分别获得当前图像对应的第二降噪图像和第二噪声值。
可以理解的是,在本申请的实施例中,第二降噪图像可以为对当前图像进行融合降噪处理之后获得的图像,第二噪声值可以用于对第二降噪图像的噪声进行估计。也就是说,第二噪声值表征第二降噪图像的噪声强度,即表征对当前图像进行降噪处理后的噪声强度。
进一步地,在本申请的实施例中,终端在利用融合权重获得当前图像对应的第二降噪图像时,具体可以根据融合权重对配准后图像进行融合降噪处理,从而可以获得第二降噪图像。
具体地,在本申请的实施例中,终端在根据融合权重对配准后图像进行融合降噪处理,获得第二降噪图像时,可以先根据融合权重、第一像素值以及第二像素值,确定出第一像素坐标对应的融合像素值,然后可以接着遍历配准后图像和第一降噪图中的、第一坐标以外的其他像素坐标,获得其他像素坐标对应的其他像素值,从而便可以基于融合像素值和其他像素值,生成第二降噪图像。
也就是说,在本申请的实施例中,终端可以先基于一个像素坐标,利用该像素坐标对应的、第一降噪图像与配准后图像上的两个相应像素的像素值差值参数,融合获得该像素坐标对应的融合像素值,然后采用相同的融合方法,对第一降噪图像与配准后图像上的其他像素坐标上的相应像素也进行像素值差值参数的确定,获得其他像素坐标对应的其他融合像素值,最终便可以利用全部像素坐标对应的全部融合像素值,生成当前图像对应的融合降噪后图像,即第二降噪图像,以完成当前图像的降噪处理。
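A minimal vectorized sketch of this per-pixel fusion — the weighted combination described above, presented later in the text as formula (4) — is given here; the names are illustrative, and the per-channel broadcasting is an assumption about the data layout.

```python
import numpy as np

def fuse_frames(prev_denoised: np.ndarray, registered: np.ndarray,
                weights: np.ndarray) -> np.ndarray:
    """Fuse the previous denoised frame with the registered current frame.

    Applies P = w * P_A + (1 - w) * P_B at every pixel coordinate, where
    `weights` is the per-pixel fusion weight w in [0, 1]. A 2-D weight map is
    broadcast over the colour channels of 3-D frames.
    """
    w = np.clip(weights, 0.0, 1.0)
    if prev_denoised.ndim == 3 and w.ndim == 2:
        w = w[..., None]  # reuse the same weight for every colour channel
    return w * prev_denoised.astype(np.float64) + (1.0 - w) * registered.astype(np.float64)
```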
可以理解的是，在本申请的实施例中，终端先利用第一降噪图像对当前图像进行了配准处理，使得第一降噪图像和配准后图像具有相互对齐的图像内容，因此，基于第一降噪图像和配准后图像获得的第二降噪图像，也与上述两个图像具有相互对齐的图像内容。
进一步地,在本申请的实施例中,终端在利用融合权重获得当前图像对应的第二噪声值,具体可以根据融合权重、第一噪声值以及噪声估计值,生成第二噪声值。也就是说,在本申请中,噪声估计值用于对当前图像进行噪声强度的确定,第二噪声值用于对当前图像降噪后的第二降噪图像进行噪声强度的确定。因此,在完成当前图像的融合降噪之后,可以选择将第二噪声值作为降噪结果,输入至下一帧图像的降噪处理流程中。同时,也需要将第二降噪图像作为降噪结果,进行下一帧图像的降噪处理。
步骤104、继续根据第二降噪图像和第二噪声值对下一帧图像进行降噪处理,直到遍历完待降噪视频中的每一帧图像;其中,下一帧图像为待降噪视频中的、与当前图像连续的后一个图像。
在本申请的实施例中,终端在利用融合权重获得当前图像对应的第二降噪图像和第二噪声值之后,即按照上述步骤101至步骤103完成了当前图像的融合降噪之后,便可以继续根据第二降噪图像和第二噪声值,进一步对下一帧图像进行降噪处理,直到遍历完待降噪视频的每一帧图像,以完成待降噪视频中的降噪处理。
可以理解的是,在本申请的实施例中,终端在完成对当前图像的融合降噪处理,获得第二降噪图像和第二噪声值之后,便可以继续利用第二降噪图像对下一帧图像进行配准处理,获得配准后图像;然后计算下一帧图像的噪声估计值,并根据第二噪声值和噪声估计值确定融合权重;最后便可以利用融合权重获得下一帧图像对应的第三降噪图像和第三噪声值,完成对下一帧图像的融合降噪处理,并将第三降噪图像和第三噪声值,作为下一帧图像的降噪结果,继续输入值下一次的融合降噪处理流程中,直到完成对待降噪视频的降噪处理。
也就是说,在本申请的实施例中,终端按照上述步骤101至步骤104的方法,遍历完待降噪视频的每一帧图像之后,可以获得全部帧图像对应的全部降噪图像,然后便可以继续沿时间轴的顺序,基于全部降噪图像生成降噪后视频。
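As a small illustration of assembling the stored denoised frames back into a video in time-axis order, one possible sketch using OpenCV's `VideoWriter` follows; the output path, codec and frame rate are assumptions, not values taken from the text.

```python
import cv2

def write_denoised_video(frames, path="denoised.mp4", fps=30.0):
    """Write the denoised frames out in their original time-axis order.

    `frames` is the list of denoised images (assumed 8-bit BGR, all the same
    size); path, codec and fps are illustrative choices.
    """
    height, width = frames[0].shape[:2]
    writer = cv2.VideoWriter(path, cv2.VideoWriter_fourcc(*"mp4v"), fps, (width, height))
    for frame in frames:
        writer.write(frame)
    writer.release()
```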
图4为降噪方法的实现流程示意图二,如图4所示,终端在利用融合权重获得当前图像对应的第二降噪图像和第二噪声值之后,即步骤103之后,终端进行降噪处理的方法还可以包括以下步骤:
步骤105、存储第二降噪图像,并利用第二噪声值更新第一噪声值。
也就是说,在本申请中,终端在完成对每一帧图像的融合降噪处理,获得降噪图像和噪声值之后,可以对降噪图像进行存储,以用于后续降噪后视频的生成,同时,终端可以直接利用新的噪声值更新上一帧对应的噪声值。例如,终端在完成对当前图像的降噪处理之后,获得第二降噪图像和第二噪声值之后,可以对第二降噪图像进行存储,同时,可以利用第二噪声值对第一噪声值进行更新处理。
在本申请的实施例中,进一步地,图5为降噪方法的实现流程示意图三,如图5所示,终端在继续根据第二降噪图像和第二噪声值对下一帧图像进行降噪处理,直到遍历完待降噪视频中的每一帧图像之前,即步骤104之前,终端进行降噪处理的方法还可以包括以下步骤:
步骤106、若当前图像为待降噪视频的第一帧图像,则根据预设噪声模型确定第二噪声值,并将当前图像确定为第二降噪图像。
也就是说,对于待降噪视频中的第一帧视频,终端可以直接将其作为降噪后的第二降噪视频,同时,根据预设噪声模型确定第二噪声值,然后 将第二降噪图像和第二噪声值输入至下一帧图像的降噪流程中,以完成对待降噪视频中的第二帧图像的降噪处理。
在本申请的实施例中,基于上述步骤101至步骤106所提出的降噪方法,终端在对待降噪视频进行降噪处理时,可以利用预设噪声模型对当前图像的噪声进行估计,然后进一步根据噪声估计值和上一帧图像的第一噪声值度融合降噪的融合权重进行确定,并利用融合权重,对基于上一帧图像的第一降噪图像进行配准处理的配准后图像进行融合降噪,获得当前图像的第二降噪图像和第二噪声值,进一步地,终端还可以继续利用第二降噪图像和第二噪声值对下一帧图像进行融合降噪,直到完成对待降噪视频的降噪处理。其中,利用预设噪声模型估计的噪声估计值可以更加精确的确定出融合权重,从而可以基于噪声较低的像素采取较高权重的原则进行融合降噪,提高了视频降噪效果和画面的纯净度。
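The overall per-frame loop summarised in steps 101 to 106 might look like the following sketch. The registration and weight computations are passed in as callables, the first frame is taken as its own denoised result with its noise read from the preset model, and the noise-update line is only an assumed standard-deviation propagation, because formula (7) appears only as an image in the original; none of the names below come from the application itself.

```python
import numpy as np

def denoise_video(frames, lam_read, lam_shot, register, compute_weight):
    """Iterate over the frames of the video to be denoised (illustrative sketch).

    register(prev_denoised, current) -> registered image;
    compute_weight(prev_denoised, registered, prev_noise, curr_noise) -> w map.
    """
    denoised_frames = []
    prev_denoised, prev_noise = None, None
    for current in frames:
        current = current.astype(np.float64)
        # Noise estimate of the current frame from the preset noise model.
        curr_noise = np.sqrt(lam_read + lam_shot * current)
        if prev_denoised is None:
            # First frame: used directly as its own denoised result.
            denoised, noise = current, curr_noise
        else:
            registered = register(prev_denoised, current)
            w = compute_weight(prev_denoised, registered, prev_noise, curr_noise)
            denoised = w * prev_denoised + (1.0 - w) * registered
            # Assumed propagation of the standard deviation (formula (7) is an
            # image in the original, so this line is a placeholder, not the
            # claimed formula).
            noise = np.sqrt((w * prev_noise) ** 2 + ((1.0 - w) * curr_noise) ** 2)
        denoised_frames.append(denoised)
        # Store the result and carry it into the next frame's processing.
        prev_denoised, prev_noise = denoised, noise
    return denoised_frames
```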
本申请实施例提出的一种降噪方法,终端利用上一帧图像的第一降噪图像对当前图像进行配准处理,获得配准后图像;其中,上一帧图像为待降噪视频中的、与当前图像连续的前一个图像;计算当前图像的噪声估计值,并根据上一帧图像的第一噪声值和噪声估计值确定融合权重;利用融合权重获得当前图像对应的第二降噪图像和第二噪声值;继续根据第二降噪图像和第二噪声值对下一帧图像进行降噪处理,直到遍历完待降噪视频中的每一帧图像;其中,下一帧图像为待降噪视频中的、与当前图像连续的后一个图像。也就是说,终端在对待降噪视频进行降噪处理时,可以利用上一帧图像降噪后的降噪图像和噪声值,结合当前图像的噪声估计值,确定出融合权重,从而可以利用融合权重完成对当前图像的融合降噪处理,并可以继续利用当前图像降噪后的降噪图像和噪声值,继续对下一帧图像进行降噪,直到遍历完待降噪视频中的每一帧图像。也就是说,本申请提出的降噪方法,融合权重是基于前一帧图像对应的噪声值和当前图像对应的噪声值共同确定的,因此在进行降噪处理时,可以有效提高多帧视频图像的融合降噪效果,提升视频画质。
基于上述实施例，在本申请的另一实施例中，终端可以利用预设噪声模型对当前图像进行噪声估计，从而计算获得当前图像的噪声估计值。其中，预设噪声模型是与拍摄待降噪视频的拍摄装置相对应的，具体地，对于固定的拍摄装置，可以获得对应的预设噪声模型，示例性的，在本申请中，预设噪声模型符合正态分布，可以表示如下式：
y ~ N(μ = x, σ² = λ_read + λ_shot · X)       (1)
其中，X为真实信号的强度，对于特定的拍摄装置，噪声可以主要分为读取噪声（read noise）和散粒噪声（shot noise）两种。参照下式，读取噪声对应的模型参数λ_read和散粒噪声对应的模型参数λ_shot都可以在实际中测量得出：
[公式(2)：读取噪声模型参数 λ_read 的计算公式，原文以图片形式给出，此处无法还原]
λ_shot = g_a · σ_a         (3)
其中，g_d为拍摄装置信号的数位增益，g_a为拍摄装置信号的模拟增益。
进一步地,在本申请中,预设噪声模型可以估计当前图像的噪声,预设噪声模型可以通过在暗室拍摄chart表来计算,或者直接从拍摄装置的制造厂商直接获得。
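One common calibration recipe that is consistent with shooting a chart in a darkroom is to fit the two model parameters with a straight-line fit of patch variance against patch mean over flat (textureless) chart patches, as sketched below. The application does not prescribe this particular procedure; it is shown only as an assumed example.

```python
import numpy as np

def fit_noise_model(flat_patches):
    """Fit (lam_read, lam_shot) from flat patches cropped out of chart captures.

    For each patch the sample mean approximates the true signal X and the
    sample variance approximates lam_read + lam_shot * X, so a degree-1
    polynomial fit of variance against mean recovers the slope (lam_shot)
    and the intercept (lam_read).
    """
    means = np.array([p.mean() for p in flat_patches], dtype=np.float64)
    variances = np.array([p.var(ddof=1) for p in flat_patches], dtype=np.float64)
    lam_shot, lam_read = np.polyfit(means, variances, deg=1)
    return lam_read, lam_shot
```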
可以理解的是,在本申请的实施例中,终端可以利用当前图像的信号的标准差(standard deviation)来确定噪声强度,因此,对于当前图像中的、信号强度为X的一个像素,可以将信号强度X输入至如上述公式(1)的预设噪声模型中,计算获得该像素对应的标准差,从而可以进一步获得当前图像的噪声估计值。也就是说,由于预设噪声模型可以用于对噪声值与拍摄待降噪视频的拍摄装置的信号之间的关系进行确定,因此,终端在计算噪声估计值时,可以先确定当前图像对应的信号强度,从而基于预设噪声模型计算获得噪声估计值。
在本申请的实施例中，进一步地，终端在对第一降噪图像和配准后图像进行融合时，可以先根据融合权重、第一像素值以及第二像素值，确定第一像素坐标对应的融合像素值，示例性的，如图3中的像素点A和像素点B，终端可以先基于像素值P_A和像素值P_B对像素点A和像素点B进行融合，从而可以获得融合像素值的像素值P，具体如下式：
P = w × P_A + (1 − w) × P_B        (4)
其中，P_A和P_B分别为像素点A与像素点B的像素值，w为融合权重。
需要说明的是，在本申请的实施例中，融合权重可以通过融合的两个像素的像素值的差值参数进行确定，具体地，在本申请中，终端在计算融合权重时，需要对第一降噪图像和配准后图像进行像素值的比较。具体地，在对第一降噪图像和配准后图像进行像素值的比较时，对于第一降噪图像和配准后图像，可以将第一像素坐标的像素点A与像素点B对应的像素值之间的差值绝对值作为差值参数，即将|P_A − P_B|作为P_A和P_B之间的差值参数。
示例性的,在本申请中,终端可以将第一像素值和第二像素值之间的差值参数和融合参数输入至预设权重模型中,以计算获得融合权重。其中,预设权重模型可以为终端存储的、表征融合权重与像素差值的具体地函数。其中,对于预设权重模型,需要遵循像素差值越小,融合权重越大的原则,例如,终端可以通过以下公式进行融合权重w的计算:
[公式(5)：融合权重 w 的计算公式，原文以图片形式给出，此处无法还原]
其中，λ为融合参数，d为第一像素值和第二像素值之间的差值参数，示例性的，d = |P_A − P_B|。可见，差值参数和融合权重之间呈反比。终端可以通过σ来对d与w之间的变化关系进行调节。
需要说明的是,在本申请的实施例中,w必须在[0,1]的范围内。
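Because formula (5) is reproduced only as an image in the original text, the sketch below is a purely hypothetical weight model that merely satisfies the stated properties: w decreases as the difference parameter d grows, is scaled by the fusion parameter λ, can be tuned through σ, and is kept within [0, 1].

```python
import numpy as np

def fusion_weight(d: np.ndarray, lam: float, sigma: float) -> np.ndarray:
    """Hypothetical stand-in for formula (5), not the claimed formula.

    Larger pixel differences d give smaller weights; lam scales the weight and
    sigma controls how quickly it falls off; the result is clipped to [0, 1].
    """
    w = lam * np.exp(-(d.astype(np.float64) ** 2) / (2.0 * sigma ** 2))
    return np.clip(w, 0.0, 1.0)
```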
示例性的,在本申请的实施例中,融合参数可以通过第一噪声值和噪声估计值计算获得,示例性的,在本申请中,终端可以利用如下公式对融合参数λ进行计算:
[公式(6)：融合参数 λ 的计算公式，原文以图片形式给出，此处无法还原]
其中，S_A为像素A的信号强度的标准差，也即是说，S_A为第一降噪图像中的第一像素对应的噪声值，S_B为像素B的信号强度的标准差，也即是说，S_B为配准后图像中的第二像素对应的噪声值。
也就是说,在本申请中,融合参数λ可以通过第一降噪图像的第一噪声值和配准后图像的噪声估计值计算获得。
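Formula (6) is likewise reproduced only as an image; the stand-in below simply illustrates one plausible way of deriving a fusion parameter from the two standard deviations S_A and S_B (an inverse-variance style ratio that favours the less noisy, already-denoised previous frame), and should not be read as the formula actually used by this application.

```python
import numpy as np

def fusion_parameter(s_a: np.ndarray, s_b: np.ndarray, eps: float = 1e-6) -> np.ndarray:
    """Hypothetical stand-in for formula (6), not the claimed formula.

    Returns a value that grows toward 1 when the previous denoised frame's
    noise S_A is small relative to the registered frame's noise S_B.
    """
    s_a = s_a.astype(np.float64)
    s_b = s_b.astype(np.float64)
    return (s_b ** 2) / (s_a ** 2 + s_b ** 2 + eps)
```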
在本申请的实施例中,进一步地,终端在利用融合权重获得当前图像对应的第二噪声值时,可以根据融合权重、第一噪声值以及噪声估计值,计算获得第二噪声值。
示例性的,在本申请中,可以利用像素的标准差来表征当前图像进行融合降噪后的第二降噪图像中的像素的噪声值S,具体计算方式如下式:
[公式(7)：融合降噪后像素的噪声值 S 的计算公式，原文以图片形式给出，此处无法还原]
由此可见,在本申请中,终端可以先基于一个像素坐标,利用该像素坐标对应的、第一降噪图像与配准后图像上的两个相应像素的像素值差值参数,融合获得该像素坐标对应的融合像素值,然后采用相同的融合方法,对第一降噪图像与配准后图像上的其他像素坐标上的相应像素也进行像素值差值参数的确定,获得其他像素坐标对应的其他融合像素值,最终便可以利用全部像素坐标对应的全部融合像素值,生成当前图像对应的融合降噪后图像,即第二降噪图像,以完成当前图像的降噪处理。
本申请实施例提出的一种降噪方法,终端利用上一帧图像的第一降噪图像对当前图像进行配准处理,获得配准后图像;其中,上一帧图像为待降噪视频中的、与当前图像连续的前一个图像;计算当前图像的噪声估计值,并根据上一帧图像的第一噪声值和噪声估计值确定融合权重;利用融合权重获得当前图像对应的第二降噪图像和第二噪声值;继续根据第二降噪图像和第二噪声值对下一帧图像进行降噪处理,直到遍历完待降噪视频中的每一帧图像;其中,下一帧图像为待降噪视频中的、与当前图像连续的后一个图像。也就是说,终端在对待降噪视频进行降噪处理时,可以利用上一帧图像降噪后的降噪图像和噪声值,结合当前图像的噪声估计值,确定出融合权重,从而可以利用融合权重完成对当前图像的融合降噪处理,并可以继续利用当前图像降噪后的降噪图像和噪声值,继续对下一帧图像进行降噪,直到遍历完待降噪视频中的每一帧图像。也就是说,本申请提出的降噪方法,融合权重是基于前一帧图像对应的噪声值和当前图像对应 的噪声值共同确定的,因此在进行降噪处理时,可以有效提高多帧视频图像的融合降噪效果,提升视频画质。
基于上述实施例,在本申请的另一实施例中,图6为融合降噪处理的结构框图,如图6所示,终端对待降噪视频中的当前图像进行降噪时,可以先获取上一帧图像的降噪结果,包括上一帧图像对应的第一降噪图像和第一噪声值,然后利用第一降噪图像对当前图像进行图像配准处理(步骤201),便可以获得当前图像对应的配准后图像,同时,终端可以基于预设噪声模型对当前图像进行噪声估计(步骤202),获得噪声估计值,并基于噪声估计值结合第一降噪图像的第一噪声值,进一步计算获得融合权重。
需要说明的是,在进行融合权重的计算时,可以利用噪声估计值和第一噪声值计算获得融合参数(步骤203),同时,可以分别在第一降噪图像和配准后图像中确定相同像素坐标上的第一像素和第二像素,并对一像素和第二像素进行像素值比较(步骤204),获得像素值的差值参数,从而可以将差值参数和融合参数输入至预设权重模型进行权重计算(步骤205),获得融合权重。
进一步地,在本申请的实施例中,终端在计算获得融合权重之后,一方面,可以根据融合权重对配准后图像进行融合降噪处理(步骤206),获得第二降噪图像;另一方面,可以根据融合权重、第一噪声值以及噪声估计值进行噪声计算(步骤207),生成第二噪声值。从而可以获得当前图像的降噪结果,即第二降噪图像和第二噪声值。
可以理解的是,在本申请中,终端在完成对当前图像的融合降噪处理,获得第二降噪图像和第二噪声值之后,便可以继续利用第二降噪和第二噪声值完成对下一帧图像的融合降噪处理,直到遍历完待降噪视频的每一帧图像之后,就可以获得全部帧图像对应的全部降噪图像,然后便可以继续沿时间轴的顺序,基于全部降噪图像生成降噪后视频。
在本申请的实施例中,进一步地,图7为融合降噪处理的示意图,如图7所示,本申请提出的降噪方法,可以将拍摄装置采集的多帧图像沿时间轴排列,获得图像序列,在进行降噪处理时,选取上一帧图像的降噪结果,即第一降噪图像和第一噪声值。将第一降噪图像与当前图像进行图像配准后(步骤301),获得配准后图像,然后再利用第一噪声值和第一降噪图像对配准后图像进行融合降噪(步骤302),获得当前图像的第二降噪图像和第二噪声值。
可以理解的是,在本申请中,终端在获得当前图像的第二降噪图像和第二噪声值之后,可以存储第二降噪图,同时利用第二噪声值对第一噪声值进行更新。
进一步地,在本申请的实施例中,第二降噪图像和第二噪声值将作为下一帧图像的降噪处理的输入参与降噪处理。通过这样的方式一直迭代,直到处理完整个待降噪视频,获得对应降噪后视频。
由此可见,在本申请中,终端在对待降噪视频进行降噪处理时,可以利用预设噪声模型对当前图像的噪声进行估计,然后进一步根据噪声估计值和上一帧图像的第一噪声值度融合降噪的融合权重进行确定,并利用融合权重,对基于上一帧图像的第一降噪图像进行配准处理的配准后图像进行融合降噪,获得当前图像的第二降噪图像和第二噪声值,进一步地,终端还可以继续利用第二降噪图像和第二噪声值对下一帧图像进行融合降噪,直到完成对待降噪视频的降噪处理。其中,利用预设噪声模型估计的噪声估计值可以更加精确的确定出融合权重,从而可以基于噪声较低的像素采取较高权重的原则进行融合降噪,提高了视频降噪效果和画面的纯净度。
本申请实施例提出的一种降噪方法,终端利用上一帧图像的第一降噪图像对当前图像进行配准处理,获得配准后图像;其中,上一帧图像为待降噪视频中的、与当前图像连续的前一个图像;计算当前图像的噪声估计值,并根据上一帧图像的第一噪声值和噪声估计值确定融合权重;利用融合权重获得当前图像对应的第二降噪图像和第二噪声值;继续根据第二降噪图像和第二噪声值对下一帧图像进行降噪处理,直到遍历完待降噪视频中的每一帧图像;其中,下一帧图像为待降噪视频中的、与当前图像连续的后一个图像。也就是说,终端在对待降噪视频进行降噪处理时,可以利用上一帧图像降噪后的降噪图像和噪声值,结合当前图像的噪声估计值,确定出融合权重,从而可以利用融合权重完成对当前图像的融合降噪处理,并可以继续利用当前图像降噪后的降噪图像和噪声值,继续对下一帧图像进行降噪,直到遍历完待降噪视频中的每一帧图像。也就是说,本申请提出的降噪方法,融合权重是基于前一帧图像对应的噪声值和当前图像对应的噪声值共同确定的,因此在进行降噪处理时,可以有效提高多帧视频图像的融合降噪效果,提升视频画质。
基于上述实施例,在本申请的再一实施例中,图8为终端的组成结构示意图一,如图8所示,本申请实施例提出的终端10可以包括获取部分11,计算部分12,确定部分13,降噪部分14,存储部分15以及更新部分16。
所述获取部分11,配置为利用上一帧图像的第一降噪图像对当前图像进行配准处理,获得配准后图像;其中,所述上一帧图像为待降噪视频中的、与所述当前图像连续的前一个图像;
所述计算部分12,配置为计算所述当前图像的噪声估计值;
所述确定部分13,配置为根据所述上一帧图像的第一噪声值和所述噪声估计值确定融合权重;
所述获取部分11,还配置为利用所述融合权重获得所述当前图像对应的第二降噪图像和第二噪声值;
所述降噪部分14,配置为继续根据所述第二降噪图像和所述第二噪声值对下一帧图像进行降噪处理,直到遍历完所述待降噪视频中的每一帧图像;其中,所述下一帧图像为所述待降噪视频中的、与所述当前图像连续 的后一个图像。
进一步地,在本申请的实施例中,所述计算部分12,具体配置为确定所述当前图像对应的信号强度;将所述信号强度输入至预设噪声模型中,输出所述噪声估计值;其中,所述预设噪声模型用于对噪声值与拍摄所述待降噪视频的拍摄装置的信号之间的关系进行确定。
进一步地,在本申请的实施例中,所述确定部分13,具体配置为基于所述第一噪声值和所述噪声估计值,计算获得融合参数;根据所述融合参数确定所述融合权重。
进一步地,在本申请的实施例中,所述确定部分13,还配置为根据所述融合参数确定所述融合权重之前,在所述第一降噪图像中确定第一像素,在所述配准后图像中确定第二像素;其中,所述第一像素和所述第二像素为第一像素坐标对应的两个像素。
进一步地,在本申请的实施例中,所述获取部分11,还配置为获取第一像素的第一像素值,获取第二像素的第二像素值。
进一步地,在本申请的实施例中,所述确定部分13,还具体配置为计算所述第一像素值和所述第二像素值之间的差值参数;将所述差值参数和所述融合参数输入至预设权重模型中,输出所述融合权重。
进一步地,在本申请的实施例中,所述获取部分11,具体配置为根据所述融合权重对所述配准后图像进行融合降噪处理,获得所述第二降噪图像。
进一步地,在本申请的实施例中,所述获取部分11,还具体配置为根据所述融合权重、所述第一像素值以及所述第二像素值,确定所述第一像素坐标对应的融合像素值;遍历所述配准后图像和所述第一降噪图中的其他像素坐标,获得所述其他像素坐标对应的其他像素值;基于所述融合像素值和所述其他像素值,生成所述第二降噪图像。
进一步地,在本申请的实施例中,所述获取部分11,还具体配置为根据所述融合权重、所述第一噪声值以及所述噪声估计值,生成所述第二噪声值。
进一步地,在本申请的实施例中,所述存储部分15,配置为利用所述融合权重获得所述当前图像对应的第二降噪图像和第二噪声值之后,存储所述第二降噪图像。
进一步地,在本申请的实施例中,所述更新部分16,还配置为利用所述融合权重获得所述当前图像对应的第二降噪图像和第二噪声值之后,利用所述第二噪声值更新所述第一噪声值。
进一步地,在本申请的实施例中,所述获取部分11,还配置为利用上一帧图像对应的第一降噪图像对当前图像进行配准处理,获得配准后图像之前,读取所述第一降噪图像和所述第一噪声值。
进一步地,在本申请的实施例中,所述确定部分13,还配置为继续根 据所述第二降噪图像和所述第二噪声值对下一帧图像进行降噪处理,直到遍历完所述待降噪视频中的每一帧图像之前,若所述当前图像为所述待降噪视频的第一帧图像,则根据所述预设噪声模型确定所述第二噪声值,并将所述当前图像确定为所述第二降噪图像。
图9为终端的组成结构示意图二,如图9所示,本申请实施例提出的终端10还可以包括处理器17、存储有处理器17可执行指令的存储器18,进一步地,终端10还可以包括通信接口19,和用于连接处理器17、存储器18以及通信接口19的总线110。
在本申请的实施例中,上述处理器17可以为特定用途集成电路(Application Specific Integrated Circuit,ASIC)、数字信号处理器(Digital Signal Processor,DSP)、数字信号处理装置(Digital Signal Processing Device,DSPD)、可编程逻辑装置(ProgRAMmable Logic Device,PLD)、现场可编程门阵列(Field ProgRAMmable Gate Array,FPGA)、中央处理器(Central Processing Unit,CPU)、控制器、微控制器、微处理器中的至少一种。可以理解地,对于不同的设备,用于实现上述处理器功能的电子器件还可以为其它,本申请实施例不作具体限定。终端10还可以包括存储器18,该存储器18可以与处理器17连接,其中,存储器18用于存储可执行程序代码,该程序代码包括计算机操作指令,存储器18可能包含高速RAM存储器,也可能还包括非易失性存储器,例如,至少两个磁盘存储器。
在本申请的实施例中,总线110用于连接通信接口19、处理器17以及存储器18以及这些器件之间的相互通信。
在本申请的实施例中,存储器18,用于存储指令和数据。
进一步地,在本申请的实施例中,上述处理器17,用于利用上一帧图像的第一降噪图像对当前图像进行配准处理,获得配准后图像;其中,所述上一帧图像为待降噪视频中的、与所述当前图像连续的前一个图像;计算所述当前图像的噪声估计值,并根据所述上一帧图像的第一噪声值和所述噪声估计值确定融合权重;利用所述融合权重获得所述当前图像对应的第二降噪图像和第二噪声值;继续根据所述第二降噪图像和所述第二噪声值对下一帧图像进行降噪处理,直到遍历完所述待降噪视频中的每一帧图像;其中,所述下一帧图像为所述待降噪视频中的、与所述当前图像连续的后一个图像。
在实际应用中,上述存储器18可以是易失性存储器(volatile memory),例如随机存取存储器(Random-Access Memory,RAM);或者非易失性存储器(non-volatile memory),例如只读存储器(Read-Only Memory,ROM),快闪存储器(flash memory),硬盘(Hard Disk Drive,HDD)或固态硬盘(Solid-State Drive,SSD);或者上述种类的存储器的组合,并向处理器17提供指令和数据。
另外,在本实施例中的各功能模块可以集成在一个处理单元中,也可 以是各个单元单独物理存在,也可以两个或两个以上单元集成在一个单元中。上述集成的单元既可以采用硬件的形式实现,也可以采用软件功能模块的形式实现。
集成的单元如果以软件功能模块的形式实现并非作为独立的产品进行销售或使用时,可以存储在一个计算机可读取存储介质中,基于这样的理解,本实施例的技术方案本质上或者说对现有技术做出贡献的部分或者该技术方案的全部或部分可以以软件产品的形式体现出来,该计算机软件产品存储在一个存储介质中,包括若干指令用以使得一台计算机设备(可以是个人计算机,服务器,或者网络设备等)或processor(处理器)执行本实施例方法的全部或部分步骤。而前述的存储介质包括:U盘、移动硬盘、只读存储器(Read Only Memory,ROM)、随机存取存储器(Random Access Memory,RAM)、磁碟或者光盘等各种可以存储程序代码的介质。
本申请实施例提出的一种终端,该终端利用上一帧图像的第一降噪图像对当前图像进行配准处理,获得配准后图像;其中,上一帧图像为待降噪视频中的、与当前图像连续的前一个图像;计算当前图像的噪声估计值,并根据上一帧图像的第一噪声值和噪声估计值确定融合权重;利用融合权重获得当前图像对应的第二降噪图像和第二噪声值;继续根据第二降噪图像和第二噪声值对下一帧图像进行降噪处理,直到遍历完待降噪视频中的每一帧图像;其中,下一帧图像为待降噪视频中的、与当前图像连续的后一个图像。也就是说,终端在对待降噪视频进行降噪处理时,可以利用上一帧图像降噪后的降噪图像和噪声值,结合当前图像的噪声估计值,确定出融合权重,从而可以利用融合权重完成对当前图像的融合降噪处理,并可以继续利用当前图像降噪后的降噪图像和噪声值,继续对下一帧图像进行降噪,直到遍历完待降噪视频中的每一帧图像。也就是说,本申请提出的降噪方法,融合权重是基于前一帧图像对应的噪声值和当前图像对应的噪声值共同确定的,因此在进行降噪处理时,可以有效提高多帧视频图像的融合降噪效果,提升视频画质。
本申请实施例提供一种计算机可读存储介质,其上存储有程序,该程序被处理器执行时实现如上所述的降噪方法。
具体来讲,本实施例中的一种降噪方法对应的程序指令可以被存储在光盘,硬盘,U盘等存储介质上,当存储介质中的与一种降噪方法对应的程序指令被一电子设备读取或被执行时,包括如下步骤:
利用上一帧图像的第一降噪图像对当前图像进行配准处理,获得配准后图像;其中,所述上一帧图像为待降噪视频中的、与所述当前图像连续的前一个图像;
计算所述当前图像的噪声估计值,并根据所述上一帧图像的第一噪声值和所述噪声估计值确定融合权重;
利用所述融合权重获得所述当前图像对应的第二降噪图像和第二噪声 值;
继续根据所述第二降噪图像和所述第二噪声值对下一帧图像进行降噪处理,直到遍历完所述待降噪视频中的每一帧图像;其中,所述下一帧图像为所述待降噪视频中的、与所述当前图像连续的后一个图像。
本领域内的技术人员应明白,本申请的实施例可提供为方法、系统、或计算机程序产品。因此,本申请可采用硬件实施例、软件实施例、或结合软件和硬件方面的实施例的形式。而且,本申请可采用在一个或多个其中包含有计算机可用程序代码的计算机可用存储介质(包括但不限于磁盘存储器和光学存储器等)上实施的计算机程序产品的形式。
本申请是参照根据本申请实施例的方法、设备(系统)、和计算机程序产品的实现流程示意图和/或方框图来描述的。应理解可由计算机程序指令实现流程示意图和/或方框图中的每一流程和/或方框、以及实现流程示意图和/或方框图中的流程和/或方框的结合。可提供这些计算机程序指令到通用计算机、专用计算机、嵌入式处理机或其他可编程数据处理设备的处理器以产生一个机器,使得通过计算机或其他可编程数据处理设备的处理器执行的指令产生用于实现在实现流程示意图一个流程或多个流程和/或方框图一个方框或多个方框中指定的功能的装置。
这些计算机程序指令也可存储在能引导计算机或其他可编程数据处理设备以特定方式工作的计算机可读存储器中,使得存储在该计算机可读存储器中的指令产生包括指令装置的制造品,该指令装置实现在实现流程示意图一个流程或多个流程和/或方框图一个方框或多个方框中指定的功能。
这些计算机程序指令也可装载到计算机或其他可编程数据处理设备上,使得在计算机或其他可编程设备上执行一系列操作步骤以产生计算机实现的处理,从而在计算机或其他可编程设备上执行的指令提供用于实现在实现流程示意图一个流程或多个流程和/或方框图一个方框或多个方框中指定的功能的步骤。
以上所述,仅为本申请的较佳实施例而已,并非用于限定本申请的保护范围。
工业实用性
本申请实施例中,终端利用上一帧图像的第一降噪图像对当前图像进行配准处理,获得配准后图像;其中,上一帧图像为待降噪视频中的、与当前图像连续的前一个图像;计算当前图像的噪声估计值,并根据上一帧图像的第一噪声值和噪声估计值确定融合权重;利用融合权重获得当前图像对应的第二降噪图像和第二噪声值;继续根据第二降噪图像和第二噪声值对下一帧图像进行降噪处理,直到遍历完待降噪视频中的每一帧图像;其中,下一帧图像为待降噪视频中的、与当前图像连续的后一个图像。也就是说,终端在对待降噪视频进行降噪处理时,可以利用上一帧图像降噪 后的降噪图像和噪声值,结合当前图像的噪声估计值,确定出融合权重,从而可以利用融合权重完成对当前图像的融合降噪处理,并可以继续利用当前图像降噪后的降噪图像和噪声值,继续对下一帧图像进行降噪,直到遍历完待降噪视频中的每一帧图像。也就是说,本申请提出的降噪方法,融合权重是基于前一帧图像对应的噪声值和当前图像对应的噪声值共同确定的,因此在进行降噪处理时,可以有效提高多帧视频图像的融合降噪效果,提升视频画质。

Claims (14)

  1. 一种降噪方法,所述方法包括:
    利用上一帧图像的第一降噪图像对当前图像进行配准处理,获得配准后图像;其中,所述上一帧图像为待降噪视频中的、与所述当前图像连续的前一个图像;
    计算所述当前图像的噪声估计值,并根据所述上一帧图像的第一噪声值和所述噪声估计值确定融合权重;
    利用所述融合权重获得所述当前图像对应的第二降噪图像和第二噪声值;
    继续根据所述第二降噪图像和所述第二噪声值对下一帧图像进行降噪处理,直到遍历完所述待降噪视频中的每一帧图像;其中,所述下一帧图像为所述待降噪视频中的、与所述当前图像连续的后一个图像。
  2. 根据权利要求1所述的方法,其中,所述计算所述当前图像的噪声估计值,包括:
    确定所述当前图像对应的信号强度;
    将所述信号强度输入至预设噪声模型中,输出所述噪声估计值;其中,所述预设噪声模型用于对噪声值与拍摄所述待降噪视频的拍摄装置的信号之间的关系进行确定。
  3. 根据权利要求2所述的方法,其中,所述根据上一帧图像的第一噪声值和所述噪声估计值确定融合权重,包括:
    基于所述第一噪声值和所述噪声估计值,计算获得融合参数;
    根据所述融合参数确定所述融合权重。
  4. 根据权利要求3所述的方法,其中,所述根据所述融合参数确定所述融合权重之前,所述方法还包括:
    在所述第一降噪图像中确定第一像素,在所述配准后图像中确定第二像素;其中,所述第一像素和所述第二像素为第一像素坐标对应的两个像素;
    获取第一像素的第一像素值,获取第二像素的第二像素值。
  5. 根据权利要求4所述的方法,其中,所述根据所述融合参数确定所述融合权重,包括:
    计算所述第一像素值和所述第二像素值之间的差值参数;
    将所述差值参数和所述融合参数输入至预设权重模型中,输出所述融合权重。
  6. 根据权利要求4或5所述的方法,其中,所述利用所述融合权重获得所述当前图像对应的第二降噪图像,包括:
    根据所述融合权重对所述配准后图像进行融合降噪处理,获得所述第二降噪图像。
  7. 根据权利要求6所述的方法,其中,所述根据所述融合权重对所述配准后图像进行融合降噪处理,获得所述第二降噪图像,包括:
    根据所述融合权重、所述第一像素值以及所述第二像素值,确定所述第一像素坐标对应的融合像素值;
    遍历所述配准后图像和所述第一降噪图中的其他像素坐标,获得所述其他像素坐标对应的其他像素值;
    基于所述融合像素值和所述其他像素值,生成所述第二降噪图像。
  8. 根据权利要求3所述的方法,其中,所述利用所述融合权重获得所述当前图像对应的第二噪声值,包括:
    根据所述融合权重、所述第一噪声值以及所述噪声估计值,生成所述第二噪声值。
  9. 根据权利要求1所述的方法,其中,所述利用所述融合权重获得所述当前图像对应的第二降噪图像和第二噪声值之后,所述方法还包括:
    存储所述第二降噪图像,并利用所述第二噪声值更新所述第一噪声值。
  10. 根据权利要求1所述的方法,其中,所述利用上一帧图像对应的第一降噪图像对当前图像进行配准处理,获得配准后图像之前,所述方法还包括:
    读取所述第一降噪图像和所述第一噪声值。
  11. 根据权利要求1所述的方法,其中,所述继续根据所述第二降噪图像和所述第二噪声值对下一帧图像进行降噪处理,直到遍历完所述待降噪视频中的每一帧图像之前,所述方法还包括:
    若所述当前图像为所述待降噪视频的第一帧图像,则根据所述预设噪声模型确定所述第二噪声值,并将所述当前图像确定为所述第二降噪图像。
  12. 一种终端,所述终端包括:获取部分,计算部分,确定部分,降噪部分,
    所述获取部分,配置为利用上一帧图像的第一降噪图像对当前图像进行配准处理,获得配准后图像;其中,所述上一帧图像为待降噪视频中的、与所述当前图像连续的前一个图像;
    所述计算部分,配置为计算所述当前图像的噪声估计值;
    所述确定部分,配置为根据所述上一帧图像的第一噪声值和所述噪声估计值确定融合权重;
    所述获取部分,还配置为利用所述融合权重获得所述当前图像对应的第二降噪图像和第二噪声值;
    所述降噪部分,配置为继续根据所述第二降噪图像和所述第二噪声值对下一帧图像进行降噪处理,直到遍历完所述待降噪视频中的每一帧图像;其中,所述下一帧图像为所述待降噪视频中的、与所述当前图像连续的后一个图像。
  13. 一种终端,所述终端包括处理器、存储有所述处理器可执行指令 的存储器,当所述指令被所述处理器执行时,实现如权利要求1-11任一项所述的方法。
  14. 一种计算机可读存储介质,其上存储有程序,应用于终端中,所述程序被处理器执行时,实现如权利要求1-11任一项所述的方法。
PCT/CN2020/121645 2019-12-09 2020-10-16 降噪方法、终端及存储介质 WO2021114868A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201911253681.1A CN111127347A (zh) 2019-12-09 2019-12-09 降噪方法、终端及存储介质
CN201911253681.1 2019-12-09

Publications (1)

Publication Number Publication Date
WO2021114868A1 true WO2021114868A1 (zh) 2021-06-17

Family

ID=70498037

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/121645 WO2021114868A1 (zh) 2019-12-09 2020-10-16 降噪方法、终端及存储介质

Country Status (2)

Country Link
CN (1) CN111127347A (zh)
WO (1) WO2021114868A1 (zh)

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111127347A (zh) * 2019-12-09 2020-05-08 Oppo广东移动通信有限公司 降噪方法、终端及存储介质
CN111583151B (zh) * 2020-05-09 2023-05-12 浙江大华技术股份有限公司 一种视频降噪方法及设备、计算机可读存储介质
CN114363693B (zh) * 2020-10-13 2023-05-12 华为技术有限公司 画质调整方法及装置
CN114679553A (zh) * 2020-12-24 2022-06-28 华为技术有限公司 视频降噪方法及装置
CN113375808B (zh) * 2021-05-21 2023-06-02 武汉博宇光电系统有限责任公司 一种基于场景的红外图像非均匀性校正方法
CN113674209A (zh) * 2021-07-20 2021-11-19 浙江大华技术股份有限公司 视频噪声检测方法、终端设备和计算机存储介质
CN113689373B (zh) * 2021-10-21 2022-02-11 深圳市慧鲤科技有限公司 图像处理方法、装置、设备及计算机可读存储介质

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101010939A (zh) * 2004-09-03 2007-08-01 松下电器产业株式会社 图像处理装置以及图像处理程序
WO2014106470A1 (zh) * 2013-01-07 2014-07-10 华为终端有限公司 图像处理方法、装置和拍摄终端
CN104700405A (zh) * 2015-03-05 2015-06-10 苏州科达科技股份有限公司 一种前景检测方法和系统
CN107046648A (zh) * 2016-02-05 2017-08-15 芯原微电子(上海)有限公司 一种快速实现嵌入hevc编码单元的视频降噪的装置及方法
CN109639929A (zh) * 2019-01-11 2019-04-16 珠海全志科技股份有限公司 图像降噪方法、计算机装置及计算机可读存储介质
CN111127347A (zh) * 2019-12-09 2020-05-08 Oppo广东移动通信有限公司 降噪方法、终端及存储介质

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2014078985A1 (en) * 2012-11-20 2014-05-30 Thomson Licensing Method and apparatus for image regularization
CN104065854B (zh) * 2014-06-18 2018-08-10 联想(北京)有限公司 一种图像处理方法及一种电子设备
EP3537373A1 (en) * 2018-03-07 2019-09-11 Novaquark Non-local computation of gradient noise for computer-generated digital images
CN109743473A (zh) * 2019-01-11 2019-05-10 珠海全志科技股份有限公司 视频图像3d降噪方法、计算机装置及计算机可读存储介质


Also Published As

Publication number Publication date
CN111127347A (zh) 2020-05-08

Similar Documents

Publication Publication Date Title
WO2021114868A1 (zh) 降噪方法、终端及存储介质
WO2021208122A1 (zh) 基于深度学习的视频盲去噪方法及装置
JP6469678B2 (ja) 画像アーティファクトを補正するシステム及び方法
US9591237B2 (en) Automated generation of panning shots
AU2011216119B2 (en) Generic platform video image stabilization
Jinno et al. Multiple exposure fusion for high dynamic range image acquisition
JP5980294B2 (ja) データ処理装置、撮像装置、およびデータ処理方法
WO2022062346A1 (zh) 图像增强方法、装置及电子设备
US9514525B2 (en) Temporal filtering for image data using spatial filtering and noise history
CN107749987B (zh) 一种基于块运动估计的数字视频稳像方法
KR102512889B1 (ko) 이미지 융합 프로세싱 모듈
JP6308748B2 (ja) 画像処理装置、撮像装置及び画像処理方法
US20140078347A1 (en) Systems and Methods for Reducing Noise in Video Streams
CN108986197B (zh) 3d骨架线构建方法及装置
CN107481271B (zh) 一种立体匹配方法、系统及移动终端
WO2017113917A1 (zh) 成像方法、成像装置和终端
CN113315884A (zh) 一种实时视频降噪方法、装置、终端及存储介质
Ehmann et al. Real-time video denoising on mobile phones
US20150358547A1 (en) Image processing apparatus
CN115004227A (zh) 图像处理方法、装置及设备
CN112215906A (zh) 图像处理方法、装置和电子设备
KR102003460B1 (ko) 왜곡제거장치 및 방법
CN107295261B (zh) 图像去雾处理方法、装置、存储介质和移动终端
CN112419161B (zh) 图像处理方法及装置、存储介质及电子设备
CN115546043B (zh) 视频处理方法及其相关设备

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20899739

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20899739

Country of ref document: EP

Kind code of ref document: A1