WO2022217525A1 - Image noise reduction processing method and apparatus, and imaging apparatus - Google Patents

Image noise reduction processing method and apparatus, and imaging apparatus

Info

Publication number
WO2022217525A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
noise reduction
denoised
yuv
operations
Prior art date
Application number
PCT/CN2021/087390
Other languages
English (en)
French (fr)
Inventor
江君
邵明
程鹏远
Original Assignee
深圳市大疆创新科技有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 深圳市大疆创新科技有限公司
Priority to PCT/CN2021/087390
Publication of WO2022217525A1

Classifications

    • G06T5/70
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00Details of television systems
    • H04N5/14Picture signal circuitry for video frequency region
    • H04N5/21Circuitry for suppressing or minimising disturbance, e.g. moiré or halo
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20212Image combination
    • G06T2207/20224Image subtraction

Definitions

  • the present application relates to the technical field of image processing, and in particular, to an image processing method, an apparatus, and an imaging apparatus.
  • image noise reduction is to remove noise in the image to improve the quality of the image, but high-intensity noise reduction often leads to loss of image details, while low-intensity noise reduction results in more residual noise in the image.
  • the contradiction between preserving details and removing noise cannot be resolved in the prior art.
  • Embodiments of the present application provide an image processing method, a computer-readable storage medium, an image processing apparatus, and an imaging apparatus.
  • In a first aspect, the embodiments of the present application provide an image processing method, including: obtaining an image to be denoised; performing a first noise reduction process on the image to be denoised to obtain a first YUV image; performing a second noise reduction process on the image to be denoised to obtain a second YUV image, wherein the intensity of the second noise reduction process is higher than the intensity of the first noise reduction process; and fusing the Y component of the first YUV image with the UV components of the second YUV image to obtain a noise-reduced image.
  • In a second aspect, embodiments of the present application provide a computer-readable storage medium on which computer instructions are stored; when the computer instructions are executed, the image noise reduction processing method of the above-mentioned first aspect is implemented.
  • In a third aspect, embodiments of the present application provide an image processing apparatus, including one or more processors configured to: obtain an image to be denoised; perform a first noise reduction process on the image to be denoised to obtain a first YUV image; perform a second noise reduction process on the image to be denoised to obtain a second YUV image, wherein the intensity of the second noise reduction process is higher than the intensity of the first noise reduction process; and fuse the Y component of the first YUV image with the UV components of the second YUV image to obtain a noise-reduced image.
  • In a fourth aspect, embodiments of the present application provide an imaging device, comprising: a sensor for generating an image; and one or more processors configured to: obtain an image to be denoised from the sensor; perform a first noise reduction process on the image to be denoised to obtain a first YUV image; perform a second noise reduction process on the image to be denoised to obtain a second YUV image, wherein the intensity of the second noise reduction process is higher than the intensity of the first noise reduction process; and fuse the Y component of the first YUV image with the UV components of the second YUV image to obtain a noise-reduced image.
  • The image noise reduction processing method, computer-readable storage medium, image noise reduction processing device, and imaging device provided by the present application respectively perform the first noise reduction processing and the second noise reduction processing on the image to be denoised and fuse the resulting images, so that noise can be removed to a greater extent while the details of the picture are preserved.
  • FIG. 1 is a schematic diagram of an exemplary imaging process
  • FIG. 2 is a schematic diagram of an exemplary image signal processing flow
  • FIG. 3 is a schematic diagram of the three channels of a YUV format image
  • FIG. 4 is a schematic diagram of a low signal-to-noise ratio image under night scene shooting
  • FIG. 5 is a flowchart of an image noise reduction processing method according to an embodiment of the present application.
  • FIG. 6 is a schematic diagram of performing downsampling operation on an image to be denoised according to an embodiment of the present application
  • FIG. 7 is a schematic diagram of a downsampling operation performed in a second noise reduction process according to an embodiment of the present application.
  • FIG. 8 is a flowchart of an image noise reduction processing method according to yet another embodiment of the present application.
  • FIG. 9 is a flowchart of an image noise reduction processing method according to still another embodiment of the present application.
  • FIG. 10 is a schematic diagram of the effect of an operation that changes the linear characteristics of an image
  • FIG. 11 is a schematic diagram of an image signal processing apparatus according to an embodiment of the present application.
  • FIG. 12 is a schematic diagram of an imaging device according to an embodiment of the present application.
  • FIG. 13 is a schematic diagram of a computer-readable storage medium according to an embodiment of the present application.
  • FIG. 1 shows a schematic diagram of an exemplary imaging process.
  • After an external light signal reaches an imaging sensor (Sensor) through a lens group (Lens), the sensor generates an original image in RAW format, which is also often referred to in the art as a RAW image.
  • Original images in RAW format (also called Bayer images) need to undergo a series of image signal processing (ISP) operations to obtain images that are generally viewable by the human eye, such as images in RGB format and images in YUV format.
  • FIG. 2 shows an exemplary image signal processing flow. It should be noted that the figure only shows some of the operations mentioned in the embodiments of the present application; the specific operations included in image signal processing are not limited in the present application. It should also be noted that the processing order of, and relationships between, the various operations in image signal processing are not necessarily as shown in the figure.
  • Among them, a demosaicing operation 202 (DEMOSAIC) is used to obtain an RGB image after performing color interpolation processing on the original image in RAW format.
  • RGB2YUV operation 206 is used to convert RGB format to YUV format.
  • The RGB format and the YUV format usually only require simple formula operations to be converted into each other; therefore, in the entire image signal processing flow, multiple conversions may be performed between the RGB format and the YUV format.
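  • As an illustration only (the embodiments do not prescribe a particular conversion matrix), the following is a minimal sketch of the commonly used BT.601 full-range formulas for converting between the RGB and YUV formats:

```python
import numpy as np

def rgb_to_yuv(rgb: np.ndarray) -> np.ndarray:
    """Convert an HxWx3 RGB image with values in [0, 1] to YUV (BT.601 full-range)."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    y = 0.299 * r + 0.587 * g + 0.114 * b        # luminance
    u = 0.5 * (b - y) / (1.0 - 0.114)            # blue-difference chrominance
    v = 0.5 * (r - y) / (1.0 - 0.299)            # red-difference chrominance
    return np.stack([y, u, v], axis=-1)

def yuv_to_rgb(yuv: np.ndarray) -> np.ndarray:
    """Inverse conversion back to RGB."""
    y, u, v = yuv[..., 0], yuv[..., 1], yuv[..., 2]
    r = y + (1.0 - 0.299) / 0.5 * v
    b = y + (1.0 - 0.114) / 0.5 * u
    g = (y - 0.299 * r - 0.114 * b) / 0.587
    return np.stack([r, g, b], axis=-1)
```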
  • In the present application, when a specific format is not specified for a certain operation object, it usually means that the operation object can be in any one of the YUV format, the RGB format, or other common picture formats.
  • The gamma correction operation 204 is a commonly used operation for converting a linear image into a nonlinear image, where "linear" and "nonlinear" refer to whether the pixel values in the picture have a linear relationship with the original optical signal values.
  • the image signal processing flow may also include one or more other operations that can change the linear characteristics of the image, such as tone mapping.
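  • For illustration, a minimal gamma correction sketch is shown below; the gamma value of 2.2 is a typical assumption, not a value specified by the embodiments. Note that the curve amplifies small pixel values, which is the noise-amplification effect discussed later with reference to FIG. 10.

```python
import numpy as np

def gamma_correct(linear: np.ndarray, gamma: float = 2.2) -> np.ndarray:
    """Map a linear image with values in [0, 1] to a nonlinear one (gamma correction)."""
    # Small pixel values (including noise) are boosted by the 1/gamma power curve.
    return np.power(np.clip(linear, 0.0, 1.0), 1.0 / gamma)
```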
  • The above-mentioned RGB format is a three-channel image format, divided into a red channel (R), a green channel (G), and a blue channel (B); the definition of the three RGB channels does not distinguish between them according to the perception characteristics of the human eye.
  • the YUV format image is also a three-channel image format.
  • FIG. 3 is a schematic diagram of the three channels of a YUV image, which are divided into one luminance channel (the Y channel) and two chrominance channels (the U and V channels). The difference from the RGB format is that the definition of the three channels of a YUV image is related to the perception characteristics of the human eye. The human eye is generally more sensitive to luminance resolution, so the luminance channel of a YUV image carries almost all of the details of the picture.
  • When shooting in low-light conditions such as night scenes, the obtained image will contain significant noise, that is, the signal-to-noise ratio is low, and one consequence may be color distortion of the image, as shown in FIG. 4.
  • Since the lens of an SLR camera usually has a higher light sensitivity than the lens of a mobile phone, the SLR image has a relatively high signal-to-noise ratio in night scene shooting; this difference in signal-to-noise ratio results in the difference in picture color shown in the box in FIG. 4, where the color of the building is darker in the low signal-to-noise-ratio image obtained by the mobile phone camera.
  • Image noise reduction is used to reduce the effect of noise in the image. It should be noted that although the image of the SLR camera shown in Figure 4 above has a higher signal-to-noise ratio, it does not mean that it does not require image noise reduction.
  • Although image noise reduction processing can reduce the influence of noise in an image, when the intensity of the noise reduction processing is high, the details in the image are smoothed out together with the noise, resulting in a loss of image detail. When the intensity of the noise reduction processing is low, more details can be retained, but more noise remains in the image. This contradiction exists in all noise reduction processes and is particularly prominent when noise reduction is applied to images with a low signal-to-noise ratio.
  • Although the following embodiments of the present application describe the image noise reduction processing method, apparatus, imaging device, and computer-readable storage medium as performing noise reduction processing on a single image, the image noise reduction processing method, apparatus, imaging device, and computer-readable storage medium according to the embodiments of the present application can all be applied to noise reduction processing of video images; for example, those skilled in the art can choose to perform the image noise reduction processing according to any of the following embodiments on each frame of a video, or on at least some of the frames.
  • FIG. 5 shows a flowchart of an image noise reduction processing method according to an embodiment of the present application.
  • the image noise reduction processing method according to the embodiment of the present application includes:
  • Step S502 obtaining an image to be denoised
  • Step S504 performing a first noise reduction process on the image to be denoised to obtain a first YUV image
  • Step S506 performing a second noise reduction process on the image to be denoised to obtain a second YUV image, wherein the intensity of the second noise reduction process is higher than the intensity of the first noise reduction process;
  • Step S508 Fusing the Y component of the first YUV image and the UV component of the second YUV image to obtain a denoised image.
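  • To make the flow of steps S502–S508 concrete, the following is a minimal sketch in Python. It uses OpenCV's fastNlMeansDenoisingColored purely as a hypothetical stand-in for the unspecified noise reduction algorithm, with a larger filter strength representing the higher-intensity second noise reduction process; the embodiments are not limited to this algorithm.

```python
import cv2
import numpy as np

def denoise_to_yuv(image_bgr: np.ndarray, strength: float) -> np.ndarray:
    """Stand-in noise reduction: denoise an 8-bit BGR image and return it in YUV format."""
    denoised = cv2.fastNlMeansDenoisingColored(image_bgr, None, h=strength, hColor=strength)
    return cv2.cvtColor(denoised, cv2.COLOR_BGR2YUV)

def dual_path_denoise(image_to_denoise: np.ndarray) -> np.ndarray:
    # Step S504: first (low-intensity) noise reduction -> first YUV image, keeps more detail.
    first_yuv = denoise_to_yuv(image_to_denoise, strength=3)
    # Step S506: second (high-intensity) noise reduction -> second YUV image, contains less noise.
    second_yuv = denoise_to_yuv(image_to_denoise, strength=15)
    # Step S508: fuse the Y component of the first YUV image with the UV components of the second.
    fused = second_yuv.copy()
    fused[..., 0] = first_yuv[..., 0]
    return cv2.cvtColor(fused, cv2.COLOR_YUV2BGR)  # noise-reduced image
```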
  • the image to be denoised acquired in step S502 may be an image output by any operation in the image processing flow, and the image to be denoised may be in RGB format, YUV format or any other format.
  • For example, the image to be denoised may be the original image directly output by the sensor.
  • The image to be denoised may also be an image from any processing stage; however, since image details lost during noise reduction cannot be recovered, in order to obtain a better noise reduction effect, the image to be denoised is preferably an image that has not yet undergone any noise reduction processing.
  • In step S504 and step S506, the image to be denoised is subjected to a first noise reduction process and a second noise reduction process, respectively, to obtain a first YUV image and a second YUV image, wherein the intensity of the second noise reduction process is higher than the intensity of the first noise reduction process; the specific strengths of the two noise reduction processes are otherwise not limited.
  • Since the intensity of the second noise reduction process is higher than that of the first, the first YUV image obtained by the lower-intensity first noise reduction process retains more image details but correspondingly leaves more noise, while the second YUV image obtained by the higher-intensity second noise reduction process contains less noise but correspondingly loses some image details.
  • The noise reduction operations in the first noise reduction process and the second noise reduction process may use the same noise reduction algorithm, and the intensities of the first and second noise reduction processes may be adjusted by adjusting the parameters and/or the number of operations of the noise reduction algorithm; the specific adjustment methods are described in detail below.
  • different noise reduction algorithms may also be used in the first noise reduction processing and the second noise reduction processing, for example, different noise reduction operation modules are set in the noise reduction algorithm, so that the intensity of the second noise reduction processing is higher than the intensity of the first noise reduction process.
  • The image output by the above noise reduction algorithm is an image in YUV format, that is, the first noise reduction processing and the second noise reduction processing are noise reduction processing performed in the YUV color space; however, this does not exclude noise reduction processing performed in other color spaces. For example, after the noise-reduced image is obtained in step S508, further noise reduction processing can still be performed on it in other color spaces, such as the RGB space.
  • In some embodiments, no other form of noise reduction processing is performed on the image to be denoised before the first noise reduction processing, so that the first YUV image obtained by the first noise reduction processing retains as many details as possible.
  • The first noise reduction process and the second noise reduction process may further include one or more other operations of the image signal processing flow. For example, when the image to be denoised is in RAW format, RGB format, or another format, the first noise reduction processing and the second noise reduction processing include converting the format of the image to be denoised to the YUV format before performing the noise reduction operations; in this case, the first noise reduction processing and the second noise reduction processing may also include the demosaicing operation, the gamma correction operation, and the RGB2YUV operation of the image processing flow shown in FIG. 2, and the mutual order of these operations is not specifically limited.
  • In some embodiments, the first noise reduction processing and the second noise reduction processing may even include all operations that may be involved in image signal processing; that is, each of the first noise reduction processing and the second noise reduction processing may be a complete image signal processing flow that includes the noise reduction operations. Similarly, the order between the noise reduction operations and the other operations is not limited.
  • a noise reduction image is obtained by fusing the Y component of the first YUV image and the UV component of the second YUV image.
  • Since the human eye is more sensitive to the image details carried by the Y channel of a YUV image, the noise-reduced image obtained by fusing the Y component of the first YUV image, which retains more image details, with the UV components of the second YUV image, which has a better noise reduction effect, can remove noise to a greater extent while preserving the details of the picture.
  • performing the second noise reduction process on the image to be denoised to obtain the second YUV image in step S506 may further include:
  • Step S5061 Perform at least one downsampling operation on the image to be denoised to obtain an image to be denoised with reduced resolution;
  • Step S5062 Perform the second noise reduction process and at least one upsampling operation on the reduced-resolution image to be denoised to obtain a second YUV image, where the at least one upsampling operation makes the resolution of the second YUV image higher than the resolution of the reduced-resolution image to be denoised.
  • In step S5061, before the second noise reduction processing is performed on the image to be denoised, at least one downsampling operation is first performed on it; the downsampling operation refers to downsampling the image to be denoised using any of the image downsampling methods well known to those skilled in the art. A downsampling operation typically turns the pixels within an S*S window of the original image into one pixel whose value is the mean of all pixels in that window; this averaging is equivalent to averaging the noise with the normal pixels in the original image.
  • Therefore, the downsampling operation performed before the second noise reduction process effectively increases the noise reduction intensity of the second noise reduction process, so that the second YUV image obtained by the second noise reduction process contains less noise.
  • In this way, the image signal processing flow is divided into two paths: one path performs the first noise reduction processing on the original image to be denoised, and the other path performs the second noise reduction processing on the reduced-resolution image to be denoised.
  • In step S5062, the second noise reduction process is performed on the reduced-resolution image to be denoised, and then at least one upsampling operation is performed on it to obtain the second YUV image, so that the resolution of the second YUV image is higher than that of the reduced-resolution image to be denoised.
  • The upsampling operation may likewise use any upsampling method well known to those skilled in the art, such as interpolation, in which a suitable interpolation algorithm inserts new elements between the existing pixels so as to increase the resolution of the image; those skilled in the art can also choose to perform any number of upsampling operations.
  • In some embodiments, the at least one upsampling operation makes the second YUV image have the same resolution as the first YUV image, so that more types of fusion algorithms can be applied in the subsequent step of fusing the Y component of the first YUV image with the UV components of the second YUV image.
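  • A minimal sketch of steps S5061–S5062 under the same assumptions as before (fastNlMeansDenoisingColored as a hypothetical stand-in noise reduction algorithm; box-average downsampling and bilinear upsampling are illustrative choices only):

```python
import cv2
import numpy as np

def second_denoise_with_resampling(image_to_denoise: np.ndarray,
                                   down_factor: int = 2,
                                   strength: float = 15.0) -> np.ndarray:
    """Step S5061: downsample by box averaging; Step S5062: denoise, then upsample back."""
    height, width = image_to_denoise.shape[:2]
    # Downsampling: each down_factor x down_factor window is averaged into one pixel,
    # which already averages noise together with normal pixels.
    small = cv2.resize(image_to_denoise, (width // down_factor, height // down_factor),
                       interpolation=cv2.INTER_AREA)
    # Second noise reduction process on the reduced-resolution image (stand-in algorithm).
    small_denoised = cv2.fastNlMeansDenoisingColored(small, None, h=strength, hColor=strength)
    # Upsampling so that the second YUV image matches the first YUV image's resolution.
    restored = cv2.resize(small_denoised, (width, height), interpolation=cv2.INTER_LINEAR)
    return cv2.cvtColor(restored, cv2.COLOR_BGR2YUV)
```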
  • It should be noted that, although the second YUV image obtained after at least one downsampling operation and at least one upsampling operation has the same resolution as the first YUV image, the number of downsampling operations and the number of upsampling operations need not be related; for example, those skilled in the art may choose to perform five downsampling operations and only one upsampling operation, as long as the second YUV image and the first YUV image end up with the same resolution.
  • In other embodiments, the at least one upsampling operation may only make the resolution of the second YUV image higher than that of the reduced-resolution image to be denoised, without necessarily making the second YUV image and the first YUV image have the same resolution; in this case, before the fusion step, an additional upsampling or downsampling operation may be performed on the second YUV image so that the second YUV image and the first YUV image have the same resolution.
  • a fusion method that can fuse images of different resolutions may be selected.
  • In the embodiment shown in FIG. 7, the image to be denoised is obtained in step S702; in step S704, a first noise reduction process is performed on the image to be denoised to obtain the first YUV image; in step S706, a second noise reduction process is performed on the image to be denoised to obtain the second YUV image, where the intensity of the second noise reduction process is higher than that of the first noise reduction process and at least one downsampling operation is performed on the image to be denoised during the second noise reduction process, that is, the above-mentioned at least one downsampling operation may be performed between the operations of the second noise reduction process; finally, the Y component of the first YUV image and the UV components of the second YUV image are fused to obtain the noise-reduced image.
  • The first noise reduction process may include all of the operations shown in the figure for the second noise reduction process; the difference is that, in the first noise reduction process, no downsampling or upsampling operation is inserted between these operations. Although the second noise reduction process is shown in FIG. 7 as including noise reduction operations 1–4, it may in fact also include other suitable operations, and the positions between these operation steps at which the at least one downsampling operation or at least one upsampling operation is inserted are not particularly limited. Similarly, performing at least one downsampling operation during the second noise reduction process can also make the second YUV image contain less noise.
  • In some embodiments, in step S706, during the second noise reduction process on the image to be denoised, at least one downsampling operation is performed on the image to be denoised followed by at least one upsampling operation, and the at least one upsampling operation makes the second YUV image and the first YUV image have the same resolution. In these embodiments, the downsampling and upsampling operations are inserted within the second noise reduction process; likewise, only a downsampling operation may be used in the second noise reduction process while no downsampling operation is added to the first noise reduction process. Further, it can be understood that those skilled in the art can arbitrarily combine the arrangements of the downsampling and upsampling operations in the above embodiments, for example, first performing at least one downsampling operation on the image to be denoised and then continuing to insert downsampling and upsampling operations within the second noise reduction process.
  • In some embodiments, the method for making the intensity of the second noise reduction process higher than the intensity of the first noise reduction process may be: adjusting the parameters and/or the number of operations of the noise reduction algorithms used in the first noise reduction process and the second noise reduction process, respectively.
  • the noise reduction algorithms used in the first noise reduction process and the second noise reduction process may include temporal noise reduction operations and/or spatial domain noise reduction operations.
  • the method for the time-domain noise reduction operation and/or the spatial-domain noise reduction operation may be any method well known to those skilled in the art, which is not specifically limited.
  • the first noise reduction process and the second noise reduction process may also include any other noise reduction operations that can be performed in the YUV space.
  • For example, the first noise reduction process may include only a temporal noise reduction operation and a spatial noise reduction operation, while the second noise reduction process may further include other noise reduction operations that can be performed in the YUV space.
  • Adjusting the number of operations of the noise reduction algorithms used in the first noise reduction processing and the second noise reduction processing may mean adjusting the number of times the temporal noise reduction operation and/or the spatial noise reduction operation in the noise reduction algorithm is performed.
  • more time-domain noise reduction operations and spatial-domain noise reduction operations may be selected in the second noise reduction process, so that the intensity of the second noise reduction process is higher than that of the first noise reduction process.
  • The number of temporal noise reduction operations and the number of spatial noise reduction operations can be adjusted separately, or both can be adjusted at the same time.
  • Alternatively, the number of temporal noise reduction operations and spatial noise reduction operations within the noise reduction algorithm may be fixed, and the overall number of times the noise reduction algorithm is run may be adjusted separately for the first and second noise reduction processes, so that the intensity of the second noise reduction process is higher than that of the first. It can be understood that, with the same parameters, a relatively higher number of operations results in a correspondingly higher noise reduction intensity.
  • The parameters adjusted in the noise reduction algorithms used in the first noise reduction process and the second noise reduction process may include parameters of the temporal noise reduction operation in the noise reduction algorithm.
  • an optional parameter is to adjust the number of sampling frames in the time-domain noise reduction operation.
  • For example, if the number of sampling frames of the temporal noise reduction operation in the second noise reduction process is set to 5 frames and that of the first noise reduction process is set to 3 frames, then a single temporal noise reduction operation of the second noise reduction process will have a higher noise reduction intensity than a single temporal noise reduction operation of the first noise reduction process.
  • optional parameters in the temporal noise reduction operation also include optical flow algorithm parameters, motion thresholds, pixel difference thresholds, etc.
  • optional parameters in the temporal noise reduction operation further include filtering strengths of one or more filters, parameters of a pixel fusion algorithm, and the like. Those skilled in the art can select one or more suitable parameters for adjustment according to the specific operation method of the time-domain noise reduction operation, which is not specifically limited.
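  • Purely as an illustration (the embodiments do not prescribe a particular temporal algorithm), the sketch below averages a window of sampled frames and uses a pixel-difference threshold to exclude moving pixels; the number of sampling frames and the threshold are examples of the adjustable parameters discussed above:

```python
import numpy as np

def temporal_denoise(frames: list, center: int,
                     num_sample_frames: int = 3,
                     pixel_diff_threshold: float = 12.0) -> np.ndarray:
    """Average the centre frame with its neighbours, but only where they differ little from it."""
    ref = frames[center].astype(np.float32)
    acc = ref.copy()
    weight = np.ones(ref.shape[:2], dtype=np.float32)
    half = num_sample_frames // 2
    for offset in range(-half, half + 1):
        if offset == 0 or not 0 <= center + offset < len(frames):
            continue
        candidate = frames[center + offset].astype(np.float32)
        diff = np.abs(candidate - ref).mean(axis=-1)             # per-pixel difference
        mask = (diff < pixel_diff_threshold).astype(np.float32)  # reject moving pixels
        acc += candidate * mask[..., None]
        weight += mask
    return (acc / weight[..., None]).astype(np.uint8)
```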
  • The parameters adjusted in the noise reduction algorithms used in the first noise reduction process and the second noise reduction process may also include parameters of the spatial noise reduction operation in the noise reduction algorithm.
  • Selectable parameters of the spatial noise reduction operation include, for example, a sampling radius, relative luminance thresholds, absolute luminance thresholds, edge preservation thresholds, filtering strengths of one or more filters, and the like. Those skilled in the art can select one or more suitable parameters for adjustment according to the specific operation method of the spatial noise reduction operation, which is not specifically limited.
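  • Likewise for illustration only, a spatial noise reduction sketch built on OpenCV's bilateral filter, in which the neighbourhood radius plays the role of a sampling radius and sigmaColor acts as an edge-preservation control; larger values smooth more noise but also more detail:

```python
import cv2
import numpy as np

def spatial_denoise(image_bgr: np.ndarray, radius: int = 5,
                    sigma_color: float = 30.0, sigma_space: float = 15.0) -> np.ndarray:
    """Edge-preserving spatial smoothing; larger sigmas give a higher noise reduction intensity."""
    return cv2.bilateralFilter(image_bgr, d=2 * radius + 1,
                               sigmaColor=sigma_color, sigmaSpace=sigma_space)
```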
  • In one exemplary embodiment, in the first noise reduction process, one temporal noise reduction operation and one spatial noise reduction operation are set, with the number of sampling frames of the temporal noise reduction operation set to one frame before and one frame after; in the second noise reduction process, five temporal noise reduction operations and three spatial noise reduction operations are set, with the number of sampling frames of the temporal noise reduction operation set to three frames before and three frames after, so that the intensity of the second noise reduction process is higher than that of the first noise reduction process.
  • Of course, those skilled in the art may also choose to add one or more other noise reduction operations to the second noise reduction process on top of the above settings; a noise reduction operation performed in any color space only needs to ensure that the image output after the second noise reduction process is in YUV format.
  • Using the same noise reduction algorithm in the first noise reduction process and the second noise reduction process while only adjusting the parameters and/or the number of operations, that is, using the same temporal noise reduction operation method and/or spatial noise reduction operation method in both processes, allows the intensity adjustment of the first and second noise reduction processes to be completed within the same architecture merely by adjusting the number of operations and/or parameters, which avoids designing separate noise reduction algorithms for the first noise reduction processing and the second noise reduction processing.
  • When fusing the Y component of the first YUV image with the UV components of the second YUV image, the methods that can be used include a pyramid fusion method or an image stitching fusion method (blend fusion); those skilled in the art can also select other suitable fusion methods according to actual needs, which are not specifically limited.
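  • As one hypothetical realization of pyramid fusion, the sketch below builds Laplacian pyramids of two equally sized single-channel images (for example, corresponding channels of the two YUV images) and blends them level by level; the blend weight and number of levels are illustrative assumptions:

```python
import cv2
import numpy as np

def laplacian_pyramid(img: np.ndarray, levels: int) -> list:
    """Build a Laplacian pyramid: detail layers plus a final low-pass residual."""
    pyr, cur = [], img.astype(np.float32)
    for _ in range(levels):
        down = cv2.pyrDown(cur)
        up = cv2.pyrUp(down, dstsize=(cur.shape[1], cur.shape[0]))
        pyr.append(cur - up)   # high-frequency detail layer
        cur = down
    pyr.append(cur)            # low-pass residual
    return pyr

def pyramid_fuse(img_a: np.ndarray, img_b: np.ndarray,
                 weight_a: float = 0.5, levels: int = 4) -> np.ndarray:
    """Fuse two same-sized single-channel 8-bit images level by level."""
    pyr_a = laplacian_pyramid(img_a, levels)
    pyr_b = laplacian_pyramid(img_b, levels)
    fused = [weight_a * a + (1.0 - weight_a) * b for a, b in zip(pyr_a, pyr_b)]
    # Collapse the fused pyramid back into a full-resolution image.
    out = fused[-1]
    for layer in reversed(fused[:-1]):
        out = cv2.pyrUp(out, dstsize=(layer.shape[1], layer.shape[0])) + layer
    return np.clip(out, 0, 255).astype(np.uint8)
```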
  • In some embodiments, the image to be denoised may be an original image generated by a sensor; that is, in step S802 of obtaining the image to be denoised, the RAW-format image generated by the sensor is directly used as the image to be denoised. A first noise reduction process is then performed on the image to be denoised to obtain a first YUV image, and a second noise reduction process is performed on it to obtain a second YUV image. In such embodiments, each of the first noise reduction processing and the second noise reduction processing may be a complete image signal processing procedure that includes the noise reduction operations, that is, the first noise reduction processing and the second noise reduction processing may include all of the other operations in the image signal processing flow.
  • Operation 1 and operation 2 shown in FIG. 8 may refer to, for example, the operations shown in FIG. 2 and any other possible operations, such as a black level operation, a white balance operation, a dead pixel correction operation, and so on. In such an embodiment, for the RAW-format image generated by the sensor, this is equivalent to performing two complete image signal processing passes on it.
  • In step S808, the finally obtained Y component of the first YUV image and the UV components of the second YUV image are fused.
  • Although the first noise reduction processing and the second noise reduction processing shown in FIG. 8 include exactly the same operations, in such an embodiment they are in fact two completely independent image signal processing flows. In other embodiments, the first noise reduction processing and the second noise reduction processing may also include different operations, for example, by adding the at least one downsampling operation and at least one upsampling operation described above to the second noise reduction processing, or other operations added by those skilled in the art according to actual needs, or, for example, by setting different orders for the operations in the first noise reduction processing and the second noise reduction processing. Those skilled in the art can arbitrarily adjust the operations included in the first noise reduction processing and the second noise reduction processing according to actual needs, as long as the noise reduction intensity of the second noise reduction processing is higher than that of the first noise reduction processing.
  • In some embodiments, the image to be denoised acquired in step S802 may also be a superimposed image of multiple original images generated by the sensor based on the same scene; that is, after multiple RAW-format original images output by the sensor for the same scene are received, a RAW stack is performed on them to generate a superimposed image, and the superimposed image is still an image in RAW format.
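  • A minimal sketch of such a RAW stack, assuming the frames are already aligned and simply averaging them (the embodiments do not specify the stacking method):

```python
import numpy as np

def raw_stack(raw_frames: list) -> np.ndarray:
    """Average several aligned RAW frames of the same scene; the result is still a RAW-format image."""
    stacked = np.mean(np.stack(raw_frames).astype(np.float32), axis=0)
    return stacked.astype(raw_frames[0].dtype)
```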
  • an image processing method 900 is also provided. Referring to FIG. 9 , in step S901 , an original image generated by a sensor or a superimposed image of multiple original images generated by a sensor based on the same scene is received. In step S902, the first-stage image signal processing is performed on the original image or the superimposed image to obtain an image to be denoised.
  • In step S903, a first noise reduction process is performed on the image to be denoised to obtain a first YUV image; a second noise reduction process is performed on the image to be denoised to obtain a second YUV image, the intensity of the second noise reduction process being higher than that of the first noise reduction process; and in step S905, the Y component of the first YUV image and the UV components of the second YUV image are fused to obtain a noise-reduced image.
  • In step S902, the first-stage image signal processing is performed on the original image or the superimposed image. The first-stage image signal processing may include one or more operations of the image signal processing flow other than the noise reduction operations; that is, in this embodiment, the first-stage image signal processing is first performed on the original image or superimposed image generated by the sensor to obtain the image to be denoised, and then the first noise reduction processing and the second noise reduction processing are respectively performed on the image to be denoised. The first-stage image signal processing may include one or more of the operations shown in FIG. 2, so that in such an embodiment these operations are no longer included in the first noise reduction processing and the second noise reduction processing. In other words, one or more operations unrelated to the noise reduction operations are completed once to obtain the image to be denoised, and the first and second noise reduction processing are then performed on it separately, which saves operation steps in the first and second noise reduction processing and thereby improves the efficiency of the image noise reduction processing.
  • the image signal processing in the first stage includes at least one operation of changing the format, so that the image to be denoised is in RGB format or YUV format.
  • Since the original image or the superimposed image received in step S901 is in RAW format, while the images finally obtained by the first noise reduction process and the second noise reduction process are in YUV format, the RAW-format image usually needs to be demosaiced into an RGB-format image and then converted into a YUV-format image.
  • the image signal processing in the first stage may include at least one operation of changing the format, so that the image to be denoised is an image in RGB format or YUV format.
  • For example, the first-stage image signal processing may include a demosaicing operation, so that the image to be denoised is an image in RGB format and demosaicing is no longer necessary in the first noise reduction processing and the second noise reduction processing.
  • In this case, the first-stage image signal processing can also include other operations performed on RAW-format and RGB-format images, thereby reducing the number of conversions between the RGB format and the YUV format in the entire image noise reduction process and further saving operation steps.
  • In other embodiments, the first-stage image signal processing may further include a demosaicing operation and an RGB2YUV operation, so that the image to be denoised is an image in YUV format and no image format conversion needs to be performed in the first noise reduction processing and the second noise reduction processing; similarly, the first-stage image signal processing in these embodiments may also include other operations performed on RAW-format and RGB-format images.
  • In some embodiments, the first-stage image signal processing may include at least one operation that changes the linear characteristics of the image, so that the image to be denoised is a nonlinear image.
  • The operation that changes the linear characteristics of the image can be any operation that changes the relationship between the pixel values in the image and the original photometric signal values, such as the gamma correction operation, which converts a linear RGB image into a nonlinear RGB image, or a tone mapping operation, in which a tone curve is used to perform nonlinear operations on the pixel values.
  • FIG. 10 shows a schematic diagram of the change of pixel values after the linear characteristics of an image are changed. It can be seen that most of the pixel values in the figure are amplified to different degrees, and this amplification also applies to noise; that is, when an operation that changes the linear characteristics of the image is performed, some noise is inevitably amplified. Therefore, adding at least one operation that changes the linear characteristics of the image to the first-stage image signal processing and then performing the first noise reduction processing and the second noise reduction processing allows the amplified noise to be reduced as well, which further improves the effect of the image noise reduction processing.
  • the image noise reduction processing method 900 may further include step S906 , performing a second stage of image signal processing on the noise reduction image to obtain an output image.
  • The second-stage image signal processing may be a continuation of the image signal processing operations performed on the noise-reduced image. In some embodiments, the noise-reduced image may be an image in YUV format, that is, no format conversion is performed after the Y component of the first YUV image is fused with the UV components of the second YUV image. In other embodiments, the noise-reduced image may also be an image in RGB format, that is, a format conversion is performed after the fusion of the Y component of the first YUV image and the UV components of the second YUV image to obtain a noise-reduced image in RGB format. Therefore, the second-stage image signal processing may include operations performed on RGB-format images as well as operations performed on YUV-format images.
  • For example, the second-stage image signal processing may include at least one of the following: an operation for adjusting color saturation, an operation for edge enhancement, an operation for image compression, and any other suitable operation selected by those skilled in the art according to actual needs.
  • In some embodiments, the first noise reduction processing and the second noise reduction processing may include only noise reduction operations, and all other operations of the image signal processing flow are placed in the first-stage image signal processing or the second-stage image signal processing. In the entire image noise reduction process, only the first noise reduction processing and the second noise reduction processing need to be performed separately on the image to be denoised, that is, only the noise reduction operations need to be run twice, while all other operations need to be performed only once, which greatly saves operation steps.
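  • The staged flow of method 900 can be sketched as follows; stage1_isp, stage2_isp, denoise_low, and denoise_high are hypothetical placeholders standing in for whatever concrete operations an implementation assigns to each stage:

```python
import numpy as np

def image_noise_reduction_900(raw_or_stacked: np.ndarray,
                              stage1_isp, denoise_low, denoise_high, stage2_isp) -> np.ndarray:
    """Sketch of method 900: stage-1 ISP once, two noise reduction passes, Y/UV fusion, stage-2 ISP once."""
    # Step S902: first-stage image signal processing (e.g. demosaic, gamma, RGB2YUV) runs only once.
    image_to_denoise = stage1_isp(raw_or_stacked)
    # Step S903: first (low-intensity) noise reduction -> first YUV image.
    first_yuv = denoise_low(image_to_denoise)
    # Second (high-intensity) noise reduction -> second YUV image.
    second_yuv = denoise_high(image_to_denoise)
    # Step S905: fuse the Y component of the first YUV image with the UV components of the second.
    fused = second_yuv.copy()
    fused[..., 0] = first_yuv[..., 0]
    # Step S906: second-stage image signal processing (e.g. saturation, edge enhancement, compression).
    return stage2_isp(fused)
```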
  • Embodiments of the present application further provide an image noise reduction processing apparatus 1100. Referring to FIG. 11, the apparatus includes one or more processors 1110 configured to: obtain an image to be denoised; perform a first noise reduction process on the image to be denoised to obtain a first YUV image; perform a second noise reduction process on the image to be denoised to obtain a second YUV image, the intensity of the second noise reduction process being higher than that of the first noise reduction process; and fuse the Y component of the first YUV image with the UV components of the second YUV image to obtain a noise-reduced image.
  • In some embodiments, when performing the second noise reduction process on the image to be denoised to obtain the second YUV image, the one or more processors 1110 are further configured to: perform at least one downsampling operation on the image to be denoised to obtain a reduced-resolution image to be denoised, and perform the second noise reduction process and at least one upsampling operation on the reduced-resolution image to be denoised to obtain the second YUV image, where the at least one upsampling operation keeps the second YUV image at the same resolution as the first YUV image.
  • the one or more processors 1110 are further configured to perform at least one downsampling operation on the image to be denoised during the second denoising process.
  • In some embodiments, the one or more processors 1110 are further configured to perform at least one upsampling operation after performing at least one downsampling operation on the image to be denoised in the second noise reduction process, where the at least one upsampling operation keeps the second YUV image and the first YUV image at the same resolution.
  • In some embodiments, the one or more processors 1110 are further configured to adjust the parameters and/or the number of operations of the noise reduction algorithm used in the first noise reduction process and the second noise reduction process, respectively, so that the intensity of the second noise reduction process is higher than that of the first noise reduction process.
  • the noise reduction algorithm includes a temporal noise reduction operation and/or a spatial noise reduction operation.
  • the number of operations includes:
  • the number of times the temporal noise reduction operation and/or the spatial noise reduction operation included in the noise reduction algorithm is performed.
  • the parameters include:
  • at least one of the following parameters of the temporal noise reduction operation in the noise reduction algorithm: sampling frame number, motion threshold, pixel difference threshold, optical flow algorithm parameters, filtering strength of one or more filters, and pixel fusion algorithm parameters.
  • the parameters include:
  • at least one of the following parameters of the spatial noise reduction operation in the noise reduction algorithm: sampling radius, filtering strength of one or more filters, relative brightness threshold, absolute brightness threshold, and edge preservation threshold.
  • a denoised image is obtained by fusing the Y component of the first YUV image and the UV component of the second YUV image using a pyramid fusion method or an image stitching fusion method.
  • the image to be denoised is an original image generated by a sensor, or a superimposed image of multiple original images generated by a sensor based on the same scene.
  • the one or more processors 1110 are further configured to, when obtaining the image to be denoised:
  • perform the first-stage image signal processing on the original image or the superimposed image to obtain the image to be denoised.
  • the image signal processing in the first stage includes at least one operation of changing the image format, so that the image to be denoised is an RGB format image or a YUV format image.
  • the image signal processing in the first stage further includes at least one operation of changing the linear characteristics of the image, so that the image to be denoised is a non-linear image.
  • In some embodiments, the one or more processors 1110 are further configured to: after the noise-reduced image is obtained by fusing the Y component of the first YUV image and the UV components of the second YUV image, perform the second-stage image signal processing on the noise-reduced image to obtain an output image.
  • the image signal processing in the second stage includes at least one of the following: an operation for adjusting color saturation, an operation for edge enhancement, and an operation for image compression.
  • Embodiments of the present application further provide an imaging device 1200. Referring to FIG. 12, the imaging device includes: a sensor 1210 for generating an image; and one or more processors 1220 configured to: obtain an image to be denoised from the sensor; perform a first noise reduction process on the image to be denoised to obtain a first YUV image; perform a second noise reduction process on the image to be denoised to obtain a second YUV image, the intensity of the second noise reduction process being higher than that of the first noise reduction process; and fuse the Y component of the first YUV image with the UV components of the second YUV image to obtain a noise-reduced image.
  • the imaging device 1200 may be any electronic device with imaging function, such as a camera, a mobile phone, a tablet computer, a laptop computer, a game console, a head-mounted display device, an access control system, an ATM, etc., which are not limited herein.
  • imaging device 1200 may be mounted on a suitable movable platform, which may include unmanned aerial vehicles (UAVs), automobiles, ships, airplanes, remote-controlled vehicles, and even living beings.
  • The imaging device 1200 may be electrically coupled to a portion of the movable platform (e.g., a processing unit, a control system, or a data storage) so that the data collected by the imaging device can be used for various functions of the movable platform (e.g., navigation, control, propulsion, communication with users or other devices, etc.).
  • the imaging device may be operably coupled to a portion of the movable platform (eg, processing unit, control system, data storage).
  • The steps described above as being performed by the one or more processors 1220 may also be performed by one or more processors of the movable platform.
  • the sensor 1210 may be any sensor capable of generating an image, such as a CCD image sensor or a CMOS image sensor, and the sensor 1210 is used to generate an image. Specifically, the image generated by the sensor 1210 is an original image in RAW format.
  • In some embodiments, the one or more processors 1220 receive a raw image generated by the sensor 1210 as the image to be denoised. In other embodiments, the one or more processors 1220 receive a superimposed image of multiple raw images generated by the sensor 1210 based on the same scene as the image to be denoised.
  • In still other embodiments, the one or more processors 1220 receive the original image generated by the sensor 1210, or the superimposed image of multiple original images generated by the sensor 1210 based on the same scene, and then perform the first-stage image signal processing on the original image or the superimposed image to obtain the image to be denoised. For the specific implementation, reference may be made to the image noise reduction processing method described above, which will not be repeated here.
  • the one or more processors 1220 may include field programmable gate arrays (FPGAs), application specific integrated circuits (ASICs), application specific standard products (ASSPs), digital signal processors (DSPs), central processing units (CPUs), graphics processing units (GPU), Vision Processing Unit (VPU), Complex Programmable Logic Device (CPLD), etc.
  • The one or more processors 1220 may be one or more processors of a terminal having an imaging function, that is, a built-in ISP; of course, the one or more processors 1220 may also be ISP chips specially used for image signal processing, that is, an external ISP, which is not specifically limited.
  • In some embodiments, when performing the second noise reduction process on the image to be denoised to obtain the second YUV image, the one or more processors 1220 are further configured to: perform at least one downsampling operation on the image to be denoised to obtain a reduced-resolution image to be denoised, and perform the second noise reduction process and at least one upsampling operation on the reduced-resolution image to be denoised to obtain the second YUV image, where the at least one upsampling operation keeps the second YUV image at the same resolution as the first YUV image.
  • the one or more processors 1220 are further configured to perform at least one downsampling operation on the image to be denoised during the second denoising process.
  • In some embodiments, the one or more processors 1220 are further configured to perform at least one upsampling operation after performing at least one downsampling operation on the image to be denoised in the second noise reduction process, where the at least one upsampling operation keeps the second YUV image and the first YUV image at the same resolution.
  • In some embodiments, the one or more processors 1220 are further configured to adjust the parameters and/or the number of operations of the noise reduction algorithm used in the first noise reduction process and the second noise reduction process, respectively, so that the intensity of the second noise reduction process is higher than that of the first noise reduction process.
  • the noise reduction algorithm includes a temporal noise reduction operation and/or a spatial noise reduction operation.
  • the number of operations includes:
  • the number of times the temporal noise reduction operation and/or the spatial noise reduction operation included in the noise reduction algorithm is performed.
  • the parameters include:
  • at least one of the following parameters of the temporal noise reduction operation in the noise reduction algorithm: sampling frame number, motion threshold, pixel difference threshold, optical flow algorithm parameters, filtering strength of one or more filters, and pixel fusion algorithm parameters.
  • the parameters include:
  • at least one of the following parameters of the spatial noise reduction operation in the noise reduction algorithm: sampling radius, filtering strength of one or more filters, relative brightness threshold, absolute brightness threshold, and edge preservation threshold.
  • the denoised image is obtained by fusing the Y component of the first YUV image and the UV component of the second YUV image using a pyramid fusion method or an image stitching fusion method.
  • the image to be denoised is an original image generated by the sensor 1210, or a superimposed image of multiple original images generated by the sensor 1210 based on the same scene.
  • the one or more processors 1220 are further configured to: when obtaining the image to be denoised:
  • perform the first-stage image signal processing on the original image or the superimposed image to obtain the image to be denoised.
  • the image signal processing in the first stage includes at least one operation of changing the image format, so that the image to be denoised is an RGB format image or a YUV format image.
  • the image signal processing in the first stage further includes at least one operation of changing the linear characteristics of the image, so that the image to be denoised is a nonlinear image.
  • In some embodiments, the one or more processors 1220 are further configured to: after the noise-reduced image is obtained by fusing the Y component of the first YUV image and the UV components of the second YUV image, perform the second-stage image signal processing on the noise-reduced image to obtain an output image.
  • the image signal processing in the second stage includes at least one of the following: an operation for adjusting color saturation, an operation for edge enhancement, and an operation for image compression.
  • Embodiments of the present application further provide a computer-readable storage medium 1300.
  • the computer-readable storage medium stores computer instructions 1310.
  • When the computer instructions 1310 are executed, any one of the above-described image noise reduction processing methods is implemented.
  • Computer-readable storage media may include volatile or non-volatile, magnetic, semiconductor, magnetic tape, optical, removable, non-removable, or other types of computer-readable storage media or computer-readable storage devices.
  • a computer-readable storage medium may be a storage unit or storage module having computer instructions stored thereon.
  • the computer-readable storage medium may be a disk or flash drive on which computer instructions are stored.
  • The modules/units may be implemented by one or more processors, such that the one or more processors become one or more special-purpose processors that execute the software instructions stored in a computer-readable storage medium to perform the dedicated functions of the modules/units.
  • Each block in the flowchart or block diagrams may represent a module, segment, or portion of code, where the module, segment, or portion of code includes one or more executable instructions for implementing the specified logical function(s).
  • the functions noted in the blocks may also occur out of the order noted in the figures. For example, two consecutive blocks may, in fact, be executed substantially concurrently, or may sometimes be executed in the reverse order, depending upon the functionality involved.
  • embodiments of the present application may be embodied as a method, system, or computer program product. Accordingly, embodiments of the present application may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware to allow dedicated components to perform the functions described above. Furthermore, embodiments of the present application may take the form of a computer program product embodied in one or more tangible and/or non-transitory computer readable storage media containing computer readable program code.
  • Non-transitory computer readable media in general form include, for example, floppy disks, flexible disks, hard disks, solid state drives, magnetic tapes or any other magnetic data storage medium, CD-ROM, any other optical data storage medium, any physical medium in the form of a hole, RAM, PROM and EPROM, FLASH-EPROM or any other flash memory, NVRAM, cache, registers, any other memory chip or film, and their networked versions.
  • Embodiments of the present application are described with reference to flowchart illustrations and/or block diagrams of methods, apparatus, and computer program products according to embodiments of the application. It will be understood that each process and/or block in the flowchart illustrations and/or block diagrams, and combinations of processes and/or blocks therein, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a computer, an embedded processor, or another programmable data processing apparatus to create a special-purpose machine, such that execution of the instructions via the processor of the computer or other programmable data processing apparatus creates means for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
  • These computer program instructions may also be stored in a computer-readable memory that directs a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory result in an article of manufacture comprising the instruction means that implement the processes.
  • In a typical configuration, a computing device includes one or more central processing units (CPUs), input/output interfaces, network interfaces, and memory.
  • Memory may take the form of volatile memory, random-access memory (RAM), and/or non-volatile memory, such as the read-only memory (ROM) or flash RAM of a computer-readable storage medium. Memory is an example of a computer-readable storage medium.
  • A computer-readable storage medium refers to any type of physical memory that can store information or data readable by a processor. Accordingly, a computer-readable storage medium may store instructions for execution by one or more processors, including instructions for causing the processors to perform steps or stages consistent with the embodiments described herein.
  • Computer-readable media include non-volatile and volatile media as well as removable and non-removable media, and information storage may be implemented by any method or technology. Information may be modules of computer-readable instructions, data structures, programs, or other data.
  • Examples of non-transitory computer-readable media include, but are not limited to, phase-change random-access memory (PRAM), static random-access memory (SRAM), dynamic random-access memory (DRAM), other types of random-access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technologies, compact disc read-only memory (CD-ROM), digital versatile disc (DVD) or other optical storage, cassette tapes, magnetic tape or disk storage or other magnetic storage devices, caches, registers, or any other non-transmission medium that can be used to store information that can be accessed by a computing device.
  • Computer readable storage media are non-transitory and do not include transitory media such as modulated data signals and carrier waves.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Image Processing (AREA)

Abstract

图像降噪处理方法、计算机可读存储介质、图像降噪处理装置和成像装置。图像降噪处理方法包括:获得待降噪图像;对所述待降噪图像进行第一降噪处理获得第一YUV图像;对所述待降噪图像进行第二降噪处理获得第二YUV图像,其中所述第二降噪处理的强度高于所述第一降噪处理的强度;将所述第一YUV图像的Y分量和所述第二YUV图像的UV分量融合获得降噪图像。本申请提供的这种图像降噪处理方法、计算机可读存储介质、图像降噪处理装置和成像装置能够在较大程度的去除噪声的同时保留图片的细节。

Description

图像降噪处理方法、装置及成像装置 技术领域
本申请涉及图像处理技术领域,具体涉及一种图像处理方法、装置以及成像装置。
背景技术
图像的降噪处理是为了去除图像中的噪点以提高图像的质量,但是高强度的降噪处理往往会导致图像的细节丢失,低强度的降噪处理则会导致图像中噪点残留较多,现有技术中无法解决保留细节和去除噪点之间的矛盾。
发明内容
本申请实施例提出一种图像处理方法、计算机可读存储介质、图像处理装置以及成像装置。
第一个方面,本申请的实施例提供了一种图像处理方法,包括:获得待降噪图像;对所述待降噪图像进行第一降噪处理获得第一YUV图像;对所述待降噪图像进行第二降噪处理获得第二YUV图像,其中所述第二降噪处理的强度高于所述第一降噪处理的强度;将所述第一YUV图像的Y分量和所述第二YUV图像的UV分量融合获得降噪图像。
第二个方面,本申请的实施例提供了一种计算机可读存储介质,所述计算机可读存储介质上存储有计算机指令,所述计算机指令被执行时,实现上述第一个方面所述的图像降噪处理方法。
第三个方面,本申请的实施例提供了一种图像处理装置,包括:一个或多个处理器,所述一个或多个处理器被配置成:获得待降噪图像;对所述待降噪图像进行第一降噪处理获得第一YUV图像;对所述待降噪图像进行第二降噪处理获得第二YUV图像,其中所述第二降噪处理的强度高于所述第一降噪处理的强度;将所述第一YUV图像的Y分量和所述第二YUV图像的UV分量融合获得降噪图像。
第四个方面,本申请的实施例提供了一种成像装置,包括:传感 器,用于生成图像;一个或多个处理器,所述一个或多个处理器被配置成:获得来自所述传感器的待降噪图像;对所述待降噪图像进行第一降噪处理获得第一YUV图像;对所述待降噪图像进行第二降噪处理获得第二YUV图像,其中所述第二降噪处理的强度高于所述第一降噪处理的强度;将所述第一YUV图像的Y分量和所述第二YUV图像的UV分量融合获得降噪图像。
本申请提供的图像降噪处理方法、计算机可读存储介质、图像降噪处理装置和成像装置对待降噪图像分别进行第一降噪处理和第二降噪处理并将获得的图像进行融合,从而能够在较大程度去除噪声的同时保留图片的细节。
附图说明
图1为一种示例性的成像流程示意图;
图2为一种示例性的图像信号处理流程示意图;
图3为YUV格式图像三个通道示意图;
图4为夜景拍摄下的低信噪比图像示意图;
图5为根据本申请一个实施例的图像降噪处理方法流程图;
图6为根据本申请一个实施例的对待降噪图像进行下采样运算的示意图;
图7为根据本申请一个实施例的在第二降噪处理过程中进行下采样运算的示意图;
图8为根据本申请又一个实施例的图像降噪处理方法流程图;
图9为根据本申请再一个实施例的图像降噪处理方法流程图;
图10为改变图像线性特征的运算的效果示意图;
图11为根据本申请实施例的图像信号处理装置示意图;
图12为根据本申请实施例的成像装置示意图;
图13为根据本申请实施例的计算机可读存储介质示意图。
具体实施方式
为使本申请的目的、技术方案和优点更加清楚,下面将结合本申 请实施例的附图,对本申请的技术方案进行清楚、完整地描述。显然,所描述的实施例是本申请的一个实施例,而不是全部的实施例。基于所描述的本申请的实施例,本领域普通技术人员在无需创造性劳动的前提下所获得的所有其他实施例,都属于本申请保护的范围。
需要说明的是,除非另外定义,本申请使用的技术术语或者科学术语应当为本申请所属领域内具有一般技能的人士所理解的通常意义。若全文中涉及“第一”、“第二”等描述,则该“第一”、“第二”等描述仅用于区别类似的对象,而不能理解为指示或暗示其相对重要性、先后次序或者隐含指明所指示的技术特征的数量,应该理解为“第一”、“第二”等描述的数据在适当情况下可以互换。若全文中出现“和/或”,其含义为包括三个并列方案,以“A和/或B”为例,包括A方案,或B方案,或A和B同时满足的方案。
图1示出了一种示例性的成像流程示意图,外界的光信号经过镜头组(Lens)到达成像传感器(Sensor)后,由传感器生成RAW格式的原始图像,本领域中也常将其称为Bayer图像,RAW格式的原始图像需要在中经过一系列的图像信号处理(ISP,Image Signal Processor)获得通常意义上人眼可见的图像,例如RGB格式的图像和YUV格式的图像。
图2为一种示例性的图像信号处理流程,需要注意的是,其仅示出了部分本申请实施例中提及到的运算,本申请中并不对图像信号处理中所包括的具体运算进行限定。同时还需要特别注意的是,图像信号处理中各种运算之间的处理顺序以及相互关系也不必如图中所示,本申请实施例的图像降噪处理方法可以应用在任何形式的图像处理流程之中。
在图2中示出的图像信号处理流程中,去马赛克运算202(DEMOSIC)用于将RAW格式的原始图像进行颜色插值处理后获得RGB图像。
RGB2YUV运算206用于将RGB格式转换为YUV格式,在不伴随其他处理时,RGB格式与YUV格式通常仅需要简单的公式运算即可实现相互转换,因此在整个图像信号处理过程中RGB格式与YUV格式 之间可以进行多次的转换,在本申请实施例中针对某一操作对象未限定具体格式时,通常表示该操作对象可以是YUV格式、RGB格式或其他常见的图片格式中的任意一种。
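As one concrete illustration of this point, assuming full-range BT.601 coefficients (an assumption; the text does not fix a particular standard) and floating-point RGB values in [0, 1], the RGB/YUV conversion can be sketched as:

import numpy as np

def rgb_to_yuv(rgb: np.ndarray) -> np.ndarray:
    """BT.601 full-range RGB -> YUV. rgb: H x W x 3 float array in [0, 1]."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    y = 0.299 * r + 0.587 * g + 0.114 * b
    u = -0.169 * r - 0.331 * g + 0.5 * b + 0.5   # Cb, offset into [0, 1]
    v = 0.5 * r - 0.419 * g - 0.081 * b + 0.5    # Cr, offset into [0, 1]
    return np.stack([y, u, v], axis=-1)

def yuv_to_rgb(yuv: np.ndarray) -> np.ndarray:
    """Inverse of rgb_to_yuv."""
    y, u, v = yuv[..., 0], yuv[..., 1] - 0.5, yuv[..., 2] - 0.5
    r = y + 1.402 * v
    g = y - 0.344 * u - 0.714 * v
    b = y + 1.772 * u
    return np.clip(np.stack([r, g, b], axis=-1), 0.0, 1.0)

Because the transform is linear and invertible, converting back and forth carries essentially no cost, which is why the pipeline can switch between RGB and YUV several times.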
伽玛校正运算204(GAMMA)是一种常用的将线性图像转换成非线性图像的运算,在这里线性图像和非线性图像是指图片中的像素值同原始的光信号值是否成线性关系。图像信号处理流程中还可能包括一个或多个能够改变图像线性特征的其他运算,例如色调调和运算(Tone mapping)。
上述的RGB格式图像为一种三通道图片格式,分为红色(R通道),绿色(G通道),蓝色(B通道),人眼对RGB的三个通道区分并不明显。YUV格式图像也是一种三通道图片格式,图3为YUV图像以及三个通道的示意图,分为一个亮度通道(Y通道)和两个色彩通道(U、V通道),与RGB格式不同的是,YUV格式图像的三个通道定义同人眼对图片的感知能力相关,人眼一般对于亮度的解析度更加敏感,因此YUV格式图像的亮度通道几乎包括了图片的所有细节。
在诸如夜间拍摄等拍摄场景中,由于光信号较弱,获得的图像将会拥有较大的噪声,即,信噪比较低,所导致的结果之一可能是图像的色彩失真,图4示出了单反相机和手机基于同一夜景所拍摄的图像示例,由于单反相机的镜头通常具有比手机镜头更高的感光度,因此在夜景拍摄中单反拥有相对较高的信噪比,信噪比差异导致了如图4的方框中示出的图片颜色的差异,手机镜头获得的信噪比较低的图像中,大楼的颜色更为昏暗。
图像降噪用于减少图像中噪点的影响,需要注意的是,尽管上述图4中示出的单反相机的图像拥有较高的信噪比,但是并不代表其不需要图像降噪处理。
尽管图像降噪处理可以减少图像中噪点的影响,但是当图像降噪处理的强度较高时,图像之中的细节也将会与噪点一同被抹平,导致图像的细节丢失。而图像降噪处理的强度较低时,尽管能够保留更多的细节,但是图像中将会残留较多的噪点,这样的矛盾存在于所有降噪处理过程中,并且在对信噪比较低的图像进行降噪处理时尤为突出。
尽管下述的多个本申请的实施例中描述了对单张图像进行降噪处理的图像降噪处理方法、装置、成像装置和计算机可读存储介质,但是根据本申请实施例的图像降噪处理方法、装置、成像装置和计算机可读存储介质完全可以应用在对视频图像进行降噪处理,例如,本领域技术人员可以选择对视频中的每一帧或至少部分帧中的图像进行如下任一实施例所述的图像降噪处理。
本申请的实施例提出了一种图像降噪处理方法,图5示出了根据本申请一个实施例的图像降噪处理方法的流程图。根据本申请实施例的图像降噪处理方法包括:
步骤S502:获得待降噪图像;
步骤S504:对所述待降噪图像进行第一降噪处理获得第一YUV图像;
步骤S506:对所述待降噪图像进行第二降噪处理获得第二YUV图像,其中所述第二降噪处理的强度高于所述第一降噪处理的强度;
步骤S508:将所述第一YUV图像的Y分量和所述第二YUV图像的UV分量融合获得降噪图像。
在一些实施例中,步骤S502获取的待降噪图像可以是图像处理流程中任一运算输出的图像,待降噪图像可以是RGB格式、YUV格式或其他任意格式,在一些实施例中,待降噪图像可以是由传感器直接输出的原始图像。在一些实施例中,待降噪图像还可以是来自任一渠道的一张图像,但是由于已经经过降噪处理的图像在降噪处理过程中丢失的图像细节无法挽回,因此为了获得更好的降噪处理效果,待降噪图像优选的为未经过降噪处理的图像。
在步骤S504和步骤S506中,分别对待降噪图像进行第一降噪处理和第二降噪处理以获得第一YUV图像和第二YUV图像,其中,第二降噪处理的强度高于第一降噪处理的强度。
第二降噪处理的强度高于第一降噪处理的强度,从而强度较低的第一降噪处理所获得的第一YUV图像将会保留更多的图像细节,相应的,也会留下较多的噪点,而强度较高的第二降噪处理所获得的第二YUV图像将会拥有更少的噪点,但也会相应的失去部分图像细节。
在一些实施例中,第一降噪处理和第二降噪处理中的降噪运算可以使用相同的降噪算法,第一降噪处理和第二降噪处理的强度可以通过调整其所使用的降噪算法中的参数和/或运算次数来进行调节,具体的调节方式将在下文中进行详细的阐述。在一些实施例中,第一降噪处理和第二降噪处理中还可以使用不同的降噪算法,例如在降噪算法中设置不同的降噪运算模块,以使得第二降噪处理的强度高于第一降噪处理的强度。
进一步,上述降噪算法输出的图像为YUV格式的图像,即,第一降噪处理和第二降噪处理是在YUV色彩空间中进行的降噪处理,但是第一降噪处理和第二降噪处理并不排斥在其他色彩空间中进行的降噪处理,例如,在步骤S508中获得了降噪图像后,仍然可以选择在其他色彩空间,例如RGB空间中对降噪图像进行进一步的降噪处理。优选地,在对待降噪图像进行第一降噪处理前不对其进行其他任何形式的降噪处理,使得第一降噪处理获得的第一YUV图像能够尽可能的保留更多的细节。
在一些实施例中,第一降噪处理和第二降噪处理中还可以包括一个或多个图像信号处理流程中的其他运算,例如当待降噪图像为RAW格式、RGB格式或其他格式时,第一降噪处理和第二降噪处理包括将待降噪图像的格式转换为YUV格式,然后进行降噪处理。又例如,第一降噪处理和第二降噪处理在降噪运算外还可以包括如图2中示出的图像处理流程中的去马赛克运算、伽马校正运算、RGB2YUV运算,降噪运算与这些运算之间的相互顺序不做具体的限定,在一些实施例中,第一降噪处理和第二降噪处理甚至可以包括图像信号处理中的所有可能涉及到的运算,即,第一降噪处理和第二降噪处理可以是包括了降噪运算的完整的图像信号处理过程,同样的,并不限定降噪运算与其他运算之间的顺序。
在步骤S508中,将第一YUV图像的Y分量与第二YUV图像的UV分量进行融合获得降噪图像。结合前述内容,人眼对于YUV格式图像中的Y通道中的图像细节更为敏感,因此,将保留了更多图像细节的第一YUV图像的Y分量与降噪效果更好的第二YUV图像的UV分 量进行融合后获得的降噪图像能够在获得较好的降噪效果的同时保留更多的细节,在保留细节和去除噪点之间做到两全其美。
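A minimal sketch of steps S502 to S508 as described above, assuming both noise-reduction results are same-resolution YUV arrays and that `denoise` stands in for some YUV-domain noise-reduction routine whose strength is set by its parameters (the routine itself is hypothetical here, not taken from the text):

import numpy as np

def fuse_dual_denoise(to_denoise_yuv, denoise, weak_params, strong_params):
    """Run a weak and a strong noise reduction, then take Y from the weak
    result (more detail) and UV from the strong result (less chroma noise).

    to_denoise_yuv : H x W x 3 YUV array
    denoise        : callable(yuv, **params) -> yuv, hypothetical NR routine
    """
    first_yuv = denoise(to_denoise_yuv, **weak_params)     # first (weaker) NR
    second_yuv = denoise(to_denoise_yuv, **strong_params)  # second (stronger) NR
    fused = np.empty_like(first_yuv)
    fused[..., 0] = first_yuv[..., 0]     # Y component from the weak result
    fused[..., 1:] = second_yuv[..., 1:]  # U, V components from the strong result
    return fused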
在一些实施例中,步骤S506中的对待降噪图像进行第二降噪处理获得第二YUV图像可以进一步包括:
步骤S5061:将所述待降噪图像进行至少一次下采样运算获得降低分辨率的待降噪图像;
步骤S5062:对所述降低分辨率的待降噪图像进行所述第二降噪处理和至少一次上采样运算获得第二YUV图像,所述至少一次上采样运算使所述第二YUV图像的分辨率高于所述降低分辨率的待降噪图像的分辨率。
在步骤S5061中,在对待降噪图像进行第二降噪处理之前,首先对其进行至少一次的下采样运算,下采样运算是指使用本领域技术人员熟知的对图像进行下采样的方法之一对待降噪图像进行下采样,例如对于矩阵形式的图像而言,下采样运算通常是将原图像的S*S窗口内的图像变成1个像素点,该像素点的值就是窗口内所有像素的均值。
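A minimal sketch of the S×S mean-window downsampling described above, assuming a NumPy array whose height and width are divisible by S:

import numpy as np

def downsample_mean(img: np.ndarray, s: int) -> np.ndarray:
    """Replace each non-overlapping S x S window by its mean value.
    img: H x W x C array with H and W divisible by s."""
    h, w, c = img.shape
    return img.reshape(h // s, s, w // s, s, c).mean(axis=(1, 3))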
可以理解地,噪点在图像中是随机出现的,上述窗口内的像素点可能包括噪点与正常的像素点,因此这样取均值的操作相当于将原图像中的噪点与正常像素点进行了均值处理,实际上也相当于一种降噪处理,本领域技术人员可以选择对待降噪图像进行任意次数的下采样运算,对此不做具体的限定。在进行第二降噪处理之前进行的下采样运算实际上变相的增加了第二降噪处理的降噪强度,使得第二降噪处理获得的第二YUV图像中的噪点更少。
进一步,参照图6,本领域技术人员可以选择在进行第二降噪处理之前的任何合适的阶段进行下采样处理,然而需要注意的是,由于下采样运算基于像素均值来进行,其必然将会导致图像细节的丢失,因此仅在第二降噪处理中针对降低分辨率的待降噪图像进行处理,而在第一降噪处理的过程中,针对原待降噪图像进行处理,也就是说,在这样的实施例中,自第一次下采样运算起,图像信号处理过程被分成了两路,其中一路针对原待降噪图像进行第一降噪处理,而另一路针对降低分辨率后的待降噪图像进行第二降噪处理。
在步骤S5062中,对降低分辨率后的待降噪图像进行第二降噪处理,而后将其进行至少一次上采样运算获得第二YUV图像,使得第二YUV图像的分辨率高于降低分辨率后的待降噪图像。
上采样运算同样可以使用本领域技术人员熟知的上采样方法,例如采用内插值的方法,在原有图像像素的基础上在像素点之间采用合适的插值算法插入新的元素,使得图像的分辨率提升,本领域技术人员同样可以选择进行任意次数的上采样运算。
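Correspondingly, interpolation-based upsampling can be sketched as follows; bilinear interpolation (order=1) is used here purely as one example of inserting new values between existing pixels, other interpolation kernels work equally well:

import numpy as np
from scipy.ndimage import zoom

def upsample_bilinear(img: np.ndarray, factor: float) -> np.ndarray:
    """Upsample the two spatial axes of an H x W x C image by `factor`
    using bilinear interpolation; the channel axis is left unchanged."""
    return zoom(img.astype(np.float32), (factor, factor, 1), order=1)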
在一些实施例中,至少一次上采样运算使得第二YUV图像拥有与第一YUV图像相同的分辨率,从而在后续将第一YUV图像的Y分量与第二YUV图像的UV分量进行融合的步骤中能够适用更多类型的融合算法,需要注意的是,尽管经过至少一次下采样运算和至少一次上采样运算的第二YUV图像拥有与第一YUV图像相同的分辨率,下采样运算的次数和上采样运算的次数之间并无关联,例如,本领域技术人员可以选择进行5次下采样运算,而只进行1次上采样运算使第二YUV图像与第一YUV图像拥有相同的分辨率。
在一些实施例中,至少一次上采样运算可以仅使得第二YUV图像拥有比降低分辨率的待降噪图像更高的分辨率,而不必使得第二YUV图像和第一YUV图像拥有相同的分辨率,在这样的实施例中,可以在将第一YUV图像的Y分量和第二YUV图像的UV分量进行融合的步骤中再进一步的对第二YUV图像进行上采样运算或者下采样运算,使第二YUV图像和第一YUV图像具有相同的分辨率。或者,可以在将第一YUV图像的Y分量和第二YUV图像的UV分量进行融合的步骤中,选择能够对不同分辨率的图像进行融合的融合方法。
在一些实施例中,参照图7,步骤S702中获得待降噪图像,步骤S704中对待降噪图像进行第一降噪处理获得第一YUV图像,步骤S706中对待降噪图像进行第二降噪处理获得第二YUV图像,第二降噪处理的强度高于第一降噪处理,其中,对待降噪图像进行第二降噪处理的过程中对待降噪图像进行至少一次下采样运算,即,上述至少一次下采样运算可以是在第二降噪处理的运算过程中进行,而后,在步骤S708中,将第一YUV图像的Y分量和第二YUV图像的UV分量 融合获得降噪图像。
需要注意的是,尽管图7中并未示出第一降噪处理中具体的降噪运算,但是第一降噪处理可以包括如图中所示的第二降噪处理中的所有运算,区别在于,第一降噪处理中的这些运算之间不插入下采样运算和上采样运算。还需要注意的是,尽管图7中示出了第二降噪处理包括降噪运算1‐4,实际上第二降噪处理还可以包括其他合适的运算,并且在这些运算步骤之间也可以插入至少一次下采样运算或至少一次上采样运算,对此不做具体的限定。同样的,在第二降噪处理的过程中进行至少一次下采样运算也能够使得第二YUV图像拥有更少的噪点。
在一些实施例中,步骤S706中,对待降噪图像进行第二降噪处理过程中对所述待降噪图像进行至少一次下采样运算后还进行至少一次上采样运算,至少一次上采样运算使所述第二YUV图像和所述第一YUV图像保持相同的分辨率。
在这些实施例中,下采样运算和上采样运算被插入在第二降噪处理的过程中,同样的,仅选择在第二降噪处理过程中加入下采样运算而不选择在第一降噪处理中加入下采样运算。进一步,可以理解地,本领域技术人员可以对上述实施例中的下采样运算和上采样运算的设置方式进行任意的组合,例如,对待降噪图像进行下采样运算后获得降低分辨率的待降噪图像,而后在第二降噪处理的过程中再继续插入下采样运算和上采样运算。
在一些实施例中,使第二降噪处理的强度高于第一降噪处理的强度的方法可以是:分别调整第一降噪处理和第二降噪处理所使用的降噪算法的参数和/或运算次数。
第一降噪处理和第二降噪处理中所使用的降噪算法可以包括时域降噪运算和/或空域降噪运算。时域降噪运算和/或空域降噪运算的方法可以是本领域技术人员熟知的任何方法,对此不做具体的限定。在一些实施例中,第一降噪处理和第二降噪处理还可以包括其他任何能够在YUV空间中进行的降噪运算。在一些实施例中,第一降噪处理可以仅包括时域降噪运算和空域降噪运算,而第二降噪处理在时域 降噪运算和空域降噪运算之外还可以包括其他能够在YUV空间中进行的降噪运算。
在一些实施例中,分别调整第一降噪处理和第二降噪处理所使用的降噪算法的运算次数时,可以是调整降噪算法中的时域降噪运算和/或空域降噪运算的次数。
例如,可以选择在第二降噪处理中设置更多的时域降噪运算次数和空域降噪运算次数,以使得第二降噪处理的强度高于第一降噪处理的强度。在一些实施例中,时域降噪运算的次数和空域降噪运算的次数可以分别的进行调整,在一些实施例中,时域降噪运算的次数和空域降噪运算的次数也可以同时进行调整。在一些实施例中,降噪算法中的时域降噪运算次数和控制降噪运算次数可以是固定的,而分别调整第一降噪处理和第二降噪处理的降噪算法在整体上的运行次数,从而使得第二降噪处理的强度高于第一降噪处理的强度。可以理解地,在参数相同的情况下,相对较高的运算次数将会使得降噪的强度相应地提高。
在一些实施例中,分别调整第一降噪处理和第二降噪处理所使用的降噪算法的参数中,所述参数可以包括降噪算法中的时域降噪运算的参数。
经典的时域降噪运算中利用时序信息,通过待降噪图像的前后相邻帧信息来进行降噪。因此,可选择的一种参数是调节时域降噪运算中的采样帧数,例如将第二降噪处理中的时域降噪运算参数定为5帧,第一降噪处理中的时域降噪运算参数定为3帧,则第二降噪处理的单次时域降噪运算将会比第一降噪处理中的单次时域降噪运算拥有更高的降噪强度。在一些实施例中,时域降噪运算中可选用的参数还包括光流算法参数、动作阈值、像素差阈值等,这些参数可以为降噪处理中的特征点追踪和噪点判断设定阈值,从而影响降噪的强度。在一些实施例中,时域降噪运算中可选用的参数还包括一个或多个滤波器的滤波强度以及像素融合算法的参数等。本领域技术人员可以根据所使用的时域降噪运算的具体运算方法来选择一个或多个合适的参数进行调整,对此不做具体的限定。
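For illustration only (no optical flow or feature tracking, which the text notes real implementations may add), a toy temporal noise reduction in which the number of reference frames and a pixel-difference threshold act as strength parameters:

import numpy as np

def temporal_denoise(frames, center_idx, num_ref_frames=1, pixel_diff_thresh=0.05):
    """Average the centre frame with up to `num_ref_frames` frames on each side,
    but only at pixels whose difference from the centre frame stays below
    `pixel_diff_thresh` (a crude stand-in for motion / pixel-difference thresholds)."""
    center = frames[center_idx].astype(np.float32)
    acc = center.copy()
    count = 1.0
    lo = max(0, center_idx - num_ref_frames)
    hi = min(len(frames), center_idx + num_ref_frames + 1)
    for i in range(lo, hi):
        if i == center_idx:
            continue
        ref = frames[i].astype(np.float32)
        mask = np.abs(ref - center) < pixel_diff_thresh   # treat large changes as motion
        acc += np.where(mask, ref, center)                # fall back to the centre pixel
        count += 1.0
    return acc / count

Raising `num_ref_frames` (e.g. from 1 to 3 neighbouring frames) increases the averaging and therefore the denoising strength, which is the effect the sampling-frame-count parameter controls.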
在一些实施例中,分别调整第一降噪处理和第二降噪处理所使用的降噪算法的参数中,所述参数可以包括降噪算法中的空域降噪运算的参数。
经典的空域降噪运算中针对单帧图像来进行处理,利用周围区域的像素信息来进行降噪,因此,可选择的一种参数是空域降噪运算的采样半径。在一些实施例中,可选择的参数还包括相对亮度阈值、绝对亮度阈值、边缘保留阈值、一个或多个滤波器的滤波强度等。本领域技术人员可以根据所使用的空域降噪运算的具体运算方法来选择一个或多个合适的参数进行调整,对此不做具体的限定。
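Likewise, a toy spatial noise reduction in which the sampling radius and a relative-brightness threshold control strength; a production ISP would typically use edge-preserving filters such as bilateral or non-local-means filtering instead:

import numpy as np

def spatial_denoise(channel, radius=1, brightness_thresh=0.1):
    """Mean-filter a single channel over a (2*radius+1)^2 window, but only
    average in neighbours whose value is within `brightness_thresh` of the
    centre pixel (a crude edge-preserving rule)."""
    h, w = channel.shape
    padded = np.pad(channel, radius, mode='edge')
    acc = np.zeros((h, w), dtype=np.float32)
    cnt = np.zeros((h, w), dtype=np.float32)
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            nb = padded[radius + dy: radius + dy + h, radius + dx: radius + dx + w]
            mask = np.abs(nb - channel) <= brightness_thresh
            acc += np.where(mask, nb, 0.0)
            cnt += mask
    return acc / np.maximum(cnt, 1.0)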
进一步的,本领域技术人员可以对上述实施例中所阐述的方法进行任意的组合使用,例如,在第一降噪处理中设置1次时域降噪运算和1次空域降噪运算,其中时域降噪运算中采样帧数设定为前后1帧,在第二降噪处理中,设置5次时域降噪运算和3次空域降噪运算,其中,时域降噪运算中采样帧数设定为前后3帧,从而使得第二降噪处理的强度高于第一降噪处理的强度。又例如,本领域技术人员还可以选择在上述设置的基础上,在第二降噪处理中额外加入一种或多种其他的降噪运算,对于额外加入的降噪运算,可以是在其他色彩空间中进行的降噪运算,只需保证第二降噪处理结束后输出的图像为YUV格式。
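The contrast in the preceding example might be captured in configuration form roughly as follows; the pass counts and reference-frame counts are the example numbers from the text, while the spatial radius values are hypothetical additions:

# Hypothetical strength profiles: the second pass uses more iterations and
# more reference frames than the first, so its overall strength is higher.
FIRST_DENOISE_PROFILE = {
    "temporal_passes": 1, "temporal_ref_frames": 1,   # +/- 1 neighbouring frame
    "spatial_passes": 1,  "spatial_radius": 1,
}
SECOND_DENOISE_PROFILE = {
    "temporal_passes": 5, "temporal_ref_frames": 3,   # +/- 3 neighbouring frames
    "spatial_passes": 3,  "spatial_radius": 2,
}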
在一些实施例中,较优的选择是在第一降噪处理和第二降噪处理中使用相同的降噪算法,而仅在参数和/或运算次数上进行调整,即,在第一降噪处理和第二降噪处理中使用相同的时域降噪运算方法和/或空域降噪运算方法,由此可使用同样的架构,仅凭借调整运算次数和/或参数来完成第一降噪处理和第二降噪处理的强度调整,避免了为第一降噪处理和第二降噪处理单独设计降噪算法。
在一些实施例中,将第一YUV图像的Y分量和第二YUV图像的UV分量进行融合获得降噪图像时,可采用的方法包括金字塔融合方法,或者图像拼接融合方法(Blend融合),本领域技术人员也可以根据实际需求选择其他合适的融合方法,对此不做具体的限定。
在一些实施例中,参照图8,待降噪图像可以是传感器生成的原 始图像,即,在步骤S802获得待降噪图像中,直接以传感器生成的RAW格式图像作为待降噪图像,在步骤S804中对待降噪图像进行第一降噪处理获得第一YUV图像,步骤S806中对待降噪图像进行第二降噪处理获得第二YUV图像,此时,第一降噪处理和第二降噪处理可以是包括了降噪运算的完整的图像信号处理过程,即,第一降噪处理和第二降噪处理可以包括图像信号处理过程中的其他所有运算。
图8中示出的运算1和运算2可以泛指例如图2中示出的运算以及其他任何可能的运算,例如黑电平运算、白平衡运算、坏点校正运算等,在这样的实施例中,对于传感器生成的RAW格式图像而言,相当于对其进行了两次完整的图像信号处理。在步骤S808中将最终得到的第一YUV图像的Y分量和第二YUV图像的UV分量进行融合。
需要注意的是,尽管图8中示出的第一降噪处理和第二降噪处理中包括了完全相同的运算,但是由于这样的实施例中第一降噪处理和第二降噪处理实际上是两条完全独立的图像信号处理流程,第一降噪处理和第二降噪处理中还可以包括不同的运算,例如在第二降噪处理中加入如前述内容所述的至少一次下采样运算和至少一次上采样运算,或本领域技术人员根据实际需求增加的其他运算,又例如为第一降噪处理和第二降噪处理中的各运算设置不同的运算顺序等,本领域技术人员可以根据实际需求对第一降噪处理和第二降噪处理中所包含的运算进行任意的调整,只需保证第二降噪处理中的降噪运算强度高于第一降噪处理。
在一些实施例中,步骤S802中获取的待降噪图像还可以是传感器基于同一场景生成的多个原始图像的叠加图像,即,在接收到传感器基于同一场景输出的多张RAW格式原始图像后,对其进行堆栈叠加(RAW Stack)生成一张叠加图像,叠加图像仍然为RAW格式的图像,进一步的处理可参照上述图8中示出的实施例,在此不再赘述。
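A minimal sketch of the RAW stacking step, assuming the simplest possible combination (a per-pixel average of aligned, same-exposure frames); actual stacking may weight or align the frames differently:

import numpy as np

def raw_stack(raw_frames):
    """Average several same-scene RAW (Bayer) frames into one stacked frame.
    raw_frames: iterable of H x W arrays with identical shape."""
    frames = np.stack([f.astype(np.float32) for f in raw_frames], axis=0)
    return frames.mean(axis=0)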
在一些实施例中还提供一种图像处理方法900,参照图9,在步骤S901中,接收传感器生成的原始图像或者接收传感器基于同一场景生成的多个原始图像的叠加图像。在步骤S902中,对原始图像或叠加图像进行第一阶段的图像信号处理获得待降噪图像。而后在步骤 S903中对待降噪图像进行第一降噪处理获得第一YUV图像,在步骤S904中对待降噪图像进行第二降噪处理获得第二YUV图像,第二降噪处理的强度高于第一降噪处理,在步骤S905中,将第一YUV图像的Y分量和第二YUV图像的UV分量融合获得降噪图像。
步骤S902中,对原始图像或叠加图像进行第一阶段的图像信号处理,第一阶段的图像信号处理可以包括图像信号处理流程中的除降噪运算外的一个或多个运算,即,在本实施例中,对传感器生成的原始图像和叠加图像先进行第一阶段的图像信号处理后获得待降噪图像,然后再对待降噪图像分别进行第一降噪处理和第二降噪处理获得第一YUV图像和第二YUV图像。
第一阶段的图像信号处理可以包括如图2中示出的一种或多种运算,从而在这样的实施例中,第一降噪处理和第二降噪处理中将不再包括这些运算,即,先完成与第一降噪处理和第二降噪处理中的降噪运算无关的一个或多个运算获得待降噪图像,然后再分别对待降噪图像进行第一降噪处理和第二降噪处理,从而节省了第一降噪处理和第二降噪处理中的运算步骤,使得图像降噪处理的效率得到提高。
在一些实施例中,第一阶段的图像信号处理包括至少一次改变格式的运算,使得待降噪图像为RGB格式或YUV格式。可以理解的,步骤S901中接收的原始图像或叠加图像均为RAW格式,而第一降噪处理和第二降噪处理中最终获得的为YUV格式的图像。而在通常使用的图像信号处理流程中,通常会对RAW格式图像进行去马赛克运算后转换为RGB格式的图像,然后再将RGB格式的图像转换为YUV格式的图像,相对于RGB格式与YUV格式的相互转换而言,RAW格式向RGB格式转换的运算步骤更为复杂,因此,第一阶段的图像信号处理可以包括至少一次改变格式的运算,使得待降噪图像为RGB格式或YUV格式图像。
例如,在一些实施例中,第一阶段的图像信号处理可以包括去马赛克运算,此时待降噪图像为RGB格式的图像,使得第一降噪处理和第二降噪处理中不必进行去马赛克运算,节省运算步骤,当然,第一阶段的图像信号处理还可以包括其他对RAW格式图像和RGB格式 图像进行的运算,从而减少了整个图像降噪处理中RGB格式图像和YUV格式图像之间的转换次数,进一步节省运算步骤。
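For orientation only, the crudest possible demosaicing step (collapsing each 2×2 RGGB cell into one half-resolution RGB pixel); a practical first-stage ISP uses far more elaborate colour interpolation:

import numpy as np

def demosaic_rggb_half(raw: np.ndarray) -> np.ndarray:
    """Collapse an RGGB Bayer mosaic into a half-resolution RGB image.
    raw: H x W array with even H and W, cell pattern R G / G B."""
    r  = raw[0::2, 0::2].astype(np.float32)
    g1 = raw[0::2, 1::2].astype(np.float32)
    g2 = raw[1::2, 0::2].astype(np.float32)
    b  = raw[1::2, 1::2].astype(np.float32)
    g  = (g1 + g2) / 2.0                      # average the two green samples
    return np.stack([r, g, b], axis=-1)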
又例如,在一些实施例中,第一阶段的图像信号处理还可以包括包括去马赛克运算和RGB2YUV运算,使得待降噪图像为YUV格式的图像,使得第一降噪处理和第二降噪处理中不必再进行图像格式的转换的运算,同样的,这些实施例中第一阶段的图像信号处理还可以包括其他对RAW格式图像和RGB格式图像进行的运算。
在一些实施例中,第一阶段的图像信号处理可以包括至少一次改变图像线性特征的运算,待降噪图像为非线性图像。
改变图像线性特征的运算可以是任何会改变图像中的像素值与原始的光度信号值之间的关系的运算,例如伽马校正运算中,将线性的RGB图片转换为非线性的RGB图片,又例如在色调映射运算中,使用Tone曲线对像素值进行非线性的运算。
图10中示出了一种图像的线性特征发生改变后的像素值变化示意图,可以得知,图中的大部分像素值都有不同程度的放大,而这其中也包括了噪点,即,在进行图像使图像线性特征发生改变的运算时,无法避免的会将一些噪点也进行放大,因此,在第一阶段的图像信号处理中,加入至少一次改变图像线性特征的运算,而后再进行第一降噪处理和第二降噪处理,可以使得被放大的噪点也得到降噪,进一步提升了图像降噪处理的效果。
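A toy illustration of how a non-linear operation such as gamma correction amplifies values, dark-region noise included, which is why performing the two noise-reduction passes after such an operation also suppresses the amplified noise; γ = 2.2 is just an example value:

import numpy as np

def gamma_correct(linear_img: np.ndarray, gamma: float = 2.2) -> np.ndarray:
    """Apply a power-law gamma to linear values in [0, 1]; dark values are
    boosted the most, and so is any noise riding on them."""
    return np.clip(linear_img, 0.0, 1.0) ** (1.0 / gamma)

# For example, a dark pixel of 0.010 +/- 0.005 maps to roughly 0.123 with a
# spread of about +/- 0.03, so the absolute noise amplitude grows.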
在一些实施例中,仍然参照图9,图像降噪处理方法900还可以包括步骤S906,对降噪图像进行第二阶段的图像信号处理获得输出图像。
第二阶段的图像信号处理可以是对降噪图像继续进行的图像信号处理运算,降噪图像可以是YUV格式的图像,即,将第一YUV图像的Y分量与第二YUV图像的UV分量融合后获得的降噪图像,不再进行格式转换。降噪图像还可以是RGB格式的图像,即,将第一YUV图像的Y分量与第二YUV图像的UV分量融合后,再进行格式转换,获得RGB格式的降噪图像。因此,在第二阶段的图像信号处理中,可以包括针对RGB格式图像进行的运算,也可以包括针对YUV格式 图像进行的运算,例如,在一些实施例中,第二阶段的图像信号处理可以包括以下至少之一:用于调整色彩饱和度的运算,用于边缘增强的运算,用于图像压缩的运算,以及本领域技术人员根据实际需求选择的其他任何合适的运算。
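As one illustrative instance of the second-stage operations listed above, colour-saturation adjustment in the YUV domain can be as simple as scaling the chroma channels about their midpoint (full-range values in [0, 1] assumed):

import numpy as np

def adjust_saturation_yuv(yuv: np.ndarray, gain: float) -> np.ndarray:
    """Scale U and V about 0.5; gain > 1 increases saturation, gain < 1
    reduces it, and Y (luma) is left untouched."""
    out = yuv.astype(np.float32).copy()
    out[..., 1:] = np.clip((out[..., 1:] - 0.5) * gain + 0.5, 0.0, 1.0)
    return out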
在上述图像降噪处理方法900中,第一降噪处理和第二降噪处理可以仅包括降噪运算,而将图像信号处理流程中的其他所有运算均放在第一阶段的图像信号处理或第二阶段的图像信号处理中进行,使得整个图像降噪处理中,只有第一降噪处理和第二降噪处理需要分别对待降噪图像进行,即,只有降噪运算需要运行两次,而其他所有的运算均只需要进行一次,从而大大节省了运算的步骤。
本申请的实施例还提供一种图像降噪处理装置1100,参照图11,包括,一个或多个处理器1110,所述一个或多个处理器1110被配置成:获得待降噪图像;
对待降噪图像进行第一降噪处理获得第一YUV图像;
对待降噪图像进行第二降噪处理获得第二YUV图像,其中第二降噪处理的强度高于所述第一降噪处理的强度;
将第一YUV图像的Y分量和第二YUV图像的UV分量融合获得降噪图像。
在一些实施例中,一个或多个处理器1110还被配置成在对所述待降噪图像进行第二降噪处理获得第二YUV图像时:
对待降噪图像进行至少一次下采样运算获得降低分辨率的待降噪图像;
对所述降低分辨率的待降噪图像进行第二降噪处理和至少一次上采样运算获得第二YUV图像,所述至少一次上采样运算使第二YUV图像的分辨率高于所述降低分辨率的待降噪图像。
在一些实施例中,所述至少一次上采样运算使第二YUV图像与第一YUV图像保持相同的分辨率。
在一些实施例中,一个或多个处理器1110还被配置成在第二降噪处理过程中对待降噪图像进行至少一次下采样运算。
在一些实施例中,一个或多个处理器1110还被配置成在第二降 噪处理过程中对待降噪图像进行至少一次下采样运算后还进行至少一次上采样运算,所述至少一次上采样运算使第二YUV图像和第一YUV图像保持相同的分辨率。
在一些实施例中,一个或多个处理器1110还被配置成分别调整第一降噪处理和第二降噪处理所使用降噪算法的参数和/或运算次数,使第二降噪处理的强度高于第一降噪处理的强度。
在一些实施例中,降噪算法包括:时域降噪运算和/或空域降噪运算。
在一些实施例中,分别调整所述第一降噪处理和所述第二降噪处理所使用降噪算法的运算次数中,运算次数包括:
降噪算法包括的时域降噪运算和/或空域降噪运算的运算次数。
在一些实施例中,分别调整所述第一降噪处理和所述第二降噪处理所使用降噪算法的参数中,参数包括:
降噪算法中的时域降噪运算的以下参数的至少之一:采样帧数、动作阈值、像素差阈值、光流算法参数、一个或多个滤波器的滤波强度和像素融合算法参数。
在一些实施例中,分别调整第一降噪处理和第二降噪处理所使用的降噪算法的参数中,参数包括:
降噪算法中的所述空域降噪运算的以下参数的至少之一:采样半径、一个或多个滤波器的滤波强度、相对亮度阈值、绝对亮度阈值和边缘保留阈值。
在一些实施例中,使用金字塔融合方法或图像拼接融合方法将第一YUV图像的Y分量和所述第二YUV图像的UV分量融合获得降噪图像。
在一些实施例中,待降噪图像为传感器生成的原始图像,或者传感器基于同一场景生成的多个原始图像的叠加图像。
在一些实施例中,一个或多个处理器1110还备配置成在获得待降噪图像时:
接收传感器生成的原始图像,或者,接收传感器基于同一场景生成的多个原始图像的叠加图像;
对原始图像或叠加图像进行第一阶段的图像信号处理获得待降噪图像。
在一些实施例中,第一阶段的图像信号处理包括至少一次改变图像格式的运算,使待降噪图像为RGB格式图像或YUV格式图像。
在一些实施例中,第一阶段的图像信号处理还包括至少一次改变图像线性特征的运算,使待降噪图像为非线性图像。
在一些实施例中,一个或多个处理器1110还被配置成:将第一YUV图像的Y分量和第二YUV图像的UV分量融合获得降噪图像后,对降噪图像进行第二阶段的图像信号处理,获得输出图像。
在一些实施例中,第二阶段的图像信号处理包括以下至少之一:
用于调整色彩饱和度的运算、用于边缘增强的运算和用于图像压缩的运算。
需要说明的是,上述任一实施例中对图像降噪处理方法的描述,也适用于本申请实施方式的图像降噪处理装置1100,其实现原理类似,在此不再赘述。
本申请的实施例还提供一种成像装置1200,参照图12,包括:
传感器1210,用于生成图像;
一个或多个处理器1220,一个或多个处理器1220被配置成:
获得来自传感器1210的待降噪图像;
对待降噪图像进行第一降噪处理获得第一YUV图像;
对待降噪图像进行第二降噪处理获得第二YUV图像,其中第二降噪处理的强度高于第一降噪处理的强度;
将第一YUV图像的Y分量和第二YUV图像的UV分量融合获得降噪图像。
成像装置1200可以是任何具有成像功能的电子设备,例如相机、手机、平板电脑、手提电脑、游戏机、头显设备、门禁系统、柜员机等,在此不作限制。在一些实施例中,成像装置1200可以搭载在合适的可移动平台上,可移动平台可以包括无人飞行器(UAV)、汽车、轮船、飞机、遥控车,甚至是生物。在一些实施例中,成像装置1200可以与可移动平台的一部分(例如,处理单元、控制系统、数据存储 器)电耦合,从而使成像装置收集的数据能够用于可移动平台的各种功能(例如,导航、控制、推进、与用户或者其他装置通信等)。成像装置可以可操作地耦合于可移动平台的一部分(例如,处理单元、控制系统、数据存储器)。在一些实施例中,上述由一个或多个处理器1220完成的步骤也可以由可移动平台的一个或多个处理器完成。
传感器1210可以是任何能够生成图像的传感器,例如CCD图像传感器或CMOS图像传感器,传感器1210用于生成图像,具体地,传感器1210生成的图像为RAW格式的原始图像。在一些实施例中,一个或多个处理器1220接收传感器1210生成的原始图像作为待降噪图像,在一些实施例中,一个或多个处理器1220接收传感器1210基于同一场景生成的多张原始图像的叠加图像作为待降噪图像,在一些实施例中,一个或多个处理器1220接收传感器1210生成的原始图像或者接收传感器1210基于同一场景生成的多张原始图像的叠加图像后,对原始图像或叠加图像进行第一阶段的图像信号处理获得待降噪图像,具体的实现方法参照上述图像降噪处理方法,在此不再赘述。
一个或多个处理器1220可以包括现场可编程门阵列(FPGA)、专用集成电路(ASIC)、专用标准产品(ASSP)、数字信号处理器(DSP)、中央处理单元(CPU)、图形处理单元(GPU)、视觉处理单元(VPU)、复杂可编程逻辑装置(CPLD)等等。当成像装置1200为相机时,一个或多个处理器1220可以是相机的处理器。当成像装置1200为手机、电脑、平板电脑等具有成像功能的终端时,一个或多个处理器1220可以是具有成像功能的终端的一个或多个处理器,即,采用内置ISP,当然,一个或多个处理器1220也可以是专门用于进行图像信号处理的ISP芯片,即,外置ISP,对此不做具体的限定。
在一些实施例中,一个或多个处理器1220还被配置成在对所述待降噪图像进行第二降噪处理获得第二YUV图像时:
对待降噪图像进行至少一次下采样运算获得降低分辨率的待降噪图像;
对所述降低分辨率的待降噪图像进行第二降噪处理和至少一次上采样运算获得所述第二YUV图像,所述至少一次上采样运算使第 二YUV图像的分辨率高于所述降低分辨率的待降噪图像。
在一些实施例中,所述至少一次上采样运算使第二YUV图像与第一YUV图像保持相同的分辨率。
在一些实施例中,一个或多个处理器1220还被配置成在第二降噪处理过程中对待降噪图像进行至少一次下采样运算。
在一些实施例中,一个或多个处理器1220还被配置成在第二降噪处理过程中对待降噪图像进行至少一次下采样运算后还进行至少一次上采样运算,所述至少一次上采样运算使第二YUV图像和第一YUV图像保持相同的分辨率。
在一些实施例中,一个或多个处理器1220还被配置成分别调整第一降噪处理和第二降噪处理所使用降噪算法的参数和/或运算次数,使第二降噪处理的强度高于第一降噪处理的强度。
在一些实施例中,降噪算法包括:时域降噪运算和/或空域降噪运算。
在一些实施例中,分别调整第一降噪处理和第二降噪处理所使用降噪算法的运算次数中,运算次数包括:
降噪算法包括的时域降噪运算和/或空域降噪运算的运算次数。
在一些实施例中,分别调整第一降噪处理和第二降噪处理所使用降噪算法的参数中,参数包括:
降噪算法中的时域降噪运算的以下参数的至少之一:采样帧数、动作阈值、像素差阈值、光流算法参数、一个或多个滤波器的滤波强度和像素融合算法参数。
在一些实施例中,分别调整第一降噪处理和第二降噪处理所使用的降噪算法的参数中,参数包括:
降噪算法中的空域降噪运算的以下参数的至少之一:采样半径、一个或多个滤波器的滤波强度、相对亮度阈值、绝对亮度阈值和边缘保留阈值。
在一些实施例中,使用金字塔融合方法或图像拼接融合方法将第一YUV图像的Y分量和第二YUV图像的UV分量融合获得降噪图像。
在一些实施例中,待降噪图像为传感器1210生成的原始图像, 或者传感器1210基于同一场景生成的多个原始图像的叠加图像。
在一些实施例中,一个或多个处理器1220还备配置成在所述获得待降噪图像时:
接收传感器1210生成的原始图像,或者,接收传感器1210基于同一场景生成的多个原始图像的叠加图像;
对原始图像或叠加图像进行第一阶段的图像信号处理获得待降噪图像。
在一些实施例中,第一阶段的图像信号处理包括至少一次改变图像格式的运算,使待降噪图像为RGB格式图像或YUV格式图像。
在一些实施例中,第一阶段的图像信号处理还包括至少一次改变图像线性特征的运算,使待降噪图像为非线性图像。
在一些实施例中,一个或多个处理器1220还被配置成:将第一YUV图像的Y分量和第二YUV图像的UV分量融合获得降噪图像后,对降噪图像进行第二阶段的图像信号处理,获得输出图像。
在一些实施例中,第二阶段的图像信号处理包括以下至少之一:
用于调整色彩饱和度的运算、用于边缘增强的运算和用于图像压缩的运算。
本申请的实施例还提供一种计算机可读存储介质1300,参照图13,计算机可读存储介质上存储有计算机指令1310,计算机指令1310被执行时实现如上任一所述的图像降噪处理方法。计算机可读存储介质可以包括易失性或非易失性、磁性、半导体、磁带、光学、可移除、不可移除或其他类型的计算机可读存储介质或计算机可读存储装置。例如,如所公开的,计算机可读存储介质可以是其上存储有计算机指令的存储单元或存储模块。在一些实施例中,计算机可读存储介质可以是其上存储有计算机指令的盘或闪存驱动器。
本领域技术人员还将理解,参考本申请所描述的各种示例性的逻辑块、模块、电路和算法步骤可以被实现为专用电子硬件、计算机软件或二者的组合。例如,模块/单元可以由一个或多个处理器来实现,以使该一个或多个处理器成为一个或多个专用处理器,用于执行存储在计算机可读存储介质中的软件指令以执行模块/单元的专用功能。
附图中的流程图和框图示出了根据本申请的多个实施例的系统和方法的可能实现的系统架构、功能和操作。就这一点而言,流程图或框图中的每个框可以表示一个模块、一个程序段或代码的一部分,其中模块、程序段或代码的一部分包括用于实现指定的逻辑功能的一个或多个可执行指令。还应该注意的是,在一些备选实施方式中,框中标记的功能还可以以与附图中标记的顺序不同的顺序发生。例如,实际上可以基本并行地执行两个连续的块,并且有时也可以以相反的顺序执行,这取决于所涉及的功能。框图和/或流程图中的每个框以及框图和/或流程图中的框的组合可以由用于执行相应的功能或操作的专用的基于硬件的系统来实现,或者可以通过专用硬件和计算机指令的组合来实现。
如本领域技术人员将理解的,本申请的实施例可以体现为方法、系统、或计算机程序产品。因此,本申请的实施例可以采取完全硬件实施例、完全软件实施例或组合了软件和硬件的实施例的形式,以允许专用部件来执行上述功能。此外,本申请的实施例可以采取计算机程序产品的形式,其体现在包含计算机可读程序代码的一个或多个有形和/或非暂时性计算机可读存储介质中。一般形式的非暂时性计算机可读介质包括例如软盘、柔性盘、硬盘、固态驱动器、磁带或其它任何磁性数据存储介质、CD‐ROM、任何其它光学数据存储介质、具有孔形式的任何物理介质、RAM、PROM和EPROM、FLASH‐EPROM或任何其他闪存存储器、NVRAM、高速缓存、寄存器、任何其他存储芯片或胶卷、以及它们的联网版本。
参照根据本申请的实施例的方法、装置和计算机程序产品的流程图和/或框图,来描述本申请的各实施例。应当理解,流程图和/或框图中的每个流程和/或框,以及流程图和/或框图中的多个流程和/或框的组合,可以通过计算机程序指令来实现。这些计算机程序指令可以提供给计算机的处理器、嵌入式处理器或其他可编程数据处理装置以产生专用机器,使得经由计算机的处理器或其他可编程数据处理装置执行的这些指令创建用来实现流程图中的一个或多个流程和/或框图中的一个或多个框中指定的功能的装置。
这些计算机程序指令也可以存储在指导计算机或其他可编程数据处理装置以特定方式运行的计算机可读存储器中,使得计算机可读存储器中存储的指令产生包括指令装置的制造产品,该指令装置实现流程图中的一个或多个流程和/或框图中的一个或多个框中指定的功能。
这些计算机程序指令也可以装载在计算机或其他可编程数据处理装置中,使一系列可操作步骤在计算机或其他可编程装置上执行以产生由计算机实现的处理,使得在计算机或其他可编程装置上执行的指令提供用于实现流程图中的一个或多个流程和/或框图中的一个或多个框中指定的功能的步骤。在典型配置中,计算机装置包括一个或多个中央处理单元(CPU)、输入/输出接口、网络接口和存储器。存储器可以包括易失性存储器、随机存取存储器(RAM)和/或非易失性存储器等形式,例如计算机可读存储介质中的只读存储器(ROM)或闪存RAM。存储器是计算机可读存储介质的示例。
计算机可读存储介质是指可以存储处理器可读的信息或数据的任何类型的物理存储器。因此,计算机可读存储介质可以存储用于由一个或多个处理器执行的指令,包括用于使处理器执行与本文描述的实施例一致的步骤或阶段的指令。计算机可读介质包括非易失性和易失性介质以及可移除和不可移除介质,其中信息存储可以用任何方法或技术来实现。信息可以是计算机可读指令的模块、数据结构和程序、或其他数据。非暂时性计算机可读介质的示例包括但不限于:相变随机存取存储器(PRAM)、静态随机存取存储器(SRAM)、动态随机存取存储器(DRAM)、其它类型的随机存取存储器(RAM)、只读存储器(ROM)、电可擦除可编程只读存储器(EEPROM)、闪存或其它存储器技术、光盘只读存储器(CD‐ROM)、数字多功能盘(DVD)或其他光存储器、盒式磁带、磁带或磁盘存储器或其他磁存储装置、高速缓存、寄存器或可用于存储能够被计算机装置访问的信息的任何其他非传输介质。计算机可读存储介质是非暂时性的,并且不包括诸如调制数据信号和载波之类的暂时性介质。
尽管本文描述了所公开的原理的示例和特征,但是在不脱离所公 开的实施例的精神和范围的情况下,可以进行修改、适应性改变和其他实现。此外,词语“包含”、“具有”、“包含有”和“包括”以及其它类似形式旨在在含义上是等同的并且是开放性的,这些词语中的任何一个之后的一个或多个项目并不意在作为这样的一个或多个项目的详尽列表,也并不意在仅限于所列出的一个或多个项目。还必须注意,如本文和所附权利要求书中所使用的,除非上下文另有明确说明,否则单数形式“一”、“一个”和“所述”包括复数指示物。
应该理解的是,本申请不限于上面已经描述并在附图中示出的确切结构,并且可以在不脱离本申请范围的情况下进行各种修改和变化。用意在于,本申请的范围应当仅由所附权利要求限定。

Claims (52)

  1. 一种图像降噪处理方法,包括:
    获得待降噪图像;
    对所述待降噪图像进行第一降噪处理获得第一YUV图像;
    对所述待降噪图像进行第二降噪处理获得第二YUV图像,其中所述第二降噪处理的强度高于所述第一降噪处理的强度;
    将所述第一YUV图像的Y分量和所述第二YUV图像的UV分量融合获得降噪图像。
  2. 根据权利要求1所述的图像降噪处理方法,其中,所述对所述待降噪图像进行第二降噪处理获得第二YUV图像包括:
    对所述待降噪图像进行至少一次下采样运算获得降低分辨率的待降噪图像;
    对所述降低分辨率的待降噪图像进行所述第二降噪处理和至少一次上采样运算获得所述第二YUV图像,所述至少一次上采样运算使所述第二YUV图像的分辨率高于所述降低分辨率的待降噪图像的分辨率。
  3. 根据权利要求2所述的图像降噪处理方法,其中,所述至少一次上采样运算使所述第二YUV图像与所述第一YUV图像保持相同的分辨率。
  4. 根据权利要求1所述的图像降噪处理方法,其中,在所述第二降噪处理过程中对所述待降噪图像进行至少一次下采样运算。
  5. 根据权利要求4所述的图像降噪处理方法,其中,在所述第二降噪处理过程中对所述待降噪图像进行至少一次下采样运算后还进行至少一次上采样运算,所述至少一次上采样运算使所述第二YUV图像和所述第一YUV图像保持相同的分辨率。
  6. 根据权利要求1至5中任意一项所述的图像降噪处理方法,其中,分别调整所述第一降噪处理和所述第二降噪处理所使用降噪算法的参数和/或运算次数,使所述第二降噪处理的强度高于所述第一降噪处理的强度。
  7. 根据权利要求6所述的图像降噪处理方法,其中,所述降噪算法包括:
    时域降噪运算和/或空域降噪运算。
  8. 根据权利要求7所述的图像降噪处理方法,其中,所述分别调整所述第一降噪处理和所述第二降噪处理所使用降噪算法的运算次数中,所述运算次数包括:
    所述降噪算法中的所述时域降噪运算和/或所述空域降噪运算的运算次数。
  9. 根据权利要求7所述的图像降噪处理方法,其中,所述分别调整所述第一降噪处理和所述第二降噪处理所使用降噪算法的参数中,所述参数包括:
    所述降噪算法中的所述时域降噪运算的以下参数的至少之一:采样帧数、动作阈值、像素差阈值、光流算法参数、一个或多个滤波器的滤波强度和像素融合算法参数。
  10. 根据权利要求7所述的图像降噪处理方法,其中,所述分别调整所述第一降噪处理和所述第二降噪处理所使用的降噪算法的参数中,所述参数包括:
    所述降噪算法中的所述空域降噪运算的以下参数的至少之一:采样半径、一个或多个滤波器的滤波强度、相对亮度阈值、绝对亮度阈值和边缘保留阈值。
  11. 根据权利要求1所述的图像降噪处理方法,其中,使用金字塔融合方法或图像拼接融合方法将所述第一YUV图像的Y分量和所述第二YUV图像的UV分量融合获得降噪图像。
  12. 根据权利要求1所述的图像降噪处理方法,其中,所述待降噪图像为传感器生成的原始图像,或者
    传感器基于同一场景生成的多个原始图像的叠加图像。
  13. 根据权利要求1所述的图像降噪处理方法,所述获得待降噪图像包括:
    接收传感器生成的原始图像,或者,接收传感器基于同一场景生成的多个原始图像的叠加图像;
    对所述原始图像或所述叠加图像进行第一阶段的图像信号处理获得所述待降噪图像。
  14. 根据权利要求13所述的图像降噪处理方法,其中,所述第一阶段的图像信号处理包括至少一次改变图像格式的运算,使所述待降噪图像为RGB格式图像或YUV格式图像。
  15. 根据权利要求14所述的图像降噪处理方法,其中,所述第一阶段的图像信号处理还包括至少一次改变图像线性特征的运算,使所述待降噪图像为非线性图像。
  16. 根据权利要求13‐15中任意一项所述的图像降噪处理方法,所述将所述第一YUV图像的Y分量和所述第二YUV图像的UV分量融合获得降噪图像后,还包括:
    对所述降噪图像进行第二阶段的图像信号处理,获得输出图像。
  17. 根据权利要求16所述的图像降噪处理方法,其中,所述第二阶段的图像信号处理包括以下至少之一:
    用于调整色彩饱和度的运算,用于边缘增强的运算,用于图像压缩的运算。
  18. 一种计算机可读存储介质,所述计算机可读存储介质上存储有计算机指令,所述计算机指令被执行时,实现权利要求1‐17任意一项所述的图像降噪处理方法。
  19. 一种图像降噪处理装置,包括:
    一个或多个处理器,所述一个或多个处理器被配置成:
    获得待降噪图像;
    对所述待降噪图像进行第一降噪处理获得第一YUV图像;
    对所述待降噪图像进行第二降噪处理获得第二YUV图像,其中所述第二降噪处理的强度高于所述第一降噪处理的强度;
    将所述第一YUV图像的Y分量和所述第二YUV图像的UV分量融合获得降噪图像。
  20. 根据权利要求19所述的图像降噪处理装置,其中,所述一个或多个处理器还被配置成在所述对所述待降噪图像进行第二降噪处理获得第二YUV图像时:
    对所述待降噪图像进行至少一次下采样运算获得降低分辨率的待降噪图像;
    对所述降低分辨率的待降噪图像进行所述第二降噪处理和至少一次上采样运算获得所述第二YUV图像,所述至少一次上采样运算使所述第二YUV图像的分辨率高于所述降低分辨率的待降噪图像。
  21. 根据权利要求20所述的图像降噪处理装置,其中,所述至少一次上采样运算使所述第二YUV图像与所述第一YUV图像保持相同的分辨率。
  22. 根据权利要求19所述的图像降噪处理装置,其中,所述一 个或多个处理器还被配置成:
    在所述第二降噪处理过程中对所述待降噪图像进行至少一次下采样运算。
  23. 根据权利要求22所述的图像降噪处理装置,其中,所述一个或多个处理器还被配置成:
    在所述第二降噪处理过程中对所述待降噪图像进行至少一次下采样运算后还进行至少一次上采样运算,所述至少一次上采样运算使所述第二YUV图像和所述第一YUV图像保持相同的分辨率。
  24. 根据权利要求19‐23中任意一项所述的图像降噪处理装置,其中,所述一个或多个处理器还被配置成:
    分别调整所述第一降噪处理和所述第二降噪处理所使用降噪算法的参数和/或运算次数,使所述第二降噪处理的强度高于所述第一降噪处理的强度。
  25. 根据权利要求24所述的图像降噪处理装置,其中,所述降噪算法包括:
    时域降噪运算和/或空域降噪运算。
  26. 根据权利要求25所述的图像降噪处理装置,其中,所述分别调整所述第一降噪处理和所述第二降噪处理所使用降噪算法的运算次数中,所述运算次数包括:
    所述降噪算法包括的所述时域降噪运算和/或所述空域降噪运算的运算次数。
  27. 根据权利要求25所述的图像降噪处理装置,其中,所述分别调整所述第一降噪处理和所述第二降噪处理所使用降噪算法的参数中,所述参数包括:
    所述降噪算法中的所述时域降噪运算的以下参数的至少之一:采 样帧数、动作阈值、像素差阈值、光流算法参数、一个或多个滤波器的滤波强度和像素融合算法参数。
  28. 根据权利要求25所述的图像降噪处理装置,其中,所述分别调整所述第一降噪处理和所述第二降噪处理所使用的降噪算法的参数中,所述参数包括:
    所述降噪算法中的所述空域降噪运算的以下参数的至少之一:采样半径、一个或多个滤波器的滤波强度、相对亮度阈值、绝对亮度阈值和边缘保留阈值。
  29. 根据权利要求19所述的图像降噪处理装置,其中,使用金字塔融合方法或图像拼接融合方法将所述第一YUV图像的Y分量和所述第二YUV图像的UV分量融合获得降噪图像。
  30. 根据权利要求19所述的图像降噪处理装置,其中,所述待降噪图像为传感器生成的原始图像,或者
    传感器基于同一场景生成的多个原始图像的叠加图像。
  31. 根据权利要求19所述的图像降噪处理装置,其中,所述一个或多个处理器还被配置成在所述获得待降噪图像时:
    接收传感器生成的原始图像,或者,接收传感器基于同一场景生成的多个原始图像的叠加图像;
    对所述原始图像或所述叠加图像进行第一阶段的图像信号处理获得所述待降噪图像。
  32. 根据权利要求31所述的图像降噪处理装置,其中,所述第一阶段的图像信号处理包括至少一次改变图像格式的运算,使所述待降噪图像为RGB格式图像或YUV格式图像。
  33. 根据权利要求32所述的图像降噪处理装置,其中,所述第 一阶段的图像信号处理还包括至少一次改变图像线性特征的运算,使所述待降噪图像为非线性图像。
  34. 根据权利要求31‐33中任意一项所述的图像降噪处理装置,其中,所述一个或多个处理器还被配置成:
    所述将所述第一YUV图像的Y分量和所述第二YUV图像的UV分量融合获得降噪图像后,对所述降噪图像进行第二阶段的图像信号处理,获得输出图像。
  35. 根据权利要求34所述的图像降噪处理装置,其中,所述第二阶段的图像信号处理包括以下至少之一:
    用于调整色彩饱和度的运算、用于边缘增强的运算和用于图像压缩的运算。
  36. 一种成像装置,包括:
    传感器,用于生成图像;
    一个或多个处理器,所述一个或多个处理器被配置成:
    获得来自所述传感器的待降噪图像;
    对所述待降噪图像进行第一降噪处理获得第一YUV图像;
    对所述待降噪图像进行第二降噪处理获得第二YUV图像,其中所述第二降噪处理的强度高于所述第一降噪处理的强度;
    将所述第一YUV图像的Y分量和所述第二YUV图像的UV分量融合获得降噪图像。
  37. 根据权利要求36所述的成像装置,其中,所述一个或多个处理器还被配置成在所述对所述待降噪图像进行第二降噪处理获得第二YUV图像时:
    对所述待降噪图像进行至少一次下采样运算获得降低分辨率的待降噪图像;
    对所述降低分辨率的待降噪图像进行所述第二降噪处理和至少 一次上采样运算获得所述第二YUV图像,所述至少一次上采样运算使所述第二YUV图像的分辨率高于所述降低分辨率的待降噪图像。
  38. 根据权利要求37所述的成像装置,其中,所述至少一次上采样运算使所述第二YUV图像与所述第一YUV图像保持相同的分辨率。
  39. 根据权利要求36所述的成像装置,其中,所述一个或多个处理器还被配置成:
    在所述第二降噪处理过程中对所述待降噪图像进行至少一次下采样运算。
  40. 根据权利要求39所述的成像装置,其中,所述一个或多个处理器还被配置成:
    在所述第二降噪处理过程中对所述待降噪图像进行至少一次下采样运算后还进行至少一次上采样运算,所述至少一次上采样运算使所述第二YUV图像和所述第一YUV图像保持相同的分辨率。
  41. 根据权利要求36‐40中任意一项所述的成像装置,其中,所述一个或多个处理器还被配置成:
    分别调整所述第一降噪处理和所述第二降噪处理所使用降噪算法的参数和/或运算次数,使所述第二降噪处理的强度高于所述第一降噪处理的强度。
  42. 根据权利要求41所述的成像装置,其中,所述降噪算法包括:
    时域降噪运算和/或空域降噪运算。
  43. 根据权利要求42所述的成像装置,其中,所述分别调整所述第一降噪处理和所述第二降噪处理所使用降噪算法的运算次数中, 所述运算次数包括:
    所述降噪算法包括的所述时域降噪运算和/或所述空域降噪运算的运算次数。
  44. 根据权利要求42所述的成像装置,其中,所述分别调整所述第一降噪处理和所述第二降噪处理所使用降噪算法的参数中,所述参数包括:
    所述降噪算法中的所述时域降噪运算的以下参数的至少之一:采样帧数、动作阈值、像素差阈值、光流算法参数、一个或多个滤波器的滤波强度和像素融合算法参数。
  45. 根据权利要求42所述的成像装置,其中,所述分别调整所述第一降噪处理和所述第二降噪处理所使用的降噪算法的参数中,所述参数包括:
    所述降噪算法中的所述空域降噪运算的以下参数的至少之一:采样半径、一个或多个滤波器的滤波强度、相对亮度阈值、绝对亮度阈值和边缘保留阈值。
  46. 根据权利要求36所述的成像装置,其中,使用金字塔融合方法或图像拼接融合方法将所述第一YUV图像的Y分量和所述第二YUV图像的UV分量融合获得降噪图像。
  47. 根据权利要求36所述的成像装置,其中,所述待降噪图像为所述传感器生成的原始图像,或者
    所述传感器基于同一场景生成的多个原始图像的叠加图像。
  48. 根据权利要求36所述的成像装置,其中,所述一个或多个处理器还被配置成在所述获得待降噪图像时:
    接收传感器生成的原始图像,或者,接收传感器基于同一场景生成的多个原始图像的叠加图像;
    对所述原始图像或所述叠加图像进行第一阶段的图像信号处理获得所述待降噪图像。
  49. 根据权利要求48所述的成像装置,其中,所述第一阶段的图像信号处理包括至少一次改变图像格式的运算,使所述待降噪图像为RGB格式图像或YUV格式图像。
  50. 根据权利要求49所述的成像装置,其中,所述第一阶段的图像信号处理还包括至少一次改变图像线性特征的运算,使所述待降噪图像为非线性图像。
  51. 根据权利要求48‐50中任意一项所述的成像装置,所述一个或多个处理器还被配置成:
    所述将所述第一YUV图像的Y分量和所述第二YUV图像的UV分量融合获得降噪图像后,对所述降噪图像进行第二阶段的图像信号处理,获得输出图像。
  52. 根据权利要求51所述的成像装置,其中,所述第二阶段的图像信号处理包括以下至少之一:
    用于调整色彩饱和度的运算,用于边缘增强的运算,用于图像压缩的运算。
PCT/CN2021/087390 2021-04-15 2021-04-15 图像降噪处理方法、装置及成像装置 WO2022217525A1 (zh)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/CN2021/087390 WO2022217525A1 (zh) 2021-04-15 2021-04-15 图像降噪处理方法、装置及成像装置

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2021/087390 WO2022217525A1 (zh) 2021-04-15 2021-04-15 图像降噪处理方法、装置及成像装置

Publications (1)

Publication Number Publication Date
WO2022217525A1 (zh)

Family

ID=83640010

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/087390 WO2022217525A1 (zh) 2021-04-15 2021-04-15 图像降噪处理方法、装置及成像装置

Country Status (1)

Country Link
WO (1) WO2022217525A1 (zh)

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20130064448A1 (en) * 2011-09-09 2013-03-14 Stmicroelectronics (Grenoble 2) Sas Image chroma noise reduction
CN106127698A (zh) * 2016-06-15 2016-11-16 深圳市万普拉斯科技有限公司 图像降噪处理方法和装置
CN106570850A (zh) * 2016-10-12 2017-04-19 成都西纬科技有限公司 一种图像融合方法
CN107967668A (zh) * 2016-10-20 2018-04-27 上海富瀚微电子股份有限公司 一种图像处理方法及装置
CN106937097A (zh) * 2017-03-01 2017-07-07 奇酷互联网络科技(深圳)有限公司 一种图像处理方法、系统及移动终端
US20210073956A1 (en) * 2018-02-12 2021-03-11 Gopro, Inc. Image processing
CN111882504A (zh) * 2020-08-05 2020-11-03 展讯通信(上海)有限公司 图像中颜色噪声的处理方法、系统、电子设备和存储介质

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117876252A (zh) * 2024-03-11 2024-04-12 上海玄戒技术有限公司 一种图像降噪方法、装置、设备、存储介质及芯片

Legal Events

Date Code Title Description
121 Ep: the EPO has been informed by WIPO that EP was designated in this application (Ref document number: 21936413; Country of ref document: EP; Kind code of ref document: A1)
NENP Non-entry into the national phase (Ref country code: DE)
122 Ep: PCT application non-entry in European phase (Ref document number: 21936413; Country of ref document: EP; Kind code of ref document: A1)