WO2022178683A1 - Method of generating target image, electrical device, and non-transitory computer readable medium - Google Patents


Info

Publication number
WO2022178683A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
captured
blend
generate
blurry
Application number
PCT/CN2021/077519
Other languages
French (fr)
Inventor
Tetsuji Kamata
Original Assignee
Guangdong Oppo Mobile Telecommunications Corp., Ltd.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Application filed by Guangdong Oppo Mobile Telecommunications Corp., Ltd. filed Critical Guangdong Oppo Mobile Telecommunications Corp., Ltd.
Priority to CN202180084318.0A priority Critical patent/CN116636228A/en
Priority to PCT/CN2021/077519 priority patent/WO2022178683A1/en
Publication of WO2022178683A1 publication Critical patent/WO2022178683A1/en

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/73Deblurring; Sharpening
    • G06T5/75Unsharp masking
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N25/00Circuitry of solid-state image sensors [SSIS]; Control thereof
    • H04N25/60Noise processing, e.g. detecting, correcting, reducing or removing noise
    • H04N25/61Noise processing, e.g. detecting, correcting, reducing or removing noise the noise originating only from the lens unit, e.g. flare, shading, vignetting or "cos4"
    • H04N25/615Noise processing, e.g. detecting, correcting, reducing or removing noise the noise originating only from the lens unit, e.g. flare, shading, vignetting or "cos4" involving a transfer function modelling the optical system, e.g. optical transfer function [OTF], phase transfer function [PhTF] or modulation transfer function [MTF]

Definitions

  • the present disclosure relates to a method of generating a target image, an electrical device, and a non-transitory computer readable medium.
  • Electrical devices such as smartphones and tablet terminals are widely used in our daily life.
  • many of the electrical devices are equipped with a camera assembly for capturing images.
  • Some of the electrical devices are portable and are thus easy to carry. Therefore, a user of the electrical device can easily take a picture of an object by using the camera assembly of the electrical device anytime, anywhere.
  • the present disclosure aims to solve at least one of the technical problems mentioned above. Accordingly, the present disclosure needs to provide a method of generating a target image, an electrical device and a non-transitory computer readable medium.
  • a method of generating a target image may include:
  • applying a filtering restoration process to a captured blurry image with a restoration filter to generate a restored sharp image, the restoration filter being a filter to restore a sharpness of the captured blurry image based on optical characteristics of the optics;
  • applying a noise reduction process to the captured blurry image to reduce noise in the captured blurry image and to generate a noise reduction image, wherein the noise reduction process is adjusted based on adjusting information depending on filtering characteristics of the restoration filter and/or the optical characteristics of the optics;
  • an electrical device may include:
  • a camera assembly including an optics
  • a processor configured to:
  • apply a filtering restoration process to a captured blurry image with a restoration filter to generate a restored sharp image, the restoration filter being a filter to restore a sharpness of the captured blurry image based on optical characteristics of the optics;
  • apply a noise reduction process to the captured blurry image to reduce noise in the captured blurry image and to generate a noise reduction image, wherein the noise reduction process is adjusted based on adjusting information depending on filtering characteristics of the restoration filter and/or the optical characteristics of the optics;
  • a non-transitory computer readable medium including program instructions stored thereon, wherein, when the program instructions are executed by an electrical device, the program instructions cause the electrical device to perform at least the following:
  • applying a filtering restoration process to a captured blurry image with a restoration filter to generate a restored sharp image, the restoration filter being a filter to restore a sharpness of the captured blurry image based on optical characteristics of the optics;
  • applying a noise reduction process to the captured blurry image to reduce noise in the captured blurry image and to generate a noise reduction image, wherein the noise reduction process is adjusted based on adjusting information depending on filtering characteristics of the restoration filter and/or the optical characteristics of the optics;
  • FIG. 1 is a plan view of a first side of an electrical device according to an embodiment of the present disclosure
  • FIG. 2 is a plan view of a second side of the electrical device according to the embodiment of the present disclosure.
  • FIG. 3 is a block diagram of the electrical device according to the embodiment of the present disclosure.
  • FIG. 4 is an explanatory drawing of an optical aberration in an optics
  • FIG. 5 illustrates a formula which indicates a relationship between a captured blurry image and an ideal sharp image
  • FIG. 6 is a visual explanation of a manner to improve a sharpness of the captured blurry image in the electrical device according to the embodiment of the present disclosure
  • FIG. 7 illustrates a cost function c (L) including a fidelity term and a regularization term
  • FIG. 8 is a formula showing a manner of computing an inverse filter based on the cost function c (L) ;
  • FIG. 9 shows an image of a blur kernel K and an image of an inverse filter in the spatial domain
  • FIG. 10 illustrates a problem of a restored sharp image
  • FIG. 11A illustrates an outline of a blending process according to the electrical device of the embodiment of the present disclosure
  • FIG. 11B shows a formula of a bilateral filter which is one example of filters used in a noise reduction process.
  • FIG. 11C shows one example of a part of the captured blurry image.
  • FIG. 11D shows one example of a formula of a spatial weight W s and an intensity weight W i in the bilateral filter.
  • FIG. 11E shows a captured blurry image before applying the noise reduction process by using the bilateral filter and a noise reduction image after applying the noise reduction process by using the bilateral filter.
  • FIG. 11F shows one example of a restoration filter array of the restoration filter.
  • FIG. 12 illustrates one example of how to generate a regular blend mask in the electrical device according to the embodiment of the present disclosure
  • FIG. 13 illustrates one example of a look-up table to modulate the captured blurry image nonlinearly
  • FIG. 14 is a visual explanation of a lens shading model in the electrical device of the embodiment of the present disclosure.
  • FIG. 15 is a visual explanation of a first option of reflecting shading characteristics on a regular blend mask
  • FIG. 16 is a visual explanation of a second option of reflecting the shading characteristics on the regular blend mask
  • FIG. 17 is a visual explanation of the third option of reflecting the shading characteristics on the regular blend mask.
  • FIG. 18 is a visual explanation of a smooth transition in a final blend mask
  • FIG. 19 is a visual explanation of a manner to generate a target image in the electrical device according to the embodiment of the present disclosure.
  • FIG. 20 is a flowchart of a target image generation process in the electrical device according to the embodiment of the present disclosure.
  • FIG. 21 illustrates a comparison between a target image generated by the prior art technology and a target image generated by the electrical device according to the embodiment of the present disclosure.
  • FIG. 22 shows one example in a case where the noise reduction process is applied to chroma components (U, V) in addition to luminance component (Y) .
  • FIG. 1 is a plan view of a first side of an electrical device 10 according to an embodiment of the present disclosure
  • FIG. 2 is a plan view of a second side of the electrical device 10 according to the embodiment of the present disclosure.
  • the first side may be referred to as a back side of the electrical device 10 whereas the second side may be referred to as a front side of the electrical device 10.
  • the electrical device 10 may include a display 20 and a camera assembly 30.
  • the camera assembly 30 includes a first main camera 32, a second main camera 34 and a sub camera 36.
  • the first main camera 32 and the second main camera 34 can capture an image in the first side of the electrical device 10 and the sub camera 36 can capture an image in the second side of the electrical device 10. Therefore, the first main camera 32 and the second main camera 34 are so-called out-cameras whereas the sub camera 36 is a so-called in-camera.
  • the electrical device 10 can be a mobile phone, a tablet computer, a personal digital assistant, and so on.
  • Each of the first main camera 32, the second main camera 34 and the sub camera 36 has an imaging sensor which converts a light which has passed a color filter to an electrical signal.
  • a signal value of the electrical signal depends on an amount of the light which has passed the color filter.
  • the electrical device 10 may have less than three cameras or more than three cameras.
  • the electrical device 10 may have two, four, five, and so on, cameras.
  • FIG. 3 is a block diagram of the electrical device 10 according to the present embodiment.
  • the electrical device 10 may include a main processor 40, an image signal processor 42, a memory 44, a power supply circuit 46 and a communication circuit 48.
  • the display 20, the camera assembly 30, the main processor 40, the image signal processor 42, the memory 44, the power supply circuit 46 and the communication circuit 48 are connected with each other via a bus 50.
  • the main processor 40 executes one or more program instructions stored in the memory 44.
  • the main processor 40 implements various applications and data processing of the electrical device 10 by executing the program instructions.
  • the main processor 40 may be one or more computer processors.
  • the main processor 40 is not limited to one CPU core, but it may have a plurality of CPU cores.
  • the main processor 40 may be a main CPU of the electrical device 10, an image processing unit (IPU) or a DSP provided with the camera assembly 30.
  • the image signal processor 42 controls the camera assembly 30 and processes various kinds of image data captured by the camera assembly 30 to generate target image data.
  • the image signal processor 42 can apply a demosaicing process, a noise reduction process, an auto exposure process, an auto focus process, an auto white balance process, a high dynamic range process and so on, to the image data captured by the camera assembly 30.
  • the main processor 40 and the image signal processor 42 collaborate with each other to generate a target image data of the object captured by the camera assembly 30. That is, the main processor 40 and the image signal processor 42 are configured to capture the image of the object by means of the camera assembly 30 and apply various kinds of image processing to the captured image data.
  • the memory 44 stores program instructions to be executed by the main processor 40, and various kinds of data. For example, data of the captured image are also stored in the memory 44.
  • the memory 44 may include a high-speed RAM memory, and/or a non-volatile memory such as a flash memory and a magnetic disk memory. That is, the memory 44 may include a non-transitory computer readable medium in which the program instructions are stored.
  • the power supply circuit 46 may have a battery such as a lithium-ion rechargeable battery and a battery management unit (BMU) for managing the battery.
  • the communication circuit 48 is configured to receive and transmit data to communicate with base stations of the telecommunication network system, the Internet or other devices via wireless communication.
  • the wireless communication may adopt any communication standard or protocol, including but not limited to GSM (Global System for Mobile communication) , CDMA (Code Division Multiple Access) , LTE (Long Term Evolution) , LTE-Advanced, 5th generation (5G) .
  • the communication circuit 48 may include an antenna and an RF (radio frequency) circuit.
  • FIG. 4 is an explanatory drawing of an optical aberration. That is, when capturing an image in the camera assembly 30, a sharpness of the image is degraded by the optical aberration in an optics, such as coma, astigmatism and so on. Therefore, a point light source image is spread by the optical aberration in the optics, and the point light source image is not a point anymore on an image plane.
  • the spread point light source is modeled by a function referred to as Point Spread Function (PSF) which represents a manner to degrade the captured image, and its degradation characteristics. This is known as an optical blur.
  • the blurry image captured by the camera assembly 30 is also referred to as the captured blurry image.
  • FIG. 5 illustrates a formula which indicates a relationship between the captured blurry image B and an ideal sharp image L.
  • the captured blurry image B is expressed as K*L+n.
  • the K indicates a blur kernel which is the same as the PSF.
  • the L indicates the ideal sharp image which has no noise, i.e., which is ideal.
  • the n indicates a noise.
  • the "*" indicates a circular convolution.
  • the formula shows that the image captured by the camera assembly 30 always includes the noise. This noise is also referred to as a shot noise because the noise is unavoidable when taking an image.
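The degradation model B = K * L + n above can be sketched in numpy; this is an illustrative sketch only, with an arbitrary image size, a simple box kernel standing in for the PSF, and Gaussian noise standing in for the shot noise. The circular convolution "*" is computed via the FFT.

```python
import numpy as np

def degrade(L, K, noise_sigma=2.0, rng=None):
    """Simulate B = K * L + n: circularly convolve the ideal sharp
    image L with the blur kernel K (the PSF) and add noise n."""
    rng = np.random.default_rng(0) if rng is None else rng
    # Zero-pad the kernel to the image size for FFT-based convolution.
    K_pad = np.zeros_like(L, dtype=float)
    kh, kw = K.shape
    K_pad[:kh, :kw] = K
    # Center the kernel at the origin so the blur does not shift the image.
    K_pad = np.roll(K_pad, (-(kh // 2), -(kw // 2)), axis=(0, 1))
    # Circular convolution via the convolution theorem.
    B = np.real(np.fft.ifft2(np.fft.fft2(L) * np.fft.fft2(K_pad)))
    return B + rng.normal(0.0, noise_sigma, L.shape)
```

With noise_sigma set to zero, the output is the purely blurred image K * L; the shot noise term is what makes a real capture deviate from it.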
  • FIG. 6 is a visual explanation of a manner to improve the sharpness of the captured blurry image in the electrical device 10 according to the embodiment of the present disclosure.
  • the electrical device 10 obtains a restored sharp image L_res via a filtering restoration process which uses an inverse filter computed from the PSF.
  • the restored sharp image L_res is closer to the ideal sharp image L than the captured blurry image B is.
  • however, the noise in the captured blurry image B is also filtered through the filtering restoration process, and the noise is thereby increased.
  • the inverse filter is one of the examples of restoration filters which are filters to restore a sharpness of the captured blurry image based on optical characteristics of the optics.
  • FIG. 7 illustrates a cost function c (L) in the present embodiment.
  • the cost function c (L) is basically expressed as the argument of the minimum of the squared difference "the captured blurry image B - the blur kernel K * the ideal sharp image L" . That is, this term is the fidelity term.
  • a regularization term is also introduced into the cost function c (L) in order to apply a penalty. That is, by introducing the regularization term into the cost function c (L) , overfitting can be avoided. As a result, the increase of the noise in the restored sharp image L_res can be suppressed.
  • D indicates a regularization function, and λ indicates a regularization gain.
  • FIG. 8 is a formula showing a manner of computing the inverse filter based on the cost function c (L) .
  • Fourier transform F (L) in FIG. 8 can be obtained by solving the cost function c (L) in FIG. 7 in the frequency domain.
  • the restored sharp image L_res can be obtained by inversely transforming the Fourier transform F (L) . Since the restored sharp image L_res can be obtained as the inverse filter * the captured blurry image B, the inverse filter can be identified from this formula.
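The frequency-domain solution of the cost function can be sketched as a regularized inverse filter in numpy. This is a sketch under assumptions: a discrete Laplacian stands in for the regularization function D, and the regularization gain lam is an arbitrary illustrative value.

```python
import numpy as np

def restore(B, K, lam=1e-2):
    """Regularized inverse filtering in the frequency domain (a sketch).
    Solving c(L) = ||B - K*L||^2 + lam*||D*L||^2 in the frequency domain
    gives F(L) = conj(F(K)) F(B) / (|F(K)|^2 + lam |F(D)|^2), so the
    inverse filter is the factor multiplying F(B)."""
    h, w = B.shape
    K_pad = np.zeros((h, w))
    kh, kw = K.shape
    K_pad[:kh, :kw] = K
    K_pad = np.roll(K_pad, (-(kh // 2), -(kw // 2)), axis=(0, 1))
    # Assumed regularizer D: a circularly wrapped Laplacian kernel.
    D = np.zeros((h, w))
    D[0, 0] = 4.0
    D[0, 1] = D[1, 0] = D[0, -1] = D[-1, 0] = -1.0
    FK, FD, FB = np.fft.fft2(K_pad), np.fft.fft2(D), np.fft.fft2(B)
    # The lam |F(D)|^2 term keeps the denominator away from zero where
    # |F(K)| is small, which is how the penalty suppresses the noise gain.
    FL = np.conj(FK) * FB / (np.abs(FK) ** 2 + lam * np.abs(FD) ** 2)
    return np.real(np.fft.ifft2(FL))  # restored sharp image L_res
```

With a small gain the result approaches the plain inverse filter (sharp but noise-amplifying); a larger gain trades sharpness for noise suppression, which is exactly the balance the regularization term controls.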
  • FIG. 9 shows an image of the blur kernel K and an image of the inverse filter in the spatial domain.
  • the reason why a hat is added to the inverse filter (i.e., it is written as K^-1 with a hat) is that the blur kernel K is not an actual measurement value, but a designed value.
  • the designed value of the blur kernel K is not equal to the actual measurement value of the blur kernel K because of an assembling error and a dimensional error of components of the camera assembly 30 and the electrical device 10. Therefore, the computed inverse filter is slightly different from the actual inverse filter K^-1 of the electrical device 10.
  • the electrical device 10 computes the inverse filter based on the designed value of the blur kernel K.
  • FIG. 10 illustrates a problem of the restored sharp image.
  • the captured blurry image contains the noise.
  • when the circular convolution is applied to the captured blurry image with the inverse filter, the sharpness of the restored sharp image is improved, though the noise in the captured blurry image is also increased.
  • the captured blurry image contains noise and therefore it is desirable to reduce the noise in the captured blurry image as much as possible before generating the target image.
  • the noise can be reduced by applying a noise reduction process to the captured blurry image and then a noise reduction image can be generated.
  • the electrical device 10 introduces a blending process for blending the restored sharp image to which the filtering restoration process is applied and the noise reduction image to which not the filtering restoration process but the noise reduction process is applied.
  • FIG. 11A illustrates an outline of the blending process according to the electrical device 10 of the embodiment of the present disclosure.
  • the circular convolution is applied to the captured blurry image with the inverse filter to generate the restored sharp image.
  • a low frequency area in the restored sharp image is masked with a regular blend mask to generate a first intermediate image.
  • the captured blurry image also includes the noise. Therefore, in order to reduce the noise in the captured blurry image, the noise reduction process is applied to the captured blurry image. After applying the noise reduction process to the captured blurry image, the noise reduction image in which the noise is reduced is obtained.
  • the noise reduction image is masked with an inverse blend mask to generate a second intermediate image.
  • the inverse blend mask is a mask in which the regular blend mask is inversed.
  • the inverse blend mask can be obtained by inversing the regular blend mask.
  • the low frequency area including the noise in the restored sharp image is replaced with the low frequency area not including the noise in the noise reduction image because of the noise reduction process. Therefore, the noise is not contained in the low frequency area in the target image.
  • the sharpness in the high frequency area in the target image is improved because of the restoration process. That is, a texture in the high frequency area in the target image can be fine.
  • the noise in the low frequency area in the captured blurry image is reduced by the noise reduction process to generate the noise reduction image. Therefore, the noise in the low frequency area in the target image generated by combining the restored sharp image and the noise reduction image is very little.
  • the texture in the high frequency area is degraded through the noise reduction process.
  • the texture in the high frequency area in the noise reduction image is replaced with the texture in the high frequency area in the restored sharp image. Therefore, the noise reduction process can be adjusted so that the noise in the low frequency area of the captured blurry image can effectively be reduced. As a result, the user satisfaction with the target image is improved.
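The blending outline above reduces to a per-pixel weighted sum, which can be sketched in numpy. This is an illustrative sketch; the mask values are whatever the regular blend mask assigns (e.g., 0.95 / 0.05), and the inverse blend mask is simply its complement.

```python
import numpy as np

def blend(restored, denoised, blend_mask):
    """Blending step (a sketch): mask the restored sharp image with the
    regular blend mask, mask the noise reduction image with the inverse
    blend mask (1 - mask), and sum the two intermediate images."""
    first = restored * blend_mask            # first intermediate image
    second = denoised * (1.0 - blend_mask)   # second intermediate image
    return first + second                    # target image
```

Because the two masks sum to one at every pixel, flat areas come mostly from the noise reduction image while detailed areas come mostly from the restored sharp image, with no visible seam between them.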
  • FIG. 11B shows a formula of a bilateral filter which is one example of filters used in the noise reduction process.
  • FIG. 11C shows one example of a part of the captured blurry image.
  • the bilateral filter is known to be able to remove the noise but preserve image edges.
  • the bilateral filter is just one example for being used in the noise reduction process, and other methods for reducing the noise may be applied to the noise reduction process.
  • the bilateral filter can be expressed by:
  • the q i, j indicates the denoised pixel, that is, the denoised value of the center pixel p i, j in the filter kernel.
  • the first item of the right side of the formula is the filter kernel which is a normalized weight
  • the second item of the right side of the formula, the image block I (i, j) , is the image block to be processed in the captured blurry image.
  • the i and the j indicate an x-position and a y-position in the captured blurry image, respectively, and a center coordinate of the filter kernel.
  • the ". *" indicates an element-wise multiplication.
  • the W s indicates a spatial weight and the W i indicates an intensity weight.
  • FIG. 11D shows one example of a formula of the spatial weight W s and the intensity weight W i in the bilateral filter.
  • the spatial weight W s is a predetermined gaussian filter kernel and a size of the gaussian filter kernel is the same as a size of the image block I (i, j) .
  • the H indicates a size in an x-direction and the V indicates a size in a y-direction of the gaussian filter kernel.
  • the spatial weight W s is expressed by:
  • n indicates an x-position of the gaussian filter kernel and the m indicates a y-position in the gaussian filter kernel.
  • the parameter σ s indicates a spatial weight variance.
  • the spatial weight W s can be controlled by σ s . That is, the larger the parameter σ s is, the stronger the effectiveness of smoothing is.
  • the intensity weight W i is expressed by:
  • the parameter σ i indicates an intensity weight variance and the square is computed element-wise.
  • the (I (i, j) - p i, j ) ^2 indicates an intensity difference between an intensity of each pixel in the image block and the value of the center pixel p i, j .
  • the intensity weight W i can be controlled by the parameter σ i . That is, the smaller the parameter σ i is, the stronger the effectiveness of edge preserving is. In other words, the smaller the parameter σ i is, the weaker the effectiveness of smoothing is.
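The bilateral-filter formulas above (spatial weight, intensity weight, normalized element-wise product) can be sketched for a single pixel in numpy. This is a sketch, not the embodiment's exact implementation; the block size and parameter values in the usage are arbitrary.

```python
import numpy as np

def bilateral_pixel(I_block, sigma_s, sigma_i):
    """Denoise one pixel with a bilateral filter (a sketch of the
    formulas above). I_block is the image block I(i, j) centered on the
    pixel p_{i,j}; sigma_s controls smoothing strength, sigma_i controls
    edge preservation."""
    V, H = I_block.shape
    p = I_block[V // 2, H // 2]                 # center pixel p_{i,j}
    # Kernel coordinates: m is the y-position, n the x-position.
    m, n = np.mgrid[-(V // 2):V // 2 + 1, -(H // 2):H // 2 + 1]
    W_s = np.exp(-(n ** 2 + m ** 2) / (2.0 * sigma_s ** 2))    # spatial weight
    W_i = np.exp(-((I_block - p) ** 2) / (2.0 * sigma_i ** 2)) # intensity weight
    W = W_s * W_i
    # Normalized weight, element-wise multiplied with the block and summed.
    return np.sum(W / W.sum() * I_block)        # denoised pixel q_{i,j}
```

On a flat block the output equals the input value; across an edge, pixels whose intensity differs strongly from the center receive nearly zero intensity weight, which is why the filter smooths noise but preserves edges.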
  • FIG. 11E shows a captured blurry image before applying the noise reduction process by using the bilateral filter mentioned above and the noise reduction image after applying the noise reduction process by using the bilateral filter mentioned above.
  • the spatial weight W s and the intensity weight W i in the bilateral filter can be controlled by the parameter σ s and the parameter σ i . Therefore, in the present embodiment, the parameter σ s and the parameter σ i are adjusted based on adjusting information depending on filtering characteristics of the restoration filter used by the restoration process and/or the optical characteristics of the optics in the camera assembly 30.
  • the adjusting information may depend on a kernel size of the restoration filter, a filter strength of the restoration filter, frequency characteristics of the restoration filter, spatial characteristics of the optics of the camera assembly 30, and/or a point spread function (PSF) of the optics of the camera assembly 30.
  • FIG. 11F shows one example of a restoration filter array of the restoration filter.
  • at the center of the image, the point spread function (PSF) is small, so a good sharpness recovery can be expected from the restoration process. As a result, the parameter σ s can be small to be less smoothing and the parameter σ i can be large to be less edge preserving.
  • in the peripheral areas of the image, the point spread function (PSF) is large and anisotropic and it has low brightness (lens shading) . Therefore, a good sharpness recovery cannot be expected even after applying the restoration process. Moreover, much noise is generated due to the low brightness. As a result, the parameter σ s should be large to be more smoothing and the parameter σ i should be small to be more edge preserving.
  • between the center and the peripheral areas, the parameter σ s may become larger gradually and the parameter σ i may become smaller gradually.
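One way to realize this gradual adjustment is to interpolate the two parameters against the normalized distance from the image center. This is only a sketch: the linear interpolation and all four endpoint values (s_center, s_corner, i_center, i_corner) are assumptions, not values from the disclosure.

```python
import numpy as np

def adjust_params(i, j, h, w, s_center=1.0, s_corner=3.0,
                  i_center=30.0, i_corner=10.0):
    """Position-dependent bilateral parameters (a sketch). Near the
    center, sigma_s is small (less smoothing) and sigma_i is large (less
    edge preserving); toward the corners, where the PSF is large and the
    brightness is low, sigma_s grows and sigma_i shrinks gradually."""
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    # Normalized radial distance: 0 at the center, 1 at the corners.
    r = np.hypot(i - cy, j - cx) / np.hypot(cy, cx)
    sigma_s = s_center + (s_corner - s_center) * r
    sigma_i = i_center + (i_corner - i_center) * r
    return sigma_s, sigma_i
```

Any monotonic profile (e.g., following the measured PSF size per image region) could replace the linear ramp; the point is only that the transition is gradual rather than stepped.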
  • FIG. 12 illustrates one example of how to generate the regular blend mask in the electrical device 10 according to the embodiment of the present disclosure.
  • the regular blend mask is generated based on the captured blurry image. More specifically, the electrical device 10 obtains the captured blurry image, for example, from the camera assembly 30 or the image signal processor 42. Thereafter, the electrical device 10 applies the circular convolution to the captured blurry image with the inverse filter to improve the sharpness of the captured blurry image and to generate the restored sharp image. However, if the restored sharp image has already been generated in another process, this process can be omitted.
  • the electrical device 10 executes an averaging process against the captured blurry image to degrade the sharpness of the captured blurry image and to generate an averaged image.
  • the electrical device 10 subtracts the restored sharp image from the averaged image to generate a subtracted image and then calculates an absolute value of the subtracted image to generate a difference image.
  • the difference image which can be obtained through these processes shows how much the brightness of the pixels is changed by the filtering restoration process with the inverse filter. In other words, each pixel in the difference image has a specific value which indicates a level of the brightness change due to the filtering restoration process.
  • the gray areas indicate areas where the brightness has significantly been changed by the filtering restoration process with the inverse filter.
  • the black areas indicate areas where the brightness has not been changed so much by the filtering restoration process with the inverse filter.
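The difference-image steps above (average the captured blurry image, subtract the restored sharp image, take the absolute value) can be sketched in numpy. The k x k box filter used for the averaging process is an assumption; the averaging is computed circularly via the FFT for simplicity.

```python
import numpy as np

def difference_image(blurry, restored, k=5):
    """Difference image generation (a sketch): average the captured
    blurry image, subtract the restored sharp image, and take the
    absolute value. Large values mark pixels whose brightness the
    filtering restoration process changed significantly."""
    h, w = blurry.shape
    # k x k box kernel, zero-padded and centered at the origin.
    box = np.zeros((h, w))
    box[:k, :k] = 1.0 / (k * k)
    box = np.roll(box, (-(k // 2), -(k // 2)), axis=(0, 1))
    averaged = np.real(np.fft.ifft2(np.fft.fft2(blurry) * np.fft.fft2(box)))
    return np.abs(averaged - restored)
```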
  • the electrical device 10 nonlinearly modulates the captured blurry image to generate a temporary threshold map.
  • a look-up table LUT is used to modulate the captured blurry image nonlinearly.
  • FIG. 13 illustrates one example of the look-up table LUT to nonlinearly modulate the captured blurry image.
  • according to the look-up table, if the value of the pixel of the captured blurry image is low, the value of the pixel of the temporary threshold map is raised from its original value. On the other hand, if the value of the pixel of the captured blurry image is within the middle range, the value of the pixel of the temporary threshold map is lowered from its original value. Furthermore, if the value of the pixel of the captured blurry image is more than a certain value, the value of the pixel of the temporary threshold map is capped at a certain value.
  • FIG. 13 is one of the examples of the look-up table LUT, the look-up table not being limited to the example shown in FIG. 13. Moreover, the method to nonlinearly modulate the captured blurry image is not limited to using the look-up table LUT. Other various methods may be applied to nonlinearly modulate the captured blurry image.
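The LUT-based nonlinear modulation can be sketched in numpy. The exact curve below is an assumption for illustration only (the disclosure does not fix one): low values are raised above the identity, mid-range values are lowered, and high values are capped.

```python
import numpy as np

# Illustrative 8-bit LUT (assumed curve): raise lows, lower mids,
# cap highs at a certain value (176 here).
x = np.arange(256, dtype=float)
lut = np.where(x < 64, x + 32.0, np.where(x < 192, x - 16.0, 176.0))

def temporary_threshold_map(blurry, lut):
    """Nonlinearly modulate the captured blurry image with the LUT to
    generate the temporary threshold map (a sketch); multiplying the
    result by a gain then yields the final threshold map."""
    return lut[np.clip(blurry, 0, 255).astype(np.uint8)]
```

The subsequent gain adjustment is a plain scalar multiply of the returned map, with a gain that may be less than or more than one.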
  • the electrical device 10 multiplies the temporary threshold map by a certain value to adjust a gain of the temporary threshold map, and then a final threshold map can be calculated.
  • the certain value may be less than one or may be more than one.
  • in the final threshold map, each pixel indicates the threshold value as to whether the corresponding pixel of the restored sharp image should be masked when generating the first intermediate image.
  • the electrical device 10 generates the regular blend mask based on the difference image and the final threshold map through a thresholding process.
  • in the thresholding process, if the value of the diff_img, which is the value of the pixel of the difference image, is equal to or larger than the value of the th_map, which is the value of the pixel of the final threshold map, then the value of the blend_mask, which is the value of the pixel of the regular blend mask, is the blend_ratio_high.
  • the blend_ratio_high is 95%.
  • if the value of the diff_img is less than the value of the th_map, then the value of the blend_mask is the blend_ratio_low.
  • the blend_ratio_low is 5%.
  • if the blend_mask of the pixel of the regular blend mask is the blend_ratio_high (95%) , the first intermediate image is generated such that the pixel of the first intermediate image contains 95% of the corresponding pixel of the restored sharp image and 5% of the corresponding pixel of the captured blurry image.
  • if the blend_mask of the pixel of the regular blend mask is the blend_ratio_low (5%) , the first intermediate image is generated such that the pixel of the first intermediate image contains 5% of the corresponding pixel of the restored sharp image and 95% of the corresponding pixel of the captured blurry image.
  • the sum of the blend_ratio_high and the blend_ratio_low should be one (100%) .
  • the value of the restored sharp image and the value of the captured blurry image are blended in the same pixel so that the boundary between the area where the restored sharp image is used and the area where the captured blurry image is used cannot be distinguished by the human eye in the target image.
  • through the thresholding process, small spread gray areas, such as the noise in the difference image, are eliminated.
  • the bright areas in the restored sharp image include the noise. Therefore, in the present embodiment, the value of the bright areas of the final threshold map is high. As a result, the bright areas in the regular blend mask become the blend_ratio_low and the bright areas in the restored sharp image are masked with the regular blend mask.
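The thresholding process above is a per-pixel comparison and can be sketched in one numpy expression; the 95% / 5% ratios are the example values from the description.

```python
import numpy as np

def regular_blend_mask(diff_img, th_map,
                       blend_ratio_high=0.95, blend_ratio_low=0.05):
    """Thresholding process (a sketch): where the difference image
    reaches the final threshold map, the mask takes blend_ratio_high
    (mostly the restored sharp image); elsewhere it takes
    blend_ratio_low. The two ratios should sum to one."""
    return np.where(diff_img >= th_map, blend_ratio_high, blend_ratio_low)
```

The inverse blend mask is then simply `1.0 - regular_blend_mask(...)`, swapping the high and low ratios at every pixel.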
  • the inverse blend mask can be generated by inversing the regular blend mask. That is, by inversing the regular blend mask, the blend_mask of the pixel of the blend_ratio_high (95%) in the regular blend mask is converted to the blend_mask of the pixel of the blend_ratio_low (5%) in the inverse blend mask, whereas the blend_mask of the pixel of the blend_ratio_low (5%) in the regular blend mask is converted to the blend_mask of the pixel of the blend_ratio_high (95%) in the inverse blend mask.
  • the blend_ratio_high is less than 100% and the blend_ratio_low is more than 0%.
  • the blend_ratio_high may be 100% and the blend_ratio_low may be 0% in the regular blend mask.
  • the blend_ratio_high is also 100% and the blend_ratio_low is also 0% in the inverse blend mask.
  • a value of the blend_ratio_high is higher than a value of the blend_ratio_low.
  • the brightness decreases around the corners of the captured image. That is, the amount of the light in the corners is low due to lens shading (a.k.a. vignetting) . Therefore, in some cases, the captured blurry image has already been compensated to correct the shading characteristics. To compensate, the electrical device 10 multiplies the captured image by an appropriate gain. However, the noise characteristics strongly depend on this compensation. Therefore, the noise characteristics may be taken into account when generating the regular blend mask.
  • FIG. 14 is a visual explanation of the lens shading model LSM in the present embodiment.
  • a captured image following the original lens shading characteristics of the optics has the brighter area at the center of the captured image and the darker areas at the corners of the captured image.
  • the lens shading model LSM is generated based on the original lens shading characteristics. That is, the lens shading model LSM is generated by a nonlinear transform of the original lens shading characteristics. The purpose of the nonlinear transform is to adjust the brightness based on the lens shading model LSM to obtain the regular blend mask suitable for masking the noisy area.
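A minimal sketch of such a lens shading model, under the assumption of a radially symmetric falloff and a simple power curve as the nonlinear transform (the actual shading characteristics would be measured from the optics, and the transform is device-specific):

```python
import numpy as np

def lens_shading_model(h, w, falloff=0.6, gamma=2.0):
    """Sketch of a lens shading model LSM: an assumed radial falloff
    (bright center, dark corners) followed by a nonlinear transform
    (here a power curve) to adapt it for masking noisy areas."""
    ys, xs = np.mgrid[0:h, 0:w]
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    r = np.hypot(ys - cy, xs - cx)
    r = r / r.max()                      # normalized distance from center
    shading = 1.0 - falloff * r ** 2     # original shading: center brightest
    return shading ** gamma              # nonlinear transform

lsm = lens_shading_model(64, 64)
```

Raising `gamma` darkens the corners of the model faster, which in turn suppresses the restored sharp image more aggressively in the corner areas.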
  • FIG. 15 is a visual explanation of the first option of reflecting the shading characteristics on the regular blend mask.
  • the electrical device 10 executes a pixel-wise multiplication of the difference image and the lens shading model LSM.
  • the brightness around the corners of the captured image has been raised. That is, the brightness around the corners of the captured blurry image has also already been raised. Therefore, according to the first option, the brightness around the corners of the difference image is lowered by using the lens shading model LSM. As a result, the effect of the correction of the shading characteristics of the regular blend mask can be compensated.
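The first option can be sketched as follows; the difference image and the shading model here are synthetic stand-ins, not data from any real device:

```python
import numpy as np

rng = np.random.default_rng(0)
# Stand-in difference image whose corners were brightened by the earlier
# shading compensation (modeled here as uniformly positive values).
difference_image = rng.uniform(0.2, 1.0, size=(64, 64))

# Assumed radial lens shading model LSM: bright center, dark corners.
ys, xs = np.mgrid[0:64, 0:64]
r = np.hypot(ys - 31.5, xs - 31.5)
lsm = 1.0 - 0.8 * (r / r.max()) ** 2

# First option: the pixel-wise multiplication lowers the brightness of
# the difference image around the corners before thresholding.
compensated = difference_image * lsm
```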
  • FIG. 16 is a visual explanation of the second option of reflecting the shading characteristics on the regular blend mask.
  • the electrical device 10 executes a pixel-wise multiplication of the final threshold map and an inverse lens shading model to modify the final threshold map.
  • the inverse lens shading model is calculated by inversing the lens shading model LSM. Therefore, an area around the center of the inverse lens shading model is dark whereas areas around the corners of the inverse lens shading model are bright.
  • FIG. 17 is a visual explanation of the third option of reflecting the shading characteristics on the regular blend mask.
  • the thresholding process mentioned above is modified. That is, the value of the blend_mask is calculated by multiplying the blend_ratio_high by the lens shading model LSM (i, j) or multiplying the blend_ratio_low by the inverse lens shading model. As a result of this calculation, the ratio of the restored sharp image is increased around the center of the regular blend mask. In other words, the ratio of the restored sharp image is decreased in the corners of the regular blend mask.
  • the ratio of the restored sharp image decreases as the pixel is located nearer the corners of the regular blend mask.
  • the ratio of the captured blurry image increases as the pixel is located nearer the corners of the regular blend mask.
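A simplified sketch of the third option's modified thresholding. The polarity of the threshold test is an assumption, and the embodiment's scaling of blend_ratio_low by the inverse lens shading model is omitted for brevity:

```python
import numpy as np

def third_option_mask(diff, lsm, thresh=0.5, ratio_high=0.95, ratio_low=0.05):
    # Where the difference image stays below the threshold (assumed good
    # texture), the blend ratio is ratio_high scaled by the lens shading
    # model LSM(i, j), so the share of the restored sharp image falls off
    # toward the corners; elsewhere the low ratio is kept unscaled.
    return np.where(diff < thresh, ratio_high * lsm, ratio_low)

# Assumed radial lens shading model on a small grid.
h = w = 32
ys, xs = np.mgrid[0:h, 0:w]
r = np.hypot(ys - (h - 1) / 2, xs - (w - 1) / 2)
lsm = 1.0 - 0.8 * (r / r.max()) ** 2

diff = np.zeros((h, w))            # every pixel below the threshold
mask = third_option_mask(diff, lsm)
```

With every pixel passing the threshold, the mask equals `ratio_high * lsm`, so the restored-sharp-image ratio peaks at the center and decreases toward the corners, as described above.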
  • FIG. 18 is a visual explanation of smoothing the transition in the final blend mask.
  • the blend_mask of the final blend mask takes only two values, i.e., the blend_ratio_high and the blend_ratio_low. Therefore, the boundary between the areas of the blend_ratio_high and the areas of the blend_ratio_low is very clear and sharp. When such a clear and sharp final blend mask is used to generate the target image, the target image may look unnatural to the human eye.
  • the electrical device 10 may add a blur to the final blend mask so as to naturally blend the areas of the blend_ratio_high and the areas of the blend_ratio_low.
  • the electrical device 10 adds the blur to the final blend mask by executing a simple convolution with a Gaussian blur kernel.
  • a blurry blend mask is generated.
  • the boundary between the areas of the blend_ratio_high and the areas of the blend_ratio_low is blurred.
  • the values of the pixels in the boundary area have transitional values between the blend_ratio_low and the blend_ratio_high. That is, in the boundary area, the values of the blend_mask are gradually increased from the blend_ratio_low to the blend_ratio_high.
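The smoothing step can be sketched as a plain convolution with a normalized Gaussian kernel; the kernel size and sigma below are arbitrary choices, and edge padding stands in for whatever border handling a real implementation uses:

```python
import numpy as np

def gaussian_kernel(size=7, sigma=1.5):
    """Separable 1-D Gaussian taken as an outer product, normalized to 1."""
    ax = np.arange(size) - size // 2
    g = np.exp(-(ax ** 2) / (2 * sigma ** 2))
    k = np.outer(g, g)
    return k / k.sum()

def blur_mask(mask, kernel):
    """Blur the final blend mask so the boundary between high- and
    low-ratio areas gets transitional values (a naive direct convolution
    with edge padding, for illustration only)."""
    kh, kw = kernel.shape
    ph, pw = kh // 2, kw // 2
    padded = np.pad(mask, ((ph, ph), (pw, pw)), mode="edge")
    out = np.empty_like(mask, dtype=float)
    for i in range(mask.shape[0]):
        for j in range(mask.shape[1]):
            out[i, j] = (padded[i:i + kh, j:j + kw] * kernel).sum()
    return out

final_mask = np.full((16, 16), 0.05)
final_mask[:, 8:] = 0.95                   # sharp vertical boundary
blurry_mask = blur_mask(final_mask, gaussian_kernel())
```

Pixels near the boundary now hold values between blend_ratio_low and blend_ratio_high, while pixels far from the boundary keep their original values.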
  • FIG. 19 is a visual explanation of a manner of generating the target image.
  • the target image is generated by masking the restored sharp image with the regular blend mask, masking the noise reduction image with the inverse blend mask, and combining them.
  • the first intermediate image can be generated by executing a pixel-wise multiplication of the restored sharp image and the regular blend mask.
  • the second intermediate image can be generated by executing a pixel-wise multiplication of the noise reduction image and the inverse blend mask. That is, the masking process can be implemented by the pixel-wise multiplication. Therefore, the target image can be generated by combining the first intermediate image and the second intermediate image.
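The masking and combining steps above reduce to pixel-wise arithmetic, as this sketch with synthetic images shows:

```python
import numpy as np

rng = np.random.default_rng(1)
restored_sharp = rng.uniform(size=(8, 8))   # stand-in restored sharp image
noise_reduced = rng.uniform(size=(8, 8))    # stand-in noise reduction image
regular_mask = rng.uniform(size=(8, 8))     # stand-in regular blend mask
inverse_mask = 1.0 - regular_mask           # inverse blend mask

# Masking is a pixel-wise multiplication; the target image is the sum of
# the two masked intermediate images.
first_intermediate = restored_sharp * regular_mask
second_intermediate = noise_reduced * inverse_mask
target = first_intermediate + second_intermediate
```

Because the two masks sum to 1 at every pixel, each target pixel is a convex blend of the corresponding restored-sharp and noise-reduced pixels.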
  • FIG. 20 is a flowchart of a target image generation process in the electrical device 10 according to the embodiment of the present disclosure.
  • the target image generation process can be executed by the main processor 40 or the image signal processor 42.
  • the target image generation process can be executed by a combination of the main processor 40 and the image signal processor 42.
  • the main processor 40 obtains the captured blurry image from an output port of the image signal processor 42. Then, the main processor 40 executes the target image generation process against the captured blurry image and inputs the generated target image to an input port of the image signal processor 42.
  • program instructions to implement the target image generation process may be stored on a non-transitory computer readable medium.
  • the main processor 40 reads out the program instructions from the non-transitory computer readable medium and executes the program instructions to implement the target image generation process.
  • the target image generation process may be executed against a brightness plane of YUV standards. That is, the brightness plane (Y plane) can be subjected to the target image generation process in the electrical device 10 according to the embodiment of the present disclosure. Of course, other planes or images may be subjected to the target image generation process disclosed herein.
  • the main processor 40 of the electrical device 10 obtains the captured blurry image captured by the camera assembly 30 including the optics, for example, from the image signal processor 42 (Step S10) .
  • the main processor 40 of the electrical device 10 applies the circular convolution to the captured blurry image with the inverse filter to generate the restored sharp image (Step S12) .
  • the details of this process have already been explained above.
  • the main processor 40 of the electrical device 10 generates the regular blend mask based on the captured blurry image (Step S14) .
  • the details of this process have already been explained above.
  • the main processor 40 of the electrical device 10 masks the restored sharp image with the regular blend mask to generate the first intermediate image (Step S16) .
  • the details of this process have already been explained above.
  • the main processor 40 of the electrical device 10 generates the inverse blend mask in which the regular blend mask is inversed (Step S18) .
  • the details of this process have already been explained above.
  • the main processor 40 of the electrical device 10 applies the noise reduction process to the captured blurry image to generate the noise reduction image (Step S19) .
  • the details of this process have already been explained above.
  • the main processor 40 of the electrical device 10 masks the noise reduction image with the inverse blend mask to generate the second intermediate image (Step S20) .
  • the details of this process have already been explained above.
  • the main processor 40 of the electrical device 10 combines the first intermediate image and the second intermediate image to generate the target image (Step S22) .
  • the details of this process have already been explained above.
  • the target image generation process according to the present embodiment ends.
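Steps S10 through S22 can be sketched end to end. Every component below is a stand-in: unsharp masking replaces the PSF-based inverse filter of Step S12, a box blur replaces the bilateral noise reduction of Step S19, and the mask generation of Step S14 is reduced to a single threshold on the difference image:

```python
import numpy as np

def box_blur(img, k=3):
    """Stand-in for the noise reduction process (the embodiment uses a
    bilateral filter; a box blur keeps this sketch short)."""
    p = k // 2
    padded = np.pad(img, p, mode="edge")
    out = np.zeros_like(img, dtype=float)
    for di in range(k):
        for dj in range(k):
            out += padded[di:di + img.shape[0], dj:dj + img.shape[1]]
    return out / (k * k)

def generate_target_image(blurry, ratio_high=0.95, ratio_low=0.05, thresh=0.1):
    # S12: stand-in restoration (unsharp masking instead of the true
    # inverse filter, which would require the measured PSF of the optics).
    restored = np.clip(2 * blurry - box_blur(blurry), 0, 1)
    # S14: regular blend mask from the difference image; small difference
    # is assumed to mean good texture, so the restored image dominates.
    diff = np.abs(restored - blurry)
    regular = np.where(diff < thresh, ratio_high, ratio_low)
    # S18: inverse blend mask.
    inverse = 1.0 - regular
    # S19: noise reduction image.
    nr = box_blur(blurry)
    # S16 / S20 / S22: mask both images and combine.
    return restored * regular + nr * inverse

target = generate_target_image(np.random.default_rng(2).uniform(size=(16, 16)))
```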
  • FIG. 21 illustrates a comparison between the target image generated by the prior art technology and the target image generated by the electrical device 10 according to the embodiment of the present disclosure.
  • the target image contains noise due to the filtering restoration process for improving the sharpness, or is blurry due to not having been subjected to the filtering restoration process.
  • the noise due to the filtering restoration process for improving the sharpness of the captured blurry image is eliminated by masking the noisy area of the restored sharp image and using the captured blurry image instead.
  • the sharpness of the texture of the target image is improved by the filtering restoration process by not masking the good texture area of the restored sharp image when generating the target image.
  • a more natural and sharper target image can be obtained for the user without cost increases.
  • the electrical device 10 enables simplifying the optics of the camera assembly 30 and reducing the number of lens elements of the optics of the camera assembly 30. A natural and high quality target image is simultaneously obtained without requiring large expensive optics of the camera assembly 30.
  • the target image generation process shown in FIG. 20 is applied to a luminance component (Y) .
  • the noise reduction process shown in FIG. 11A through FIG. 11E may be applied to chroma components (U, V) in addition to the luminance component (Y) . If the noise reduction process is applied to the chroma components (U, V) , the noise in the chroma components can also be reduced. Thereafter, the target image of the luminance component (Y) generated by the target image generation process shown in FIG. 20 and the chroma components (U, V) which have been subjected to the noise reduction process are outputted as an image defined by the YUV color model.
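A sketch of this YUV handling, with a box blur standing in for the bilateral noise reduction and unsharp masking standing in for the full Y-plane pipeline:

```python
import numpy as np

def box_blur(img, k=3):
    """Stand-in noise reduction (the embodiment uses a bilateral filter)."""
    p = k // 2
    padded = np.pad(img, p, mode="edge")
    out = np.zeros_like(img, dtype=float)
    for di in range(k):
        for dj in range(k):
            out += padded[di:di + img.shape[0], dj:dj + img.shape[1]]
    return out / (k * k)

rng = np.random.default_rng(3)
y, u, v = (rng.uniform(size=(16, 16)) for _ in range(3))

# Y plane: the full target image generation (abbreviated here to the
# restoration stand-in); U and V planes: noise reduction only.
y_target = np.clip(2 * y - box_blur(y), 0, 1)
u_out, v_out = box_blur(u), box_blur(v)
yuv_out = np.stack([y_target, u_out, v_out])   # output in the YUV color model
```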
  • first and second are used herein for purposes of description and are not intended to indicate or imply relative importance or significance or to imply the number of indicated technical features.
  • the feature defined with “first” and “second” may comprise one or more of this feature.
  • a plurality of means two or more than two, unless specified otherwise.
  • the terms “mounted” , “connected” , “coupled” and the like are used broadly, and may be, for example, fixed connections, detachable connections, or integral connections; may also be mechanical or electrical connections; may also be direct connections or indirect connections via intervening structures; may also be inner communications of two elements, which can be understood by those skilled in the art according to specific situations.
  • a structure in which a first feature is "on" or “below” a second feature may include an embodiment in which the first feature is in direct contact with the second feature, and may also include an embodiment in which the first feature and the second feature are not in direct contact with each other, but are contacted via an additional feature formed therebetween.
  • a first feature "on” , “above” or “on top of” a second feature may include an embodiment in which the first feature is right or obliquely “on” , “above” or “on top of” the second feature, or just means that the first feature is at a height higher than that of the second feature; while a first feature “below” , “under” or “on bottom of” a second feature may include an embodiment in which the first feature is right or obliquely “below” , "under” or “on bottom of” the second feature, or just means that the first feature is at a height lower than that of the second feature.
  • Any process or method described in a flow chart or described herein in other ways may be understood to include one or more modules, segments or portions of codes of executable instructions for achieving specific logical functions or steps in the process, and the scope of a preferred embodiment of the present disclosure includes other implementations, in which it should be understood by those skilled in the art that functions may be implemented in a sequence other than the sequences shown or discussed, including in a substantially identical sequence or in an opposite sequence.
  • the logic and/or step described in other manners herein or shown in the flow chart, for example, a particular sequence table of executable instructions for realizing the logical function may be specifically achieved in any computer readable medium to be used by the instruction execution system, device or equipment (such as the system based on computers, the system comprising processors or other systems capable of obtaining the instruction from the instruction execution system, device and equipment and executing the instruction) , or to be used in combination with the instruction execution system, device and equipment.
  • the computer readable medium may be any device adaptive for including, storing, communicating, propagating or transferring programs to be used by or in combination with the instruction execution system, device or equipment.
  • the computer readable medium comprises, but is not limited to: an electronic connection (an electronic device) with one or more wires, a portable computer enclosure (a magnetic device) , a random access memory (RAM) , a read only memory (ROM) , an erasable programmable read-only memory (EPROM or a flash memory) , an optical fiber device and a portable compact disk read-only memory (CDROM) .
  • the computer readable medium may even be a paper or other appropriate medium capable of printing programs thereon, this is because, for example, the paper or other appropriate medium may be optically scanned and then edited, decrypted or processed with other appropriate methods when necessary to obtain the programs in an electric manner, and then the programs may be stored in the computer memories.
  • each part of the present disclosure may be realized by the hardware, software, firmware or their combination.
  • a plurality of steps or methods may be realized by the software or firmware stored in the memory and executed by the appropriate instruction execution system.
  • the steps or methods may be realized by one or a combination of the following techniques known in the art: a discrete logic circuit having a logic gate circuit for realizing a logic function of a data signal, an application-specific integrated circuit having an appropriate combination logic gate circuit, a programmable gate array (PGA) , a field programmable gate array (FPGA) , etc.
  • each function cell of the embodiments of the present disclosure may be integrated in a processing module, or these cells may be separate physical existence, or two or more cells are integrated in a processing module.
  • the integrated module may be realized in a form of hardware or in a form of software function modules. When the integrated module is realized in a form of software function module and is sold or used as a standalone product, the integrated module may be stored in a computer readable storage medium.
  • the storage medium mentioned above may be read-only memories, magnetic disks, CD, etc.

Abstract

A method of generating a target image according to the embodiment of the present disclosure includes: obtaining a captured blurry image captured by a camera assembly including an optics; applying a restoration process to the captured blurry image with a restoration filter to generate a restored sharp image; generating a regular blend mask based on the captured blurry image; masking the restored sharp image with the regular blend mask to generate a first intermediate image; generating an inverse blend mask; applying a noise reduction process to the captured blurry image to reduce noise in the captured blurry image and to generate a noise reduction image, wherein the noise reduction process is adjusted based on adjusting information; masking the noise reduction image with the inverse blend mask to generate a second intermediate image; and combining the first intermediate image and the second intermediate image.

Description

METHOD OF GENERATING TARGET IMAGE, ELECTRICAL DEVICE, AND NON-TRANSITORY COMPUTER READABLE MEDIUM TECHNICAL FIELD
The present disclosure relates to a method of generating a target image, an electrical device, and a non-transitory computer readable medium.
BACKGROUND
Electrical devices such as smartphones and tablet terminals are widely used in our daily life. Nowadays, many of the electrical devices are equipped with a camera assembly for capturing images. Some of the electrical devices are portable and are thus easy to carry. Therefore, a user of the electrical device can easily take a picture of an object by using the camera assembly of the electrical device anytime, anywhere.
When capturing an image with the camera assembly, a sharpness of the image is degraded by an optical aberration, such as coma, astigmatism and so on. In the past decade, many academic papers have proposed techniques that improve the sharpness of a captured blurry image degraded by such optical aberration. However, most techniques disclosed are very complicated or require high computational costs. Therefore, these techniques are not practical, and are not fit for implementation in electrical devices such as smartphones and tablet terminals because a computation capability thereof is not sufficiently high.
SUMMARY
The present disclosure aims to solve at least one of the technical problems mentioned above. Accordingly, the present disclosure needs to provide a method of generating a target image, an electrical device and a non-transitory computer readable medium.
In accordance with the present disclosure, a method of generating a target image may include:
obtaining a captured blurry image captured by a camera assembly including an optics;
applying a restoration process to the captured blurry image with a restoration filter to generate a restored sharp image, the restoration filter being a filter to restore a sharpness of the captured blurry image based on optical characteristics of the optics;
generating a regular blend mask based on the captured blurry image;
masking the restored sharp image with the regular blend mask to generate a first intermediate image;
generating an inverse blend mask in which the regular blend mask is inversed;
applying a noise reduction process to the captured blurry image to reduce noise in the captured blurry image and to generate a noise reduction image, wherein the noise reduction process is adjusted based on adjusting information depending on filtering characteristics of the restoration filter and/or the optical characteristics of the optics;
masking the noise reduction image with the inverse blend mask to generate a second intermediate image; and
combining the first intermediate image and the second intermediate image to generate the target image.
In accordance with the present disclosure, an electrical device may include:
a camera assembly including an optics; and
a processor configured to:
obtain a captured blurry image captured by a camera assembly including an optics;
apply a restoration process to the captured blurry image with a restoration filter to generate a restored sharp image, the restoration filter being a filter to restore a sharpness of the captured blurry image based on optical characteristics of the optics;
generate a regular blend mask based on the captured blurry image;
mask the restored sharp image with the regular blend mask to generate a first intermediate image;
generate an inverse blend mask in which the regular blend mask is inversed;
apply a noise reduction process to the captured blurry image to reduce noise in the captured blurry image and to generate a noise reduction image, wherein the noise reduction process is adjusted based on adjusting information depending on filtering characteristics of the restoration filter and/or the optical characteristics of the optics;
mask the noise reduction image with the inverse blend mask to generate a second intermediate image; and
combine the first intermediate image and the second intermediate image to generate a target image.
In accordance with the present disclosure, a non-transitory computer readable medium including program instructions stored thereon, wherein, when the program instructions are executed by an electrical device, the program instructions cause the electrical device to perform at least the following:
obtaining a captured blurry image captured by a camera assembly including an optics;
applying a restoration process to the captured blurry image with a restoration filter to generate a restored sharp image, the restoration filter being a filter to restore a sharpness of the captured blurry image based on optical characteristics of the optics;
generating a regular blend mask based on the captured blurry image;
masking the restored sharp image with the regular blend mask to generate a first intermediate image;
generating an inverse blend mask in which the regular blend mask is inversed;
applying a noise reduction process to the captured blurry image to reduce noise in the captured blurry image and to generate a noise reduction image, wherein the noise reduction process is adjusted based on adjusting information depending on filtering characteristics of the restoration filter and/or the optical characteristics of the optics;
masking the noise reduction image with the inverse blend mask to generate a second intermediate image; and
combining the first intermediate image and the second intermediate image to generate a target image.
BRIEF DESCRIPTION OF THE DRAWINGS
These and/or other aspects and advantages of embodiments of the present disclosure will become apparent and more readily appreciated from the following descriptions made with reference to the drawings, in which:
FIG. 1 is a plan view of a first side of an electrical device according to an embodiment of the present disclosure;
FIG. 2 is a plan view of a second side of the electrical device according to the embodiment of the present disclosure;
FIG. 3 is a block diagram of the electrical device according to the embodiment of the present disclosure;
FIG. 4 is an explanatory drawing of an optical aberration in an optics;
FIG. 5 illustrates a formula which indicates a relationship between a captured blurry image and an ideal sharp image;
FIG. 6 is a visual explanation of a manner to improve a sharpness of the captured blurry image in the electrical device according to the embodiment of the present disclosure;
FIG. 7 illustrates a cost function c (L) including a fidelity term and a regularization term;
FIG. 8 is a formula showing a manner of computing an inverse filter based on the cost function c (L) ;
FIG. 9 shows an image of a blur kernel K and an image of an inverse filter in the spatial domain;
FIG. 10 illustrates a problem of a restored sharp image;
FIG. 11A illustrates an outline of a blending process according to the electrical device of the embodiment of the present disclosure;
FIG. 11B shows a formula of a bilateral filter which is one example of filters used in a noise reduction process.
FIG. 11C shows one example of a part of the captured blurry image.
FIG. 11D shows one example of a formula of a spatial weight W s and an intensity weight W i in the bilateral filter.
FIG. 11E shows a captured blurry image before applying the noise reduction process by using the bilateral filter and a noise reduction image after applying the noise reduction process by using the bilateral filter.
FIG. 11F shows one example of a restoration filter array of the restoration filter.
FIG. 12 illustrates one example of how to generate a regular blend mask in the electrical device according to the embodiment of the present disclosure;
FIG. 13 illustrates one example of a look-up table to modulate the captured blurry image nonlinearly;
FIG. 14 is a visual explanation of a lens shading model in the electrical device of the embodiment of the present disclosure;
FIG. 15 is a visual explanation of a first option of reflecting shading characteristics on a regular blend mask;
FIG. 16 is a visual explanation of a second option of reflecting the shading characteristics on the regular blend mask;
FIG. 17 is a visual explanation of a third option of reflecting the shading characteristics on the regular blend mask;
FIG. 18 is a visual explanation of a smooth transition in a final blend mask;
FIG. 19 is a visual explanation of a manner to generate a target image in the electrical device according to the embodiment of the present disclosure;
FIG. 20 is a flowchart of a target image generation process in the electrical device according to the embodiment of the present disclosure; and
FIG. 21 illustrates a comparison between a target image generated by the prior art technology and a target image generated by the electrical device according to the embodiment of the present disclosure.
FIG. 22 shows one example in a case where the noise reduction process is applied to chroma components (U, V) in addition to luminance component (Y) .
DETAILED DESCRIPTION
Embodiments of the present disclosure will be described in detail and examples of the embodiments will be illustrated in the accompanying drawings. The same or similar elements and the elements having same or similar functions are denoted by like reference numerals throughout the descriptions. The embodiments described herein with reference to the drawings are explanatory, which aim to illustrate the present disclosure, but shall not be construed to limit the present disclosure.
FIG. 1 is a plan view of a first side of an electrical device 10 according to an embodiment of the present disclosure and FIG. 2 is a plan view of a second side of the electrical device 10 according to the embodiment of the present disclosure. The first side may be referred to as a back side of the electrical device 10 whereas the second side may be referred to as a front side of the electrical device 10.
As shown in FIG. 1 and FIG. 2, the electrical device 10 may include a display 20 and a camera assembly 30. In the present embodiment, the camera assembly 30 includes a first main camera 32, a second main camera 34 and a sub camera 36. The first main camera 32 and the  second main camera 34 can capture an image in the first side of the electrical device 10 and the sub camera 36 can capture an image in the second side of the electrical device 10. Therefore, the first main camera 32 and the second main camera 34 are so-called out-cameras whereas the sub camera 36 is a so-called in-camera. As an example, the electrical device 10 can be a mobile phone, a tablet computer, a personal digital assistant, and so on.
Each of the first main camera 32, the second main camera 34 and the sub camera 36 has an imaging sensor which converts a light which has passed a color filter to an electrical signal. A signal value of the electrical signal depends on an amount of the light which has passed the color filter.
Although the electrical device 10 according to the present embodiment has three cameras, the electrical device 10 may have less than three cameras or more than three cameras. For example, the electrical device 10 may have two, four, five, and so on, cameras.
FIG. 3 is a block diagram of the electrical device 10 according to the present embodiment. As shown in FIG. 3, in addition to the display 20 and the camera assembly 30, the electrical device 10 may include a main processor 40, an image signal processor 42, a memory 44, a power supply circuit 46 and a communication circuit 48. The display 20, the camera assembly 30, the main processor 40, the image signal processor 42, the memory 44, the power supply circuit 46 and the communication circuit 48 are connected with each other via a bus 50.
The main processor 40 executes one or more program instructions stored in the memory 44. The main processor 40 implements various applications and data processing of the electrical device 10 by executing the program instructions. The main processor 40 may be one or more computer processors. The main processor 40 is not limited to one CPU core, but it may have a plurality of CPU cores. The main processor 40 may be a main CPU of the electrical device 10, an image processing unit (IPU) or a DSP provided with the camera assembly 30.
The image signal processor 42 controls the camera assembly 30 and processes various kinds of image data captured by the camera assembly 30 to generate a target image data. For example, the image signal processor 42 can apply a demosaicing process, a noise reduction process, an auto exposure process, an auto focus process, an auto white balance process, a high dynamic range process and so on, to the image data captured by the camera assembly 30.
In the present embodiment, the main processor 40 and the image signal processor 42 collaborate with each other to generate a target image data of the object captured by the camera assembly 30. That is, the main processor 40 and the image signal processor 42 are configured to capture the image of the object by means of the camera assembly 30 and apply various kinds of image processing to the captured image data.
The memory 44 stores program instructions to be executed by the main processor 40, and various kinds of data. For example, data of the captured image are also stored in the memory 44.
The memory 44 may include a high-speed RAM memory, and/or a non-volatile memory such as a flash memory and a magnetic disk memory. That is, the memory 44 may include a non-transitory computer readable medium in which the program instructions are stored.
The power supply circuit 46 may have a battery such as a lithium-ion rechargeable battery and a battery management unit (BMU) for managing the battery.
The communication circuit 48 is configured to receive and transmit data to communicate with base stations of the telecommunication network system, the Internet or other devices via wireless communication. The wireless communication may adopt any communication standard or protocol, including but not limited to GSM (Global System for Mobile communication) , CDMA (Code Division Multiple Access) , LTE (Long Term Evolution) , LTE-Advanced, 5th generation (5G) . The communication circuit 48 may include an antenna and an RF (radio frequency) circuit.
FIG. 4 is an explanatory drawing of an optical aberration. That is, when capturing an image in the camera assembly 30, a sharpness of the image is degraded by the optical aberration in an optics, such as coma, astigmatism and so on. Therefore, a point light source image is spread by  the optical aberration in the optics, and the point light source image is not a point anymore on an image plane.
Generally, the spread point light source is modeled by a function referred to as the Point Spread Function (PSF) , which represents the manner in which the captured image is degraded and its degradation characteristics. This is known as an optical blur. Hereinafter, the blurry image captured by the camera assembly 30 is also referred to as the captured blurry image.
FIG. 5 illustrates a formula which indicates a relationship between the captured blurry image B and an ideal sharp image L. As shown in FIG. 5, the sharpness of the captured blurry image B is expressed by K*L+n. The K indicates a blur kernel which is the same as the PSF. The L indicates the ideal sharp image which has no noise, i.e., which is ideal. The n indicates a noise. The "*" indicates a circular convolution. The formula shows that the image captured by the camera assembly 30 always includes the noise. This noise is also referred to as a shot noise because the noise is unavoidable when taking an image.
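The formula B = K * L + n can be simulated directly. The kernel and noise level below are arbitrary stand-ins, not the characteristics of any real optics:

```python
import numpy as np

def circular_convolve(img, kernel):
    """Circular convolution via the FFT, matching the '*' in B = K * L + n."""
    kh, kw = kernel.shape
    pad = np.zeros_like(img, dtype=float)
    pad[:kh, :kw] = kernel
    # Shift the kernel so its center sits at the origin of the image grid.
    pad = np.roll(pad, (-(kh // 2), -(kw // 2)), axis=(0, 1))
    return np.real(np.fft.ifft2(np.fft.fft2(img) * np.fft.fft2(pad)))

rng = np.random.default_rng(4)
L_ideal = rng.uniform(size=(32, 32))           # ideal sharp image L
K = np.ones((5, 5)) / 25.0                     # stand-in blur kernel (PSF)
n = 0.01 * rng.standard_normal((32, 32))       # additive noise stand-in
B = circular_convolve(L_ideal, K) + n          # captured blurry image B
```

Because the kernel sums to 1, the circular convolution preserves the mean brightness; only the noise term shifts it slightly.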
FIG. 6 is a visual explanation of a manner to improve the sharpness of the captured blurry image in the electrical device 10 according to the embodiment of the present disclosure. In the present embodiment, the electrical device 10 obtains a restored sharp image L_res via a filtering restoration process which uses an inverse filter computed from the PSF. The restored sharp image L_res is closer to the ideal sharp image L than the captured blurry image B is. However, the noise in the captured blurry image B also passes through the filtering restoration process and is increased thereby. The inverse filter is one example of restoration filters, which are filters to restore a sharpness of the captured blurry image based on optical characteristics of the optics.
Next, it will be explained how to compute the inverse filter from the blur kernel K, which is the PSF. FIG. 7 illustrates a cost function c (L) in the present embodiment. As shown in FIG. 7, the cost function c (L) is basically expressed by the argument of the minimum of "the captured blurry image B - the blur kernel K * the ideal sharp image L". That is, this term is a fidelity term. In addition, in the formula shown in FIG. 7, a regularization term is also introduced into the cost function c (L) in order to apply a penalty. That is, by introducing the regularization term into the cost function c (L), overfitting can be avoided. As a result, an increase of the noise in the restored sharp image L_res can be suppressed. Here, D indicates a regularization function, and ρ indicates a regularization gain.
FIG. 8 is a formula showing a manner of computing the inverse filter based on the cost function c (L). The Fourier transform F (L) in FIG. 8 can be obtained by solving the cost function c (L) in FIG. 7 in the frequency domain. The restored sharp image L_res can be obtained by inversely transforming the Fourier transform F (L). Since the restored sharp image L_res can be obtained by the inverse filter K̂^-1 * the captured blurry image B, the inverse filter K̂^-1 can be specified in this formula.
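A minimal sketch of this frequency-domain solution follows, under the assumption that the regularization function D is the identity (a Tikhonov-style choice; the actual D of FIG. 7 may differ), with `rho` playing the role of the regularization gain ρ:

```python
import numpy as np

def regularized_inverse_filter(B, K, rho=1e-3):
    # Inverse filter in the frequency domain, derived from the cost function:
    # F(L) = conj(F(K)) * F(B) / (|F(K)|^2 + rho), assuming D is the identity.
    FK = np.fft.fft2(K, s=B.shape)
    F_inv = np.conj(FK) / (np.abs(FK) ** 2 + rho)
    # L_res is obtained by inversely transforming F(inverse filter) * F(B),
    # i.e. a circular convolution of the inverse filter and B.
    return np.real(np.fft.ifft2(F_inv * np.fft.fft2(B)))
```

With a small ρ and a noiseless input, the restored image is close to the ideal sharp image; with noise present, the same filter amplifies the noise, which is the problem the blending process described below addresses.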
FIG. 9 shows an image of the blur kernel K and an image of the inverse filter K̂^-1 in the spatial domain. The reason why a hat is added to the inverse filter K^-1 is that the blur kernel K is not an actual measurement value, but a designed value. The designed value of the blur kernel K is not equal to the actual measurement value of the blur kernel K because of an assembling error and a dimensional error of components of the camera assembly 30 and the electrical device 10. Therefore, the inverse filter K̂^-1 is slightly different from the actual inverse filter K^-1 of the electrical device 10.
The process to obtain the actual measurement value of the blur kernel K is very complicated and burdensome for the electrical device 10 and the user thereof. Moreover, the inverse filter K̂^-1 is fine enough to restore the captured blurry image B. Therefore, the electrical device 10 according to the present embodiment computes the inverse filter K̂^-1 based on the designed value of the blur kernel K.
FIG. 10 illustrates a problem of the restored sharp image. As shown in FIG. 10, the captured blurry image contains the noise. As a result, if the circular convolution is applied to the captured blurry image with the inverse filter K̂^-1, the sharpness of the restored sharp image is improved but the noise in the captured blurry image is also increased.
Moreover, since the captured blurry image contains noise, it is desirable to reduce the noise in the captured blurry image as much as possible before generating the target image. The noise can be reduced by applying a noise reduction process to the captured blurry image to generate a noise reduction image.
Therefore, in order to solve these problems, the electrical device 10 according to the embodiment of the present disclosure introduces a blending process for blending the restored sharp image, to which the filtering restoration process is applied, and the noise reduction image, to which the noise reduction process is applied instead of the filtering restoration process.
FIG. 11A illustrates an outline of the blending process in the electrical device 10 of the embodiment of the present disclosure. In the blending process, the circular convolution is applied to the captured blurry image with the inverse filter K̂^-1 to generate the restored sharp image.
Even if the noise is included in a high frequency area of the image, it can hardly be detected by the human eye. Conversely, if the noise is included in a low frequency area of the image, it can be detected by the human eye and is readily noticeable.
Therefore, in the electrical device 10 according to the embodiment of the present disclosure, a low frequency area in the restored sharp image is masked with a regular blend mask to generate a first intermediate image.
On the other hand, the captured blurry image also includes the noise. Therefore, in order to reduce the noise in the captured blurry image, the noise reduction process is applied to the captured blurry image. After applying the noise reduction process to the captured blurry image, the noise reduction image in which the noise is reduced is obtained.
The noise reduction image is masked with an inverse blend mask to generate a second intermediate image. The inverse blend mask is a mask in which the regular blend mask is inverted; that is, the inverse blend mask can be obtained by inverting the regular blend mask. Thereafter, the electrical device 10 according to the present embodiment combines the first intermediate image and the second intermediate image to generate a target image.
By generating the target image through this blending process, the low frequency area including the noise in the restored sharp image is replaced with the corresponding low frequency area of the noise reduction image, in which the noise has been removed by the noise reduction process. Therefore, the noise is not contained in the low frequency area in the target image. On the other hand, the sharpness in the high frequency area in the target image is improved because of the filtering restoration process. That is, a texture in the high frequency area in the target image can be fine.
Furthermore, the noise in the low frequency area in the captured blurry image is reduced by the noise reduction process to generate the noise reduction image. Therefore, the noise in the low frequency area in the target image generated by combining the restored sharp image and the noise reduction image is very little.
In general, the texture in the high frequency area is degraded through the noise reduction process. However, in the present embodiment, the texture in the high frequency area in the noise reduction image is replaced with the texture in the high frequency area in the restored sharp image. Therefore, the noise reduction process can be adjusted so that the noise in the low frequency area of the captured blurry image can effectively be reduced. As a result, the user satisfaction with the target image is improved.
FIG. 11B shows a formula of a bilateral filter, which is one example of filters used in the noise reduction process. FIG. 11C shows one example of a part of the captured blurry image. The bilateral filter is known to be able to remove the noise while preserving image edges. However, the bilateral filter is just one example of a filter used in the noise reduction process, and other methods for reducing the noise may be applied to the noise reduction process.
As shown in FIG. 11B, the bilateral filter can be expressed by:
q_i, j = Σ_n, m [ ( (W_s .* W_i) / Σ_n, m (W_s .* W_i) ) .* I (i, j) ]
The q_i, j indicates a denoised pixel, and the p_i, j indicates a center pixel value in the filter kernel. The first item of the right side of the formula is the filter kernel, which is a normalized weight, and the second item of the right side of the formula is the image block I (i, j) to be processed in the captured blurry image. The i and the j indicate an x-position and a y-position in the captured blurry image, respectively, and the center coordinate of the filter kernel. The ".*" indicates an element-wise multiplication.
The W_s indicates a spatial weight and the W_i indicates an intensity weight. FIG. 11D shows one example of formulas of the spatial weight W_s and the intensity weight W_i in the bilateral filter. As shown in FIG. 11D, the spatial weight W_s is a predetermined Gaussian filter kernel, and a size of the Gaussian filter kernel is the same as a size of the image block I (i, j). The H indicates a size in an x-direction and the V indicates a size in a y-direction of the Gaussian filter kernel.
The spatial weight W_s is expressed by:
W_s (n, m) = exp (- (n^2 + m^2) / (2δ_s^2))
where the n indicates an x-position and the m indicates a y-position in the Gaussian filter kernel. The parameter δ_s indicates a spatial weight variance. As understood from this formula, the spatial weight W_s can be controlled by δ_s. That is, the larger the parameter δ_s is, the stronger the effectiveness of smoothing is.
The intensity weight W_i is expressed by:
W_i = exp (- (I (i, j) - p_i, j)^2 / (2δ_i^2))
where the parameter δ_i indicates an intensity weight variance and the square indicates an element-wise multiplication. The (I (i, j) - p_i, j)^2 indicates an intensity difference between an intensity of each pixel and a value of the p_i, j. In the present embodiment, as understood from this formula, the intensity weight W_i can be controlled by the parameter δ_i. That is, the smaller the parameter δ_i is, the stronger the effectiveness of edge preserving is. In other words, the smaller the parameter δ_i is, the weaker the effectiveness of smoothing is.
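Putting the formulas of FIG. 11B through FIG. 11D together, the bilateral filter can be sketched as below. Edge padding at the image borders is an implementation assumption, as are the default kernel size and parameter values.

```python
import numpy as np

def bilateral_filter(img, H=5, V=5, delta_s=2.0, delta_i=0.1):
    """Sketch of the bilateral filter of FIG. 11B (assumed border handling)."""
    hh, vv = H // 2, V // 2
    # Spatial weight W_s: a fixed Gaussian kernel of the same size as the block.
    n, m = np.meshgrid(np.arange(-hh, hh + 1), np.arange(-vv, vv + 1))
    Ws = np.exp(-(n ** 2 + m ** 2) / (2 * delta_s ** 2))
    padded = np.pad(img, ((vv, vv), (hh, hh)), mode="edge")
    out = np.empty_like(img, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            block = padded[i:i + V, j:j + H]  # image block I(i, j)
            p = img[i, j]                     # center pixel value p_i,j
            # Intensity weight W_i: penalizes pixels far from the center value.
            Wi = np.exp(-((block - p) ** 2) / (2 * delta_i ** 2))
            w = Ws * Wi
            out[i, j] = np.sum(w / w.sum() * block)  # denoised pixel q_i,j
    return out
```

Applied to a noisy flat patch, the filter lowers the pixel variance while the intensity weight keeps strong edges from being averaged away.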
FIG. 11E shows a captured blurry image before applying the noise reduction process by using the bilateral filter mentioned above and the noise reduction image after applying the noise reduction process by using the bilateral filter mentioned above.
As shown in FIG. 11E, through applying the noise reduction process by using the bilateral filter, the noise in flat areas is reduced while edges are preserved in the noise reduction image. In addition, the spatial weight W_s and the intensity weight W_i in the bilateral filter can be controlled by the parameter δ_s and the parameter δ_i, respectively. Therefore, in the present embodiment, the parameter δ_s and the parameter δ_i are adjusted based on adjusting information depending on filtering characteristics of the restoration filter used by the restoration process and/or the optical characteristics of the optics in the camera assembly 30.
For example, the adjusting information may depend on a kernel size of the restoration filter, a filter strength of the restoration filter, frequency characteristics of the restoration filter, spatial characteristics of the optics of the camera assembly 30, and/or the point spread function (PSF) of the optics of the camera assembly 30.
FIG. 11F shows one example of a restoration filter array of the restoration filter. As shown in FIG. 11F, in a center region of the restoration filter array, the point spread function (PSF) is small and isotropic and it has high brightness. Therefore, good sharpness recovery and relatively small noise in the restored sharp image can be expected even after applying the restoration process. As a result, the parameter δ_s can be small for less smoothing and the parameter δ_i can be large for less edge preserving.
On the other hand, in corner regions of the restoration filter array, the point spread function (PSF) is large and anisotropic and it has low brightness (lens shading). Therefore, good sharpness recovery cannot be expected even after applying the restoration process. Moreover, much noise is generated due to the low brightness. As a result, the parameter δ_s should be large for more smoothing and the parameter δ_i should be small for more edge preserving.
From the center region to the corner region in the captured blurry image, the parameter δ_s may become larger gradually and the parameter δ_i may become smaller gradually.
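The gradual change from the center to the corners can be sketched with a radial interpolation. The endpoint values of δ_s and δ_i below are assumptions for illustration, not values from the embodiment.

```python
import numpy as np

def spatially_varying_params(h, w, ds_center=1.0, ds_corner=3.0,
                             di_center=0.2, di_corner=0.05):
    """Per-pixel delta_s / delta_i maps based on normalized radial distance."""
    y, x = np.mgrid[0:h, 0:w]
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    r = np.hypot(y - cy, x - cx)
    r = r / r.max()  # 0 at the image center, 1 at the corners
    delta_s = ds_center + (ds_corner - ds_center) * r  # larger toward corners
    delta_i = di_center + (di_corner - di_center) * r  # smaller toward corners
    return delta_s, delta_i
```

Each pixel of the bilateral filter could then look up its own parameter pair instead of using a single global setting.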
FIG. 12 illustrates one example of how to generate the regular blend mask in the electrical device 10 according to the embodiment of the present disclosure. As shown in FIG. 12, the regular blend mask is generated based on the captured blurry image. More specifically, the electrical device 10 obtains the captured blurry image, for example, from the camera assembly 30 or the image signal processor 42. Thereafter, the electrical device 10 applies the circular convolution to the captured blurry image with the inverse filter to improve the sharpness of the captured blurry image and to generate the restored sharp image. However, if the restored sharp image has already been generated in another process, this process can be omitted.
In addition, the electrical device 10 executes an averaging process against the captured blurry image to degrade the sharpness of the captured blurry image and to generate an averaged image.
Then, the electrical device 10 subtracts the restored sharp image from the averaged image to generate a subtracted image and then calculates an absolute value of the subtracted image to generate a difference image. The difference image which can be obtained through these processes shows how much the brightness of the pixels is changed by the filtering restoration process with the inverse filter. In other words, each pixel in the difference image has a specific value which indicates a level of the brightness change due to the filtering restoration process.
In FIG. 12, the gray areas indicate areas where the brightness has significantly been changed by the filtering restoration process with the inverse filter. On the other hand, the black areas indicate areas where the brightness has not been changed so much by the filtering restoration process with the inverse filter.
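The averaging, subtraction, and absolute-value steps of FIG. 12 can be sketched as follows; the size of the averaging kernel is an assumption.

```python
import numpy as np

def difference_image(restored_sharp, captured_blurry, avg_size=5):
    """Average the captured blurry image, subtract the restored sharp image,
    and take the absolute value to obtain the difference image of FIG. 12."""
    k = np.zeros_like(captured_blurry, dtype=float)
    k[:avg_size, :avg_size] = 1.0 / (avg_size * avg_size)
    # Center the averaging kernel at the origin for a circular convolution.
    k = np.roll(k, (-(avg_size // 2), -(avg_size // 2)), axis=(0, 1))
    averaged = np.real(np.fft.ifft2(np.fft.fft2(k) * np.fft.fft2(captured_blurry)))
    return np.abs(averaged - restored_sharp)
```

Pixels where the filtering restoration process changed the brightness strongly come out bright in this map, matching the gray areas of FIG. 12.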
On the other hand, the electrical device 10 nonlinearly modulates the captured blurry image to generate a temporary threshold map. In the present embodiment, a look-up table LUT is used to modulate the captured blurry image nonlinearly.
FIG. 13 illustrates one example of the look-up table LUT to nonlinearly modulate the captured blurry image. In the example of the look-up table, if the value of the pixel of the captured blurry image is low, the value of the pixel of the temporary threshold map is raised from its original value. On the other hand, if the value of the pixel of the captured blurry image is within the middle range, the value of the pixel of the temporary threshold map is lowered from its original value. Furthermore, if the value of the pixel of the captured blurry image is more than a certain value, the value of the pixel of the temporary threshold map is capped at a certain value.
FIG. 13 is merely one example of the look-up table LUT; the look-up table is not limited to the example shown in FIG. 13. Moreover, the method to nonlinearly modulate the captured blurry image is not limited to using the look-up table LUT. Other various methods may be applied to nonlinearly modulate the captured blurry image.
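A look-up table in the spirit of FIG. 13 can be sketched with interpolation over a few control points. The control points below are assumed for illustration and are not the actual table of the embodiment.

```python
import numpy as np

# Assumed control points: low inputs are raised, mid-range inputs are
# lowered, and high inputs are capped at a constant value.
lut_in = np.array([0.00, 0.25, 0.50, 0.75, 1.00])
lut_out = np.array([0.15, 0.30, 0.35, 0.60, 0.60])

def nonlinear_modulate(img):
    """Pixel-wise nonlinear modulation of the captured blurry image into a
    temporary threshold map via linear interpolation of the LUT."""
    return np.interp(img, lut_in, lut_out)
```

A real implementation might store a 256-entry table indexed directly by 8-bit pixel values; the interpolation here is just a compact stand-in.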
Next, as shown in FIG. 12, the electrical device 10 according to the embodiment of the present disclosure multiplies the temporary threshold map by a certain value to adjust a gain of the temporary threshold map, and then a final threshold map can be calculated. The certain value may be less than one or more than one. In the final threshold map, each pixel indicates the threshold value as to whether the pixel of the restored sharp image should be masked when generating the first intermediate image.
More specifically, the electrical device 10 generates the regular blend mask based on the difference image and the final threshold map through a thresholding process. In the thresholding process, if the value of the diff_img, which is the value of the pixel of the difference image, is equal to or larger than the value of the th_map, which is the value of the pixel of the final threshold map, then the value of the blend_mask, which is the value of the pixel of the regular blend mask, is the blend_ratio_high. For example, the blend_ratio_high is 95%.
On the other hand, if the value of the diff_img is less than the value of the th_map, then the value of the blend_mask is the blend_ratio_low. For example, the blend_ratio_low is 5%.
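The thresholding process above reduces to a pixel-wise comparison; the 95%/5% ratios follow the example in the text.

```python
import numpy as np

BLEND_RATIO_HIGH = 0.95  # example value from the text
BLEND_RATIO_LOW = 0.05   # example value from the text

def thresholding(diff_img, th_map):
    """Pixels whose brightness change (diff_img) reaches the threshold
    (th_map) get blend_ratio_high; all other pixels get blend_ratio_low."""
    return np.where(diff_img >= th_map, BLEND_RATIO_HIGH, BLEND_RATIO_LOW)
```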
In the present embodiment, when the regular blend mask is used, if the blend_mask of the pixel of the regular blend mask is the blend_ratio_high (95%), then the first intermediate image is generated such that the pixel of the first intermediate image contains 95% of the corresponding pixel of the restored sharp image and 5% of the corresponding pixel of the captured blurry image.
On the other hand, if the blend_mask of the pixel of the regular blend mask is the blend_ratio_low (5%), then the first intermediate image is generated such that the pixel of the first intermediate image contains 5% of the corresponding pixel of the restored sharp image and 95% of the corresponding pixel of the captured blurry image. In the present embodiment, the sum of the blend_ratio_high and the blend_ratio_low should be one (100%).
The value of the restored sharp image and the value of the captured blurry image are blended in the same pixel so that the boundary between the area where the restored sharp image is used and the area where the captured blurry image is used cannot be distinguished by the human eye in the target image.
Through the thresholding process, small spread gray areas, such as the noise in the difference image, are eliminated. When restoring the sharpness of the captured blurry image, the bright areas in the restored sharp image include the noise. Therefore, in the present embodiment, the value of the bright areas of the final threshold map is high. As a result, the bright areas in the regular blend mask become the blend_ratio_low, and the bright areas in the restored sharp image are masked with the regular blend mask.
The inverse blend mask can be generated by inverting the regular blend mask. That is, by inverting the regular blend mask, a pixel of the blend_ratio_high (95%) in the regular blend mask is converted to the blend_ratio_low (5%) in the inverse blend mask, whereas a pixel of the blend_ratio_low (5%) in the regular blend mask is converted to the blend_ratio_high (95%) in the inverse blend mask.
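Since the two ratios sum to one, inverting the regular blend mask can be sketched as a subtraction:

```python
import numpy as np

def invert_blend_mask(blend_mask, ratio_high=0.95, ratio_low=0.05):
    """Swap blend_ratio_high and blend_ratio_low pixel-wise; because the two
    ratios sum to one, this reduces to subtracting the mask from their sum."""
    return (ratio_high + ratio_low) - blend_mask
```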
Incidentally, in the example mentioned above, the blend_ratio_high is less than 100% and the blend_ratio_low is more than 0%. However, the blend_ratio_high may be 100% and the blend_ratio_low may be 0% in the regular blend mask. In this case, the blend_ratio_high is also 100% and the blend_ratio_low is also 0% in the inverse blend mask. In any case, the value of the blend_ratio_high is higher than the value of the blend_ratio_low.
In accordance with characteristics of the optics in the camera assembly 30, the brightness decreases around the corners of the captured image. That is, the amount of the light in the corners is low due to lens shading (a.k.a. vignetting). Therefore, in some cases, the captured blurry image has already been compensated to correct the shading characteristics. To compensate, the electrical device 10 multiplies the captured image by an appropriate gain. However, the noise characteristics strongly depend on this compensation. Therefore, the noise characteristics may be taken into account when generating the regular blend mask.
Optionally, in the electrical device 10 according to the embodiment of the present disclosure, for example, a lens shading model LSM is introduced. FIG. 14 is a visual explanation of the lens shading model LSM in the present embodiment.
As shown in FIG. 14, a captured image following the original lens shading characteristics of the optics has the brighter area at the center of the captured image and the darker areas at the corners of the captured image. In the present embodiment, the lens shading model LSM is generated based on the original lens shading characteristics. That is, the lens shading model LSM is generated by a nonlinear transform of the original lens characteristics. The purpose of the non-linear transform is to adjust the brightness based on the lens shading model LSM to obtain the regular blend mask suitable for masking the noisy area.
<First Option>
FIG. 15 is a visual explanation of the first option of reflecting the shading characteristics on the regular blend mask. In the first option, the electrical device 10 executes a pixel-wise multiplication of the difference image and the lens shading model LSM.
Through the correction of the shading characteristics of the captured image, the brightness around the corners of the captured image has been raised. That is, the brightness around the corners of the captured blurry image has also already been raised. Therefore, according to the first option, the brightness around the corners of the difference image is lowered by using the lens shading model LSM. As a result, the effect of the correction of the shading characteristics of the regular blend mask can be compensated.
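A radial model and the first option's pixel-wise multiplication might look like this. The quadratic falloff and its strength are assumptions; an actual lens shading model LSM would be derived from the measured shading characteristics of the optics.

```python
import numpy as np

def lens_shading_model(h, w, strength=0.6):
    """Hypothetical radial LSM: bright at the center, darker at the corners."""
    y, x = np.mgrid[0:h, 0:w]
    r = np.hypot(y - (h - 1) / 2, x - (w - 1) / 2)
    return 1.0 - strength * (r / r.max()) ** 2

def compensate_difference_image(diff_img, lsm):
    """First option: pixel-wise multiplication lowers the difference image
    toward the corners, counteracting the shading correction gain."""
    return diff_img * lsm
```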
<Second Option>
FIG. 16 is a visual explanation of the second option of reflecting the shading characteristics on the regular blend mask. In the second option, the electrical device 10 executes a pixel-wise multiplication of the final threshold map and an inverse lens shading model to modify the final threshold map. The inverse lens shading model is calculated by inverting the lens shading model LSM. Therefore, an area around the center of the inverse lens shading model is dark whereas areas around the corners of the inverse lens shading model are bright.
Through the correction of the shading characteristics of the captured image, an amplitude of the noise around the corners of the captured image has been increased. Therefore, by raising the value of the corners of the final threshold map, the corner areas of the regular blend mask can easily become the blend_ratio_low, so that the corner areas of the restored sharp image are masked with the regular blend mask.
<Third Option>
FIG. 17 is a visual explanation of the third option of reflecting the shading characteristics on the regular blend mask. In the third option, the thresholding process mentioned above is modified. That is, the value of the blend_mask is calculated by multiplying the blend_ratio_high by the lens shading model LSM (i, j) or multiplying the blend_ratio_low by the inverse lens shading model. As a result of this calculation, the ratio of the restored sharp image is increased around the center of the regular blend mask. In other words, the ratio of the restored sharp image is decreased in the corners of the regular blend mask.
Through the correction of the shading characteristics of the captured image, a lot of noise has been introduced in the areas of the corners of the captured image. Therefore, in order to eliminate the noise in the corner areas, the ratio of the restored sharp image decreases as the pixel is located nearer the corners of the regular blend mask. In other words, the ratio of the captured blurry image increases as the pixel is located nearer the corners of the regular blend mask. As a result, the noise in the corners of the restored sharp image can be suppressed when generating the target image.
Optionally, in the electrical device 10 according to the embodiment of the present disclosure, it may be possible to smooth a transition in the final blend mask. FIG. 18 is a visual explanation of smoothing the transition in the final blend mask.
As shown in FIG. 18, the values of the blend_mask of the final blend mask take only two values, i.e., the blend_ratio_high and the blend_ratio_low. Therefore, a boundary between the areas of the blend_ratio_high and the areas of the blend_ratio_low is very clear and sharp. When such a clear and sharp final blend mask is used to generate the target image, the target image may look unnatural to the human eye.
Therefore, the electrical device 10 according to the embodiment of the present disclosure may add a blur to the final blend mask so as to naturally blend the areas of the blend_ratio_high and the areas of the blend_ratio_low. For example, the electrical device 10 adds the blur to the final blend mask by executing a simple convolution of Gaussian Blur Kernel.
By introducing the blur into the final blend mask, a blurry blend mask is generated. In the blurry blend mask, the boundary between the areas of the blend_ratio_high and the areas of the blend_ratio_low is blurred. In other words, the values of the pixels in the boundary area have transitional values between the blend_ratio_low and the blend_ratio_high. That is, in the boundary area, the values of the blend_mask are gradually increased from the blend_ratio_low to the blend_ratio_high.
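One way to add the blur is a circular convolution with a Gaussian blur kernel; the kernel radius and σ below are assumed values.

```python
import numpy as np

def blur_blend_mask(mask, sigma=2.0, radius=4):
    """Soften the sharp high/low boundary of the final blend mask with a
    Gaussian blur (circular convolution via the FFT, an implementation choice)."""
    x = np.arange(-radius, radius + 1)
    g1 = np.exp(-x ** 2 / (2 * sigma ** 2))
    kernel2d = np.outer(g1, g1)
    kernel2d /= kernel2d.sum()  # normalize so the mask levels are preserved
    K = np.zeros_like(mask, dtype=float)
    K[:2 * radius + 1, :2 * radius + 1] = kernel2d
    K = np.roll(K, (-radius, -radius), axis=(0, 1))  # center kernel at (0, 0)
    return np.real(np.fft.ifft2(np.fft.fft2(K) * np.fft.fft2(mask)))
```

Pixels at the boundary take transitional values between blend_ratio_low and blend_ratio_high, as described above.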
FIG. 19 is a visual explanation of a manner of generating the target image. As shown in FIG. 19 and as having briefly been explained based on FIG. 11A, the target image is generated by masking the restored sharp image with the regular blend mask, masking the noise reduction image with the inverse blend mask, and combining them.
More specifically, the first intermediate image can be generated by executing a pixel-wise multiplication of the restored sharp image and the regular blend mask. The second intermediate image can be generated by executing a pixel-wise multiplication of the noise reduction image and the inverse blend mask. That is, the masking process can be implemented by the pixel-wise multiplication. Therefore, the target image can be generated by combining the first intermediate image and the second intermediate image.
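Since the masking process is a pixel-wise multiplication, the whole blending step of FIG. 19 reduces to a weighted sum:

```python
import numpy as np

def generate_target_image(restored_sharp, noise_reduction, regular_mask):
    """Blend the restored sharp image and the noise reduction image pixel-wise
    (the two blend ratios sum to one, so the inverse mask is 1 - mask)."""
    inverse_mask = 1.0 - regular_mask
    first_intermediate = restored_sharp * regular_mask
    second_intermediate = noise_reduction * inverse_mask
    return first_intermediate + second_intermediate
```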
FIG. 20 is a flowchart of a target image generation process in the electrical device 10 according to the embodiment of the present disclosure. The target image generation process can be executed by the main processor 40 or the image signal processor 42. Alternatively, the target image generation process can be executed by a combination of the main processor 40 and the image signal processor 42.
In the present embodiment, for example, the main processor 40 obtains the captured blurry image from an output port of the image signal processor 42. Then, the main processor 40 executes the target image generation process against the captured blurry image and inputs the generated target image to an input port of the image signal processor 42.
In addition, program instructions to implement the target image generation process may be stored on a non-transitory computer readable medium. The main processor 40 reads out the program instructions from the non-transitory computer readable medium and executes the program instructions to implement the target image generation process.
Moreover, the target image generation process may be executed against a brightness plane of YUV standards. That is, the brightness plane (Y plane) can be subjected to the target image generation process in the electrical device 10 according to the embodiment of the present disclosure. Of course, other planes or images may be subjected to the target image generation process disclosed herein.
As shown in FIG. 20, the main processor 40 of the electrical device 10 obtains the captured blurry image captured by the camera assembly 30 including the optics, for example, from the image signal processor 42 (Step S10) .
Next, as shown in FIG. 20, the main processor 40 of the electrical device 10 applies the circular convolution to the captured blurry image with the inverse filter to generate the restored sharp image (Step S12) . The details of this process have already been explained above.
Next, as shown in FIG. 20, the main processor 40 of the electrical device 10 generates the regular blend mask based on the captured blurry image (Step S14) . The details of this process have already been explained above.
Next, as shown in FIG. 20, the main processor 40 of the electrical device 10 masks the restored sharp image with the regular blend mask to generate the first intermediate image (Step S16) . The details of this process have already been explained above.
Next, as shown in FIG. 20, the main processor 40 of the electrical device 10 generates the inverse blend mask in which the regular blend mask is inverted (Step S18) . The details of this process have already been explained above.
Next, as shown in FIG. 20, the main processor 40 of the electrical device 10 applies the noise reduction process to the captured blurry image to generate the noise reduction image (Step S19) . The details of this process have already been explained above.
Next, as shown in FIG. 20, the main processor 40 of the electrical device 10 masks the noise reduction image with the inverse blend mask to generate the second intermediate image (Step S20) . The details of this process have already been explained above.
Next, as shown in FIG. 20, the main processor 40 of the electrical device 10 combines the first intermediate image and the second intermediate image to generate the target image (Step S22) . The details of this process have already been explained above. After the process of the Step S22 is completed, the target image generation process according to the present embodiment ends.
FIG. 21 illustrates a comparison between the target image generated by the prior art technology and the target image generated by the electrical device 10 according to the embodiment of the present disclosure. As shown in FIG. 21, in the prior art technology, the target image contains noise due to the filtering restoration process for improving the sharpness, or is blurry due to not having been subjected to the filtering restoration process.
On the other hand, in the target image generated by the electrical device 10 according to the present embodiment, the noise due to the filtering restoration process for improving the sharpness of the captured blurry image is eliminated by masking the noisy area of the restored sharp image and using the captured blurry image instead.
At the same time, the sharpness of the texture of the target image is improved by the filtering restoration process, by not masking the good texture area of the restored sharp image when generating the target image. As a result, a more natural and sharper target image can be obtained for the user without cost increases.
Also, the electrical device 10 according to the present embodiment enables simplifying the optics of the camera assembly 30 and reducing the number of lens elements of the optics of the camera assembly 30. A natural and high quality target image is simultaneously obtained without requiring large expensive optics of the camera assembly 30.
As shown in FIG. 22, if the captured blurry image is defined by the YUV color model, the target image generation process shown in FIG. 20 is applied to a luminance component (Y) . However, the noise reduction process shown in FIG. 11A through FIG. 11E may be applied to chroma components (U, V) in addition to the luminance component (Y) . If the noise reduction process is applied to the chroma components (U, V) , the noise in the chroma components can also be reduced. Thereafter, the target image of the luminance component (Y) generated by the target image generation process shown in FIG. 20 and the chroma components (U, V) which have been subjected to the noise reduction process are outputted as an image defined by the YUV color model.
In the description of embodiments of the present disclosure, it is to be understood that terms such as "central" , "longitudinal" , "transverse" , "length" , "width" , "thickness" , "upper" , "lower" , "front" , "rear" , "back" , "left" , "right" , "vertical" , "horizontal" , "top" , "bottom" , "inner" , "outer" , "clockwise" and "counterclockwise" should be construed to refer to the orientation or the position as described or as shown in the drawings under discussion. These relative terms are only used to simplify description of the present disclosure, and do not indicate or imply that the device or element referred to must have a particular orientation, or be constructed or operated in a particular orientation. Thus, these terms cannot be construed to limit the present disclosure.
In addition, terms such as "first" and "second" are used herein for purposes of description and are not intended to indicate or imply relative importance or significance or to imply the number of indicated technical features. Thus, the feature defined with "first" and "second" may comprise one or more of this feature. In the description of the present disclosure, "a plurality of" means two or more than two, unless specified otherwise.
In the description of embodiments of the present disclosure, unless specified or limited otherwise, the terms "mounted" , "connected" , "coupled" and the like are used broadly, and may be, for example, fixed connections, detachable connections, or integral connections; may also be mechanical or electrical connections; may also be direct connections or indirect connections via intervening structures; may also be inner communications of two elements, which can be understood by those skilled in the art according to specific situations.
In the embodiments of the present disclosure, unless specified or limited otherwise, a structure in which a first feature is "on" or "below" a second feature may include an embodiment in which the first feature is in direct contact with the second feature, and may also include an embodiment in which the first feature and the second feature are not in direct contact with each other, but are contacted via an additional feature formed therebetween. Furthermore, a first feature "on" , "above" or "on top of" a second feature may include an embodiment in which the first feature is right or obliquely "on" , "above" or "on top of" the second feature, or just means that the first feature is at a height higher than that of the second feature; while a first feature "below" , "under" or "on bottom of" a second feature may include an embodiment in which the first feature is right or obliquely "below" , "under" or "on bottom of" the second feature, or just means that the first feature is at a height lower than that of the second feature.
Various embodiments and examples are provided in the above description to implement different structures of the present disclosure. In order to simplify the present disclosure, certain elements and settings are described in the above. However, these elements and settings are only by way of example and are not intended to limit the present disclosure. In addition, reference numbers and/or reference letters may be repeated in different examples in the present disclosure. This repetition is for the purpose of simplification and clarity and does not refer to relations between different embodiments and/or settings. Furthermore, examples of different processes and materials are provided in the present disclosure. However, it would be appreciated by those skilled in the art that other processes and/or materials may be also applied.
Reference throughout this specification to "an embodiment" , "some embodiments" , "an exemplary embodiment" , "an example" , "a specific example" or "some examples" means that a particular feature, structure, material, or characteristics described in connection with the embodiment or example is included in at least one embodiment or example of the present disclosure. Thus, the appearances of the above phrases throughout this specification are not necessarily referring to the same embodiment or example of the present disclosure. Furthermore, the particular features, structures, materials, or characteristics may be combined in any suitable manner in one or more embodiments or examples.
Any process or method described in a flow chart or described herein in other ways may be understood to include one or more modules, segments or portions of codes of executable instructions for achieving specific logical functions or steps in the process, and the scope of a preferred embodiment of the present disclosure includes other implementations, in which it should be understood by those skilled in the art that functions may be implemented in a sequence other than the sequences shown or discussed, including in a substantially identical sequence or in an opposite sequence.
The logic and/or step described in other manners herein or shown in the flow chart, for example, a particular sequence table of executable instructions for realizing the logical function, may be specifically achieved in any computer readable medium to be used by an instruction execution system, device or equipment (such as a system based on computers, a system comprising processors, or other systems capable of obtaining instructions from the instruction execution system, device or equipment and executing the instructions), or to be used in combination with the instruction execution system, device or equipment. In this specification, "the computer readable medium" may be any device adapted to include, store, communicate, propagate or transfer programs to be used by or in combination with the instruction execution system, device or equipment. More specific examples of the computer readable medium comprise, but are not limited to: an electronic connection (an electronic device) with one or more wires, a portable computer enclosure (a magnetic device), a random access memory (RAM), a read only memory (ROM), an erasable programmable read-only memory (EPROM or a flash memory), an optical fiber device, and a portable compact disk read-only memory (CDROM). In addition, the computer readable medium may even be paper or another appropriate medium on which the programs can be printed, because the paper or other medium may be optically scanned and then edited, decrypted or processed with other appropriate methods when necessary to obtain the programs electronically, and the programs may then be stored in computer memories.
It should be understood that each part of the present disclosure may be realized by hardware, software, firmware, or a combination thereof. In the above embodiments, a plurality of steps or methods may be realized by software or firmware stored in a memory and executed by an appropriate instruction execution system. For example, if realized by hardware, as in another embodiment, the steps or methods may be realized by one or a combination of the following techniques known in the art: a discrete logic circuit having a logic gate circuit for realizing a logic function upon a data signal, an application-specific integrated circuit having an appropriate combinational logic gate circuit, a programmable gate array (PGA), a field programmable gate array (FPGA), etc.
Those skilled in the art shall understand that all or part of the steps in the above exemplifying methods of the present disclosure may be achieved by instructing related hardware with programs. The programs may be stored in a computer readable storage medium, and, when run on a computer, the programs perform one or a combination of the steps in the method embodiments of the present disclosure.
In addition, each function cell of the embodiments of the present disclosure may be integrated into a processing module, or the cells may exist physically separately, or two or more cells may be integrated into one processing module. The integrated module may be realized in the form of hardware or in the form of a software function module. When the integrated module is realized in the form of a software function module and is sold or used as a standalone product, it may be stored in a computer readable storage medium.
The storage medium mentioned above may be a read-only memory, a magnetic disk, a CD, or the like.
Although embodiments of the present disclosure have been shown and described, it would be appreciated by those skilled in the art that the embodiments are explanatory and cannot be construed to limit the present disclosure, and changes, modifications, alternatives and variations can be made in the embodiments without departing from the scope of the present disclosure.

Claims (25)

  1. A method of generating a target image, comprising:
    obtaining a captured blurry image captured by a camera assembly including an optics;
    applying a restoration process to the captured blurry image with a restoration filter to generate a restored sharp image, the restoration filter being a filter to restore a sharpness of the captured blurry image based on optical characteristics of the optics;
    generating a regular blend mask based on the captured blurry image;
    masking the restored sharp image with the regular blend mask to generate a first intermediate image;
    generating an inverse blend mask in which the regular blend mask is inversed;
    applying a noise reduction process to the captured blurry image to reduce noise in the captured blurry image and to generate a noise reduction image, wherein the noise reduction process is adjusted based on adjusting information depending on filtering characteristics of the restoration filter and/or the optical characteristics of the optics;
    masking the noise reduction image with the inverse blend mask to generate a second intermediate image; and
    combining the first intermediate image and the second intermediate image to generate the target image.
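The masking and combining steps of claim 1 amount to a per-pixel alpha blend between the restored sharp image and the noise-reduced image. A minimal NumPy sketch, under the assumption that the masks hold blend ratios in [0, 1] (the function name is our own, not the claimed implementation):

```python
import numpy as np

def blend_target(restored_sharp, noise_reduced, regular_mask):
    """Blend per claim 1: mask the restored sharp image with the
    regular blend mask, mask the noise-reduced image with the
    inverse blend mask, then combine the two intermediate images."""
    inverse_mask = 1.0 - regular_mask          # inverse blend mask
    first = regular_mask * restored_sharp      # first intermediate image
    second = inverse_mask * noise_reduced      # second intermediate image
    return first + second                      # target image
```

With a mask of all ones the target image equals the restored sharp image; with all zeros it equals the noise-reduced image, so the mask selects sharp texture where restoration worked well and denoised content elsewhere.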
  2. The method according to claim 1, wherein the adjusting information depends on a kernel size of the restoration filter.
  3. The method according to claim 1, wherein the adjusting information depends on a filter strength of the restoration filter.
  4. The method according to claim 1, wherein the adjusting information depends on frequency characteristics of the restoration filter.
  5. The method according to claim 1, wherein the adjusting information depends on spatial characteristics of the optics.
  6. The method according to claim 1, wherein the adjusting information depends on a point spread function (PSF) of the optics.
  7. The method according to claim 1, wherein the noise reduction process uses a bilateral filter expressed by:
    q_{i, j} = ( Σ_{n=1}^{H} Σ_{m=1}^{V} W_s .* W_i .* I(i, j) ) / ( Σ_{n=1}^{H} Σ_{m=1}^{V} W_s .* W_i )
    where the q_{i, j} indicates a denoised pixel and the center pixel value in a filter kernel,
    the i and j indicate an x-position and a y-position in the captured blurry image, respectively, and the center coordinate of the filter kernel,
    the I(i, j) indicates an image block to be processed in the captured blurry image,
    the ".*" indicates an element-wise multiplication,
    the W_s indicates a spatial weight and the W_i indicates an intensity weight, and
    the H indicates a size in an x-direction of the filter kernel and the V indicates a size in a y-direction of the filter kernel.
  8. The method according to claim 7, wherein the spatial weight W s is expressed by:
    W_s (n, m) = exp ( - (n^2 + m^2) / (2δ_s^2) )
    where the n indicates an x-position in the filter kernel, the m indicates a y-position in the filter kernel, and the parameter δ_s indicates a spatial weight variance.
  9. The method according to claim 8, wherein the intensity weight W i is expressed by:
    W_i = exp ( - (I(i, j) - q_{i, j})^2 / (2δ_i^2) )
    where the parameter δ_i indicates an intensity weight variance and the square indicates an element-wise multiplication.
  10. The method according to claim 9, wherein the parameter δ_s and the parameter δ_i are adjusted based on the adjusting information depending on the filtering characteristics of the restoration filter used by the restoration process and/or the optical characteristics of the optics in the camera assembly.
  11. The method according to claim 10, wherein the parameter δ_s becomes larger gradually and the parameter δ_i becomes smaller gradually from the center region to the corner region in the captured blurry image.
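The bilateral filter of claims 7 through 9 can be sketched directly from the definitions above. This is an illustrative sketch with names of our own choosing (`bilateral_denoise`, `delta_s`, `delta_i`); the spatially varying parameter adjustment of claims 10 and 11 would replace the fixed `delta_s`/`delta_i` with per-position values.

```python
import numpy as np

def bilateral_denoise(img, H=5, V=5, delta_s=2.0, delta_i=0.1):
    """Bilateral filter per claims 7-9: the spatial Gaussian weight
    W_s times the intensity Gaussian weight W_i, normalized over an
    H x V kernel around each pixel."""
    h2, v2 = H // 2, V // 2
    # Kernel coordinate grids (n along x, m along y), shape (V, H).
    n, m = np.meshgrid(np.arange(-h2, h2 + 1), np.arange(-v2, v2 + 1))
    W_s = np.exp(-(n**2 + m**2) / (2.0 * delta_s**2))  # spatial weight
    padded = np.pad(img, ((v2, v2), (h2, h2)), mode="edge")
    out = np.empty_like(img, dtype=np.float64)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            block = padded[i:i + V, j:j + H]           # I(i, j)
            W_i = np.exp(-((block - img[i, j]) ** 2) /
                         (2.0 * delta_i**2))           # intensity weight
            w = W_s * W_i                              # element-wise ".*"
            out[i, j] = np.sum(w * block) / np.sum(w)
    return out
```

A flat image passes through unchanged, since every intensity weight equals one and the normalized weighted sum reproduces the constant value.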
  12. The method according to claim 1, wherein the generating the inverse blend mask comprises inversing the regular blend mask.
  13. The method according to claim 1, wherein the generating the regular blend mask comprises:
    generating an averaged image by degrading the captured blurry image;
    subtracting the restored sharp image from the averaged image to generate a subtracted image; and
    calculating an absolute value of the subtracted image to generate a difference image.
  14. The method according to claim 13, wherein the generating the regular blend mask comprises generating a threshold map based on the captured blurry image data, wherein, in the threshold map, each pixel indicates a threshold value to determine whether the pixel of the restored sharp image should be masked to generate the first intermediate image.
  15. The method according to claim 14, wherein the generating the threshold map comprises:
    modulating the captured blurry image nonlinearly to generate a temporary threshold map; and
    multiplying the temporary threshold map by a certain value to generate a final temporary threshold map which is the threshold map.
  16. The method according to claim 15, wherein the regular blend mask is generated based on the difference image and the threshold map by a thresholding process.
  17. The method according to claim 16, wherein, in the thresholding process,
    if a value of the pixel of the difference image is equal to or larger than a value of the corresponding pixel of the threshold map, then a value of the pixel of the regular blend mask is a blend_ratio_high which indicates a ratio of the pixel of the restored sharp image in the first intermediate image, and
    if a value of the pixel of the difference image is less than a value of the corresponding pixel of the threshold map, then a value of the pixel of the regular blend mask is a blend_ratio_low which also indicates a ratio of the pixel of the restored sharp image in the first intermediate image,
    wherein a value of the blend_ratio_high is higher than a value of the blend_ratio_low.
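The thresholding process of claim 17 reduces to a per-pixel comparison between the difference image (claim 13) and the threshold map (claim 14). A minimal NumPy sketch, with illustrative default ratios of our own choosing (the claims only require blend_ratio_high > blend_ratio_low):

```python
import numpy as np

def regular_blend_mask(difference_image, threshold_map,
                       blend_ratio_high=0.9, blend_ratio_low=0.1):
    """Thresholding per claim 17: pixels whose difference value
    reaches the per-pixel threshold take blend_ratio_high; all
    others take blend_ratio_low."""
    return np.where(difference_image >= threshold_map,
                    blend_ratio_high, blend_ratio_low)
```

Keeping blend_ratio_high below 100% (claim 18) means even "sharp" pixels retain a small contribution from the captured image, which softens restoration artifacts.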
  18. The method according to claim 17, wherein the value of the blend_ratio_high is less than 100%.
  19. The method according to claim 18, wherein the masking the restored sharp image with the regular blend mask to generate the first intermediate image comprises generating the first intermediate image such that the pixel of the first intermediate image contains the blend_ratio_high of the value of the corresponding pixel of the restored sharp image and the blend_ratio_low of the value of the corresponding pixel of the captured blurry image.
  20. The method according to claim 19, wherein the generating the regular blend mask comprises:
    generating a lens shading model based on lens shading characteristics of the optics; and
    executing a pixel-wise multiplication of the difference image and the lens shading model to generate the regular blend mask.
  21. The method according to claim 19, further comprising:
    generating a lens shading model based on lens shading characteristics of the optics;
    inversing the lens shading model to generate an inverse lens shading model; and
    executing a pixel-wise multiplication of the final threshold map and the inverse lens shading model to modify the final threshold map.
  22. The method according to claim 19, wherein, in the thresholding process, the value of the blend_ratio_low is increased as the pixel is located nearer the corners of the regular blend mask.
  23. The method according to claim 17, further comprising adding a blur to the regular blend mask to smooth a transition of a boundary between areas of the blend_ratio_high and areas of the blend_ratio_low of the regular blend mask.
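The lens shading model of claims 20 and 21 and the mask blur of claim 23 can both be sketched briefly. These are toy stand-ins under our own assumptions: a quadratic radial falloff replaces the real shading characteristics of the optics, and a box blur replaces whatever smoothing the embodiment uses; all names are illustrative.

```python
import numpy as np

def lens_shading_model(h, w, falloff=0.5):
    """Toy radial lens-shading model: 1.0 at the image center,
    falling off toward the corners (the real model is derived from
    the lens shading characteristics of the optics)."""
    yy, xx = np.mgrid[0:h, 0:w]
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    r = np.hypot(yy - cy, xx - cx)
    r_max = np.hypot(cy, cx)
    return 1.0 - falloff * (r / r_max) ** 2

def box_blur_mask(mask, k=3):
    """Per claim 23: blur the regular blend mask to smooth the
    boundary between blend_ratio_high and blend_ratio_low areas
    (a simple k x k box blur as a stand-in)."""
    p = k // 2
    padded = np.pad(mask, p, mode="edge")
    out = np.zeros_like(mask, dtype=np.float64)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + mask.shape[0], dx:dx + mask.shape[1]]
    return out / (k * k)
```

Multiplying the difference image by the shading model (claim 20), or the threshold map by its inverse (claim 21), biases the blend toward denoised content in the darker, noisier corners.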
  24. An electrical device comprising:
    a camera assembly including an optics; and
    a processor configured to:
    obtain a captured blurry image captured by the camera assembly including the optics;
    apply a restoration process to the captured blurry image with a restoration filter to generate a restored sharp image, the restoration filter being a filter to restore a sharpness of the captured blurry image based on optical characteristics of the optics;
    generate a regular blend mask based on the captured blurry image;
    mask the restored sharp image with the regular blend mask to generate a first intermediate image;
    generate an inverse blend mask in which the regular blend mask is inversed;
    apply a noise reduction process to the captured blurry image to reduce noise in the captured blurry image and to generate a noise reduction image, wherein the noise reduction process is adjusted based on adjusting information depending on filtering characteristics of the restoration filter and/or the optical characteristics of the optics;
    mask the noise reduction image with the inverse blend mask to generate a second intermediate image; and
    combine the first intermediate image and the second intermediate image to generate a target image.
  25. A non-transitory computer readable medium comprising program instructions stored thereon, wherein, when the program instructions are executed by an electrical device, the program instructions cause the electrical device to perform at least the following:
    obtaining a captured blurry image captured by a camera assembly including an optics;
    applying a restoration process to the captured blurry image with a restoration filter to generate a restored sharp image, the restoration filter being a filter to restore a sharpness of the captured blurry image based on optical characteristics of the optics;
    generating a regular blend mask based on the captured blurry image;
    masking the restored sharp image with the regular blend mask to generate a first intermediate image;
    generating an inverse blend mask in which the regular blend mask is inversed;
    applying a noise reduction process to the captured blurry image to reduce noise in the captured blurry image and to generate a noise reduction image, wherein the noise reduction process is adjusted based on adjusting information depending on filtering characteristics of the restoration filter and/or the optical characteristics of the optics;
    masking the noise reduction image with the inverse blend mask to generate a second intermediate image; and
    combining the first intermediate image and the second intermediate image to generate a target image.
PCT/CN2021/077519 2021-02-23 2021-02-23 Method of generating target image, electrical device, and non-transitory computer readable medium WO2022178683A1 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202180084318.0A CN116636228A (en) 2021-02-23 2021-02-23 Method for generating target image, electronic device and non-transitory computer readable medium
PCT/CN2021/077519 WO2022178683A1 (en) 2021-02-23 2021-02-23 Method of generating target image, electrical device, and non-transitory computer readable medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2021/077519 WO2022178683A1 (en) 2021-02-23 2021-02-23 Method of generating target image, electrical device, and non-transitory computer readable medium

Publications (1)

Publication Number Publication Date
WO2022178683A1 true WO2022178683A1 (en) 2022-09-01

Family

ID=83048612

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/077519 WO2022178683A1 (en) 2021-02-23 2021-02-23 Method of generating target image, electrical device, and non-transitory computer readable medium

Country Status (2)

Country Link
CN (1) CN116636228A (en)
WO (1) WO2022178683A1 (en)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2009128798A1 (en) * 2008-04-16 2009-10-22 Nikon Corporation Method for deblurring an image that produces less ringing
CN104331871A (en) * 2014-12-02 2015-02-04 苏州大学 Image de-blurring method and image de-blurring device
US20150222826A1 (en) * 2014-02-06 2015-08-06 Canon Kabushiki Kaisha Image processing apparatus, image processing method, and storage medium
US20160080711A1 (en) * 2014-09-17 2016-03-17 Canon Kabushiki Kaisha Image processing method, image-pickup apparatus and image processing apparatus using the method, and non-transitory computer-readable storage medium storing image processing program using the method
CN105931196A (en) * 2016-04-11 2016-09-07 天津大学 Fourier optical modeling-based coded aperture camera image restoration method
US20170004603A1 (en) * 2014-03-28 2017-01-05 Fujifilm Corporation Image processing device, imaging device, image processing method, and image processing program
CN106960417A (en) * 2016-01-08 2017-07-18 中国科学院沈阳自动化研究所 A kind of noise based on the notable structure of image obscures Image Blind mesh Deconvolution Method


Also Published As

Publication number Publication date
CN116636228A (en) 2023-08-22

Similar Documents

Publication Publication Date Title
CN111418201B (en) Shooting method and equipment
US9749551B2 (en) Noise models for image processing
US9076218B2 (en) Method and image processing device for image dynamic range compression with local contrast enhancement
US8971660B2 (en) Noise reduction device, noise reduction method, noise reduction program, and recording medium
US8208039B2 (en) Image processing apparatus and computer-readable medium
US9087261B1 (en) Shadow and highlight image enhancement
US20100278423A1 (en) Methods and systems for contrast enhancement
Ko et al. Artifact-free low-light video enhancement using temporal similarity and guide map
CN105432068B (en) Photographic device, image capture method and image processing apparatus
CN105009168A (en) Restoration filter generation device and method, image processing device and method, imaging device, program, and recording medium
EP2819092B1 (en) Image correction apparatus and imaging apparatus
US20130329135A1 (en) Real time denoising of video
JP2011003048A (en) Image processing apparatus and image processing program
US9984449B2 (en) Restoration filter generation device and method, image processing device and method, imaging device, and non-transitory computer-readable medium
CN104380727A (en) Image processing device and image processing method
US20220005165A1 (en) Image enhancement method and apparatus
CN110807735A (en) Image processing method, image processing device, terminal equipment and computer readable storage medium
CN115239578A (en) Image processing method and device, computer readable storage medium and terminal equipment
Albu et al. One scan shadow compensation and visual enhancement of color images
CN113284062B (en) Lens shading correction method, device, medium and terminal
WO2022178683A1 (en) Method of generating target image, electrical device, and non-transitory computer readable medium
WO2022174449A1 (en) Method of generating target image, electrical device, and non-transitory computer readable medium
KR101101434B1 (en) Apparatus for improving sharpness of image
CN113344822B (en) Image denoising method, device, terminal and storage medium
CN113132562B (en) Lens shading correction method and device and electronic equipment

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21927128

Country of ref document: EP

Kind code of ref document: A1

WWE Wipo information: entry into national phase

Ref document number: 202180084318.0

Country of ref document: CN

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 21927128

Country of ref document: EP

Kind code of ref document: A1