WO2017166301A1 - An image processing method, electronic device, and storage medium - Google Patents

An image processing method, electronic device, and storage medium

Info

Publication number
WO2017166301A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
layer
formula
region
input image
Prior art date
Application number
PCT/CN2016/078346
Other languages
English (en)
French (fr)
Inventor
陈刚 (Chen Gang)
Original Assignee
华为技术有限公司 (Huawei Technologies Co., Ltd.)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huawei Technologies Co., Ltd. (华为技术有限公司)
Priority to PCT/CN2016/078346 (WO2017166301A1)
Priority to CN201680051160.6A (CN108027962B)
Publication of WO2017166301A1

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/73 Deblurring; Sharpening
    • G06T5/75 Unsharp masking
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00 Geometric image transformations in the plane of the image
    • G06T3/40 Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T3/4053 Scaling of whole images or parts thereof, e.g. expanding or contracting, based on super-resolution, i.e. the output image resolution being higher than the sensor resolution

Definitions

  • the present invention relates to the field of communications, and in particular, to an image processing method, an electronic device, and a storage medium.
  • In order to enhance image sharpness, an electronic device usually processes an image with an image signal processor.
  • the image signal processor includes two modules, one for image noise removal and one for image sharpness enhancement; however, during image processing with an image signal processor, noise removal inevitably leads to loss of image detail and sharpness,
  • and sharpness enhancement amplifies image noise. Moreover, the image signal processor can only remove noise in a single frequency band and cannot handle noise in other frequency bands; for example, it can only remove high-frequency noise and cannot remove low- and medium-frequency noise.
  • Embodiments of the present invention provide an image processing method, an electronic device, and a storage medium.
  • a first aspect of the embodiments of the present invention provides an image processing method, including:
  • the input image may be an image captured by an electronic device, or an image sent to the electronic device by another electronic device;
  • the isotropic filter is a filter whose filtering characteristics are the same in every edge direction of the input image.
  • Image fusion is performed on the first image and the second image to obtain an output image.
  • the input image can be decomposed, so that the acquired first image not only removes the noise of every frequency band of the input image but also improves the sharpness and flatness of the edges of the input image.
  • the input image is filtered by an isotropic filter to obtain a second image, which improves the sharpness of the second image while maintaining the original naturalness.
  • the image processing method shown in this embodiment can fuse the first image and the second image into an output image, so that the output image achieves good results in noise control, sharpness enhancement, texture naturalness, and the like; that is, the output image effectively controls the noise amplification problem while improving image sharpness.
  • In a second implementation manner and a third implementation manner of the first aspect of the embodiment of the present invention, the flat region, the edge region, and the texture region of the input image are determined.
  • the flat region, the edge region, and the texture region of the input image may be determined according to the texture feature parameter of each pixel;
  • the texture feature parameter of all pixels in the flat region is less than a first threshold
  • the texture feature parameter of all pixels in the edge region is greater than a second threshold
  • the texture feature parameter of all pixels in the texture region is greater than or equal to the first threshold and less than or equal to the second threshold, and the first threshold is less than the second threshold.
  • the texture feature parameter of each pixel may be determined in the following specific way:
  • Determining a selected area of the target pixel wherein the target pixel is any pixel of the input image, and the selected area is centered on the target pixel;
  • as an example, the selected area shown in this embodiment is a square with a side length of 5 pixels.
  • the projection of the gradient of the target area in one main direction is determined as the first characteristic value S0, and the projection of the gradient of the target area in the other main direction is determined as the second characteristic value S1.
  • the first formula is:
  • kSum is the area of the selected area of the target pixel;
  • lambda is any constant greater than 0 and less than or equal to 1, and alpha is any constant greater than 0 and less than or equal to 1.
  • the image processing method shown in this embodiment can quickly distinguish the flat region, the edge region, and the texture region of the input image, thereby effectively ensuring that the method can adaptively process the different regions of the input image, which improves the efficiency of image processing while effectively improving image sharpness, flatness, and naturalness. A sketch of this texture analysis is given below.
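  • As an illustration only, the following Python sketch computes such a per-pixel texture feature. The first formula is not reproduced in this text, so the closing expression assumes the common kernel-regression form gammaMap = ((S0 * S1 + lambda) / kSum) ^ alpha; the singular values S0 and S1 are obtained from the gradients of the selected area, matching the projections described above.

```python
import numpy as np

def texture_feature_map(img, side=5, lam=1.0, alpha=0.5):
    # Per-pixel texture feature parameter (gammaMap), a hedged sketch;
    # the exact first formula is assumed, not taken from the patent.
    img = img.astype(np.float64)
    gy, gx = np.gradient(img)
    h, w = img.shape
    r = side // 2
    k_sum = side * side          # kSum: area of the selected square area
    gamma = np.zeros((h, w))
    for y in range(r, h - r):    # slow reference loop, for clarity only
        for x in range(r, w - r):
            # stack the selected area's gradients into an N x 2 matrix;
            # its singular values are the gradient projections onto the
            # two main directions, i.e. S0 and S1 above
            g = np.column_stack((
                gx[y - r:y + r + 1, x - r:x + r + 1].ravel(),
                gy[y - r:y + r + 1, x - r:x + r + 1].ravel(),
            ))
            s0, s1 = np.linalg.svd(g, compute_uv=False)
            gamma[y, x] = ((s0 * s1 + lam) / k_sum) ** alpha
    return gamma
```

  • Pixels whose gammaMap falls below the first threshold belong to the flat region, above the second threshold to the edge region, and in between to the texture region.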
  • the fusion weight, denoted weight, is determined according to the second formula;
  • the second formula is:
  • T1, T2, T3, and T4 are sequentially increasing constants greater than or equal to 0;
  • the first image and the second image can then be fused into an output image, which effectively ensures that the output image achieves good results in noise control, sharpness improvement, texture naturalness, and the like; that is, the output image effectively controls the noise amplification problem while improving image clarity. A fusion sketch follows.
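  • A minimal fusion sketch, assuming (since neither formula body appears in this text) that the second formula is a piecewise-linear ramp of gammaMap over T1..T4 that keeps the first image R1 in flat and edge regions and the second image R2 in texture regions, and that the third formula is the convex combination R = weight * R1 + (1 - weight) * R2:

```python
import numpy as np

def fuse_images(r1, r2, gamma_map, t1, t2, t3, t4):
    # weight = 1 keeps the flat/edge branch R1, weight = 0 keeps the
    # texture branch R2; the ramp shape over T1..T4 is an assumption
    weight = np.interp(gamma_map, [t1, t2, t3, t4], [1.0, 0.0, 0.0, 1.0])
    return weight * r1 + (1.0 - weight) * r2   # assumed third formula
```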
  • the edge-preserving filter (Edge Preserve Filter, EPF) shown in this embodiment may be a non-local means filter (NLMeans) or a kernel regression filter (SKR).
  • All areas I0 of the input image shown in this embodiment include a flat area, an edge area, and a texture area of the input image.
  • the high-frequency enhanced image B0 is subjected to low-pass filtering and downsampling operations layer by layer to obtain a multi-layer image with decreasing area, such that the number of layers of the multi-layer image is equal to the target layer number.
  • Hn is a high frequency information image of In, and U represents an upsampling operation.
  • the images I1, I2, ..., In-1, In are subjected to X2-times upsampling operations according to the fourth formula to obtain the high-frequency information images.
  • in this way, the high-frequency information image H1 of the image I1, the high-frequency information image H2 of the image I2, ..., and the high-frequency information image Hn of the image In are acquired in this embodiment.
  • the acquired image width and image height of the reconstructed image R0 are equal to the image width and image height of the input image
  • the first image is an image of the reconstructed image R0 corresponding to the flat region and the edge region of the input image.
  • the first image acquired by the image processing method shown in this embodiment can not only remove the noise of each frequency band of the input image, but also improve the sharpness and flatness of the edge of the input image.
  • a first region Im of the input image may be determined;
  • the first region Im is filtered by the edge-preserving filter EPF to obtain a filtered image A0′;
  • sharpenLevel is the intensity of high frequency enhancement
  • the high-frequency enhanced image B0′ is low-pass filtered and downsampled layer by layer to obtain a multi-layer image with decreasing area.
  • Reconstructing all of the high frequency information images layer by layer in order of increasing area to obtain the first image includes:
  • the reconstructed image R0′ is determined to be the first image.
  • the first image acquired by the image processing method shown in this embodiment can not only remove the noise of each frequency band of the input image, but also improve the sharpness and flatness of the edge of the input image.
  • the target image = I0 + (I0 - I0 ⊗ LPF) * sharpenLevel, where ⊗ denotes a convolution operation and sharpenLevel is the high-frequency enhancement intensity;
  • the second image is an image of the target image that corresponds to the texture region of the input image.
  • the filtering the second region of the input image by the isotropic filter to obtain the second image comprises:
  • R2 = M0 + (M0 - M0 ⊗ LPF) * sharpenLevel, where ⊗ denotes a convolution operation and sharpenLevel is the high-frequency enhancement intensity.
  • In a fourth implementation manner, a sixth implementation manner, an eighth implementation manner, or a tenth implementation manner of the first aspect of the embodiment of the present invention,
  • the method further includes:
  • obtaining a statistical characteristic edge of each pixel of the input image, wherein the statistical characteristic edge of each pixel of the input image is the edge intensity of the input image or the intensity of the high-frequency information of the input image;
  • W1, W2, W3, and W4 are sequentially increasing constants greater than or equal to 0, and MinLevel1 and MinLevel2 are constants smaller than MaxLevel.
  • a second aspect of the embodiments of the present invention provides an electronic device, including:
  • a first determining unit configured to determine a target layer number, wherein the target layer number is any natural number in [1, log2(min(width, height))], width is the width of the input image, and height is the height of the input image;
  • a first acquiring unit configured to decompose the first region of the input image to obtain a multi-layer image with decreasing area, wherein the first region is a flat region and an edge region of the input image, and The number of layers of the multi-layer image is equal to the number of target layers;
  • a second acquiring unit configured to perform an upsampling operation on the image of each scale to obtain a high frequency information image
  • a third acquiring unit configured to reconstruct all the high-frequency information images layer by layer in an order of increasing area to obtain a first image, and an area of the first image is equal to an area of the first area of the input image;
  • a filtering unit configured to filter a second region of the input image by using an isotropic filter to obtain a second image, where the second region is a texture region of the input image
  • a merging unit configured to perform image fusion on the first image and the second image to obtain an output image.
  • the input image can be decomposed, so that the acquired first image can not only remove the noise of each frequency band of the input image, but also improve the sharpness and flatness of the edge of the input image.
  • the input image is filtered by an isotropic filter to obtain a second image, thereby improving the sharpness of the second image and maintaining the original naturalness.
  • the electronic device shown in this embodiment can fuse the first image and the second image into an output image, so that the output image achieves good results in noise control, sharpness enhancement, texture naturalness, and the like; that is, the output image effectively controls the noise amplification problem while improving image sharpness.
  • the electronic device further includes:
  • a second determining unit configured to perform texture analysis on each pixel of the input image to determine a texture feature parameter of each pixel
  • a third determining unit configured to determine the flat region, the edge region, and the texture region according to the texture feature parameter, wherein the texture feature parameter of all pixels in the flat region is less than a first threshold, the texture feature parameter of all pixels in the edge region is greater than a second threshold, the texture feature parameter of all pixels in the texture region is greater than or equal to the first threshold and less than or equal to the second threshold, and the first threshold is less than the second threshold.
  • the second determining unit includes:
  • a first determining module configured to determine a selected area of the target pixel, wherein the target pixel is any pixel of the input image, and the selected area is centered on the target pixel;
  • a second determining module configured to perform singular value decomposition on the selected area of the target pixel to obtain a first feature value S0 and a second feature value S1;
  • a third determining module configured to calculate the texture feature parameter gammaMap of the target pixel according to a first formula
  • the first formula is:
  • the kSum is an area of the selected area of the target pixel
  • lambda is any constant greater than 0 and less than or equal to 1, and alpha is any constant greater than 0 and less than or equal to 1.
  • the electronic device shown in this embodiment can quickly distinguish the flat region, the edge region, and the texture region of the input image, thereby effectively ensuring that it can adaptively process the different regions of the input image, which improves the efficiency of image processing while effectively improving image clarity, flatness, and naturalness.
  • the merging unit includes:
  • a fourth determining module configured to determine the fusion weight, denoted weight, according to the second formula;
  • the second formula is:
  • T1, T2, T3, and T4 are sequentially increasing constants greater than or equal to 0;
  • a fifth determining module configured to perform image fusion on the first image R1 and the second image R2 according to a third formula to obtain an output image R;
  • the first image and the second image can be fused into an output image, which effectively ensures that the output image achieves good results in noise control, sharpness enhancement, texture naturalness, and the like; that is, the output image effectively controls the noise amplification problem while improving image sharpness.
  • the first acquiring unit includes:
  • a first acquiring module configured to filter all regions I0 of the input image by using an edge-preserving filter EPF to obtain a filtered image A0;
  • a second obtaining module configured to perform high frequency enhancement on the filtered image A0 by using a low pass filter LPF to obtain a high frequency enhanced image B0;
  • a third obtaining module configured to perform a low-pass filtering and a down sampling operation on the high-frequency enhanced image B0 layer by layer to obtain a multi-layer image with decreasing area.
  • the second acquiring unit is further configured to perform an upsampling operation on the image of each scale according to the fourth formula to obtain a high-frequency information image;
  • I1 is the first-layer image obtained by multi-scale decomposition of the high-frequency enhanced image B0.
  • Hn is a high frequency information image of In
  • U represents an upsampling operation;
  • the third obtaining unit includes:
  • a sixth determining module configured to reconstruct the high-frequency information images Hn of In layer by layer in order of increasing area according to the fifth formula to obtain the reconstructed image R0;
  • a seventh determining module configured to determine the first image, wherein the first image is an image of the reconstructed image R0 corresponding to the flat region and the edge region of the input image.
  • the first acquiring unit includes:
  • a fourth acquiring module configured to filter the first region Im by using an edge-preserving filter EPF to obtain a filtered image A0′;
  • a fifth acquiring module configured to perform high-frequency enhancement on the filtered image A0′ through a low-pass filter LPF to obtain a high-frequency enhanced image B0′;
  • sharpenLevel is the intensity of high frequency enhancement
  • the second acquiring unit is further configured to perform an upsampling operation on the image of each scale according to the sixth formula.
  • the third obtaining unit includes:
  • a ninth determining module configured to determine the reconstructed image R0′ to be the first image.
  • the first image acquired by the image processing method shown in this embodiment can not only remove the noise of each frequency band of the input image, but also improve the sharpness and flatness of the edge of the input image.
  • the filtering unit includes:
  • a seventh acquiring module configured to filter all areas I0 of the input image by the isotropic filter LPF to obtain a target image
  • the target image = I0 + (I0 - I0 ⊗ LPF) * sharpenLevel, where ⊗ denotes a convolution operation and sharpenLevel is the high-frequency enhancement intensity;
  • a tenth determining module configured to determine the second image, wherein the second image is an image in the target image that corresponds to the texture region of the input image.
  • the filtering unit is further configured to filter the second region M0 of the input image through the isotropic filter LPF to obtain a second image R2;
  • R2 = M0 + (M0 - M0 ⊗ LPF) * sharpenLevel, where ⊗ denotes a convolution operation and sharpenLevel is the high-frequency enhancement intensity.
  • the electronic device further includes:
  • a fourth acquiring unit configured to acquire a statistical characteristic edge of each pixel of the input image, where the statistical characteristic edge of each pixel of the input image is the edge intensity of the input image or the intensity of the high-frequency information of the input image;
  • a fifth acquiring unit configured to calculate the high-frequency enhancement intensity sharpenLevel according to the eighth formula;
  • W1, W2, W3, and W4 are sequentially increasing constants greater than or equal to 0, and MinLevel1 and MinLevel2 are constants smaller than MaxLevel.
  • a third aspect of the embodiments of the present invention provides an electronic device, including a processor, an output unit, and an input unit.
  • the processor is configured to acquire an input image by using the input unit
  • the processor is further configured to determine a target layer number, wherein the target layer number is any natural number in [1, log2(min(width, height))], width is the width of the input image, and height is the height of the input image;
  • the processor is further configured to decompose the first region of the input image to obtain a multi-layer image with decreasing area, wherein the first region is a flat region and an edge region of the input image, and The number of layers of the multi-layer image is equal to the number of target layers;
  • the processor is further configured to perform an upsampling operation on an image of each scale to obtain a high frequency information image
  • the processor is further configured to reconstruct all the high frequency information images layer by layer in an order of increasing area to obtain a first image, and an area of the first image is equal to an area of the first area of the input image ;
  • the processor is further configured to filter a second region of the input image by using an isotropic filter to obtain a second image, where the second region is a texture region of the input image;
  • the processor is further configured to perform image fusion on the first image and the second image to obtain an output image
  • the processor displays the output image through the output unit.
  • the input image can be decomposed, so that the acquired first image can not only remove the noise of each frequency band of the input image, but also improve the sharpness and flatness of the edge of the input image.
  • the input image is filtered by an isotropic filter to obtain a second image, thereby improving the sharpness of the second image and maintaining the original naturalness.
  • the electronic device shown in this embodiment can fuse the first image and the second image into an output image, so that the output image achieves good results in noise control, sharpness enhancement, texture naturalness, and the like; that is, the output image effectively controls the noise amplification problem while improving image sharpness.
  • the processor is further configured to perform texture analysis on each pixel of the input image to determine a texture feature parameter of each pixel;
  • the processor is further configured to determine the flat region, the edge region, and the texture region according to the texture feature parameter, wherein the texture feature parameter of all pixels in the flat region is less than a first threshold
  • the texture feature parameter of all pixels in the edge region is greater than a second threshold
  • the texture feature parameter of all pixels in the texture region is greater than or equal to the first threshold and less than or equal to the second threshold, and the first threshold is less than the second threshold.
  • the processor is further configured to determine a selected area of the target pixel, where the target pixel is any pixel of the input image, and the selected area is centered on the target pixel;
  • the processor is further configured to perform singular value decomposition on the selected area of the target pixel to obtain a first feature value S0 and a second feature value S1;
  • the processor is further configured to calculate the texture feature parameter gammaMap of the target pixel according to a first formula
  • the first formula is:
  • the kSum is an area of the selected area of the target pixel
  • lambda is any constant greater than 0 and less than or equal to 1, and alpha is any constant greater than 0 and less than or equal to 1.
  • the processor is further configured to determine a weight weight according to the second formula
  • the second formula is:
  • T1, T2, T3, and T4 are sequentially increasing constants greater than or equal to 0;
  • the processor is further configured to perform image fusion on the first image R1 and the second image R2 according to a third formula to obtain an output image R;
  • the processor is further configured to filter all regions I0 of the input image by using an edge-preserving filter EPF to obtain a filtered image A0;
  • the processor is further configured to perform high frequency enhancement on the filtered image A0 through a low pass filter LPF to obtain a high frequency enhanced image B0;
  • the processor is further configured to perform low-pass filtering and downsampling operations on the high-frequency enhanced image B0 layer by layer to obtain a multi-layer image with decreasing area.
  • the processor is further configured to determine the first image, wherein the first image is an image of the reconstructed image R0 that corresponds to the flat region and the edge region of the input image.
  • the processor is further configured to filter the first region Im by using an edge-preserving filter EPF to obtain a filtered image A0′;
  • the processor is further configured to perform high-frequency enhancement on the filtered image A0′ through a low-pass filter LPF to obtain a high-frequency enhanced image B0′;
  • sharpenLevel is the intensity of high-frequency enhancement;
  • the processor is further configured to perform low-pass filtering and downsampling operations on the high-frequency enhanced image B0′ layer by layer to obtain a multi-layer image with decreasing area.
  • the processor is further configured to determine the reconstructed image R0′ to be the first image.
  • the processor is further configured to filter, by using the isotropic filter LPF, all regions I0 of the input image to obtain a target image;
  • the target image = I0 + (I0 - I0 ⊗ LPF) * sharpenLevel, where ⊗ denotes a convolution operation and sharpenLevel is the high-frequency enhancement intensity;
  • the processor is further configured to determine the second image, wherein the second image is an image in the target image that corresponds to the texture region of the input image.
  • the processor is further configured to filter the second region M0 of the input image by using the isotropic filter LPF to obtain a second image R2;
  • R2 = M0 + (M0 - M0 ⊗ LPF) * sharpenLevel, where ⊗ denotes a convolution operation and sharpenLevel is the high-frequency enhancement intensity.
  • In a fourth implementation manner, a sixth implementation manner, an eighth implementation manner, or a tenth implementation manner of the first aspect of the embodiment of the present invention,
  • the processor is further configured to acquire a statistical characteristic edge of each pixel of the input image, where the statistical characteristic edge of each pixel of the input image is the edge intensity of the input image or the intensity of the high-frequency information of the input image;
  • the processor is further configured to calculate the high-frequency enhancement intensity sharpenLevel according to the eighth formula;
  • W1, W2, W3, and W4 are sequentially increasing constants greater than or equal to 0, and MinLevel1 and MinLevel2 are constants smaller than MaxLevel.
  • a fourth aspect of the embodiments of the present invention provides a computer-readable storage medium for storing one or more computer programs, the one or more computer programs including program code;
  • when the computer program is run on a computer, the program code is used to perform the image processing method according to any one of the first aspect of the embodiment of the present invention to the tenth implementation manner of the first aspect of the embodiment of the present invention.
  • An embodiment of the present invention provides an image processing method, an electronic device, and a storage medium.
  • the method includes: determining a target layer number; decomposing the first region of the input image to obtain a multi-layer image with decreasing area; performing an upsampling operation on the image of each scale to obtain a high-frequency information image; reconstructing all of the high-frequency information images layer by layer in order of increasing area to obtain a first image; filtering a second region of the input image through an isotropic filter to acquire a second image; and fusing the first image and the second image to obtain an output image.
  • the first image acquired by the image processing method shown in this embodiment not only removes the noise of every frequency band of the input image but also improves the sharpness and flatness of the edges of the input image, and the second image enhances image sharpness while maintaining the original naturalness, so that the output image effectively controls the noise amplification problem while improving image sharpness.
  • FIG. 1 is a schematic structural diagram of an embodiment of an electronic device according to an embodiment of the present invention.
  • FIG. 2 is a flow chart of steps of an embodiment of an image processing method according to an embodiment of the present invention.
  • FIG. 3 is a schematic diagram of an embodiment of a correspondence between a statistical characteristic edge of an input image and a high-band enhancement intensity sharpenLevel of an input image according to an embodiment of the present disclosure
  • FIG. 4 is a schematic diagram of an embodiment of a layer-by-layer low-pass filtering and downsampling operation of a high-frequency enhanced image B0 to obtain a multi-layer image with reduced area;
  • FIG. 5 is a schematic diagram of an embodiment of a correspondence between a weight weight of an input image and a texture feature parameter gammaMap of an input image according to an embodiment of the present disclosure
  • FIG. 6 is a flowchart of steps of another embodiment of an image processing method according to an embodiment of the present invention.
  • FIG. 7 is a schematic diagram showing an effect comparison between an image displayed before processing by the image processing method according to the embodiment of the present invention and an image displayed after such processing;
  • FIG. 8 is a schematic diagram showing another such effect comparison;
  • FIG. 9 is a schematic diagram showing another such effect comparison;
  • FIG. 10 is a schematic diagram showing another such effect comparison;
  • FIG. 11 is a schematic structural diagram of an embodiment of an electronic device according to an embodiment of the present disclosure.
  • FIG. 12 is a schematic structural diagram of another embodiment of an electronic device according to an embodiment of the present invention.
  • Embodiment 1:
  • FIG. 1 is a schematic structural diagram of an electronic device according to an embodiment of the present invention.
  • the electronic device includes the components shown in FIG. 1, and the components communicate through one or more bus lines.
  • the structure of the electronic device shown in FIG. 1 does not constitute a limitation of the present invention; it may be a bus structure or a star structure, and may include more or fewer components than illustrated, combine some components, or arrange the components differently.
  • the electronic device may be any mobile or portable electronic device, including but not limited to a mobile phone, a tablet personal computer, a multimedia player, a personal digital assistant (PDA), a navigation device, a mobile Internet device (MID), a media player, a smart TV, and a combination of two or more of the above.
  • the output unit 101 includes, but is not limited to, an image output unit and a sound output unit.
  • the image output unit is used to output text, pictures, and/or video.
  • the image output unit may include a display panel, for example, a display panel in the form of a liquid crystal display (LCD), an organic light-emitting diode (OLED) display, a field emission display (FED), or the like.
  • the image output unit may include a reflective display, for example, an electrophoretic display, or a display using the technique of interferometric modulation of light.
  • the image output unit may comprise a single display or a plurality of displays, wherein the plurality of displays may be of the same size or different sizes.
  • the touch panel used by the input unit 107 can also serve as the display panel of the output unit 101 at the same time.
  • when the touch panel detects a touch or proximity gesture on it, the gesture is transmitted to the processor 103 to determine the type of the touch event, and the processor 103 then provides a corresponding visual output on the display panel according to the type of the touch event.
  • although in FIG. 1 the input unit 107 and the output unit 101 are two independent components implementing the input and output functions of the electronic device,
  • the touch panel and the display panel may also be integrated to implement the input and output functions of the electronic device.
  • the image output unit can display various graphical user interfaces (GUI) as virtual control components, including but not limited to windows, scroll bars, icons, and clipboards, which the user operates by touch.
  • the image output unit includes a filter and an amplifier for filtering and amplifying the video output by the processor.
  • the audio output unit includes a digital to analog converter for converting the audio signal output by the processor from a digital format to an analog format.
  • the output unit 101 specifically includes a display module 102, and the display module 102 is configured to display an image on a display, where the display is covered with a transparent panel so that the image light can enter the user's eyes.
  • the processor 103 is the control center of the electronic device; it connects the various parts of the entire electronic device through various interfaces and lines, and performs various functions of the electronic device and/or processes data by running or executing software programs and/or modules stored in the storage unit 104 and calling data within the storage unit 104.
  • the processor 103 may be composed of an integrated circuit (IC), for example, a single packaged IC, or a plurality of packaged ICs that have the same function or different functions.
  • the processor 103 may include only a central processing unit (CPU), or may be a combination of a graphics processing unit (GPU), a digital signal processor (DSP), and a control chip (for example, a baseband chip) in the communication unit 109.
  • the processor 103 may be a single computing core, and may also include multiple computing cores.
  • the storage unit 104 can be used to store software programs and modules, and the processor 103 executes various functional applications of the electronic device and implements data processing by running software programs and modules stored in the storage unit 104.
  • the storage unit 104 mainly includes a program storage area and a data storage area, wherein the program storage area can store the operating system and an application required by at least one function, such as a sound playing program or an image playing program;
  • the data storage area can store data (such as audio data, phone book, etc.) created according to the use of the electronic device.
  • the storage unit 104 may include a volatile memory, such as a nonvolatile random access memory (NVRAM), a phase change random access memory (PRAM), or a magnetoresistive random access memory (MRAM), and may also include a non-volatile memory, such as at least one disk storage device, an electrically erasable programmable read-only memory (EEPROM), or a flash memory device such as NOR flash memory or NAND flash memory.
  • the non-volatile memory stores an operating system and applications executed by the processor 103.
  • the processor 103 loads the running program and data from the non-volatile memory into memory and stores digital content in a plurality of storage devices.
  • the operating system controls and manages conventional system tasks, such as memory management, storage device control, and power management, and includes various components and/or drivers that facilitate communication between hardware and software.
  • the operating system may be an Android system of Google Inc., an iOS system developed by Apple Corporation, a Windows operating system developed by Microsoft Corporation, or an embedded operating system such as Vxworks.
  • the application includes any application installed on the electronic device, including but not limited to browsers, email, instant messaging services, word processing, keyboard virtualization, widgets, encryption, digital rights management, voice recognition, voice replication, positioning (such as that provided by GPS), music playback, and more.
  • the storage unit 104 is configured to store code and data, and the code is used by the processor 103 to run.
  • the data includes at least one of optical deformation parameters of the transparent panel, curvature parameters, image compression parameters, pixel weight parameters, and the like.
  • an input unit 107 configured to implement interaction between the user and the electronic device and/or input information into the electronic device.
  • the input unit 107 can receive numeric or character information input by a user to generate a signal input related to user settings or function control.
  • the input unit 107 may be a touch panel, or may be other human-computer interaction interfaces, such as physical input keys, microphones, etc., and may also be other external information extraction devices, such as a camera.
  • a touch panel, also known as a touch screen, collects operational actions that the user performs by touching or approaching it.
  • for example, the user performs an action with any suitable object or accessory, such as a finger or a stylus, on or near the touch panel, and the touch panel drives the corresponding connecting device according to a preset program.
  • the touch panel may include two parts: a touch detection device and a touch controller.
  • the touch detection device detects the touch operation of the user, converts the detected touch operation into an electrical signal, and transmits the electrical signal to the touch controller; the touch controller receives the electrical signal from the touch detection device, converts it into contact coordinates, and sends the coordinates to the processor 103.
  • the touch controller can also receive commands from the processor and execute them.
  • the input unit 107 can implement the touch panel using various types, such as resistive, capacitive, infrared, and surface acoustic wave.
  • the physical input keys used by the input unit 107 may include, but are not limited to, a physical keyboard, function keys (such as a volume control button, a switch button, etc.), a trackball, a mouse, a joystick, and the like.
  • the input unit 107 in the form of a microphone can collect the voice input by the user or the environment and convert it into a processor-executable command in the form of an electrical signal.
  • the input unit 107 may also be various types of sensor components, such as Hall devices, for detecting physical quantities of the electronic device, such as force, moment, pressure, stress, position, displacement, speed, acceleration, angle, angular velocity, number of revolutions, rotational speed, and the time at which the working state changes, and converting them into electrical signals for detection and control.
  • Other sensor components may also include gravity sensors, three-axis accelerometers, gyroscopes, electronic compasses, ambient light sensors, proximity sensors, temperature sensors, humidity sensors, pressure sensors, heart rate sensors, fingerprint readers, and the like.
  • the camera module 108 is capable of performing image shooting according to a user's operation.
  • the captured image is sent to the processor 103 so that the processor 103 processes the image.
  • the communication unit 109 is configured to establish a communication channel, so that the electronic device connects to a remote server through the communication channel and downloads media data from the remote server.
  • the communication unit 109 may include a wireless local area network (WLAN) module, a Bluetooth module, a baseband module, and the like, and a radio frequency (RF) circuit corresponding to the communication module, for wireless local area network communication, Bluetooth communication, infrared communication, and/or cellular communication system communication, such as Wideband Code Division Multiple Access (W-CDMA) and/or High Speed Downlink Packet Access (HSDPA), or a Long Term Evolution (LTE) system.
  • the communication unit 109 is used to control communication of components in the electronic device, and can support direct memory access (DMA).
  • the various communication modules in the communication unit 109 generally appear in the form of integrated circuit chips and can be selectively combined, without having to include all the communication modules and corresponding antenna groups.
  • the communication unit 109 may include only a baseband chip, a radio frequency chip, and a corresponding antenna to provide communication functions in one cellular communication system.
  • the electronic device can be connected to a cellular network or the Internet via a wireless communication connection established by the communication unit 109, such as wireless local area network access or WCDMA access.
  • a communication module in the communication unit 109, such as a baseband module, may be integrated into the processor, typically as in the APQ+MDM series platform provided by Qualcomm Incorporated.
  • the radio frequency circuit 110 is configured to receive and transmit signals during information transmission or reception or during a call.
  • the radio frequency circuit 110 includes well-known circuits for performing these functions, including but not limited to an antenna system, a radio frequency transceiver, one or more amplifiers, a tuner, one or more oscillators, a digital signal processor, a codec chipset, a subscriber identity module (SIM) card, memory, and so on.
  • the radio frequency circuit 110 can also communicate with the network and other devices through wireless communication.
  • the wireless communication may use any communication standard or protocol, including but not limited to the Global System for Mobile Communications (GSM), General Packet Radio Service (GPRS), Code Division Multiple Access (CDMA), Wideband Code Division Multiple Access (WCDMA), High Speed Uplink Packet Access (HSUPA), Long Term Evolution (LTE), e-mail, and the Short Messaging Service (SMS).
  • a power source 111 for powering the different components of the electronic device to maintain its operation may be a built-in battery, such as a common lithium-ion battery or a nickel-metal hydride battery, and also includes an external power source that directly supplies power to the electronic device, such as an AC adapter.
  • the power source 111 may further include a power management system, a charging system, a power failure detection circuit, a power converter or inverter, a power status indicator (such as a light-emitting diode), and any other components associated with the generation, management, and distribution of electrical energy in the electronic device.
  • Step 201 Receive an input image.
  • the input image may be an image captured by the electronic device shown in FIG. 1, or the electronic device shown in FIG. 1 may receive an image sent by another electronic device.
  • the source of the input image is not limited in this embodiment, as long as the electronic device shown in this embodiment can process the input image.
  • the noise of the input image shown in this embodiment can be classified into high frequency noise, medium frequency noise, and low frequency noise.
  • the high frequency noise of the input image exists in the high frequency band of the input image
  • the intermediate frequency noise of the input image exists in the intermediate frequency band of the input image
  • the low frequency noise of the input image exists in the low frequency band of the input image.
  • Step 202 Acquire a statistical characteristic edge of each pixel of the input image.
  • the statistical characteristic edge of the input image is an edge intensity of the input image or an intensity of high frequency information of the input image.
  • the method for obtaining the statistical characteristic edge of each pixel of the input image may be a Sobel operator edge extraction algorithm, an image gradient extraction algorithm, or the like, as shown in the prior art; details are not repeated in this embodiment.
  • the specific method for acquiring the statistical characteristic edge of each pixel of the input image is not limited in this embodiment, as long as the statistical characteristic edge of each pixel of the input image can be determined.
  • the edge intensity of the input image is a measure of the local variation intensity of the input image along the normal direction of the edge of the input image.
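  • For illustration, a sketch of the Sobel-based option named above (the embodiment equally allows an image gradient extraction algorithm, or the intensity of the high-frequency information):

```python
import numpy as np
from scipy import ndimage

def edge_statistic(img):
    # statistical characteristic `edge`: here the per-pixel Sobel
    # gradient magnitude, one of the extraction methods named above
    img = img.astype(np.float64)
    gx = ndimage.sobel(img, axis=1)   # horizontal derivative
    gy = ndimage.sobel(img, axis=0)   # vertical derivative
    return np.hypot(gx, gy)
```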
  • Step 203 Calculate the high-frequency enhancement intensity sharpenLevel according to the eighth formula (a sketch is given after the pixel classes below).
  • W1, W2, W3, and W4 are sequentially increasing constants greater than or equal to 0, and MinLevel1 and MinLevel2 are constants smaller than MaxLevel.
  • the eighth formula shown in this embodiment establishes a correspondence between the statistical characteristic edge of the input image and the high-frequency enhancement intensity sharpenLevel.
  • the W1, W2, W3, W4, MinLevel1, MinLevel2, and MaxLevel shown in this embodiment may be set by the manufacturer at the time of shipment.
  • W1, W2, W3, W4, and MinLevel1 can also be obtained through testing.
  • MinLevel2 and MaxLevel can also be obtained through testing.
  • the electronic device shown in this embodiment may acquire a test image in advance, acquire the statistical characteristic edge of each pixel of the test image, gradually adjust the values of W1, W2, W3, W4, MinLevel1, MinLevel2, and MaxLevel, and compare the sharpness and signal-to-noise ratio of the output images obtained with different values; when the sharpness and signal-to-noise ratio of the output image meet the requirements, the specific values of W1, W2, W3, W4, MinLevel1, MinLevel2, and MaxLevel can be determined.
  • the above manner of determining W1, W2, W3, W4, MinLevel1, MinLevel2, and MaxLevel in this embodiment is an optional example and is not limiting, as long as the determined values enable the input image to yield an output image whose sharpness and signal-to-noise ratio meet the requirements.
  • the correspondence between the statistical characteristic edge of the input image and the high-frequency enhancement intensity sharpenLevel established by the eighth formula in this embodiment may be as shown in FIG. 3.
  • the eighth formula classifies all the pixels of the input image into five classes: the first pixel, the second pixel, the third pixel, the fourth pixel, and the fifth pixel.
  • the first pixel is a pixel whose statistical characteristic edge is smaller than W1
  • the second pixel is a pixel whose statistical characteristic edge is greater than or equal to W1 and less than or equal to W2
  • the third pixel is a pixel whose statistical characteristic edge is greater than W2 and smaller than W3;
  • the fourth pixel is a pixel whose statistical characteristic edge is greater than or equal to W3 and less than or equal to W4;
  • the fifth pixel is a pixel whose statistical characteristic edge is greater than W4.
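  • The following sketch encodes this five-class mapping. The eighth formula itself is not reproduced in this text; based on the pixel classes above and the correspondence of FIG. 3, it assumes a piecewise-linear mapping that holds MinLevel1 below W1, ramps up to MaxLevel over [W1, W2], holds MaxLevel over [W2, W3], ramps down over [W3, W4], and holds MinLevel2 above W4:

```python
import numpy as np

def sharpen_level(edge, w1, w2, w3, w4, min_level1, max_level, min_level2):
    # piecewise-linear edge -> sharpenLevel mapping (assumed shape);
    # edge may be a scalar or the per-pixel statistic array
    return np.interp(edge, [w1, w2, w3, w4],
                     [min_level1, max_level, max_level, min_level2])
```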
  • Step 204 Filter all the regions I0 of the input image by the edge-preserving filter EPF to obtain the filtered image A0.
  • the edge-preserving filter (Edge Preserve Filter, EPF) shown in this embodiment may be a non-local means filter (NLMeans) or a kernel regression filter (SKR).
  • a detailed description of the edge-preserving filter can be found in the prior art and is not repeated in this embodiment.
  • all the regions I0 of the input image shown in this embodiment include a flat region, an edge region, and a texture region of the input image.
  • A0 I0 ⁇ EPF, ⁇ denotes a convolution operation.
  • Step 205 Perform high frequency enhancement on the filtered image A0 through the low pass filter LPF to obtain the high frequency enhanced image B0.
  • B0 = A0 + [A0 - A0 ⊗ LPF] * sharpenLevel.
  • sharpenLevel is the intensity of high frequency enhancement.
  • for details about the method for obtaining sharpenLevel, refer to steps 202 and 203; they are not repeated in this step. A sketch of this enhancement follows.
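  • A minimal sketch of this step, assuming the separable 5-tap kernel named in step 207 below also serves as the enhancement low-pass filter (the text does not fix the enhancement kernel):

```python
import numpy as np
from scipy import ndimage

LPF_1D = np.array([.0625, .25, .375, .25, .0625])
LPF_2D = np.outer(LPF_1D, LPF_1D)   # assumed enhancement low-pass kernel

def high_freq_enhance(a0, sharpen_level):
    # B0 = A0 + [A0 - A0 (x) LPF] * sharpenLevel; sharpen_level may be
    # a scalar or the per-pixel array from steps 202 and 203
    a0 = a0.astype(np.float64)
    low = ndimage.convolve(a0, LPF_2D, mode='reflect')
    return a0 + (a0 - low) * sharpen_level
```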
  • Step 206 Determine a target layer number of the multi-layer image.
  • the target layer number of the multi-layer image is any natural number in [1, log2(min(width, height))].
  • Width is the width of the input image
  • height is the height of the input image
  • after the range [1, log2(min(width, height))] of the target layer number of the multi-layer image is determined, any value within this range may be chosen as the target layer number; an illustrative check of the bound follows.
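  • A one-line check of that range, for illustration:

```python
import math

def max_target_layers(width, height):
    # upper bound of the target layer number: log2(min(width, height))
    return int(math.log2(min(width, height)))

# e.g. max_target_layers(1920, 1080) == 10, so any natural number in
# [1, 10] is a valid target layer number for a 1920x1080 input image
```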
  • Step 207 Perform low-pass filtering and downsampling operations on the high-frequency enhanced image B0 to obtain a multi-layer image with decreasing area.
  • the target layer number of the multi-layer image is determined in step 206, and in step 207, low-pass filtering and downsampling operations are performed on the high-frequency enhanced image B0 layer by layer according to the determined target layer number of the multi-layer image.
  • the high frequency enhanced image B0 is first low pass filtered by a low pass filter having a filter coefficient of [.0625, .25, .375, .25, .0625].
  • by low-pass filtering the high-frequency enhanced image B0, the low-frequency information of B0 is extracted and the high-frequency information of B0 is filtered out.
  • an X1-times downsampling operation is performed on the low-pass filtered B0 to form an image I1.
  • the specific value of X1 is not limited as long as X1 is greater than 1.
  • the image width of the image I1 formed after the X1-times downsampling operation of the low-pass filtered B0 is 1/X1 of the image width of the high-frequency enhanced image B0, and the image height of the image I1 is 1/X1 of the image height of the high-frequency enhanced image B0.
  • for example, when X1 equals 2, the image width of the image I1 formed after the 2-times downsampling operation of the low-pass filtered B0 is half the image width of the high-frequency enhanced image B0, and the image height of the image I1 is half the image height of the high-frequency enhanced image B0.
  • Step 208 After the image I1 is acquired, the low-pass filtering and downsampling operations are performed on the basis of the image I1.
  • the specific manner of low-pass filtering the image I1 is the same as the method of low-pass filtering the image B0, and is not described in detail in this embodiment.
  • the low-pass filtered I1 is subjected to an X1-fold downsampling operation to form an image I2.
  • sampling refers to converting a signal that is continuous in time and amplitude into a signal that is discrete in time and amplitude under the action of sampling pulses.
  • downsampling is also called decimation of the signal.
  • downsampling is the re-acquisition of a digital signal.
  • if the sampling rate of the re-acquisition is smaller than the sampling rate of the original digital signal (for example, the signal sampled from the analog signal), it is called downsampling.
  • this embodiment takes performing the low-pass filtering and downsampling operations layer by layer with the same multiple X1 as an example.
  • the low-pass filtering and downsampling operations may also be performed with different multiples in different layers, which is not limited in this embodiment.
  • the decomposition shown in this step, that is, the low-pass filtering and downsampling operations, is performed until the n-th layer image In is acquired, wherein the value of n is equal to the target layer number of the multi-layer image acquired in step 206.
  • The decomposition shown in this embodiment may be a multi-scale decomposition, that is, a decomposition that uses mathematical-analysis methods to process the image at different scales; this embodiment is illustrated with such a decomposition.
  • I1 is the first-layer image obtained by multi-scale decomposition of the high-frequency-enhanced image B0; if n is greater than 1, In is the n-th layer image obtained by multi-scale decomposition of the (n-1)-th layer image. A sketch of the full decomposition loop is given below.
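  • A sketch of the full layer-by-layer decomposition (steps 207 and 208), reusing the lowpass/downsample helpers above; n is the target layer number from step 206:

    def build_pyramid(b0, n, x1=2):
        """Multi-scale decomposition of the high-frequency-enhanced image B0.

        Returns [I1, I2, ..., In], with the area shrinking by x1*x1 per layer.
        """
        levels, img = [], b0
        for _ in range(n):
            img = downsample(img, x1)  # low-pass filter, then x1-times downsample
            levels.append(img)
        return levels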
  • Step 209 Perform an upsampling operation on the image of each scale to obtain a high frequency information image.
  • Upsampling can be understood as the re-acquisition of a digital signal from the image: if the new sampling rate is larger than that of the original digital signal (for example, one sampled from an analog signal), the operation is called upsampling.
  • Images I1, I2, ..., In-1, In, whose areas gradually decrease, are acquired through steps 207 and 208.
  • The image of each scale is processed according to the fourth formula to obtain a high-frequency information image.
  • Specifically, the images I1, I2, ..., In-1, In are processed according to the fourth formula Hn = In - U(In+1), that is, each layer minus the X1-times upsampled next layer, with I0 = B0, where Hn is the high-frequency information image of In and U denotes the upsampling operation.
  • Through step 209 shown in this embodiment, the high-frequency information image H1 of image I1, the high-frequency information image H2 of image I2, ..., and the high-frequency information image Hn of image In can be acquired; a sketch is given below.
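  • A sketch of step 209 under the Laplacian reading of the fourth formula (the layer indexing in the source is ambiguous; H_k = I_k - U(I_{k+1}) with I0 = B0 is the reading consistent with the fifth-formula recursion). Nearest-neighbor upsampling is an assumption; the text does not specify the interpolation:

    import numpy as np

    def upsample(img, shape):
        """2x upsampling by pixel repetition; `shape` crops any odd-size overshoot."""
        up = np.repeat(np.repeat(img, 2, axis=0), 2, axis=1)
        return up[:shape[0], :shape[1]]

    def highfreq_images(b0, levels):
        """Fourth formula: H_k = I_k - U(I_{k+1}), with I0 = B0 (k = 0..n-1)."""
        imgs = [b0] + levels  # I0(=B0), I1, ..., In
        return [imgs[k] - upsample(imgs[k + 1], imgs[k].shape)
                for k in range(len(imgs) - 1)]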
  • Step 210 The high-frequency information images are reconstructed layer by layer in order of increasing area according to the fifth formula to obtain the reconstructed image R0.
  • The fifth formula shown in this embodiment is the recursion Rn = In; Rn-1 = U(Rn) + Hn-1.
  • The image width and image height of the acquired reconstructed image R0 are equal to those of the input image. A sketch of the reconstruction is given below.
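  • A sketch of step 210, applying the fifth-formula recursion with the helpers above; the result R0 has the same width and height as B0 (and hence as the input image):

    def reconstruct(levels, hs):
        """Fifth formula: R_n = I_n; R_{k-1} = U(R_k) + H_{k-1}, down to R0."""
        r = levels[-1]                    # R_n = I_n
        for h in reversed(hs):            # H_{n-1}, ..., H_0
            r = upsample(r, h.shape) + h  # R_{k-1} = U(R_k) + H_{k-1}
        return r                          # R0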
  • The flat region and the edge region of the input image are determined through steps 211 to 214 shown below.
  • Step 211 Determine a selected area of the target pixel.
  • The target pixel is any pixel of the input image, and the selected area is centered on the target pixel.
  • This embodiment takes a square selected area with a side length of 5 pixels as an example.
  • The selected area may also have other shapes, and its size is likewise an exemplary description and not limiting: the side length may be greater than or less than 5 pixels.
  • The electronic device shown in this embodiment may analyze the input image to determine the side length of the selected area, so that the flat region, edge region, and texture region of the input image can be accurately analyzed according to the determined side length.
  • A selected area is determined centered on each pixel of the input image.
  • Step 212 Perform singular value decomposition on the selected area of the target pixel to obtain a first feature value S0 and a second feature value S1.
  • The selected area of the target pixel is subjected to singular value decomposition (Singular Value Decomposition, SVD).
  • The singular value decomposition yields the two main directions of the gradient distribution of the target area, and the projections of the gradient of the target area onto these two main directions are then obtained.
  • The projection of the gradient onto one main direction is determined as the first feature value S0, and the projection onto the other main direction is determined as the second feature value S1.
  • Step 213 Calculate the texture feature parameter gammaMap of the target pixel according to a first formula.
  • The first formula (given as an image in the original filing) computes gammaMap from S0, S1, kSum, lambda, and alpha.
  • Here kSum is the area of the selected area of the target pixel, which the electronic device can acquire for each pixel of the input image; lambda is any constant greater than 0 and less than or equal to 1; and alpha is any constant greater than 0 and less than or equal to 1.
  • The electronic device can try different values of lambda and alpha, evaluate the sharpness and signal-to-noise ratio of the resulting output images, and select the specific values of lambda and alpha accordingly. A sketch of steps 211 to 213 is given below.
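  • A sketch of steps 211 to 213 for one target pixel. The SVD of the stacked gradients yields S0 and S1 as in the text; because the first formula itself is given only as an image in the original filing, the gammaMap line below uses an assumed coherence-style combination of S0, S1, kSum, lambda, and alpha, not the patent's exact expression:

    import numpy as np

    def texture_param(img, y, x, half=2, lam=0.5, alpha=0.5):
        """SVD texture analysis in a (2*half+1)-sided square window (5x5 here).

        Assumes the window lies inside the image; lam and alpha are the
        tunable constants in (0, 1] called lambda and alpha in the text.
        """
        gy, gx = np.gradient(img.astype(float))
        win = np.s_[y - half:y + half + 1, x - half:x + half + 1]
        G = np.stack([gx[win].ravel(), gy[win].ravel()], axis=1)  # kSum x 2
        s0, s1 = np.linalg.svd(G, compute_uv=False)               # S0 >= S1
        ksum = G.shape[0]                                         # window area
        # Assumed stand-in for the unreadable first formula:
        gamma_map = ((s0 - s1) / (s0 + s1 + lam)) ** alpha / np.sqrt(ksum)
        return s0, s1, gamma_map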
  • Step 214 Determine the flat area, the edge area, and the texture area according to the texture feature parameter.
  • According to the texture feature parameter, it is determined whether the target pixel is located in the flat region, the edge region, or the texture region of the input image.
  • If the gammaMap of the selected area of the target pixel is less than the first threshold, the target pixel is determined to be located in the flat region of the input image.
  • If the gammaMap of the selected area of the target pixel is greater than the second threshold, the target pixel is determined to be located in the edge region of the input image.
  • If the gammaMap of the selected area of the target pixel is greater than or equal to the first threshold and less than or equal to the second threshold, the target pixel is determined to be located in the texture region of the input image.
  • The specific values of the first threshold and the second threshold are not limited, as long as the flat region, edge region, and texture region of the input image can be determined according to them.
  • The electronic device in this embodiment may acquire a test image in advance whose flat region, edge region, and texture region are known, and determine the magnitudes of the first threshold and the second threshold from these known regions.
  • The electronic device analyzes the gammaMap of each pixel of the input image to determine the region in which each pixel is located: the texture feature parameter of all pixels in the flat region is less than the first threshold, the texture feature parameter of all pixels in the edge region is greater than the second threshold, and the texture feature parameter of all pixels in the texture region is greater than or equal to the first threshold and less than or equal to the second threshold. A sketch of this classification is given below.
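  • A sketch of the classification in step 214, given an array of per-pixel gammaMap values and the two thresholds:

    def classify(gamma_map, t1, t2):
        """Step 214: flat if gammaMap < T1, edge if gammaMap > T2, texture otherwise."""
        flat = gamma_map < t1
        edge = gamma_map > t2
        texture = ~flat & ~edge  # T1 <= gammaMap <= T2
        return flat, edge, texture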
  • Step 215 Determine the first image.
  • The first image is the part of the reconstructed image R0 corresponding to the flat region and the edge region of the input image.
  • After the electronic device performs texture analysis on the acquired input image to obtain its flat region, edge region, and texture region, the texture-analyzed input image is stored, and the reconstructed image R0 is acquired through steps 203 to 210.
  • The electronic device compares the texture-analyzed input image with the reconstructed image R0, obtains the image in R0 corresponding to the flat region and the edge region of the input image, and determines that image as the first image.
  • Step 216 Filter all regions I0 of the input image by an isotropic filter LPF to obtain a target image.
  • The isotropic filter LPF is a filter whose characteristics are the same in every edge direction of the input image; for a detailed description of isotropic filters, refer to the prior art, which is not repeated in this embodiment.
  • All regions I0 of the input image include the second region (the texture region) of the input image.
  • The target image = I0 + (I0 - I0 ⊙ LPF) * sharpenLevel, where ⊙ denotes the convolution operation and sharpenLevel is the intensity of the high-frequency enhancement. A sketch of this unsharp-masking step is given below.
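  • A sketch of step 216, reusing the separable lowpass helper above as the isotropic LPF (the text does not pin down this kernel, so that choice is an assumption):

    def sharpen_isotropic(i0, sharpen_level):
        """Step 216: target image = I0 + (I0 - lowpass(I0)) * sharpenLevel."""
        return i0 + (i0 - lowpass(i0)) * sharpen_level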
  • Step 217 Determine the second image.
  • The second image is the part of the target image corresponding to the texture region of the input image.
  • After the electronic device performs texture analysis on the acquired input image to obtain its flat region, edge region, and texture region, the texture-analyzed input image is stored, and the target image is acquired through step 216.
  • The electronic device compares the texture-analyzed input image with the target image, obtains the image in the target image corresponding to the texture region of the input image, and determines that image as the second image.
  • Step 218 Determine a weight weight according to the second formula.
  • The second formula (given as an image in the original filing) maps the texture feature parameter gammaMap to the weight, where T1, T2, T3, and T4 are sequentially increasing constants greater than or equal to 0.
  • T1, T2, T3, and T4 shown in this embodiment may be set by the manufacturer at the time of shipment, or obtained through testing.
  • For testing, the electronic device may acquire a test image in advance, gradually adjust the values of T1, T2, T3, and T4, and compare the sharpness and signal-to-noise ratio of the resulting output images; the values at which the sharpness and signal-to-noise ratio of the output image are highest can then be adopted.
  • This way of obtaining T1, T2, T3, and T4 is an optional example and is not limiting, as long as the determined values yield an output image whose sharpness and signal-to-noise ratio meet the requirements.
  • The correspondence between the weight and the gammaMap is shown in FIG. 5; it should be clarified that this correspondence is an optional example and is not limiting.
  • Step 219 Perform image fusion on the first image R1 and the second image R2 according to the third formula to obtain the output image R, where the third formula is R = weight * R1 + (1 - weight) * R2.
  • For details about how to obtain the weight, see step 218; they are not repeated in this step. A sketch of the weighting and fusion is given below.
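  • A sketch of steps 218 and 219. The second formula is given only as an image in the original filing; the piecewise-linear weight below is an assumed shape consistent with the four increasing thresholds and the FIG. 5 description (weight near 1 in flat and edge regions, which come from the first image R1, and near 0 in texture regions, which come from R2). The fusion line is the third formula from the text:

    import numpy as np

    def fuse(r1, r2, gamma_map, t1, t2, t3, t4):
        """Assumed weight shape over gammaMap, then R = weight*R1 + (1-weight)*R2."""
        g = gamma_map
        w = np.ones_like(g, dtype=float)                             # flat: weight 1
        w = np.where((g >= t1) & (g < t2), (t2 - g) / (t2 - t1), w)  # ramp down
        w = np.where((g >= t2) & (g <= t3), 0.0, w)                  # texture: 0
        w = np.where((g > t3) & (g <= t4), (g - t3) / (t4 - t3), w)  # ramp up
        return w * r1 + (1.0 - w) * r2                               # third formula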
  • The image processing method shown in this embodiment can decompose the input image and upsample the image of each scale, so that the acquired first image not only removes noise in each frequency band of the input image but also improves the sharpness and flatness of the edges of the input image.
  • It can also filter the input image with an isotropic filter to obtain the second image, which improves the sharpness of the second image while maintaining its original naturalness.
  • Finally, it fuses the first image and the second image into an output image, so that the output image achieves good results in noise control, sharpness enhancement, and texture naturalness; that is, the output image improves image sharpness while effectively controlling noise amplification.
  • FIG. 7 to FIG. 10 are schematic diagrams comparing, for different regions of the same input image, the display effect without and with the image processing method provided by this embodiment.
  • In FIG. 7, the left side shows the image displayed by the electronic device when the image processing method of this embodiment is not applied to region 701 of the input image, and the right side shows the image displayed when the method is applied to region 701.
  • FIG. 8, FIG. 9, and FIG. 10 show the same comparison for regions 801, 901, and 1001 of the input image, respectively.
  • It can be seen that the image processing method shown in this embodiment achieves good results in noise control, sharpness improvement, and texture naturalness, and ensures that the output image effectively controls noise amplification while improving image sharpness.
  • The second embodiment shows how to obtain the output image when multi-scale decomposition and filtering are performed on all regions of the input image.
  • The third embodiment, shown in FIG. 6 and described below, shows how to obtain the output image when the flat region and edge region of the input image are processed separately from the texture region.
  • Step 601 Receive an input image.
  • Step 602 Acquire a statistical characteristic edge of each pixel of the input image.
  • Step 603 Calculate the intensity of the high frequency enhancement, the sharpenLevel, according to the eighth formula.
  • Step 604 Determine a selected area of the target pixel.
  • Step 605 Perform singular value decomposition on the selected area of the target pixel to obtain a first feature value S0 and a second feature value S1.
  • Step 606 Calculate the texture feature parameter gammaMap of the target pixel according to a first formula.
  • Step 607 Determine the flat area, the edge area, and the texture area according to the texture feature parameter.
  • Step 608 Determine a first area of the input image.
  • The first region of the input image is determined as the flat region and the edge region of the input image.
  • Step 609 Filter the first region Im of the input image by an edge-based filter EPF to obtain the filtered image A0.
  • The edge-based filter (Edge Preserve Filter, EPF) shown in this embodiment may be a non-local mean filter NLMean or a kernel regression filter SKR.
  • The specific manner in which the edge-based filter filters the first region Im of the input image is: A0 = Im ⊙ EPF, where ⊙ denotes the convolution operation.
  • Step 610 Perform high-frequency enhancement on the filtered image A0 through the low-pass filter LPF to obtain the high-frequency-enhanced image B0, where B0 = A0 + [A0 - A0 ⊙ LPF] * sharpenLevel and sharpenLevel is the intensity of the high-frequency enhancement. A sketch of steps 609 and 610 is given below.
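  • A sketch of steps 609 and 610, with `epf` standing in for any edge-preserving filter (for example, a non-local-means implementation) and the lowpass helper from the second embodiment as the LPF; both filter choices are assumptions:

    def enhance_first_region(im, epf, sharpen_level):
        """Step 609: A0 = EPF(Im); step 610: B0 = A0 + (A0 - lowpass(A0)) * sharpenLevel."""
        a0 = epf(im)
        return a0 + (a0 - lowpass(a0)) * sharpen_level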
  • Step 611 Determine a target layer number of the multi-layer image.
  • For step 611 in this embodiment, refer to step 206 shown in FIG. 2; it is not described in detail here.
  • Step 612 Perform low-pass filtering and downsampling on the high-frequency-enhanced image B0 layer by layer to obtain a multi-layer image with decreasing area.
  • According to the target layer number determined in step 611, low-pass filtering and downsampling are performed on the high-frequency-enhanced image B0 layer by layer in step 612.
  • After low-pass filtering, an X1-times downsampling operation is performed to form image I1; the specific value of X1 is not limited, as long as X1 is greater than 1.
  • The image width of I1, formed by X1-times downsampling of the low-pass-filtered B0, is 1/X1 of the width of the high-frequency-enhanced image B0, and the image height of I1 is 1/X1 of the height of B0.
  • In step 612, after image I1 is acquired, low-pass filtering and downsampling are performed again on the basis of I1.
  • The specific manner of low-pass filtering I1 is the same as that for B0 and is not repeated in this embodiment.
  • The low-pass-filtered I1 is subjected to an X1-times downsampling operation to form image I2.
  • The operation mode of the downsampling is the same as that in the embodiment shown in FIG. 2 and is not described in detail again.
  • This embodiment takes, as an example, low-pass filtering and downsampling performed layer by layer with the same multiple X1; different multiples may instead be used at different layers, which is not limited in this embodiment.
  • The decomposition shown in this embodiment may be a multi-scale decomposition, that is, a decomposition that uses mathematical-analysis methods to process the image at different scales; this embodiment is illustrated with such a decomposition.
  • I1 is the first-layer image obtained by multi-scale decomposition of the first region Im of the input image; if m is greater than 1, Im is the m-th layer image obtained by multi-scale decomposition of the (m-1)-th layer image.
  • Step 613 Perform an upsampling operation on the image of each scale to obtain a high frequency information image.
  • The images I1, I2, ..., Im-1, Im, whose areas gradually decrease, are acquired through step 612.
  • The image of each scale is processed according to the sixth formula to obtain a high-frequency information image.
  • Specifically, the images I1, I2, ..., Im-1, Im are processed according to the sixth formula Hm = Im - U(Im+1), that is, each layer minus the X1-times upsampled next layer, where Hm is the high-frequency information image of Im and U denotes the upsampling operation.
  • Through step 613 shown in this embodiment, the high-frequency information image H1 of image I1, the high-frequency information image H2 of image I2, ..., and the high-frequency information image Hm of image Im can be acquired.
  • Step 614 Reconstruct the high-frequency information images Hm of Im layer by layer in order of increasing area according to the seventh formula to obtain the reconstructed image R0.
  • The seventh formula shown in this embodiment is the recursion Rm = Im; Rm-1 = U(Rm) + Hm-1.
  • The image width and image height of the acquired reconstructed image R0 are equal to those of the first region of the input image.
  • Step 615 Determine the reconstructed image R0 as the first image.
  • Step 616 Determine the second region of the input image.
  • The second region of the input image is determined as the texture region of the input image.
  • Step 617 Filter the second region M0 of the input image by the isotropic filter LPF to obtain a second image R2.
  • Here R2 = M0 + (M0 - M0 ⊙ LPF) * sharpenLevel, where ⊙ denotes the convolution operation and sharpenLevel is the intensity of the high-frequency enhancement.
  • Step 618 Determine a weight according to the second formula.
  • Step 619 Perform image fusion on the first image R1 and the second image R2 according to a third formula to obtain an output image R.
  • For steps 618 and 619 in this embodiment, refer to steps 218 and 219 shown in FIG. 2; they are not described in detail again.
  • The image processing method shown in this embodiment can decompose the first region of the input image and upsample the image of each scale, so that the acquired first image not only removes noise in each frequency band of the first region of the input image but also improves the sharpness and flatness of the edges of the first region.
  • It can also filter the second region of the input image with an isotropic filter to obtain the second image, which improves the sharpness of the second image while maintaining its original naturalness.
  • Finally, it fuses the first image and the second image into an output image, so that the output image achieves good results in noise control, sharpness enhancement, and texture naturalness; that is, the output image improves image sharpness while effectively controlling noise amplification.
  • This embodiment provides an electronic device capable of implementing the image processing method shown in FIG. 2, which includes the following units.
  • a fourth acquiring unit 1101, configured to acquire the statistical characteristic edge of each pixel of the input image, where the statistical characteristic edge of each pixel is the edge intensity of the input image or the intensity of the high-frequency information of the input image;
  • a fifth obtaining unit 1102, configured to calculate the intensity of the high-frequency enhancement, sharpenLevel, according to the eighth formula;
  • where, in the eighth formula (given as an image in the original filing), W1, W2, W3, and W4 are sequentially increasing constants greater than or equal to 0, and MinLevel1 and MinLevel2 are constants smaller than MaxLevel.
  • a second determining unit 1103, configured to perform texture analysis on each pixel of the input image to determine a texture feature parameter of each pixel
  • the second determining unit 1103 includes:
  • a first determining module 11031 configured to determine a selected area of the target pixel, where the target pixel is any pixel of the input image, and the selected area is centered on the target pixel;
  • a second determining module 11032 configured to perform singular value decomposition on the selected area of the target pixel to obtain a first feature value S0 and a second feature value S1;
  • a third determining module 11033 configured to calculate the texture feature parameter gammaMap of the target pixel according to a first formula
  • where, in the first formula (given as an image in the original filing), kSum is the area of the selected area of the target pixel, lambda is any constant greater than 0 and less than or equal to 1, and alpha is any constant greater than 0 and less than or equal to 1.
  • a third determining unit 1104 configured to determine, according to the texture feature parameter, the flat region, the edge region, and the texture region, wherein the texture feature parameter of all pixels in the flat region is less than a first threshold
  • the texture feature parameter of all pixels in the edge region is greater than a second threshold
  • the texture feature parameter of all pixels in the texture region is greater than or equal to the first threshold and less than or equal to the second threshold, the first threshold being less than the second threshold.
  • a first determining unit 1105, configured to determine a target layer number, where the target layer number is any natural number in [1, log2(min(width, height))], width is the width of the input image, and height is the height of the input image;
  • a first acquiring unit 1106, configured to decompose the first region of the input image to obtain a multi-layer image with decreasing area, where the first region is the flat region and the edge region of the input image, and the number of layers of the multi-layer image is equal to the target layer number;
  • the first obtaining unit 1106 includes:
  • the first obtaining module 11061 is configured to filter all the regions I0 of the input image by the edge-based filter EPF to obtain the filtered image A0, where A0 = I0 ⊙ EPF and ⊙ denotes the convolution operation;
  • the second obtaining module 11062 is configured to perform high-frequency enhancement on the filtered image A0 through the low-pass filter LPF to obtain the high-frequency-enhanced image B0, where B0 = A0 + [A0 - A0 ⊙ LPF] * sharpenLevel and sharpenLevel is the intensity of the high-frequency enhancement;
  • the third obtaining module 11063 is configured to perform low-pass filtering and down sampling operations on the high-frequency enhanced image B0 layer by layer to obtain a multi-layer image with decreasing area.
  • a second acquiring unit 1107 configured to perform an upsampling operation on an image of each scale to obtain a high frequency information image
  • a third acquiring unit 1108, configured to reconstruct all the high-frequency information images layer by layer in order of increasing area to obtain a first image, where the area of the first image is equal to the area of the first region of the input image;
  • the third obtaining unit 1108 includes:
  • a sixth determining module 11081, configured to reconstruct the high-frequency information images Hn of In layer by layer in order of increasing area to the reconstructed image R0 according to the fifth formula, where the fifth formula is the recursion Rn = In; Rn-1 = U(Rn) + Hn-1;
  • the seventh determining module 11082 is configured to determine the first image, where the first image is the image in the reconstructed image R0 corresponding to the flat region and the edge region of the input image.
  • a filtering unit 1109 configured to filter a second region of the input image by using an isotropic filter to obtain a second image, where the second region is a texture region of the input image;
  • the filtering unit 1109 includes:
  • An eighth obtaining module 11091 configured to filter, by using the isotropic filter LPF, all the regions I0 of the input image to obtain a target image;
  • where the target image = I0 + (I0 - I0 ⊙ LPF) * sharpenLevel, ⊙ denotes the convolution operation, and sharpenLevel is the intensity of the high-frequency enhancement;
  • the tenth determining module 11092 is configured to determine the second image, wherein the second image is an image in the target image that corresponds to the texture region of the input image.
  • the fusion unit 1110 is configured to perform image fusion on the first image and the second image to obtain an output image.
  • the fusion unit 1110 includes:
  • a fourth determining module 11101 configured to determine a weight weight according to the second formula
  • where, in the second formula (given as an image in the original filing), T1, T2, T3, and T4 are sequentially increasing constants greater than or equal to 0;
  • a fifth determining module 11102, configured to perform image fusion on the first image R1 and the second image R2 according to the third formula to obtain the output image R, where the third formula is R = weight * R1 + (1 - weight) * R2;
  • For the specific implementation process of the image processing method performed by the electronic device shown in this embodiment, refer to FIG. 2; it is not described in detail here.
  • For the beneficial effects of the image processing performed by the electronic device shown in this embodiment, also refer to FIG. 2.
  • This embodiment provides an electronic device capable of implementing the image processing method shown in FIG. 6, which includes the following units.
  • a fourth acquiring unit 1201, configured to acquire the statistical characteristic edge of each pixel of the input image, where the statistical characteristic edge of each pixel is the edge intensity of the input image or the intensity of the high-frequency information of the input image;
  • the fifth obtaining unit 1202 is configured to calculate the intensity of the high-frequency enhancement, sharpenLevel, according to the eighth formula;
  • where, in the eighth formula (given as an image in the original filing), W1, W2, W3, and W4 are sequentially increasing constants greater than or equal to 0, and MinLevel1 and MinLevel2 are constants smaller than MaxLevel.
  • a second determining unit 1203, configured to perform texture analysis on each pixel of the input image to determine a texture feature parameter of each pixel
  • the second determining unit 1203 includes:
  • a first determining module 12031 configured to determine a selected area of the target pixel, where the target pixel is any pixel of the input image, and the selected area is centered on the target pixel;
  • a second determining module 12032 configured to perform singular value decomposition on the selected area of the target pixel to obtain a first feature value S0 and a second feature value S1;
  • a third determining module 12033, configured to calculate the texture feature parameter gammaMap of the target pixel according to the first formula;
  • where, in the first formula (given as an image in the original filing), kSum is the area of the selected area of the target pixel, lambda is any constant greater than 0 and less than or equal to 1, and alpha is any constant greater than 0 and less than or equal to 1.
  • a third determining unit 1204 configured to determine, according to the texture feature parameter, the flat region, the edge region, and the texture region, wherein the texture feature parameter of all pixels in the flat region is less than a first threshold
  • the texture feature parameter of all pixels in the edge region is greater than a second threshold
  • the texture feature parameter of all pixels in the texture region is greater than or equal to the first threshold and less than or equal to the second threshold, the first threshold being less than the second threshold.
  • the first determining unit 1205 is configured to determine a target layer number, where the target layer number is any natural number in [1, log2(min(width, height))], width is the width of the input image, and height is the height of the input image;
  • a first acquiring unit 1206, configured to decompose the first region of the input image to obtain a multi-layer image with decreasing area, where the first region is the flat region and the edge region of the input image, and the number of layers of the multi-layer image is equal to the target layer number;
  • the first obtaining unit 1206 includes:
  • a fourth obtaining module 12061, configured to filter the first region Im by an edge-based filter EPF to obtain the filtered image A0, where A0 = Im ⊙ EPF and ⊙ denotes the convolution operation;
  • a fifth obtaining module 12062, configured to perform high-frequency enhancement on the filtered image A0 through the low-pass filter LPF to obtain the high-frequency-enhanced image B0, where B0 = A0 + [A0 - A0 ⊙ LPF] * sharpenLevel and sharpenLevel is the intensity of the high-frequency enhancement;
  • a sixth obtaining module 12063, configured to perform low-pass filtering and downsampling on the high-frequency-enhanced image B0 layer by layer to obtain a multi-layer image with decreasing area.
  • a second obtaining unit 1207 configured to perform an upsampling operation on an image of each scale to obtain a high frequency information image
  • The second obtaining unit 1207 is further configured to upsample the image of each scale according to the sixth formula to obtain high-frequency information images, where, if m equals 1, I1 is the first-layer image obtained by multi-scale decomposition of the first region Im of the input image; if m is greater than 1, Im is the m-th layer image obtained by multi-scale decomposition of the (m-1)-th layer image; Hm is the high-frequency information image of Im; and U denotes the upsampling operation;
  • a third obtaining unit 1208, configured to reconstruct all the high-frequency information images layer by layer in order of increasing area to obtain a first image, where the area of the first image is equal to the area of the first region of the input image;
  • the third obtaining unit 1208 includes:
  • an eighth determining module, configured to reconstruct the high-frequency information images Hm of Im layer by layer in order of increasing area to the reconstructed image R0 according to the seventh formula, where the seventh formula is the recursion Rm = Im; Rm-1 = U(Rm) + Hm-1;
  • a ninth determining module, configured to determine the reconstructed image R0 as the first image;
  • a filtering unit 1209 configured to filter a second region of the input image by using an isotropic filter to obtain a second image, where the second region is a texture region of the input image;
  • the filtering unit 1209 is further configured to filter the second region M0 of the input image by the isotropic filter LPF to obtain the second image R2;
  • where R2 = M0 + (M0 - M0 ⊙ LPF) * sharpenLevel, ⊙ denotes the convolution operation, and sharpenLevel is the intensity of the high-frequency enhancement;
  • the fusion unit 1210 is configured to perform image fusion on the first image and the second image to obtain an output image.
  • the fusion unit 1210 includes:
  • a fourth determining module 12101 configured to determine a weight weight according to the second formula
  • where, in the second formula (given as an image in the original filing), T1, T2, T3, and T4 are sequentially increasing constants greater than or equal to 0;
  • a fifth determining module 12102, configured to perform image fusion on the first image R1 and the second image R2 according to the third formula to obtain the output image R, where the third formula is R = weight * R1 + (1 - weight) * R2;
  • For the specific implementation process of the image processing method performed by the electronic device shown in this embodiment, refer to FIG. 6; it is not described in detail here.
  • The fourth embodiment and the fifth embodiment describe, from the perspective of functional modules, the structure of an electronic device capable of implementing the image processing method provided by the embodiments of the present invention; the following describes the specific structure of the electronic device in detail with reference to FIG. 1.
  • The specific functions of the processor 103, the output unit 101, and the input unit 107 shown in FIG. 1 are further described in this embodiment, so that the electronic device shown in FIG. 1 can implement the image processing method provided by the embodiments of the present invention.
  • the processor 103 is configured to acquire an input image by using the input unit 107;
  • the processor 103 is further configured to determine a target layer number, where the target layer number is any natural number in [1, log2(min(width, height))], width is the width of the input image, and height is the height of the input image;
  • the processor 103 is further configured to decompose the first region of the input image to obtain a multi-layer image with decreasing area, where the first region is the flat region and the edge region of the input image, and the number of layers of the multi-layer image is equal to the target layer number;
  • the processor 103 is further configured to perform an upsampling operation on an image of each scale to obtain a high frequency information image
  • the processor 103 is further configured to reconstruct all the high-frequency information images layer by layer in order of increasing area to obtain a first image, where the area of the first image is equal to the area of the first region of the input image;
  • the processor 103 is further configured to filter a second region of the input image by using an isotropic filter to obtain a second image, where the second region is a texture region of the input image;
  • the processor 103 is further configured to perform image fusion on the first image and the second image to obtain an output image;
  • the processor 103 displays the output image through the output unit 101.
  • the processor 103 is further configured to perform texture analysis on each pixel of the input image to determine a texture feature parameter of each pixel.
  • the processor 103 is further configured to determine the flat region, the edge region, and the texture region according to the texture feature parameter, where the texture feature parameter of all pixels in the flat region is less than the first threshold, the texture feature parameter of all pixels in the edge region is greater than the second threshold, and the texture feature parameter of all pixels in the texture region is greater than or equal to the first threshold and less than or equal to the second threshold, the first threshold being less than the second threshold.
  • the processor 103 is further configured to determine a selected area of the target pixel, where the target pixel is any pixel of the input image, and the selected area is centered on the target pixel ;
  • the processor 103 is further configured to perform singular value decomposition on the selected area of the target pixel to obtain a first feature value S0 and a second feature value S1;
  • the processor 103 is further configured to calculate the texture feature parameter gammaMap of the target pixel according to a first formula
  • where, in the first formula (given as an image in the original filing), kSum is the area of the selected area of the target pixel, lambda is any constant greater than 0 and less than or equal to 1, and alpha is any constant greater than 0 and less than or equal to 1.
  • the processor 103 is further configured to determine a weight weight according to the second formula
  • where, in the second formula (given as an image in the original filing), T1, T2, T3, and T4 are sequentially increasing constants greater than or equal to 0;
  • the processor 103 is further configured to perform image fusion on the first image R1 and the second image R2 according to the third formula to obtain the output image R, where the third formula is R = weight * R1 + (1 - weight) * R2;
  • the processor 103 is further configured to filter all the regions I0 of the input image by an edge-based filter EPF to obtain the filtered image A0, where A0 = I0 ⊙ EPF and ⊙ denotes the convolution operation;
  • the processor 103 is further configured to perform high-frequency enhancement on the filtered image A0 through the low-pass filter LPF to obtain the high-frequency-enhanced image B0, where B0 = A0 + [A0 - A0 ⊙ LPF] * sharpenLevel and sharpenLevel is the intensity of the high-frequency enhancement;
  • the processor 103 is further configured to perform low-pass filtering and down sampling operations on the high-frequency enhanced image B0 layer by layer to obtain a multi-layer image with decreasing area.
  • the processor 103 is further configured to determine the first image, wherein the first image is an image of the reconstructed image R0 that corresponds to the flat region and the edge region of the input image.
  • the processor 103 is further configured to filter the first region Im by an edge-based filter EPF to obtain the filtered image A0, where A0 = Im ⊙ EPF;
  • the processor 103 is further configured to perform high-frequency enhancement on the filtered image A0 through the low-pass filter LPF to obtain the high-frequency-enhanced image B0, where B0 = A0 + [A0 - A0 ⊙ LPF] * sharpenLevel and sharpenLevel is the intensity of the high-frequency enhancement;
  • the processor 103 is further configured to perform low-pass filtering and downsampling on the high-frequency-enhanced image B0 layer by layer to obtain a multi-layer image with decreasing area.
  • the processor 103 is further configured to determine the reconstructed image R0 as the first image.
  • the processor 103 is further configured to filter, by using the isotropic filter LPF, all the regions I0 of the input image to obtain a target image;
  • where the target image = I0 + (I0 - I0 ⊙ LPF) * sharpenLevel, ⊙ denotes the convolution operation, and sharpenLevel is the intensity of the high-frequency enhancement;
  • the processor 103 is further configured to determine the second image, wherein the second image is an image in the target image that corresponds to the texture region of the input image.
  • the processor 103 is further configured to filter the second region M0 of the input image by using the isotropic filter LPF to obtain a second image R2;
  • where R2 = M0 + (M0 - M0 ⊙ LPF) * sharpenLevel, ⊙ denotes the convolution operation, and sharpenLevel is the intensity of the high-frequency enhancement;
  • the processor 103 is further configured to acquire the statistical characteristic edge of each pixel of the input image, where the statistical characteristic edge of each pixel is the edge intensity of the input image or the intensity of the high-frequency information of the input image;
  • the processor 103 is further configured to calculate the intensity of the high-frequency enhancement, sharpenLevel, according to the eighth formula;
  • where, in the eighth formula (given as an image in the original filing), W1, W2, W3, and W4 are sequentially increasing constants greater than or equal to 0, and MinLevel1 and MinLevel2 are constants smaller than MaxLevel.
  • For the specific process of the image processing method performed by the electronic device shown in FIG. 1, refer to FIG. 2 and FIG. 6; details are not repeated in this embodiment.
  • For the beneficial effects of the image processing method performed by the electronic device shown in FIG. 1, also refer to FIG. 2 and FIG. 6.
  • This embodiment provides a computer readable storage medium.
  • the computer readable storage medium provided by this embodiment is for storing one or more computer programs, and the one or more computer programs include program code.
  • the program code is for performing the image processing method illustrated in FIGS. 2 and/or 6 when the computer program is run on a computer.
  • For the specific process of the image processing method performed by the program code, refer to FIG. 2 and/or FIG. 6; it is not described in detail in this embodiment.
  • The disclosed system, apparatus, and method may be implemented in other manners; the device embodiments described above are merely illustrative.
  • The division of units is only a logical functional division; in actual implementation there may be other divisions, for example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not executed.
  • the mutual coupling or direct coupling or communication connection shown or discussed may be an indirect coupling or communication connection through some interface, device or unit, and may be in an electrical, mechanical or other form.
  • the units described as separate components may or may not be physically separated, and the components displayed as units may or may not be physical units, that is, may be located in one place, or may be distributed to multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of the embodiment.
  • each functional unit in each embodiment of the present invention may be integrated into one processing unit, or each unit may exist physically separately, or two or more units may be integrated into one unit.
  • the above integrated unit can be implemented in the form of hardware or in the form of a software functional unit.
  • the integrated unit if implemented in the form of a software functional unit and sold or used as a standalone product, may be stored in a computer readable storage medium.
  • The technical solution of the present invention, in essence or the part contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product stored in a storage medium.
  • The software product includes a number of instructions that cause a computer device (which may be a personal computer, a server, a network device, or the like) to perform all or part of the steps of the methods described in the embodiments of the present invention.
  • The foregoing storage medium includes a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disc, or the like.

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Image Processing (AREA)
  • Facsimile Image Signal Circuits (AREA)

Abstract

An image processing method, an electronic device, and a storage medium. The method includes: determining a target layer number; decomposing a first region of an input image to obtain a multi-layer image; performing an upsampling operation on the image of each scale to obtain high-frequency information images; reconstructing all the high-frequency information images layer by layer in order of increasing area to obtain a first image; filtering a second region of the input image with an isotropic filter to obtain a second image; and performing image fusion on the first image and the second image to obtain an output image. The method ensures that the output image improves image sharpness while effectively controlling noise amplification.

Description

一种图像处理方法、电子设备以及存储介质 技术领域
本发明涉及通信领域,尤其涉及的是一种图像处理方法、电子设备以及存储介质。
背景技术
随着镜头和成像传感器技术的飞速发展,电子设备成像分辨率呈现日新月异的发展态势。在这种发展潮流中,用户对于电子设备拍照清晰度的要求也越来越严格。
电子设备在对图像进行处理的过程中为提升图像的清晰度,则电子设备在对图像进行处理的过程中,通常会有一个图像信号处理器对图像进行处理,该信号处理器包括有两个模块,一个模块用于图像噪声去除,另一个模块用于图像锐度增强,但是,在采用图像信号处理器对图像进行处理的过程中,噪声去除往往不可避免的导致图像细节损失和清晰度下降,而锐度增强也存在图像噪声放大等问题,而且图像信号处理器只能在同一频段对图像进行噪声去除,并不能处理其他频段的噪声,例如,图像信号处理器只能在去除高频的噪声,无法去除中低频噪声。
发明内容
本发明实施例提供了一种图像处理方法、电子设备以及存储介质。
本发明实施例第一方面提供了一种图像处理方法,包括:
获取输入图像,其中,该输入图像可为电子设备所拍摄的图像,或者为电子设备接收到其他电子设备所发送的图像;
根据输入图像的宽度和输入图像的高度确定目标层数,其中,所述目标层数为[1,log2(min(width,height))]内的任一自然数,width为输入图像的宽度,height为输入图像的高度;
对所述输入图像的第一区域进行分解以获取面积递减的多层图像,其中,所述第一区域为所述输入图像的平坦区域和边缘区域,且所述多层图像的层数等于所述目标层数;
对每个尺度的图像进行上采样操作以获取高频信息图像;
对所有所述高频信息图像按面积递增的顺序逐层进行重建以获取第一图像,且所述第一图像的面积等于所述输入图像的第一区域的面积;
通过各向同性的滤波器对所述输入图像的第二区域进行滤波以获取第二图像,其中,所述第二区域为所述输入图像的纹理区域;
其中,所述各向同性的滤波器为在所述输入图像的每个边缘方向上滤波器的特性相同的滤波器。
对所述第一图像和所述第二图像进行图像融合以获取输出图像。
采用实施例所示的图像处理方法,能够对输入图像进行分解,以使获取到的第一图像不仅可以去除输入图像各个频段的噪声,同时还可以提升输入图像边缘的清晰度和平整度。通过各向同性的滤波器对所述输入图像进行滤波以获取第二图像,从而很好的提升第二图像的清晰度并维持原来的自然度。
本实施例所示的图像处理方法能够将第一图像和第二图像进行图像融合以形成输出图像,从而使得输出图像在噪声控制、清晰度提升、纹理自然度等方面都可以达到较好的效果,即输出图像在提升图像清晰度的同时,有效的控制噪声放大问题。
本发明实施例第一方面的第二种实现方式以及本发明实施例第一方面的第三种实现方式用于确定输入图像的所述平坦区域,所述边缘区域以及所述纹理区域。
结合本发明实施例第一方面,本发明实施例第一方面的第一种实现方式中,
本实施例中,可根据每个像素的纹理特征参数确定输入图像的所述平坦区域,所述边缘区域以及所述纹理区域;
具体的,对所述输入图像的每个像素进行纹理分析以确定每个像素的纹理特征参数;
所述平坦区域内的所有像素的所述纹理特征参数小于第一阈值;
所述边缘区域内的所有像素的所述纹理特征参数大于第二阈值;
所述纹理区域内的所有像素的所述纹理特征参数大于或等于所述第一阈值且小于或等于所述第二阈值,所述第一阈值小于所述第二阈值。
结合本发明实施例第一方面的第一种实现方式,本发明实施例第一方面的第二种实现方式中,
确定每个像素的纹理特征参数的具体方式可为:
确定目标像素的选定区域,其中,所述目标像素为所述输入图像的任一像素,且所述选定区域以所述目标像素为中心;
本实施例以所述选定区域为正方形为例进行示例说明,且本实施例所示的所述选定区域的边长为5个像素。
对所述目标像素的所述选定区域进行奇异值分解(英文全称:Singular Value Decomposition,英文简称:SVD);
通过对所述目标像素的所述选定区域进行奇异值分解以获取所述目标区域的梯度分布的两个主方向;
根据所述目标区域的梯度分布的两个主方向获取所述目标区域的梯度在这两个主方向上的投影;
本实施例中,确定所述目标区域的梯度在这一个主方向上的投影为第一特征值S0,确定所述目标区域的梯度在这另一个主方向上的投影为第二特征值S1。
根据第一公式计算所述目标像素的所述纹理特征参数gammaMap;
所述第一公式为:
Figure PCTCN2016078346-appb-000001
其中,所述kSum为所述目标像素的所述选定区域的面积,所述lambda为大于0且小于或等于1之间的任一常数,所述alpha为大于0且小于或等于1之间的任一常数。
通过本实施例所示的图像处理方法能够快速的区分所述输入图像的平坦区域,所述边缘区域以及所述纹理区域,从而有效的保证了本实施例所示的图像处理方法能够对输入图像的不同区域进行适应的处理,在有效的提升了图像清晰度、平整度以及自然度的前提下,提升了图像处理的效率。
结合本发明实施例第一方面的第二种实现方式,本发明实施例第一方面的第三种实现方式中,
本种实现方式中对如何将所述第一图像和所述第二图像进行图像融合以形成所述输出图像进行说明:
具体的,根据第二公式确定权重weight;
所述第二公式为:
Figure PCTCN2016078346-appb-000002
其中,T1、T2、T3、T4为依次递增且大于或等于0的常数;
根据第三公式对所述第一图像R1和所述第二图像R2进行图像融合以获取输出图像R;
其中,所述第三公式为:R=weight*R1+(1‐weight)*R2。
通过本实施例所示的图像处理方法,能够将所述第一图像和所述第二图像进行融合以形成输出图像,进而有效的保证输出图像在噪声控制、清晰度提升、纹理自然度等方面都可以达到较好的效果,即输出图像在提升图像清晰度的同时,有效的控制噪声放大问题。
结合本发明实施例第一方面至本发明实施例第一方面的第三种实现方式任一项所示,本发明实施例第一方面的第四种实现方式中,
以下对根据所述输入图像如何获取所述多层图像进行说明:
通过基于边缘的滤波器EPF对所述输入图像的所有区域I0进行滤波以获取滤波后图像A0;
本实施例所示的基于边缘的滤波器(英文全称:Edge Preserve Filter,英文简称:EPF)可为非局部均值滤波器NLMean或核回归滤波器SKR。
本实施例所示的所述输入图像的所有区域I0包括输入图像的平坦区域,边缘区域以及纹理区域。
其中,A0=I0⊙EPF,⊙表示卷积操作;
通过低通滤波器LPF对所述滤波后图像A0进行高频增强以获取高频增强 后图像B0;
其中,B0=A0+[A0‐A0⊙LPF]*sharpenLevel,sharpenLevel为高频增强的强度;
逐层对所述高频增强后图像B0进行低通滤波和下采样操作以获取面积递减的多层图像,以使多层图像的层数等于所述目标层数。
结合本发明实施例第一方面的第四种实现方式,本发明实施例第一方面的第五种实现方式中,
获取高频信息图像的具体过程为:
根据第四公式对每个尺度的图像进行上采样操作以获取高频信息图像;
具体的,根据第四公式分别对图像I1、I2……In‐1、In进行上采样操作以获取高频信息图像;
其中,所述第四公式为Hn=In‐U(In),若n等于1时,则I1为对所述高频增强后图像B0进行多尺度分解以获取的第1层图像,若n大于1时,In为对第n‐1层图像进行多尺度分解以获取的第n层图像;
Hn为In的高频信息图像,U表示上采样操作。
根据第四公式分别对图像I1、I2……In‐1、In进行X2倍的上采样操作以获取高频信息图像。
通过本实施例所示能够获取到图像I1的高频信息图像H1、获取到图像I2的高频信息图像H2、获取到图像In的高频信息图像Hn。
根据第五公式将In的高频信息图像Hn按面积递增的顺序逐层重建至重建图像R0,其中,所述第五公式为递归公式,且所述第五公式为:Rn=In;Rn‐1=U(Rn)+Hn‐1;
根据该递归公式指导Rn后,带入后面的等式Rn‐1=U(Rn)+Hn‐1,就可以得到Rn‐1。n输入图像最大的层数,一直到1;
获取的重建图像R0的图像宽度和图像高度与输入图像的图像宽度和图像高度相等;
在已确定输入图像的所述平坦区域和所述边缘区域的情况下,确定所述第一图像;
其中,所述第一图像为所述重建图像R0中与所述输入图像的所述平坦区域和所述边缘区域对应的图像。
通过本实施例所示的图像处理方法所获取到的第一图像不仅可以去除输入图像各个频段的噪声,同时还可以提升输入图像边缘的清晰度和平整度。
结合本发明实施例第一方面至本发明实施例第一方面的第三种实现方式任一项所示,本发明实施例第一方面的第六种实现方式中,
在已确定所述输入图像的所述平坦区域和所述边缘区域的情况下,可确定所述输入图像的第一区域;
在确定了输入图像的第一区域后即可获取面积递减的多层图像;
具体的,通过基于边缘的滤波器EPF对所述第一区域Im进行滤波以获取滤波后图像A0
其中,A0=Im⊙EPF,⊙表示卷积操作;
通过低通滤波器LPF对所述滤波后图像A0进行高频增强以获取高频增强后图像B0
其中,B0=A0+[A0‐A0⊙LPF]*sharpenLevel,sharpenLevel为高频增强的强度;
逐层对所述高频增强后图像B0进行低通滤波和下采样操作以获取面积递减的多层图像。
结合本发明实施例第一方面的第六种实现方式,本发明实施例第一方面的第七种实现方式中,
获取高频信息图像的具体过程为:
根据第六公式对每个尺度的图像进行上采样操作以获取高频信息图像,所述第六公式为Hm=Im‐U(Im),其中,若m等于1时,则I1为对所述输入图像的所述第一区域Im进行多尺度分解以获取的第1层图像,若m大于1时,Im为对第m‐1层图像进行多尺度分解以获取的第m层图像,Hm为Im的高频信息图像,U表示上采样操作;
所述对所有所述高频信息图像按面积递增的顺序逐层进行重建以获取第一图像包括:
根据第七公式将Im的高频信息图像Hm按面积递增的顺序逐层重建至重建图像R0,其中,所述第七公式为递归公式,且所述第七公式为:Rm=Im;Rm‐1=U(Rm)+Hm‐1;
确定所述重建图像R0为所述第一图像。
通过本实施例所示的图像处理方法所获取到的第一图像不仅可以去除输入图像各个频段的噪声,同时还可以提升输入图像边缘的清晰度和平整度。
结合本发明实施例第一方面至本发明实施例第一方面的第三种实现方式任一项所示,本发明实施例第一方面的第八种实现方式中,
获取第二图像的具体过程为:
通过所述各向同性的滤波器LPF对所述输入图像的所有区域I0进行滤波以获取目标图像;
其中,所述目标图像=I0+(I0‐I0⊙LPF)*sharpenLevel,⊙表示卷积操作,sharpenLevel为高频增强的强度;
确定所述第二图像,其中,所述第二图像为所述目标图像中与所述输入图像的所述纹理区域对应的图像。
通过本实施例所示的图像处理方法能够获取到提升了图像清晰度并维持原来的自然度的第二图像。
结合本发明实施例第一方面至本发明实施例第一方面的第三种实现方式任一项所示,本发明实施例第一方面的第九种实现方式中,
所述通过各向同性的滤波器对所述输入图像的第二区域进行滤波以获取第二图像包括:
通过所述各向同性的滤波器LPF对所述输入图像的所述第二区域M0进行滤波以获取第二图像R2;
其中,R2=M0+(M0‐M0⊙LPF)*sharpenLevel,⊙表示卷积操作,sharpenLevel为高频增强的强度。
通过本实施例所示的图像处理方法能够获取到提升了图像清晰度并维持原来的自然度的第二图像。
结合本发明实施例第一方面的第四种实现方式、本发明实施例第一方面的第六种实现方式、本发明实施例第一方面的第八种实现方式或本发明实施例第一方面的第九种实现方式任一项所示,本发明实施例第一方面的第十种实现方式中,
所述方法还包括:
获取所述输入图像每个像素的统计特性edge,其中,所述输入图像每个像素的统计特性edge为所述输入图像的边缘强度或所述输入图像的高频信息的强度;
根据第八公式计算所述高频增强的强度sharpenLevel;
其中,所述第八公式为:
Figure PCTCN2016078346-appb-000003
其中,W1、W2、W3、W4为依次递增的大于或等于0的常数,且MinLevel1、MinLevel2为小于MaxLevel的常数。
本发明实施例第二方面提供了一种电子设备,包括:
第一确定单元,用于确定目标层数,其中,所述目标层数为[1,log2(min(width,height))]内的任一自然数,width为输入图像的宽度,height为输入图像的高度;
第一获取单元,用于对所述输入图像的第一区域进行分解以获取面积递减的多层图像,其中,所述第一区域为所述输入图像的平坦区域和边缘区域,且 所述多层图像的层数等于所述目标层数;
第二获取单元,用于对每个尺度的图像进行上采样操作以获取高频信息图像;
第三获取单元,用于对所有所述高频信息图像按面积递增的顺序逐层进行重建以获取第一图像,且所述第一图像的面积等于所述输入图像的第一区域的面积;
滤波单元,用于通过各向同性的滤波器对所述输入图像的第二区域进行滤波以获取第二图像,其中,所述第二区域为所述输入图像的纹理区域;
融合单元,用于对所述第一图像和所述第二图像进行图像融合以获取输出图像。
采用实施例所示的电子设备,能够对输入图像进行分解,以使获取到的第一图像不仅可以去除输入图像各个频段的噪声,同时还可以提升输入图像边缘的清晰度和平整度。通过各向同性的滤波器对所述输入图像进行滤波以获取第二图像,从而很好的提升第二图像的清晰度并维持原来的自然度。
本实施例所示的电子设备能够将第一图像和第二图像进行图像融合以形成输出图像,从而使得输出图像在噪声控制、清晰度提升、纹理自然度等方面都可以达到较好的效果,即输出图像在提升图像清晰度的同时,有效的控制噪声放大问题。
结合本发明实施例第二方面,本发明实施例第二方面的第一种实现方式中,
所述电子设备还包括:
第二确定单元,用于对所述输入图像的每个像素进行纹理分析以确定每个像素的纹理特征参数;
第三确定单元,用于根据所述纹理特征参数确定所述平坦区域,所述边缘区域以及所述纹理区域,其中,所述平坦区域内的所有像素的所述纹理特征参数小于第一阈值,所述边缘区域内的所有像素的所述纹理特征参数大于第二阈值,所述纹理区域内的所有像素的所述纹理特征参数大于或等于所述第一阈值且小于或等于所述第二阈值,所述第一阈值小于所述第二阈值。
结合本发明实施例第二方面的第一种实现方式,本发明实施例第二方面的第二种实现方式中,
所述第二确定单元包括:
第一确定模块,用于确定目标像素的选定区域,其中,所述目标像素为所述输入图像的任一像素,且所述选定区域以所述目标像素为中心;
第二确定模块,用于对所述目标像素的所述选定区域进行奇异值分解以获取第一特征值S0和第二特征值S1;
第三确定模块,用于根据第一公式计算所述目标像素的所述纹理特征参数gammaMap;
所述第一公式为:
Figure PCTCN2016078346-appb-000004
其中,所述kSum为所述目标像素的所述选定区域的面积,所述lambda为大于0且小于或等于1之间的任一常数,所述alpha为大于0且小于或等于1之间的任一常数。
通过本实施例所示的电子设备能够快速的区分所述输入图像的平坦区域,所述边缘区域以及所述纹理区域,从而有效的保证了本实施例所示的图像处理方法能够对输入图像的不同区域进行适应的处理,在有效的提升了图像清晰度、平整度以及自然度的前提下,提升了图像处理的效率。
结合本发明实施例第二方面的第二种实现方式,本发明实施例第二方面的第三种实现方式中,所述融合单元包括:
第四确定模块,用于根据第二公式确定权重weight;
所述第二公式为:
Figure PCTCN2016078346-appb-000005
其中,T1、T2、T3、T4为依次递增的大于或等于0的常数;
第五确定模块,用于根据第三公式对所述第一图像R1和所述第二图像R2进行图像融合以获取输出图像R;
其中,所述第三公式为:R=weight*R1+(1‐weight)*R2。
通过本实施例所示的电子设备,能够将所述第一图像和所述第二图像进行融合以形成输出图像,进而有效的保证输出图像在噪声控制、清晰度提升、纹理自然度等方面都可以达到较好的效果,即输出图像在提升图像清晰度的同时,有效的控制噪声放大问题。
结合本发明实施例第二方面至本发明实施例第二方面的第三种实现方式任一项所示,本发明实施例第二方面的第四种实现方式中,所述第一获取单元包括:
第一获取模块,用于通过基于边缘的滤波器EPF对所述输入图像的所有区域I0进行滤波以获取滤波后图像A0;
其中,A0=I0⊙EPF,⊙表示卷积操作;
第二获取模块,用于通过低通滤波器LPF对所述滤波后图像A0进行高频增强以获取高频增强后图像B0;
其中,B0=A0+[A0‐A0⊙LPF]*sharpenLevel,sharpenLevel为高频增强的强度;
第三获取模块,用于逐层对所述高频增强后图像B0进行低通滤波和下采样操作以获取面积递减的多层图像。
结合本发明实施例第二方面的第四种实现方式,本发明实施例第二方面的第五种实现方式中,所述第二获取单元还用于,根据第四公式对每个尺度的图像进行上采样操作以获取高频信息图像,其中,所述第四公式为Hn=In‐U(In),若n等于1时,则I1为对所述高频增强后图像B0进行多尺度分解以获取的第1层图像,若n大于1时,In为对第n‐1层图像进行多尺度分解以获取的第n层图像,Hn为In的高频信息图像,U表示上采样操作;
所述第三获取单元包括:
第六确定模块,用于根据第五公式将In的高频信息图像Hn按面积递增的 顺序逐层重建至重建图像R0,其中,所述第五公式为递归公式,且所述第五公式为:Rn=In;Rn‐1=U(Rn)+Hn‐1;
第七确定模块,用于确定所述第一图像,其中,所述第一图像为所述重建图像R0中与所述输入图像的所述平坦区域和所述边缘区域对应的图像。
结合本发明实施例第二方面至本发明实施例第二方面的第三种实现方式任一项所示,本发明实施例第二方面的第六种实现方式中,所述第一获取单元包括:
第四获取模块,用于通过基于边缘的滤波器EPF对所述第一区域Im进行滤波以获取滤波后图像A0
其中,A0=Im⊙EPF,⊙表示卷积操作;
第五获取模块,用于通过低通滤波器LPF对所述滤波后图像A0进行高频增强以获取高频增强后图像B0
其中,B0=A0+[A0‐A0⊙LPF]*sharpenLevel,sharpenLevel为高频增强的强度;
第六获取模块,用于逐层对所述高频增强后图像B0进行低通滤波和下采样操作以获取面积递减的多层图像。
结合本发明实施例第二方面的第六种实现方式,本发明实施例第二方面的第七种实现方式中,所述第二获取单元还用于,根据第六公式对每个尺度的图像进行上采样操作以获取高频信息图像,所述第六公式为Hm=Im‐U(Im),其中,若m等于1时,则I1为对所述输入图像的所述第一区域Im进行多尺度分解以获取的第1层图像,若m大于1时,Im为对第m‐1层图像进行多尺度分解以获取的第m层图像,Hm为Im的高频信息图像,U表示上采样操作;
所述第三获取单元包括:
第八确定模块,用于根据第七公式将Im的高频信息图像Hm按面积递增的顺序逐层重建至重建图像R0,其中,所述第七公式为递归公式,且所述第七公式为:Rm=Im;Rm‐1=U(Rm)+Hm‐1;
第九确定模块,用于确定所述重建图像R0为所述第一图像。
通过本实施例所示的图像处理方法所获取到的第一图像不仅可以去除输入图像各个频段的噪声,同时还可以提升输入图像边缘的清晰度和平整度。
结合本发明实施例第二方面至本发明实施例第二方面的第三种实现方式任一项所示,本发明实施例第二方面的第八种实现方式中,所述滤波单元包括:
第七获取模块,用于通过所述各向同性的滤波器LPF对所述输入图像的所有区域I0进行滤波以获取目标图像;
其中,所述目标图像=I0+(I0‐I0⊙LPF)*sharpenLevel,⊙表示卷积操作,sharpenLevel为高频增强的强度;
第十确定模块,用于确定所述第二图像,其中,所述第二图像为所述目标图像中与所述输入图像的所述纹理区域对应的图像。
通过本实施例所示的图像处理方法能够获取到提升了图像清晰度并维持原来的自然度的第二图像。
结合本发明实施例第二方面至本发明实施例第二方面的第三种实现方式任一项所示,本发明实施例第二方面的第九种实现方式中,所述滤波单元还用于,通过所述各向同性的滤波器LPF对所述输入图像的所述第二区域M0进行滤波以获取第二图像R2;
其中,R2=M0+(M0‐M0⊙LPF)*sharpenLevel,⊙表示卷积操作,sharpenLevel为高频增强的强度。
通过本实施例所示的图像处理方法能够获取到提升了图像清晰度并维持原来的自然度的第二图像。
结合本发明实施例第二方面的第四种实现方式、本发明实施例第二方面的第六种实现方式、本发明实施例第二方面的第八种实现方式或本发明实施例第二方面的第九种实现方式任一项所示,本发明实施例第二方面的第十种实现方式中,所述电子设备还包括:
第四获取单元,用于获取所述输入图像每个像素的统计特性edge,其中,所述输入图像每个像素的统计特性edge为所述输入图像的边缘强度或所述输 入图像的高频信息的强度;
第五获取单元,用于根据第八公式计算所述高频增强的强度sharpenLevel;
其中,所述第八公式为:
Figure PCTCN2016078346-appb-000006
其中,W1、W2、W3、W4为依次递增的大于或等于0的常数,且MinLevel1、MinLevel2为小于MaxLevel的常数。
本发明实施例第三方面提供了一种电子设备,包括处理器、输出单元以及输入单元;
所述处理器用于通过所述输入单元获取输入图像;
所述处理器,还用于确定目标层数,其中,所述目标层数为[1,log2(min(width,height))]内的任一自然数,width为所述输入图像的宽度,height为所述输入图像的高度;
所述处理器,还用于对所述输入图像的第一区域进行分解以获取面积递减的多层图像,其中,所述第一区域为所述输入图像的平坦区域和边缘区域,且所述多层图像的层数等于所述目标层数;
所述处理器,还用于对每个尺度的图像进行上采样操作以获取高频信息图像;
所述处理器,还用于对所有所述高频信息图像按面积递增的顺序逐层进行重建以获取第一图像,且所述第一图像的面积等于所述输入图像的第一区域的面积;
所述处理器,还用于通过各向同性的滤波器对所述输入图像的第二区域进行滤波以获取第二图像,其中,所述第二区域为所述输入图像的纹理区域;
所述处理器,还用于对所述第一图像和所述第二图像进行图像融合以获取输出图像;
所述处理器通过所述输出单元显示所述输出图像。
采用实施例所示的电子设备,能够对输入图像进行分解,以使获取到的第一图像不仅可以去除输入图像各个频段的噪声,同时还可以提升输入图像边缘的清晰度和平整度。通过各向同性的滤波器对所述输入图像进行滤波以获取第二图像,从而很好的提升第二图像的清晰度并维持原来的自然度。
本实施例所示的电子设备能够将第一图像和第二图像进行图像融合以形成输出图像,从而使得输出图像在噪声控制、清晰度提升、纹理自然度等方面都可以达到较好的效果,即输出图像在提升图像清晰度的同时,有效的控制噪声放大问题。
结合本发明实施例第一方面,本发明实施例第一方面的第一种实现方式中,
所述处理器,还用于对所述输入图像的每个像素进行纹理分析以确定每个像素的纹理特征参数;
所述处理器,还用于根据所述纹理特征参数确定所述平坦区域,所述边缘区域以及所述纹理区域,其中,所述平坦区域内的所有像素的所述纹理特征参数小于第一阈值,所述边缘区域内的所有像素的所述纹理特征参数大于第二阈值,所述纹理区域内的所有像素的所述纹理特征参数大于或等于所述第一阈值且小于或等于所述第二阈值,所述第一阈值小于所述第二阈值。
结合本发明实施例第一方面的第一种实现方式,本发明实施例第一方面的第二种实现方式中,所述处理器,还用于确定目标像素的选定区域,其中,所述目标像素为所述输入图像的任一像素,且所述选定区域以所述目标像素为中心;
所述处理器,还用于对所述目标像素的所述选定区域进行奇异值分解以获取第一特征值S0和第二特征值S1;
所述处理器,还用于根据第一公式计算所述目标像素的所述纹理特征参数gammaMap;
所述第一公式为:
Figure PCTCN2016078346-appb-000007
其中,所述kSum为所述目标像素的所述选定区域的面积,所述lambda为大于0且小于或等于1之间的任一常数,所述alpha为大于0且小于或等于1之间的任一常数。
结合本发明实施例第一方面的第二种实现方式,本发明实施例第一方面的第三种实现方式中,所述处理器,还用于根据第二公式确定权重weight;
所述第二公式为:
Figure PCTCN2016078346-appb-000008
其中,T1、T2、T3、T4为依次递增的大于或等于0的常数;
所述处理器,还用于根据第三公式对所述第一图像R1和所述第二图像R2进行图像融合以获取输出图像R;
其中,所述第三公式为:R=weight*R1+(1‐weight)*R2。
结合本发明实施例第一方面至本发明实施例第一方面的第三种实现方式任一项所示,本发明实施例第一方面的第四种实现方式中,
所述处理器,还用于通过基于边缘的滤波器EPF对所述输入图像的所有区域I0进行滤波以获取滤波后图像A0;
其中,A0=I0⊙EPF,⊙表示卷积操作;
所述处理器,还用于通过低通滤波器LPF对所述滤波后图像A0进行高频增强以获取高频增强后图像B0;
其中,B0=A0+[A0‐A0⊙LPF]*sharpenLevel,sharpenLevel为高频增强的强度;
所述处理器,还用于逐层对所述高频增强后图像B0进行低通滤波和下采 样操作以获取面积递减的多层图像。
结合本发明实施例第一方面的第四种实现方式,本发明实施例第一方面的第五种实现方式中,所述处理器,还用于根据第四公式对每个尺度的图像进行上采样操作以获取高频信息图像,其中,所述第四公式为Hn=In‐U(In),若n等于1时,则I1为对所述高频增强后图像B0进行多尺度分解以获取的第1层图像,若n大于1时,In为对第n‐1层图像进行多尺度分解以获取的第n层图像,Hn为In的高频信息图像,U表示上采样操作;
所述处理器,还用于根据第五公式将In的高频信息图像Hn按面积递增的顺序逐层重建至重建图像R0,其中,所述第五公式为递归公式,且所述第五公式为:Rn=In;Rn‐1=U(Rn)+Hn‐1;
所述处理器,还用于确定所述第一图像,其中,所述第一图像为所述重建图像R0中与所述输入图像的所述平坦区域和所述边缘区域对应的图像。
结合本发明实施例第一方面至本发明实施例第一方面的第三种实现方式任一项所示,本发明实施例第一方面的第六种实现方式中,
所述处理器,还用于通过基于边缘的滤波器EPF对所述第一区域Im进行滤波以获取滤波后图像A0
其中,A0=Im⊙EPF,⊙表示卷积操作;
所述处理器,还用于通过低通滤波器LPF对所述滤波后图像A0进行高频增强以获取高频增强后图像B0
其中,B0=A0+[A0‐A0⊙LPF]*sharpenLevel,sharpenLevel为高频增强的强度;
所述处理器,还用于逐层对所述高频增强后图像B0进行低通滤波和下采样操作以获取面积递减的多层图像。
结合本发明实施例第一方面的第六种实现方式,本发明实施例第一方面的第七种实现方式中,
所述处理器,还用于根据第六公式对每个尺度的图像进行上采样操作以获 取高频信息图像,所述第六公式为Hm=Im‐U(Im),其中,若m等于1时,则I1为对所述输入图像的所述第一区域Im进行多尺度分解以获取的第1层图像,若m大于1时,Im为对第m‐1层图像进行多尺度分解以获取的第m层图像,Hm为Im的高频信息图像,U表示上采样操作;
所述处理器,还用于根据第七公式将Im的高频信息图像Hm按面积递增的顺序逐层重建至重建图像R0,其中,所述第七公式为递归公式,且所述第七公式为:Rm=Im;Rm‐1=U(Rm)+Hm‐1;
所述处理器,还用于确定所述重建图像R0为所述第一图像。
结合本发明实施例第一方面至本发明实施例第一方面的第三种实现方式任一项所示,本发明实施例第一方面的第八种实现方式中,
所述处理器,还用于通过所述各向同性的滤波器LPF对所述输入图像的所有区域I0进行滤波以获取目标图像;
其中,所述目标图像=I0+(I0‐I0⊙LPF)*sharpenLevel,⊙表示卷积操作,sharpenLevel为高频增强的强度;
所述处理器,还用于确定所述第二图像,其中,所述第二图像为所述目标图像中与所述输入图像的所述纹理区域对应的图像。
结合本发明实施例第一方面至本发明实施例第一方面的第三种实现方式任一项所示,本发明实施例第一方面的第九种实现方式中,
所述处理器,还用于通过所述各向同性的滤波器LPF对所述输入图像的所述第二区域M0进行滤波以获取第二图像R2;
其中,R2=M0+(M0‐M0⊙LPF)*sharpenLevel,⊙表示卷积操作,sharpenLevel为高频增强的强度。
结合本发明实施例第一方面的第四种实现方式、本发明实施例第一方面的第六种实现方式、本发明实施例第一方面的第八种实现方式或本发明实施例第一方面的第九种实现方式任一项所示,本发明实施例第一方面的第十种实现方式中,
所述处理器,还用于获取所述输入图像每个像素的统计特性edge,其中,所述输入图像每个像素的统计特性edge为所述输入图像的边缘强度或所述输入图像的高频信息的强度;
所述处理器,还用于根据第八公式计算所述高频增强的强度sharpenLevel;
其中,所述第八公式为:
Figure PCTCN2016078346-appb-000009
where W1, W2, W3 and W4 are successively increasing constants greater than or equal to 0, and MinLevel1 and MinLevel2 are constants smaller than MaxLevel.
A fourth aspect of the embodiments of the present invention provides a computer-readable storage medium configured to store one or more computer programs, the one or more computer programs comprising program code; when the computer programs run on a computer, the program code is configured to perform the image processing method according to any one of the first aspect through the tenth implementation of the first aspect of the embodiments of the present invention.
The embodiments of the present invention provide an image processing method, an electronic device, and a storage medium. The method includes: determining a target layer count; decomposing a first region of the input image to obtain a multi-layer image of decreasing area; upsampling the image at each scale to obtain high-frequency information images; reconstructing all the high-frequency information images layer by layer in order of increasing area to obtain a first image; filtering a second region of the input image with an isotropic filter to obtain a second image; and fusing the first image and the second image to obtain an output image. The first image obtained with the image processing method of this embodiment not only removes noise in every frequency band of the input image but also improves the sharpness and smoothness of the input image's edges, and a second image is obtained whose sharpness is improved while the original naturalness is preserved, so as to ensure that the output image gains sharpness while noise amplification is effectively controlled.
Brief Description of the Drawings
FIG. 1 is a schematic structural diagram of an embodiment of an electronic device according to an embodiment of the present invention;
FIG. 2 is a step flowchart of an embodiment of an image processing method according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of an embodiment of the correspondence between the statistical characteristic edge of the input image and the high-frequency enhancement strength sharpenLevel of the input image according to an embodiment of the present invention;
FIG. 4 is a schematic diagram of an embodiment of performing low-pass filtering and downsampling layer by layer on the high-frequency-enhanced image B0 to obtain a multi-layer image of decreasing area according to an embodiment of the present invention;
FIG. 5 is a schematic diagram of an embodiment of the correspondence between the weight of the input image and the texture feature parameter gammaMap of the input image according to an embodiment of the present invention;
FIG. 6 is a step flowchart of another embodiment of an image processing method according to an embodiment of the present invention;
FIG. 7 is a schematic comparison of the effect of an image displayed without the image processing method of an embodiment of the present invention and an image displayed with it;
FIG. 8 is another such schematic effect comparison;
FIG. 9 is another such schematic effect comparison;
FIG. 10 is another such schematic effect comparison;
FIG. 11 is a schematic structural diagram of an embodiment of an electronic device according to an embodiment of the present invention;
FIG. 12 is a schematic structural diagram of another embodiment of an electronic device according to an embodiment of the present invention.
Detailed Description
Embodiment 1:
First, the specific structure of an electronic device to which the image processing method of the embodiments of the present invention can be applied is described in detail with reference to FIG. 1.
FIG. 1 is a schematic structural diagram of an electronic device according to a specific embodiment of the present invention.
The electronic device includes the components shown in FIG. 1, which communicate via one or more buses.
Those skilled in the art will appreciate that the structure of the electronic device shown in FIG. 1 does not limit the present invention; it may be a bus topology or a star topology, and may include more or fewer components than shown, combine certain components, or use a different arrangement of components.
In the embodiments of the present invention, the electronic device may be any mobile or portable electronic device, including but not limited to a mobile phone, a tablet personal computer, a multimedia player, a personal digital assistant (PDA), a navigation device, a mobile Internet device (MID), a media player, a smart TV, and combinations of two or more of the above.
The electronic device provided by this embodiment includes:
an output unit 101, which includes but is not limited to an image output unit and a sound output unit. The image output unit is configured to output text, pictures, and/or video.
The image output unit may include a display panel, for example a liquid crystal display (LCD), an organic light-emitting diode (OLED) display, or a field emission display (FED).
Alternatively, the image output unit may include a reflective display, for example an electrophoretic display, or a display using interferometric modulation of light.
The image output unit may include a single display or multiple displays, which may be of the same or different sizes.
In a specific embodiment of the present invention, the touch panel employed by the input unit 107 described below may also serve as the display panel of the output unit 101.
For example, after the touch panel detects a touch or proximity gesture on it, the gesture is passed to the processor 103 to determine the type of touch event, and the processor 103 then provides the corresponding visual output on the display panel according to the type of touch event.
Although in FIG. 1 the input unit 107 and the output unit 101 are implemented as two separate components for the input and output functions of the electronic device, in some embodiments the touch panel and the display panel may be integrated to implement both the input and output functions of the electronic device.
For example, the image output unit may display various graphical user interfaces (GUIs) as virtual control components, including but not limited to windows, scroll bars, icons, and clipboards, for the user to operate by touch.
In a specific embodiment of the present invention, the image output unit includes a filter and an amplifier for filtering and amplifying the video output by the processor, and the audio output unit includes a digital-to-analog converter for converting the audio signal output by the processor from digital to analog format.
In the embodiments of the present invention, the output unit 101 specifically includes a display module 102 configured to display the image to be displayed on a display; the display is covered with a transparent panel so that the light of the image can reach the user's eyes.
A processor 103, which is the control center of the electronic device: it connects all parts of the electronic device via various interfaces and lines, and performs the various functions of the electronic device and/or processes data by running or executing software programs and/or modules stored in the storage unit and invoking data stored in the storage unit 104.
The processor 103 may consist of integrated circuits (ICs), for example a single packaged IC, or multiple packaged ICs of the same or different functions connected together.
For example, the processor 103 may comprise only a central processing unit (CPU), or may be a combination of a graphics processing unit (GPU), a digital signal processor (DSP), and a control chip (for example a baseband chip) in the communication unit 109.
In the embodiments of the present invention, the processor 103 may have a single computing core or include multiple computing cores.
A storage unit 104, which may be configured to store software programs and modules; the processor 103 runs the software programs and modules stored in the storage unit 104 to perform the various functional applications of the electronic device and implement data processing.
The storage unit 104 mainly includes a program storage area and a data storage area, where the program storage area may store the operating system and the applications required by at least one function, such as a sound playback program or an image playback program, and the data storage area may store data created through the use of the electronic device (such as audio data and a phone book).
In a specific embodiment of the present invention, the storage unit 104 may include volatile memory, for example nonvolatile random access memory (NVRAM), phase-change RAM (PRAM), or magnetoresistive RAM (MRAM), and may also include nonvolatile memory, for example at least one magnetic disk storage device, electrically erasable programmable read-only memory (EEPROM), or flash memory such as NOR flash memory or NAND flash memory.
The nonvolatile memory stores the operating system and applications executed by the processor 103. The processor 103 loads running programs and data from the nonvolatile memory into memory and stores digital content in mass storage devices. The operating system includes various components and/or drivers for controlling and managing routine system tasks, such as memory management, storage device control, and power management, and for facilitating communication between software and hardware.
In the embodiments of the present invention, the operating system may be the Android system of Google, the iOS system developed by Apple, the Windows operating system developed by Microsoft, or an embedded operating system such as VxWorks.
The applications include any application installed on the electronic device, including but not limited to a browser, e-mail, instant messaging service, word processing, virtual keyboard, widgets, encryption, digital rights management, voice recognition, voice replication, positioning (for example the functions provided by a global positioning system), and music playback.
In the embodiments of the present invention, the storage unit 104 is configured to store code and data; the code is run by the processor 103, and the data includes at least one of optical deformation parameters and curvature parameters of the transparent panel, image compression parameters, and pixel weight parameters.
An input unit 107, configured to implement user interaction with the electronic device and/or the input of information into the electronic device.
For example, the input unit 107 may receive numeric or character information entered by the user to produce signal input related to user settings or function control.
In a specific embodiment of the present invention, the input unit 107 may be a touch panel, another human-machine interface such as physical input keys or a microphone, or another external information capture device such as a camera. The touch panel, also called a touchscreen, can collect operations of touching or approaching it,
such as the user's operation on or near the touch panel using a finger, a stylus, or any other suitable object or accessory, and drives the corresponding connected apparatus according to a preset program.
Optionally, the touch panel may include two parts: a touch detection apparatus and a touch controller.
The touch detection apparatus detects the user's touch operation, converts the detected touch operation into an electrical signal, and transmits the electrical signal to the touch controller; the touch controller receives the electrical signal from the touch detection apparatus, converts it into touch coordinates, and sends them to the processor 103.
The touch controller may also receive and execute commands sent by the processor.
In addition, the input unit 107 may implement the touch panel using various types such as resistive, capacitive, infrared, and surface acoustic wave.
In other embodiments of the present invention, the physical input keys employed by the input unit 107 may include but are not limited to one or more of a physical keyboard, function keys (such as volume control keys and power keys), a trackball, a mouse, and a joystick. An input unit 107 in the form of a microphone can collect speech input by the user or from the environment and convert it into processor-executable commands in the form of electrical signals.
In some embodiments of the present invention, the input unit 107 may also be any of various sensor devices, such as a Hall-effect device, for detecting physical quantities of the electronic device, such as force, torque, pressure, stress, position, displacement, velocity, acceleration, angle, angular velocity, number of revolutions, rotational speed, and the time at which the working state changes, which are converted into electrical quantities for detection and control.
Other sensor devices may also include a gravity sensor, a three-axis accelerometer, a gyroscope, an electronic compass, an ambient light sensor, a proximity sensor, a temperature sensor, a humidity sensor, a pressure sensor, a heart rate sensor, and a fingerprint reader.
A camera module 108, which can capture images according to the user's operations and send the captured images to the processor 103 so that the processor 103 can process them.
A communication unit 109, configured to establish a communication channel through which the electronic device connects to a remote server and downloads media data from the remote server.
The communication unit 109 may include communication modules such as a wireless local area network (wireless LAN) module, a Bluetooth module, and a baseband module, as well as the radio frequency (RF) circuits corresponding to these communication modules, for wireless LAN communication, Bluetooth communication, infrared communication, and/or cellular communication system communication, for example Wideband Code Division Multiple Access (W-CDMA) and/or High Speed Downlink Packet Access (HSDPA) or Long Term Evolution (LTE) systems.
The communication unit 109 is configured to control the communication of the components in the electronic device and may support direct memory access.
In different embodiments of the present invention, the various communication modules in the communication unit 109 generally appear in the form of integrated circuit chips and can be selectively combined, without having to include all communication modules and the corresponding antenna groups.
For example, the communication unit 109 may include only a baseband chip, a radio frequency chip, and the corresponding antennas to provide communication functions in a cellular communication system. Via the wireless communication connection established by the communication unit 109, for example wireless LAN access or WCDMA access, the electronic device may connect to a cellular network or the Internet. In some optional embodiments of the present invention, the communication modules in the communication unit 109, for example the baseband module, may be integrated into the processor, typically as in the APQ+MDM series platforms provided by Qualcomm.
A radio frequency circuit 110, configured to receive and send signals during information transmission and reception or during a call.
For example, downlink information from a base station is received and passed to the processor 103 for processing, and uplink data is sent to the base station. Typically, the radio frequency circuit 110 includes well-known circuits for performing these functions, including but not limited to an antenna system, a radio frequency transceiver, one or more amplifiers, a tuner, one or more oscillators, a digital signal processor, a codec chipset, a subscriber identity module (SIM) card, and memory.
In addition, the radio frequency circuit 110 may also communicate with networks and other devices through wireless communication. The wireless communication may use any communication standard or protocol, including but not limited to the Global System of Mobile communication (GSM), General Packet Radio Service (GPRS), Code Division Multiple Access (CDMA), Wideband Code Division Multiple Access (WCDMA), High Speed Uplink Packet Access (HSUPA), Long Term Evolution (LTE), e-mail, and Short Messaging Service (SMS).
A power supply 111, configured to supply power to the different components of the electronic device to keep them running. As a general understanding, the power supply 111 may be a built-in battery, for example a common lithium-ion battery or nickel-metal-hydride battery, and also includes an external power supply that supplies power directly to the electronic device, such as an AC adapter.
In some embodiments of the present invention, the power supply 111 may be defined more broadly and may further include, for example, a power management system, a charging system, a power failure detection circuit, a power converter or inverter, a power status indicator (such as a light-emitting diode), and any other components associated with the generation, management, and distribution of electrical power in the electronic device.
Embodiment 2
The image processing method provided by the present invention is described in detail below with reference to FIG. 2:
Step 201: receive an input image.
In this embodiment, the input image may be an image captured by the electronic device shown in FIG. 1, or an image that the electronic device shown in FIG. 1 receives from another electronic device.
This embodiment does not limit the source of the input image, as long as the electronic device of this embodiment can process the input image.
The noise of the input image in this embodiment can be divided into high-frequency noise, mid-frequency noise, and low-frequency noise.
Specifically, the high-frequency noise of the input image lies in the high-frequency band of the input image, the mid-frequency noise lies in the mid-frequency band, and the low-frequency noise lies in the low-frequency band.
Step 202: obtain the statistical characteristic edge of each pixel of the input image.
The statistical characteristic edge of the input image is the edge strength of the input image or the strength of the high-frequency information of the input image.
Optionally, the statistical characteristic edge of each pixel of the input image may be obtained by the Sobel edge extraction algorithm, an image gradient extraction algorithm, or the like; for details see the prior art, which is not repeated in this embodiment.
This embodiment does not limit the specific method of obtaining the statistical characteristic edge of each pixel of the input image, as long as the statistical characteristic edge of every pixel of the input image can be determined.
Specifically, the edge strength of the input image is a measure of the local variation strength of the input image along the normal direction of an edge.
Step 203: compute the high-frequency enhancement strength sharpenLevel according to the eighth formula.
The eighth formula is:
$$\mathrm{sharpenLevel}=\begin{cases}\mathrm{MinLevel1},&\mathrm{edge}<W1\\ \mathrm{MinLevel1}+(\mathrm{MaxLevel}-\mathrm{MinLevel1})\dfrac{\mathrm{edge}-W1}{W2-W1},&W1\le\mathrm{edge}\le W2\\ \mathrm{MaxLevel},&W2<\mathrm{edge}<W3\\ \mathrm{MaxLevel}-(\mathrm{MaxLevel}-\mathrm{MinLevel2})\dfrac{\mathrm{edge}-W3}{W4-W3},&W3\le\mathrm{edge}\le W4\\ \mathrm{MinLevel2},&\mathrm{edge}>W4\end{cases}$$
where W1, W2, W3 and W4 are successively increasing constants greater than or equal to 0, and MinLevel1 and MinLevel2 are constants smaller than MaxLevel.
It can be seen that the eighth formula of this embodiment establishes the correspondence between the statistical characteristic edge of the input image and the high-frequency enhancement strength sharpenLevel.
Optionally, W1, W2, W3, W4, MinLevel1, MinLevel2, and MaxLevel of this embodiment may be set by the manufacturer at the factory.
Optionally, W1, W2, W3, W4, MinLevel1, MinLevel2, and MaxLevel may also be obtained by testing.
Specifically, the electronic device of this embodiment may obtain a test image in advance, obtain the statistical characteristic edge of each pixel of the test image, tune the values of W1, W2, W3, W4, MinLevel1, MinLevel2, and MaxLevel step by step, and compare the sharpness and signal-to-noise ratio of the output image obtained from the test image under different values of these constants; when the sharpness and signal-to-noise ratio of the output image meet the requirements, the specific values of W1, W2, W3, W4, MinLevel1, MinLevel2, and MaxLevel are determined.
It should be made clear that this way of obtaining the specific values of W1, W2, W3, W4, MinLevel1, MinLevel2, and MaxLevel is an optional example and is not limiting, as long as the determined values allow the input image to yield an output image whose sharpness and signal-to-noise ratio meet certain requirements.
In performing step 203 of this embodiment, since W1, W2, W3, W4, MinLevel1, MinLevel2, and MaxLevel have been determined, the eighth formula establishes the correspondence between the statistical characteristic edge of the input image and the high-frequency enhancement strength sharpenLevel.
Optionally, the correspondence between the statistical characteristic edge of the input image and the high-frequency enhancement strength sharpenLevel in this embodiment may be as shown in FIG. 3.
It should be made clear that the correspondence between edge and sharpenLevel shown in FIG. 3 and established by the eighth formula is an illustrative example; the specific correspondence is not limited.
As shown in FIG. 3, the eighth formula divides all the pixels of the input image into five classes: first pixels, second pixels, third pixels, fourth pixels, and fifth pixels.
The first pixels are pixels whose statistical characteristic edge is smaller than W1; the second pixels are pixels whose statistical characteristic edge is greater than or equal to W1 and smaller than or equal to W2; the third pixels are pixels whose statistical characteristic edge is greater than W2 and smaller than W3; the fourth pixels are pixels whose statistical characteristic edge is greater than or equal to W3 and smaller than or equal to W4; and the fifth pixels are pixels whose statistical characteristic edge is greater than W4.
Taking the figure as an example, W1=100, W2=300, W3=500, W4=800, MinLevel1=0.2, MinLevel2=0.5, and MaxLevel=1.5.
It should be made clear that this description of the specific values of W1, W2, W3, W4, MinLevel1, MinLevel2, and MaxLevel is an optional example and is not limiting.
It can be seen that substituting the statistical characteristic edge of each pixel of the input image into the eighth formula yields the high-frequency enhancement strength sharpenLevel of each pixel.
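The per-pixel mapping can be vectorized. The following is a minimal sketch, assuming the piecewise-linear reconstruction of the eighth formula given above and the example constants from FIG. 3; the function name and the NumPy-based implementation are illustrative and not part of the patent:

```python
import numpy as np

def sharpen_level(edge, W1=100, W2=300, W3=500, W4=800,
                  min_level1=0.2, min_level2=0.5, max_level=1.5):
    """Map per-pixel edge strength to a sharpening strength (eighth formula).

    Flat pixels get a small strength, mid-strength edges get the maximum,
    and very strong edges are rolled off to avoid overshoot.
    """
    edge = np.asarray(edge, dtype=np.float32)
    level = np.empty_like(edge)
    level[edge < W1] = min_level1
    m = (edge >= W1) & (edge <= W2)   # ramp up: MinLevel1 -> MaxLevel
    level[m] = min_level1 + (max_level - min_level1) * (edge[m] - W1) / (W2 - W1)
    level[(edge > W2) & (edge < W3)] = max_level
    m = (edge >= W3) & (edge <= W4)   # roll off: MaxLevel -> MinLevel2
    level[m] = max_level - (max_level - min_level2) * (edge[m] - W3) / (W4 - W3)
    level[edge > W4] = min_level2
    return level
```

Called on a Sobel edge map, this returns the per-pixel sharpenLevel map that the later enhancement steps consume.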
Step 204: filter all regions I0 of the input image with an edge-preserving filter EPF to obtain a filtered image A0.
Optionally, the edge-preserving filter (EPF) of this embodiment may be the non-local means filter NLMean or the steering kernel regression filter SKR.
For a detailed description of edge-preserving filters, see the prior art; it is not repeated in this embodiment.
More specifically, all regions I0 of the input image in this embodiment include the flat region, the edge region, and the texture region of the input image.
More specifically, the edge-preserving filter filters all regions I0 of the input image as follows:
A0=I0⊙EPF, where ⊙ denotes the convolution operation.
For the specific working principle of edge-preserving filters, see the prior art; it is not repeated in this embodiment.
Step 205: perform high-frequency enhancement on the filtered image A0 with a low-pass filter LPF to obtain a high-frequency-enhanced image B0.
Specifically, in this embodiment, B0=A0+[A0-A0⊙LPF]*sharpenLevel,
where sharpenLevel is the strength of the high-frequency enhancement.
For how sharpenLevel is obtained, see step 203; it is not repeated in this step.
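Step 205 is an unsharp-mask-style boost: the residual A0-A0⊙LPF carries the high frequencies, which are scaled by sharpenLevel and added back. A minimal sketch, assuming a Gaussian as the low-pass filter (the patent does not fix the LPF kernel here; the helper name is illustrative):

```python
from scipy.ndimage import gaussian_filter

def high_freq_boost(a0, level, sigma=1.0):
    """B0 = A0 + (A0 - A0 (*) LPF) * sharpenLevel, with a Gaussian as the LPF.

    `level` may be a scalar or a per-pixel array from sharpen_level().
    """
    low = gaussian_filter(a0, sigma)   # A0 (*) LPF
    return a0 + (a0 - low) * level     # add back the scaled high frequencies
```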
Step 206: determine the target layer count of the multi-layer image.
The target layer count of the multi-layer image is any natural number within [1, log2(min(width, height))],
where width is the width of the input image and height is the height of the input image.
It can be seen that in this embodiment the range [1, log2(min(width, height))] of the target layer count of the multi-layer image can be determined, and in the process of determining the target layer count any value within this range may be chosen.
Step 207: perform low-pass filtering and downsampling on the high-frequency-enhanced image B0 to obtain a multi-layer image of decreasing area.
Specifically, in this embodiment the target layer count of the multi-layer image is determined by step 206; in step 207, low-pass filtering and downsampling are performed on the high-frequency-enhanced image B0 layer by layer according to the determined target layer count.
First, the high-frequency-enhanced image B0 is low-pass filtered with a low-pass filter whose coefficients are [.0625, .25, .375, .25, .0625].
By low-pass filtering the high-frequency-enhanced image B0, this embodiment can extract the low-frequency information of B0 and filter out the high-frequency information of B0.
The low-pass-filtered B0 is downsampled by a factor of X1 to form the image I1.
In this embodiment, the specific value of X1 is not limited, as long as X1 is greater than 1.
In this embodiment, the image I1 formed by downsampling the low-pass-filtered B0 by a factor of X1 has an image width that is 1/X1 of the image width of the high-frequency-enhanced image B0, and an image height that is 1/X1 of the image height of B0.
Taking FIG. 4 as an example, X1 equals 2 for illustration; it can be seen that the image I1 formed by 2x downsampling the low-pass-filtered B0 has half the image width and half the image height of the high-frequency-enhanced image B0.
In step 208, after the image I1 is obtained, low-pass filtering and downsampling are performed on the basis of the image I1.
The image I1 is low-pass filtered in the same way as the image B0, which is not repeated in this embodiment.
Specifically, the low-pass-filtered I1 is downsampled by a factor of X1 to form the image I2.
Sampling means converting a signal that is continuous in both time and amplitude into a signal that is discrete in time and amplitude, under the action of sampling pulses.
Downsampling is the decimation of a signal: it resamples a digital signal, and when the resampling rate is smaller than the sampling rate at which the original digital signal was obtained (for example, by sampling an analog signal), the operation is called downsampling.
For the specific principle of downsampling, see the prior art; it is not repeated in this embodiment.
It can be seen that this embodiment is described taking layer-by-layer low-pass filtering and downsampling with the same factor X1 as an example.
It should be made clear that low-pass filtering and downsampling may also be performed layer by layer with different factors; this is not limited in this embodiment.
The decomposition of this step, i.e., low-pass filtering and downsampling, continues until the layer-n image In is obtained, where the value of n equals the target layer count of the multi-layer image obtained in step 206.
The decomposition of this embodiment may be multi-scale decomposition, which may be a decomposition method that, by means of mathematical analysis, decomposes the image at different scales for processing.
For a detailed description of multi-scale decomposition, see the prior art; it is not repeated in this embodiment.
It should be made clear that obtaining the multi-layer image by multi-scale decomposition is an optional example and is not limiting.
This embodiment is described taking the use of multi-scale decomposition as an example.
Specifically, if n equals 1, I1 is the layer-1 image obtained by multi-scale decomposition of the high-frequency-enhanced image B0; if n is greater than 1, In is the layer-n image obtained by multi-scale decomposition of the layer-(n-1) image.
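A minimal sketch of the layer-by-layer decomposition of steps 207 and 208, assuming the separable 5-tap kernel given above and a fixed downsampling factor X1 = 2; the helper names are illustrative:

```python
import numpy as np
from scipy.ndimage import convolve1d

K = np.array([.0625, .25, .375, .25, .0625], dtype=np.float32)

def lowpass(img):
    """Separable low-pass with the 5-tap kernel from step 207."""
    return convolve1d(convolve1d(img, K, axis=0), K, axis=1)

def decompose(b0, layers):
    """Build I1..In by repeated low-pass filtering and 2x downsampling."""
    pyramid, cur = [], b0
    for _ in range(layers):
        cur = lowpass(cur)[::2, ::2]   # downsample by X1 = 2 in each axis
        pyramid.append(cur)
    return pyramid                      # areas decrease layer by layer
```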
Step 209: upsample the image at each scale to obtain the high-frequency information images.
Upsampling is taking samples of the analog signal of an image.
Upsampling can also be understood as resampling a digital signal; when the resampling rate is greater than the sampling rate at which the digital signal was originally obtained (for example, by sampling an analog signal), the operation is called upsampling.
For the specific principle of upsampling, see the prior art; it is not repeated in this embodiment.
In this embodiment, the images I1, I2, ..., In-1, In of gradually decreasing area were obtained through steps 207 and 208.
Specifically, the image at each scale is upsampled according to the fourth formula to obtain the high-frequency information images.
More specifically, the images I1, I2, ..., In-1, In are each upsampled according to the fourth formula to obtain the high-frequency information images.
The fourth formula is Hn=In-U(In),
where Hn is the high-frequency information image of In and U denotes the upsampling operation.
More specifically, the images I1, I2, ..., In-1, In are each upsampled by a factor of X2 according to the fourth formula to obtain the high-frequency information images,
where X1·X2=1.
That is, through step 209 of this embodiment, the high-frequency information image H1 of image I1, the high-frequency information image H2 of image I2, ..., and the high-frequency information image Hn of image In can be obtained.
Step 210: reconstruct the high-frequency information images Hn of In layer by layer in order of increasing area into the reconstructed image R0 according to the fifth formula.
The fifth formula is: Rn=In; Rn-1=U(Rn)+Hn-1.
The fifth formula of this embodiment is a recursion: once Rn is known from the first equation, substituting it into the subsequent equation Rn-1=U(Rn)+Hn-1 yields Rn-1; n starts at the maximum layer count of the input image and runs down to 1.
It can be seen that the image width and image height of the obtained reconstructed image R0 are equal to the image width and image height of the input image.
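A minimal sketch of steps 209 and 210, assuming nearest-neighbor upsampling for U and reading the fourth formula in the Laplacian-pyramid sense, i.e., each layer's high-frequency image is the difference between that layer and the upsampled next-coarser layer (the patent's notation leaves this pairing implicit); image dimensions are assumed divisible by 2 at every layer:

```python
def upsample2(img):
    """U: nearest-neighbor 2x upsampling (one simple choice for U)."""
    return img.repeat(2, axis=0).repeat(2, axis=1)

def high_freq_images(b0, pyramid):
    """H at each scale: difference between a layer and U(next coarser layer)."""
    levels = [b0] + pyramid                        # I0 = B0, then I1..In
    return [levels[i] - upsample2(levels[i + 1])   # Hi = Ii - U(Ii+1)
            for i in range(len(levels) - 1)]

def reconstruct(pyramid, highs):
    """Fifth formula: Rn = In; Rn-1 = U(Rn) + Hn-1, down to R0."""
    r = pyramid[-1]                  # Rn = In
    for h in reversed(highs):
        r = upsample2(r) + h         # Rn-1 = U(Rn) + Hn-1
    return r                         # R0, same size as B0
```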
In this embodiment, the flat region and the edge region of the input image are determined through steps 211 to 214 shown below.
It should be made clear that there is no ordering constraint between the execution of steps 211 to 214 of this embodiment and the execution of steps 202 to 210.
Step 211: determine the selected region of a target pixel.
The target pixel is any pixel of the input image.
The selected region is centered on the target pixel.
This embodiment is described taking a square selected region as an example, and the side length of the selected region of this embodiment is 5 pixels.
It should be made clear that this embodiment takes a square selected region as an example for description; the selected region of this embodiment may also have other shapes, which is not limited in this embodiment.
It should also be made clear that the description of the size of the selected region is illustrative and not limiting; for example, the side length of the selected region may be greater than 5 pixels or smaller than 5 pixels.
Optionally, the electronic device of this embodiment may analyze the input image to determine the side length of the selected region; with the side length of the selected region determined by the electronic device, the flat region, the edge region, and the texture region of the input image can be analyzed accurately.
Specifically, in this embodiment a selected region is determined centered on each pixel of the input image.
Step 212: perform singular value decomposition on the selected region of the target pixel to obtain the first eigenvalue S0 and the second eigenvalue S1.
Specifically, singular value decomposition (SVD) is performed on the selected region of the target pixel.
For the specific process of singular value decomposition, see the prior art; it is not repeated in this embodiment.
Specifically, singular value decomposition of the selected region of the target pixel yields the two principal directions of the gradient distribution of the target region.
More specifically, the projections of the gradient of the target region onto these two principal directions are obtained from the two principal directions of the gradient distribution of the target region.
In this embodiment, the projection of the gradient of the target region onto one principal direction is determined as the first eigenvalue S0, and the projection of the gradient of the target region onto the other principal direction is determined as the second eigenvalue S1.
Step 213: compute the texture feature parameter gammaMap of the target pixel according to the first formula.
Specifically, the first formula is:
$$\mathrm{gammaMap}=\left(\frac{S0\cdot S1+\mathrm{lambda}}{\mathrm{kSum}}\right)^{\mathrm{alpha}}$$
where kSum is the area of the selected region of the target pixel.
The electronic device of this embodiment can obtain the area of the selected region of the target pixel of the input image.
Specifically, lambda is any constant greater than 0 and smaller than or equal to 1, and alpha is any constant greater than 0 and smaller than or equal to 1.
For another example, the electronic device may tune different values of lambda and alpha, determine the sharpness and signal-to-noise ratio of the output image under different lambda and alpha, and then choose the specific values of lambda and alpha according to the sharpness and signal-to-noise ratio of the output image.
Step 214: determine the flat region, the edge region, and the texture region according to the texture feature parameters.
Specifically, this embodiment determines, according to the magnitude of the texture feature parameter gammaMap, whether the target pixel lies in the flat region, the edge region, or the texture region of the input image.
More specifically, if the gammaMap of the selected region of a target pixel is smaller than the first threshold, the target pixel is determined to lie in the flat region of the input image.
If the gammaMap of the selected region of a target pixel is greater than the second threshold, the target pixel is determined to lie in the edge region of the input image.
If the gammaMap of the selected region of a target pixel is greater than or equal to the first threshold and smaller than or equal to the second threshold, the target pixel is determined to lie in the texture region of the input image.
This embodiment does not limit the first threshold and the second threshold, as long as the flat region, the edge region, and the texture region of the input image can be determined from the first threshold and the second threshold.
Optionally, the electronic device of this embodiment may obtain a test image in advance whose flat region, edge region, and texture region are known; from the known flat region, edge region, and texture region of the test image, the electronic device can then determine the values of the first threshold and the second threshold.
It should be made clear that this way of determining the values of the first threshold and the second threshold is an optional example and is not limiting.
The electronic device analyzes the gammaMap of each pixel of the input image to determine the region in which each pixel of the input image lies.
When the analysis of the gammaMap of every pixel of the input image is complete, it is established that the texture feature parameter of every pixel in the flat region is smaller than the first threshold, the texture feature parameter of every pixel in the edge region is greater than the second threshold, and the texture feature parameter of every pixel in the texture region is greater than or equal to the first threshold and smaller than or equal to the second threshold.
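A minimal sketch of steps 211 to 214, computing gammaMap from the singular values of the stacked gradients in a 5x5 selected region and thresholding it into the three regions; the gammaMap expression follows the reconstruction of the first formula above, and the thresholds t1 and t2 are illustrative:

```python
import numpy as np

def gamma_map(img, k=5, lam=1.0, alpha=0.5):
    """Per-pixel texture feature from the SVD of local gradients (steps 211-213)."""
    gy, gx = np.gradient(img.astype(np.float32))
    h, w = img.shape
    r, out = k // 2, np.zeros((h, w), dtype=np.float32)
    for y in range(r, h - r):
        for x in range(r, w - r):
            # stack the gradients of the k*k selected region (kSum = k*k)
            G = np.stack([gx[y-r:y+r+1, x-r:x+r+1].ravel(),
                          gy[y-r:y+r+1, x-r:x+r+1].ravel()], axis=1)
            s = np.linalg.svd(G, compute_uv=False)   # s[0] = S0, s[1] = S1
            out[y, x] = ((s[0] * s[1] + lam) / (k * k)) ** alpha
    return out

def classify(gmap, t1, t2):
    """Step 214: flat if gmap < t1, edge if gmap > t2, texture otherwise."""
    flat, edge = gmap < t1, gmap > t2
    texture = ~(flat | edge)
    return flat, edge, texture
```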
Step 215: determine the first image.
The first image is the image in the reconstructed image R0 that corresponds to the flat region and the edge region of the input image.
In this embodiment, after the electronic device performs texture analysis on the obtained input image to obtain the flat region, the edge region, and the texture region of the input image, it stores the texture-analyzed input image.
Before step 215 is performed, the reconstructed image R0 has been obtained through steps 203 to 210.
In performing step 215, the electronic device compares the texture-analyzed input image with the reconstructed image R0 to obtain the image in the reconstructed image R0 that corresponds to the flat region and the edge region of the input image, and determines the image corresponding to the flat region and the edge region of the input image as the first image.
Step 216: filter all regions I0 of the input image with the isotropic filter LPF to obtain a target image.
The isotropic filter LPF is a filter whose filtering characteristics are the same in every edge direction of the input image.
For a detailed description of isotropic filters, see the prior art; it is not repeated in this embodiment.
All regions I0 of the input image include the second region of the input image.
The target image = I0+(I0-I0⊙LPF)*sharpenLevel,
where ⊙ denotes the convolution operation and sharpenLevel is the strength of the high-frequency enhancement.
For the specific process of determining sharpenLevel, see the preceding steps; it is not repeated in this step.
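A minimal sketch of step 216, again assuming a Gaussian as the isotropic low-pass filter (a Gaussian kernel has the same response in every direction, which matches the isotropy requirement; the concrete kernel is an assumption):

```python
from scipy.ndimage import gaussian_filter

def isotropic_sharpen(i0, level, sigma=1.0):
    """Target image = I0 + (I0 - I0 (*) LPF) * sharpenLevel (step 216)."""
    return i0 + (i0 - gaussian_filter(i0, sigma)) * level
```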
Step 217: determine the second image.
The second image is the image in the target image that corresponds to the texture region of the input image.
In this embodiment, after the electronic device performs texture analysis on the obtained input image to obtain the flat region, the edge region, and the texture region of the input image, it stores the texture-analyzed input image.
The target image has been obtained by performing step 216.
In performing step 217, the electronic device compares the texture-analyzed input image with the target image to obtain the image in the target image that corresponds to the texture region of the input image, and determines the image corresponding to the texture region of the input image as the second image.
Step 218: determine the weight according to the second formula.
The second formula is:
$$\mathrm{weight}=\begin{cases}1,&\mathrm{gammaMap}\le T1\\ \dfrac{T2-\mathrm{gammaMap}}{T2-T1},&T1<\mathrm{gammaMap}\le T2\\ 0,&T2<\mathrm{gammaMap}\le T3\\ \dfrac{\mathrm{gammaMap}-T3}{T4-T3},&T3<\mathrm{gammaMap}\le T4\\ 1,&\mathrm{gammaMap}>T4\end{cases}$$
where T1, T2, T3, and T4 are successively increasing constants greater than or equal to 0.
Optionally, T1, T2, T3, and T4 of this embodiment may be set by the manufacturer at the factory.
Optionally, T1, T2, T3, and T4 may also be obtained by testing.
Specifically, the electronic device of this embodiment may obtain a test image in advance, tune the values of T1, T2, T3, and T4 step by step, and compare the sharpness and signal-to-noise ratio of the output image obtained from the test image under different values of T1, T2, T3, and T4; when the sharpness and signal-to-noise ratio of the output image are highest, the specific values of T1, T2, T3, and T4 are determined.
It should be made clear that this way of obtaining the specific values of T1, T2, T3, and T4 is an optional example and is not limiting, as long as the determined values of T1, T2, T3, and T4 allow the input image to yield an output image whose sharpness and signal-to-noise ratio meet certain requirements.
In performing step 218 of this embodiment, since T1, T2, T3, and T4 have been determined, the second formula establishes the correspondence between weight and gammaMap.
In this embodiment, the correspondence between weight and gammaMap may be as shown in FIG. 5; it should be made clear that the correspondence between weight and gammaMap shown in FIG. 5 is an optional example and is not limiting.
Step 219: fuse the first image R1 and the second image R2 according to the third formula to obtain the output image R.
The third formula is: R=weight*R1+(1-weight)*R2.
For how weight is obtained, see step 218; it is not repeated in this step.
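A minimal sketch of steps 218 and 219, assuming the piecewise-linear reconstruction of the second formula given above, so that weight is 1 in flat and edge regions (favoring R1), 0 in texture regions (favoring R2), and linear in between:

```python
import numpy as np

def fusion_weight(gmap, T1, T2, T3, T4):
    """Second formula: weight as a piecewise-linear function of gammaMap."""
    w = np.ones_like(gmap, dtype=np.float32)
    m = (gmap > T1) & (gmap <= T2)
    w[m] = (T2 - gmap[m]) / (T2 - T1)      # 1 -> 0 toward texture
    w[(gmap > T2) & (gmap <= T3)] = 0.0    # pure texture: use R2
    m = (gmap > T3) & (gmap <= T4)
    w[m] = (gmap[m] - T3) / (T4 - T3)      # 0 -> 1 toward edges
    return w

def fuse(r1, r2, w):
    """Third formula: R = weight*R1 + (1-weight)*R2."""
    return w * r1 + (1.0 - w) * r2
```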
The beneficial effects of the image processing method of this embodiment are as follows:
The image processing method of this embodiment can decompose the input image and upsample the image at each scale, so that the obtained first image not only removes noise in every frequency band of the input image but also improves the sharpness and smoothness of the input image's edges.
The image processing method of this embodiment can filter the input image with an isotropic filter to obtain the second image, thereby substantially improving the sharpness of the second image while preserving its original naturalness.
The image processing method of this embodiment can fuse the first image and the second image into the output image, so that the output image performs well in noise control, sharpness enhancement, and texture naturalness; that is, the output image gains sharpness while noise amplification is effectively controlled.
For the image effect of processing an input image with the image processing method of this embodiment, see FIGS. 7 to 10.
Specifically, FIGS. 7 to 10 are schematic comparisons of the display effect on different regions of the same input image without and with the image processing method provided by this embodiment.
More specifically, the left side of FIG. 7 shows the image displayed by the electronic device when region 701 of the input image is not processed with the image processing method provided by this embodiment, and the right side of FIG. 7 shows the image displayed by the electronic device when region 701 of the input image is processed with it.
The left side of FIG. 8 shows the image displayed by the electronic device when region 801 of the input image is not processed with the image processing method provided by this embodiment, and the right side of FIG. 8 shows the image displayed when it is.
The left side of FIG. 9 shows the image displayed by the electronic device when region 901 of the input image is not processed with the image processing method provided by this embodiment, and the right side of FIG. 9 shows the image displayed when it is.
The left side of FIG. 10 shows the image displayed by the electronic device when region 1001 of the input image is not processed with the image processing method provided by this embodiment, and the right side of FIG. 10 shows the image displayed when it is.
FIGS. 7 to 10 further show that the image processing method of this embodiment enables the output image to perform well in noise control, sharpness enhancement, and texture naturalness, ensuring that the output image gains sharpness while noise amplification is effectively controlled.
Embodiment 3
Embodiment 2 described how the output image is obtained when all regions of the input image are first multi-scale decomposed and filtered; Embodiment 3 below, with reference to FIG. 6, describes how the output image is obtained when the flat region, the edge region, and the texture region of the input image are determined first.
Step 601: receive an input image.
Step 602: obtain the statistical characteristic edge of each pixel of the input image.
Step 603: compute the high-frequency enhancement strength sharpenLevel according to the eighth formula.
In this embodiment, for the specific process of steps 601 to 603, see steps 201 to 203 shown in FIG. 2; it is not repeated in this embodiment.
Step 604: determine the selected region of a target pixel.
Step 605: perform singular value decomposition on the selected region of the target pixel to obtain the first eigenvalue S0 and the second eigenvalue S1.
Step 606: compute the texture feature parameter gammaMap of the target pixel according to the first formula.
Step 607: determine the flat region, the edge region, and the texture region according to the texture feature parameters.
For the specific process of steps 604 to 607 of this embodiment, see steps 211 to 214 shown in FIG. 2; it is not repeated in this embodiment.
Step 608: determine the first region of the input image.
Specifically, the first region of the input image is determined to be the flat region and the edge region of the input image.
Step 609: filter the first region Im with the edge-preserving filter EPF to obtain the filtered image A0′.
Optionally, the edge-preserving filter (EPF) of this embodiment may be the non-local means filter NLMean or the steering kernel regression filter SKR.
More specifically, the edge-preserving filter filters the first region Im of the input image as follows:
A0′=Im⊙EPF, where ⊙ denotes the convolution operation.
For the specific working principle of edge-preserving filters, see the prior art; it is not repeated in this embodiment.
Step 610: perform high-frequency enhancement on the filtered image A0′ with the low-pass filter LPF to obtain the high-frequency-enhanced image B0′.
Specifically, in this embodiment, B0′=A0′+[A0′-A0′⊙LPF]*sharpenLevel,
where sharpenLevel is the strength of the high-frequency enhancement.
For how sharpenLevel is obtained, see the preceding steps; it is not repeated in this step.
Step 611: determine the target layer count of the multi-layer image.
For the specific process of step 611 of this embodiment, see step 206 shown in FIG. 2; it is not repeated in this embodiment.
Step 612: perform low-pass filtering and downsampling on the high-frequency-enhanced image B0′ layer by layer to obtain a multi-layer image of decreasing area.
Specifically, in this embodiment the target layer count of the multi-layer image is determined by step 611; in step 612, low-pass filtering and downsampling are performed on the high-frequency-enhanced image B0′ layer by layer according to the determined target layer count.
First, the high-frequency-enhanced image B0′ is low-pass filtered with a low-pass filter whose coefficients are [.0625, .25, .375, .25, .0625].
The low-pass-filtered B0′ is downsampled by a factor of X1 to form the image I1.
In this embodiment, the specific value of X1 is not limited, as long as X1 is greater than 1.
In this embodiment, the image I1 formed by downsampling the low-pass-filtered B0′ by a factor of X1 has an image width that is 1/X1 of the image width of the high-frequency-enhanced image B0′, and an image height that is 1/X1 of the image height of B0′.
Optionally, this embodiment takes X1 = 2 for illustration.
It can be seen that the image I1 formed by 2x downsampling the low-pass-filtered B0′ has half the image width and half the image height of the high-frequency-enhanced image B0′.
In step 612, after the image I1 is obtained, low-pass filtering and downsampling are performed on the basis of the image I1.
The image I1 is low-pass filtered in the same way as the image B0′, which is not repeated in this embodiment.
Specifically, the low-pass-filtered I1 is downsampled by a factor of X1 to form the image I2.
The filtered image is downsampled in the same way as in the embodiment shown in FIG. 2; the specific downsampling operation is not repeated in this embodiment.
This embodiment is described taking layer-by-layer low-pass filtering and downsampling with the same factor X1 as an example.
It should be made clear that low-pass filtering and downsampling may also be performed layer by layer with different factors; this is not limited in this embodiment.
The decomposition of this embodiment may be multi-scale decomposition, which may be a decomposition method that, by means of mathematical analysis, decomposes the image at different scales for processing.
For a detailed description of multi-scale decomposition, see the prior art; it is not repeated in this embodiment.
It should be made clear that obtaining the multi-layer image by multi-scale decomposition is an optional example and is not limiting.
This embodiment is described taking the use of multi-scale decomposition as an example.
Through the multi-scale decomposition of this step, i.e., low-pass filtering and downsampling, the layer-m image Im is eventually obtained, where the value of m equals the target layer count of the multi-layer image obtained in step 611.
Specifically, if m equals 1, I1 is the layer-1 image obtained by multi-scale decomposition of the first region Im of the input image; if m is greater than 1, Im is the layer-m image obtained by multi-scale decomposition of the layer-(m-1) image.
Step 613: upsample the image at each scale to obtain the high-frequency information images.
In this embodiment, the images I1, I2, ..., Im-1, Im of gradually decreasing area were obtained through step 612.
Specifically, the image at each scale is upsampled according to the sixth formula to obtain the high-frequency information images.
More specifically, the images I1, I2, ..., Im-1, Im are each upsampled according to the sixth formula to obtain the high-frequency information images.
The sixth formula is Hm=Im-U(Im),
where Hm is the high-frequency information image of Im and U denotes the upsampling operation.
More specifically, the images I1, I2, ..., Im-1, Im are each upsampled by a factor of X2 according to the sixth formula to obtain the high-frequency information images,
where X1·X2=1.
That is, through step 613 of this embodiment, the high-frequency information image H1 of image I1, the high-frequency information image H2 of image I2, ..., and the high-frequency information image Hm of image Im can be obtained.
Step 614: reconstruct the high-frequency information images Hm of Im layer by layer in order of increasing area into the reconstructed image R0′ according to the seventh formula.
The seventh formula is the recursion: Rm=Im; Rm-1=U(Rm)+Hm-1.
The seventh formula of this embodiment is a recursion: once Rm is known from the first equation, substituting it into the subsequent equation Rm-1=U(Rm)+Hm-1 yields Rm-1; m starts at the maximum layer count of the input image and runs down to 1.
It can be seen that the image width and image height of the obtained reconstructed image R0′ are equal to the image width and image height of the first region of the input image.
步骤616、确定输入图像的第二区域。
具体的,确定所述输入图像的第二区域为所述输入图像的纹理区域。
步骤617、通过所述各向同性的滤波器LPF对所述输入图像的所述第二区域M0进行滤波以获取第二图像R2。
其中,R2=M0+(M0‐M0⊙LPF)*sharpenLevel。
⊙表示卷积操作,sharpenLevel为高频增强的强度。
确定sharpenLevel的具体过程请详见上述步骤所示,具体在本步骤中不做赘述。
步骤618、根据第二公式确定权重weight。
步骤619、根据第三公式对所述第一图像R1和所述第二图像R2进行图像融合以获取输出图像R。
本实施例中的步骤618至步骤619的具体执行过程,请详见图2所示的步 骤218至步骤219所示,具体在本实施例中不做赘述。
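Pulling the earlier sketches together, an illustrative end-to-end driver for this region-first variant could look as follows; every helper name is the illustrative one defined in the earlier sketches, the EPF step is omitted for brevity, and masking by region is one simple stand-in for restricting processing to the first and second regions:

```python
import numpy as np
from scipy.ndimage import sobel

def sobel_edge(img):
    """Edge strength via Sobel gradients (step 602)."""
    return np.hypot(sobel(img, axis=0), sobel(img, axis=1))

def process(img, layers, t1, t2, T=(0.1, 0.3, 0.5, 0.7)):
    """Illustrative driver for the region-first variant (steps 601-619)."""
    level = sharpen_level(sobel_edge(img))            # steps 602-603
    gmap = gamma_map(img)                             # steps 604-606
    flat, edge, texture = classify(gmap, t1, t2)      # step 607
    first = img * (flat | edge)                       # step 608: first-region mask
    b0 = high_freq_boost(first, level)                # steps 609-610 (EPF omitted)
    pyr = decompose(b0, layers)                       # steps 611-612
    r1 = reconstruct(pyr, high_freq_images(b0, pyr))  # steps 613-615
    r2 = isotropic_sharpen(img * texture, level)      # steps 616-617
    return fuse(r1, r2, fusion_weight(gmap, *T))      # steps 618-619
```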
The beneficial effects of the image processing method of this embodiment are as follows:
The image processing method of this embodiment can decompose the first region of the input image and upsample the image at each scale, so that the obtained first image not only removes noise in every frequency band of the first region of the input image but also improves the sharpness and smoothness of the edges of the first region of the input image.
The image processing method of this embodiment can filter the second region of the input image with an isotropic filter to obtain the second image, thereby substantially improving the sharpness of the second image while preserving its original naturalness.
The image processing method of this embodiment can fuse the first image and the second image into the output image, so that the output image performs well in noise control, sharpness enhancement, and texture naturalness; that is, the output image gains sharpness while noise amplification is effectively controlled.
For the image effect of processing an input image with the image processing method of this embodiment, see FIGS. 7 to 10, which further show that the image processing method of this embodiment enables the output image to perform well in noise control, sharpness enhancement, and texture naturalness, ensuring that the output image gains sharpness while noise amplification is effectively controlled.
Embodiment 4
This embodiment provides an electronic device capable of implementing the image processing method shown in FIG. 2.
The specific structure of the electronic device provided by this embodiment is described in detail below from the perspective of functional modules with reference to FIG. 11:
The electronic device provided by this embodiment includes:
a fourth obtaining unit 1101, configured to obtain the statistical characteristic edge of each pixel of the input image, where the statistical characteristic edge of each pixel of the input image is the edge strength of the input image or the strength of the high-frequency information of the input image;
a fifth obtaining unit 1102, configured to compute the high-frequency enhancement strength sharpenLevel according to the eighth formula,
where the eighth formula is:
$$\mathrm{sharpenLevel}=\begin{cases}\mathrm{MinLevel1},&\mathrm{edge}<W1\\ \mathrm{MinLevel1}+(\mathrm{MaxLevel}-\mathrm{MinLevel1})\dfrac{\mathrm{edge}-W1}{W2-W1},&W1\le\mathrm{edge}\le W2\\ \mathrm{MaxLevel},&W2<\mathrm{edge}<W3\\ \mathrm{MaxLevel}-(\mathrm{MaxLevel}-\mathrm{MinLevel2})\dfrac{\mathrm{edge}-W3}{W4-W3},&W3\le\mathrm{edge}\le W4\\ \mathrm{MinLevel2},&\mathrm{edge}>W4\end{cases}$$
where W1, W2, W3 and W4 are successively increasing constants greater than or equal to 0, and MinLevel1 and MinLevel2 are constants smaller than MaxLevel;
a second determining unit 1103, configured to perform texture analysis on each pixel of the input image to determine the texture feature parameter of each pixel.
Specifically, the second determining unit 1103 includes:
a first determining module 11031, configured to determine the selected region of a target pixel, where the target pixel is any pixel of the input image and the selected region is centered on the target pixel;
a second determining module 11032, configured to perform singular value decomposition on the selected region of the target pixel to obtain the first eigenvalue S0 and the second eigenvalue S1;
a third determining module 11033, configured to compute the texture feature parameter gammaMap of the target pixel according to the first formula,
where the first formula is:
$$\mathrm{gammaMap}=\left(\frac{S0\cdot S1+\mathrm{lambda}}{\mathrm{kSum}}\right)^{\mathrm{alpha}}$$
where kSum is the area of the selected region of the target pixel, lambda is any constant greater than 0 and smaller than or equal to 1, and alpha is any constant greater than 0 and smaller than or equal to 1;
a third determining unit 1104, configured to determine the flat region, the edge region, and the texture region according to the texture feature parameters, where the texture feature parameter of every pixel in the flat region is smaller than the first threshold, the texture feature parameter of every pixel in the edge region is greater than the second threshold, the texture feature parameter of every pixel in the texture region is greater than or equal to the first threshold and smaller than or equal to the second threshold, and the first threshold is smaller than the second threshold;
a first determining unit 1105, configured to determine the target layer count, where the target layer count is any natural number within [1, log2(min(width, height))], width being the width of the input image and height being the height of the input image;
a first obtaining unit 1106, configured to decompose the first region of the input image to obtain a multi-layer image of decreasing area, where the first region consists of the flat region and the edge region of the input image, and the number of layers of the multi-layer image equals the target layer count.
Specifically, the first obtaining unit 1106 includes:
a first obtaining module 11061, configured to filter all regions I0 of the input image with the edge-preserving filter EPF to obtain the filtered image A0,
where A0=I0⊙EPF and ⊙ denotes the convolution operation;
a second obtaining module 11062, configured to perform high-frequency enhancement on the filtered image A0 with the low-pass filter LPF to obtain the high-frequency-enhanced image B0,
where B0=A0+[A0-A0⊙LPF]*sharpenLevel and sharpenLevel is the strength of the high-frequency enhancement;
a third obtaining module 11063, configured to perform low-pass filtering and downsampling on the high-frequency-enhanced image B0 layer by layer to obtain the multi-layer image of decreasing area;
a second obtaining unit 1107, configured to upsample the image at each scale to obtain the high-frequency information images.
Specifically, the second obtaining unit 1107 is further configured to upsample the image at each scale according to the fourth formula to obtain the high-frequency information images, where the fourth formula is Hn=In-U(In); if n equals 1, I1 is the layer-1 image obtained by multi-scale decomposition of the high-frequency-enhanced image B0; if n is greater than 1, In is the layer-n image obtained by multi-scale decomposition of the layer-(n-1) image; Hn is the high-frequency information image of In, and U denotes the upsampling operation;
a third obtaining unit 1108, configured to reconstruct all the high-frequency information images layer by layer in order of increasing area to obtain the first image, the area of the first image being equal to the area of the first region of the input image.
Specifically, the third obtaining unit 1108 includes:
a sixth determining module 11081, configured to reconstruct the high-frequency information images Hn of In layer by layer in order of increasing area into the reconstructed image R0 according to the fifth formula, where the fifth formula is the recursion Rn=In; Rn-1=U(Rn)+Hn-1;
a seventh determining module 11082, configured to determine the first image, where the first image is the image in the reconstructed image R0 that corresponds to the flat region and the edge region of the input image;
a filtering unit 1109, configured to filter the second region of the input image with an isotropic filter to obtain the second image, where the second region is the texture region of the input image.
Specifically, the filtering unit 1109 includes:
an eighth obtaining module 11091, configured to filter all regions I0 of the input image with the isotropic filter LPF to obtain the target image,
where the target image = I0+(I0-I0⊙LPF)*sharpenLevel, ⊙ denotes the convolution operation, and sharpenLevel is the strength of the high-frequency enhancement;
a tenth determining module 11092, configured to determine the second image, where the second image is the image in the target image that corresponds to the texture region of the input image;
a fusion unit 1110, configured to fuse the first image and the second image to obtain the output image.
Specifically, the fusion unit 1110 includes:
a fourth determining module 11101, configured to determine the weight according to the second formula,
where the second formula is:
$$\mathrm{weight}=\begin{cases}1,&\mathrm{gammaMap}\le T1\\ \dfrac{T2-\mathrm{gammaMap}}{T2-T1},&T1<\mathrm{gammaMap}\le T2\\ 0,&T2<\mathrm{gammaMap}\le T3\\ \dfrac{\mathrm{gammaMap}-T3}{T4-T3},&T3<\mathrm{gammaMap}\le T4\\ 1,&\mathrm{gammaMap}>T4\end{cases}$$
where T1, T2, T3 and T4 are successively increasing constants greater than or equal to 0;
a fifth determining module 11102, configured to fuse the first image R1 and the second image R2 according to the third formula to obtain the output image R,
where the third formula is: R=weight*R1+(1-weight)*R2.
For the specific execution process of the image processing method performed by the electronic device of this embodiment, see FIG. 2; it is not repeated in this embodiment.
For the beneficial effects of image processing with the electronic device of this embodiment, see FIG. 2; they are not repeated in this embodiment.
Embodiment 5
This embodiment provides an electronic device capable of implementing the image processing method shown in FIG. 6.
The specific structure of the electronic device provided by this embodiment is described in detail below from the perspective of functional modules with reference to FIG. 12:
The electronic device provided by this embodiment includes:
a fourth obtaining unit 1201, configured to obtain the statistical characteristic edge of each pixel of the input image, where the statistical characteristic edge of each pixel of the input image is the edge strength of the input image or the strength of the high-frequency information of the input image;
a fifth obtaining unit 1202, configured to compute the high-frequency enhancement strength sharpenLevel according to the eighth formula,
where the eighth formula is:
$$\mathrm{sharpenLevel}=\begin{cases}\mathrm{MinLevel1},&\mathrm{edge}<W1\\ \mathrm{MinLevel1}+(\mathrm{MaxLevel}-\mathrm{MinLevel1})\dfrac{\mathrm{edge}-W1}{W2-W1},&W1\le\mathrm{edge}\le W2\\ \mathrm{MaxLevel},&W2<\mathrm{edge}<W3\\ \mathrm{MaxLevel}-(\mathrm{MaxLevel}-\mathrm{MinLevel2})\dfrac{\mathrm{edge}-W3}{W4-W3},&W3\le\mathrm{edge}\le W4\\ \mathrm{MinLevel2},&\mathrm{edge}>W4\end{cases}$$
where W1, W2, W3 and W4 are successively increasing constants greater than or equal to 0, and MinLevel1 and MinLevel2 are constants smaller than MaxLevel;
a second determining unit 1203, configured to perform texture analysis on each pixel of the input image to determine the texture feature parameter of each pixel.
Specifically, the second determining unit 1203 includes:
a first determining module 12031, configured to determine the selected region of a target pixel, where the target pixel is any pixel of the input image and the selected region is centered on the target pixel;
a second determining module 12032, configured to perform singular value decomposition on the selected region of the target pixel to obtain the first eigenvalue S0 and the second eigenvalue S1;
a third determining module 12033, configured to compute the texture feature parameter gammaMap of the target pixel according to the first formula,
where the first formula is:
$$\mathrm{gammaMap}=\left(\frac{S0\cdot S1+\mathrm{lambda}}{\mathrm{kSum}}\right)^{\mathrm{alpha}}$$
where kSum is the area of the selected region of the target pixel, lambda is any constant greater than 0 and smaller than or equal to 1, and alpha is any constant greater than 0 and smaller than or equal to 1;
a third determining unit 1204, configured to determine the flat region, the edge region, and the texture region according to the texture feature parameters, where the texture feature parameter of every pixel in the flat region is smaller than the first threshold, the texture feature parameter of every pixel in the edge region is greater than the second threshold, the texture feature parameter of every pixel in the texture region is greater than or equal to the first threshold and smaller than or equal to the second threshold, and the first threshold is smaller than the second threshold;
a first determining unit 1205, configured to determine the target layer count, where the target layer count is any natural number within [1, log2(min(width, height))], width being the width of the input image and height being the height of the input image;
a first obtaining unit 1206, configured to decompose the first region of the input image to obtain a multi-layer image of decreasing area, where the first region consists of the flat region and the edge region of the input image, and the number of layers of the multi-layer image equals the target layer count.
Specifically, the first obtaining unit 1206 includes:
a fourth obtaining module 12061, configured to filter the first region Im with the edge-preserving filter EPF to obtain the filtered image A0′,
where A0′=Im⊙EPF and ⊙ denotes the convolution operation;
a fifth obtaining module 12062, configured to perform high-frequency enhancement on the filtered image A0′ with the low-pass filter LPF to obtain the high-frequency-enhanced image B0′,
where B0′=A0′+[A0′-A0′⊙LPF]*sharpenLevel and sharpenLevel is the strength of the high-frequency enhancement;
a sixth obtaining module 12063, configured to perform low-pass filtering and downsampling on the high-frequency-enhanced image B0′ layer by layer to obtain the multi-layer image of decreasing area;
a second obtaining unit 1207, configured to upsample the image at each scale to obtain the high-frequency information images.
Specifically, the second obtaining unit 1207 is further configured to upsample the image at each scale according to the sixth formula to obtain the high-frequency information images, the sixth formula being Hm=Im-U(Im), where if m equals 1, I1 is the layer-1 image obtained by multi-scale decomposition of the first region Im of the input image; if m is greater than 1, Im is the layer-m image obtained by multi-scale decomposition of the layer-(m-1) image; Hm is the high-frequency information image of Im, and U denotes the upsampling operation;
a third obtaining unit 1208, configured to reconstruct all the high-frequency information images layer by layer in order of increasing area to obtain the first image, the area of the first image being equal to the area of the first region of the input image.
Specifically, the third obtaining unit 1208 includes:
an eighth determining module 12081, configured to reconstruct the high-frequency information images Hm of Im layer by layer in order of increasing area into the reconstructed image R0′ according to the seventh formula, where the seventh formula is the recursion Rm=Im; Rm-1=U(Rm)+Hm-1;
a ninth determining module 12082, configured to determine the reconstructed image R0′ as the first image;
a filtering unit 1209, configured to filter the second region of the input image with an isotropic filter to obtain the second image, where the second region is the texture region of the input image.
Specifically, the filtering unit 1209 is further configured to filter the second region M0 of the input image with the isotropic filter LPF to obtain the second image R2,
where R2=M0+(M0-M0⊙LPF)*sharpenLevel, ⊙ denotes the convolution operation, and sharpenLevel is the strength of the high-frequency enhancement;
a fusion unit 1210, configured to fuse the first image and the second image to obtain the output image.
Specifically, the fusion unit 1210 includes:
a fourth determining module 12101, configured to determine the weight according to the second formula,
where the second formula is:
$$\mathrm{weight}=\begin{cases}1,&\mathrm{gammaMap}\le T1\\ \dfrac{T2-\mathrm{gammaMap}}{T2-T1},&T1<\mathrm{gammaMap}\le T2\\ 0,&T2<\mathrm{gammaMap}\le T3\\ \dfrac{\mathrm{gammaMap}-T3}{T4-T3},&T3<\mathrm{gammaMap}\le T4\\ 1,&\mathrm{gammaMap}>T4\end{cases}$$
where T1, T2, T3 and T4 are successively increasing constants greater than or equal to 0;
a fifth determining module 12102, configured to fuse the first image R1 and the second image R2 according to the third formula to obtain the output image R,
where the third formula is: R=weight*R1+(1-weight)*R2.
For the specific execution process of the image processing method performed by the electronic device of this embodiment, see FIG. 6; it is not repeated in this embodiment.
For the beneficial effects of image processing with the electronic device of this embodiment, see FIG. 6; they are not repeated in this embodiment.
Embodiment 6
Embodiments 4 and 5 described, from the perspective of functional modules, the structure of electronic devices capable of implementing the image processing method provided by the embodiments of the present invention; this embodiment describes the specific structure of the electronic device in detail from the perspective of the physical structure, with reference to FIG. 1.
It should be made clear that for the specific structure of the electronic device provided by this embodiment, see FIG. 1; it is not repeated in this embodiment.
This embodiment further details the specific functions of the processor 103, the output unit 101, and the input unit 107 shown in FIG. 1, so that the electronic device shown in FIG. 1 can implement the image processing method provided by the embodiments of the present invention.
The processor 103 is configured to obtain the input image through the input unit 107;
the processor 103 is further configured to determine the target layer count, where the target layer count is any natural number within [1, log2(min(width, height))], width being the width of the input image and height being the height of the input image;
the processor 103 is further configured to decompose the first region of the input image to obtain a multi-layer image of decreasing area, where the first region consists of the flat region and the edge region of the input image, and the number of layers of the multi-layer image equals the target layer count;
the processor 103 is further configured to upsample the image at each scale to obtain the high-frequency information images;
the processor 103 is further configured to reconstruct all the high-frequency information images layer by layer in order of increasing area to obtain the first image, the area of the first image being equal to the area of the first region of the input image;
the processor 103 is further configured to filter the second region of the input image with an isotropic filter to obtain the second image, where the second region is the texture region of the input image;
the processor 103 is further configured to fuse the first image and the second image to obtain the output image;
the processor 103 displays the output image through the output unit 101.
Optionally, the processor 103 is further configured to perform texture analysis on each pixel of the input image to determine the texture feature parameter of each pixel;
the processor 103 is further configured to determine the flat region, the edge region, and the texture region according to the texture feature parameters, where the texture feature parameter of every pixel in the flat region is smaller than the first threshold, the texture feature parameter of every pixel in the edge region is greater than the second threshold, the texture feature parameter of every pixel in the texture region is greater than or equal to the first threshold and smaller than or equal to the second threshold, and the first threshold is smaller than the second threshold.
Optionally, the processor 103 is further configured to determine the selected region of a target pixel, where the target pixel is any pixel of the input image and the selected region is centered on the target pixel;
the processor 103 is further configured to perform singular value decomposition on the selected region of the target pixel to obtain the first eigenvalue S0 and the second eigenvalue S1;
the processor 103 is further configured to compute the texture feature parameter gammaMap of the target pixel according to the first formula;
the first formula is:
$$\mathrm{gammaMap}=\left(\frac{S0\cdot S1+\mathrm{lambda}}{\mathrm{kSum}}\right)^{\mathrm{alpha}}$$
where kSum is the area of the selected region of the target pixel, lambda is any constant greater than 0 and smaller than or equal to 1, and alpha is any constant greater than 0 and smaller than or equal to 1.
Optionally, the processor 103 is further configured to determine the weight according to the second formula;
the second formula is:
$$\mathrm{weight}=\begin{cases}1,&\mathrm{gammaMap}\le T1\\ \dfrac{T2-\mathrm{gammaMap}}{T2-T1},&T1<\mathrm{gammaMap}\le T2\\ 0,&T2<\mathrm{gammaMap}\le T3\\ \dfrac{\mathrm{gammaMap}-T3}{T4-T3},&T3<\mathrm{gammaMap}\le T4\\ 1,&\mathrm{gammaMap}>T4\end{cases}$$
where T1, T2, T3 and T4 are successively increasing constants greater than or equal to 0;
the processor 103 is further configured to fuse the first image R1 and the second image R2 according to the third formula to obtain the output image R;
the third formula is: R=weight*R1+(1-weight)*R2.
Optionally, the processor 103 is further configured to filter all regions I0 of the input image with the edge-preserving filter EPF to obtain the filtered image A0,
where A0=I0⊙EPF and ⊙ denotes the convolution operation;
the processor 103 is further configured to perform high-frequency enhancement on the filtered image A0 with the low-pass filter LPF to obtain the high-frequency-enhanced image B0,
where B0=A0+[A0-A0⊙LPF]*sharpenLevel and sharpenLevel is the strength of the high-frequency enhancement;
the processor 103 is further configured to perform low-pass filtering and downsampling on the high-frequency-enhanced image B0 layer by layer to obtain the multi-layer image of decreasing area.
Optionally, the processor 103 is further configured to upsample the image at each scale according to the fourth formula to obtain the high-frequency information images, where the fourth formula is Hn=In-U(In); if n equals 1, I1 is the layer-1 image obtained by multi-scale decomposition of the high-frequency-enhanced image B0; if n is greater than 1, In is the layer-n image obtained by multi-scale decomposition of the layer-(n-1) image; Hn is the high-frequency information image of In, and U denotes the upsampling operation;
the processor 103 is further configured to reconstruct the high-frequency information images Hn of In layer by layer in order of increasing area into the reconstructed image R0 according to the fifth formula, where the fifth formula is the recursion Rn=In; Rn-1=U(Rn)+Hn-1;
the processor 103 is further configured to determine the first image, where the first image is the image in the reconstructed image R0 that corresponds to the flat region and the edge region of the input image.
Optionally, the processor 103 is further configured to filter the first region Im with the edge-preserving filter EPF to obtain the filtered image A0′,
where A0′=Im⊙EPF and ⊙ denotes the convolution operation;
the processor 103 is further configured to perform high-frequency enhancement on the filtered image A0′ with the low-pass filter LPF to obtain the high-frequency-enhanced image B0′,
where B0′=A0′+[A0′-A0′⊙LPF]*sharpenLevel and sharpenLevel is the strength of the high-frequency enhancement;
the processor 103 is further configured to perform low-pass filtering and downsampling on the high-frequency-enhanced image B0′ layer by layer to obtain the multi-layer image of decreasing area.
Optionally, the processor 103 is further configured to upsample the image at each scale according to the sixth formula to obtain the high-frequency information images, the sixth formula being Hm=Im-U(Im), where if m equals 1, I1 is the layer-1 image obtained by multi-scale decomposition of the first region Im of the input image; if m is greater than 1, Im is the layer-m image obtained by multi-scale decomposition of the layer-(m-1) image; Hm is the high-frequency information image of Im, and U denotes the upsampling operation;
the processor 103 is further configured to reconstruct the high-frequency information images Hm of Im layer by layer in order of increasing area into the reconstructed image R0′ according to the seventh formula, where the seventh formula is the recursion Rm=Im; Rm-1=U(Rm)+Hm-1;
the processor 103 is further configured to determine the reconstructed image R0′ as the first image.
Optionally, the processor 103 is further configured to filter all regions I0 of the input image with the isotropic filter LPF to obtain the target image,
where the target image = I0+(I0-I0⊙LPF)*sharpenLevel, ⊙ denotes the convolution operation, and sharpenLevel is the strength of the high-frequency enhancement;
the processor 103 is further configured to determine the second image, where the second image is the image in the target image that corresponds to the texture region of the input image.
Optionally, the processor 103 is further configured to filter the second region M0 of the input image with the isotropic filter LPF to obtain the second image R2,
where R2=M0+(M0-M0⊙LPF)*sharpenLevel, ⊙ denotes the convolution operation, and sharpenLevel is the strength of the high-frequency enhancement.
Optionally, the processor 103 is further configured to obtain the statistical characteristic edge of each pixel of the input image, where the statistical characteristic edge of each pixel of the input image is the edge strength of the input image or the strength of the high-frequency information of the input image;
the processor 103 is further configured to compute the high-frequency enhancement strength sharpenLevel according to the eighth formula;
the eighth formula is:
$$\mathrm{sharpenLevel}=\begin{cases}\mathrm{MinLevel1},&\mathrm{edge}<W1\\ \mathrm{MinLevel1}+(\mathrm{MaxLevel}-\mathrm{MinLevel1})\dfrac{\mathrm{edge}-W1}{W2-W1},&W1\le\mathrm{edge}\le W2\\ \mathrm{MaxLevel},&W2<\mathrm{edge}<W3\\ \mathrm{MaxLevel}-(\mathrm{MaxLevel}-\mathrm{MinLevel2})\dfrac{\mathrm{edge}-W3}{W4-W3},&W3\le\mathrm{edge}\le W4\\ \mathrm{MinLevel2},&\mathrm{edge}>W4\end{cases}$$
where W1, W2, W3 and W4 are successively increasing constants greater than or equal to 0, and MinLevel1 and MinLevel2 are constants smaller than MaxLevel.
Specifically, for the specific process by which the electronic device shown in FIG. 1 performs the image processing method of this embodiment, see FIGS. 2 and 6; it is not repeated in this embodiment.
For the beneficial effects obtained when the electronic device shown in FIG. 1 performs the image processing method of this embodiment, see FIGS. 2 and 6; they are not repeated in this embodiment.
Embodiment 7
This embodiment provides a computer-readable storage medium.
The computer-readable storage medium provided by this embodiment is configured to store one or more computer programs, the one or more computer programs comprising program code.
When the computer programs run on a computer, the program code is configured to perform the image processing method shown in FIG. 2 and/or FIG. 6.
For the specific process by which the program code performs the image processing method shown in FIG. 2 and/or FIG. 6, see FIG. 2 and/or FIG. 6; it is not repeated in this embodiment.
Those skilled in the art can clearly understand that, for convenience and brevity of description, for the specific working processes of the systems, apparatuses, and units described above, reference may be made to the corresponding processes in the foregoing method embodiments; they are not repeated here.
In the several embodiments provided in this application, it should be understood that the disclosed systems, apparatuses, and methods may be implemented in other ways. For example, the apparatus embodiments described above are merely illustrative; for example, the division of the units is merely a division by logical function, and there may be other divisions in actual implementation: multiple units or components may be combined or integrated into another system, or some features may be ignored or not performed. Furthermore, the mutual couplings or direct couplings or communication connections shown or discussed may be indirect couplings or communication connections through some interfaces, apparatuses, or units, and may be electrical, mechanical, or in other forms.
The units described as separate components may or may not be physically separate, and components shown as units may or may not be physical units; they may be located in one place or distributed over multiple network units. Some or all of the units may be selected according to actual needs to achieve the objectives of the solutions of the embodiments.
In addition, the functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may physically exist alone, or two or more units may be integrated into one unit. The integrated unit may be implemented in the form of hardware or in the form of a software functional unit.
If implemented in the form of a software functional unit and sold or used as an independent product, the integrated unit may be stored in a computer-readable storage medium. Based on such an understanding, the technical solutions of the present invention, in essence, or the part contributing to the prior art, or all or part of the technical solutions, may be embodied in the form of a software product; the computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to perform all or some of the steps of the methods described in the embodiments of the present invention. The aforementioned storage medium includes any medium that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.
The above embodiments are merely intended to describe the technical solutions of the present invention, not to limit them; although the present invention has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art should understand that they may still modify the technical solutions described in the foregoing embodiments or make equivalent replacements of some of the technical features therein, and such modifications or replacements do not cause the essence of the corresponding technical solutions to depart from the spirit and scope of the technical solutions of the embodiments of the present invention.

Claims (34)

  1. An image processing method, comprising:
    determining a target layer count, wherein the target layer count is any natural number within [1, log2(min(width, height))], width is the width of an input image, and height is the height of the input image;
    decomposing a first region of the input image to obtain a multi-layer image of decreasing area, wherein the first region consists of a flat region and an edge region of the input image, and the number of layers of the multi-layer image equals the target layer count;
    upsampling the image at each scale to obtain high-frequency information images;
    reconstructing all the high-frequency information images layer by layer in order of increasing area to obtain a first image, the area of the first image being equal to the area of the first region of the input image;
    filtering a second region of the input image with an isotropic filter to obtain a second image, wherein the second region is a texture region of the input image; and
    fusing the first image and the second image to obtain an output image.
  2. The method according to claim 1, further comprising:
    performing texture analysis on each pixel of the input image to determine a texture feature parameter of each pixel; and
    determining the flat region, the edge region, and the texture region according to the texture feature parameters, wherein the texture feature parameter of every pixel in the flat region is smaller than a first threshold, the texture feature parameter of every pixel in the edge region is greater than a second threshold, the texture feature parameter of every pixel in the texture region is greater than or equal to the first threshold and smaller than or equal to the second threshold, and the first threshold is smaller than the second threshold.
  3. The method according to claim 2, wherein performing texture analysis on each pixel of the input image to determine the texture feature parameter of each pixel comprises:
    determining a selected region of a target pixel, wherein the target pixel is any pixel of the input image and the selected region is centered on the target pixel;
    performing singular value decomposition on the selected region of the target pixel to obtain a first eigenvalue S0 and a second eigenvalue S1; and
    computing the texture feature parameter gammaMap of the target pixel according to a first formula;
    the first formula being:
    $$\mathrm{gammaMap}=\left(\frac{S0\cdot S1+\mathrm{lambda}}{\mathrm{kSum}}\right)^{\mathrm{alpha}}$$
    wherein kSum is the area of the selected region of the target pixel, lambda is any constant greater than 0 and smaller than or equal to 1, and alpha is any constant greater than 0 and smaller than or equal to 1.
  4. The method according to claim 3, wherein fusing the first image and the second image to obtain the output image comprises:
    determining a weight according to a second formula;
    the second formula being:
    $$\mathrm{weight}=\begin{cases}1,&\mathrm{gammaMap}\le T1\\ \dfrac{T2-\mathrm{gammaMap}}{T2-T1},&T1<\mathrm{gammaMap}\le T2\\ 0,&T2<\mathrm{gammaMap}\le T3\\ \dfrac{\mathrm{gammaMap}-T3}{T4-T3},&T3<\mathrm{gammaMap}\le T4\\ 1,&\mathrm{gammaMap}>T4\end{cases}$$
    wherein T1, T2, T3 and T4 are successively increasing constants greater than or equal to 0; and
    fusing the first image R1 and the second image R2 according to a third formula to obtain the output image R,
    wherein the third formula is: R=weight*R1+(1-weight)*R2.
  5. The method according to any one of claims 1 to 4, wherein decomposing the first region of the input image to obtain the multi-layer image of decreasing area comprises:
    filtering all regions I0 of the input image with an edge-preserving filter EPF to obtain a filtered image A0,
    wherein A0=I0⊙EPF, and ⊙ denotes the convolution operation;
    performing high-frequency enhancement on the filtered image A0 with a low-pass filter LPF to obtain a high-frequency-enhanced image B0,
    wherein B0=A0+[A0-A0⊙LPF]*sharpenLevel, and sharpenLevel is the strength of the high-frequency enhancement; and
    performing low-pass filtering and downsampling on the high-frequency-enhanced image B0 layer by layer to obtain the multi-layer image of decreasing area.
  6. The method according to claim 5, wherein upsampling the image at each scale to obtain the high-frequency information images comprises:
    upsampling the image at each scale according to a fourth formula to obtain the high-frequency information images, wherein the fourth formula is Hn=In-U(In); if n equals 1, I1 is the layer-1 image obtained by multi-scale decomposition of the high-frequency-enhanced image B0; if n is greater than 1, In is the layer-n image obtained by multi-scale decomposition of the layer-(n-1) image; Hn is the high-frequency information image of In; and U denotes the upsampling operation; and
    reconstructing all the high-frequency information images layer by layer in order of increasing area to obtain the first image comprises:
    reconstructing the high-frequency information images Hn of In layer by layer in order of increasing area into a reconstructed image R0 according to a fifth formula, wherein the fifth formula is a recursion: Rn=In; Rn-1=U(Rn)+Hn-1; and
    determining the first image, wherein the first image is the image in the reconstructed image R0 that corresponds to the flat region and the edge region of the input image.
  7. The method according to any one of claims 1 to 4, wherein decomposing the first region of the input image to obtain the multi-layer image of decreasing area comprises:
    filtering the first region Im with an edge-preserving filter EPF to obtain a filtered image A0`,
    wherein A0`=Im⊙EPF, and ⊙ denotes the convolution operation;
    performing high-frequency enhancement on the filtered image A0` with a low-pass filter LPF to obtain a high-frequency-enhanced image B0`,
    wherein B0`=A0`+[A0`-A0`⊙LPF]*sharpenLevel, and sharpenLevel is the strength of the high-frequency enhancement; and
    performing low-pass filtering and downsampling on the high-frequency-enhanced image B0` layer by layer to obtain the multi-layer image of decreasing area.
  8. The method according to claim 7, wherein upsampling the image at each scale to obtain the high-frequency information images comprises:
    upsampling the image at each scale according to a sixth formula to obtain the high-frequency information images, the sixth formula being Hm=Im-U(Im), wherein if m equals 1, I1 is the layer-1 image obtained by multi-scale decomposition of the first region Im of the input image; if m is greater than 1, Im is the layer-m image obtained by multi-scale decomposition of the layer-(m-1) image; Hm is the high-frequency information image of Im; and U denotes the upsampling operation; and
    reconstructing all the high-frequency information images layer by layer in order of increasing area to obtain the first image comprises:
    reconstructing the high-frequency information images Hm of Im layer by layer in order of increasing area into a reconstructed image R0` according to a seventh formula, wherein the seventh formula is a recursion: Rm=Im; Rm-1=U(Rm)+Hm-1; and
    determining the reconstructed image R0` as the first image.
  9. The method according to any one of claims 1 to 4, wherein filtering the second region of the input image with the isotropic filter to obtain the second image comprises:
    filtering all regions I0 of the input image with the isotropic filter LPF to obtain a target image,
    wherein the target image = I0+(I0-I0⊙LPF)*sharpenLevel, ⊙ denotes the convolution operation, and sharpenLevel is the strength of the high-frequency enhancement; and
    determining the second image, wherein the second image is the image in the target image that corresponds to the texture region of the input image.
  10. The method according to any one of claims 1 to 4, wherein filtering the second region of the input image with the isotropic filter to obtain the second image comprises:
    filtering the second region M0 of the input image with the isotropic filter LPF to obtain the second image R2,
    wherein R2=M0+(M0-M0⊙LPF)*sharpenLevel, ⊙ denotes the convolution operation, and sharpenLevel is the strength of the high-frequency enhancement.
  11. The method according to any one of claims 5, 7, 9 or 10, further comprising:
    obtaining the statistical characteristic edge of each pixel of the input image, wherein the statistical characteristic edge of each pixel of the input image is the edge strength of the input image or the strength of the high-frequency information of the input image; and
    computing the high-frequency enhancement strength sharpenLevel according to an eighth formula;
    the eighth formula being:
    $$\mathrm{sharpenLevel}=\begin{cases}\mathrm{MinLevel1},&\mathrm{edge}<W1\\ \mathrm{MinLevel1}+(\mathrm{MaxLevel}-\mathrm{MinLevel1})\dfrac{\mathrm{edge}-W1}{W2-W1},&W1\le\mathrm{edge}\le W2\\ \mathrm{MaxLevel},&W2<\mathrm{edge}<W3\\ \mathrm{MaxLevel}-(\mathrm{MaxLevel}-\mathrm{MinLevel2})\dfrac{\mathrm{edge}-W3}{W4-W3},&W3\le\mathrm{edge}\le W4\\ \mathrm{MinLevel2},&\mathrm{edge}>W4\end{cases}$$
    wherein W1, W2, W3 and W4 are successively increasing constants greater than or equal to 0, and MinLevel1 and MinLevel2 are constants smaller than MaxLevel.
  12. An electronic device, comprising:
    a first determining unit, configured to determine a target layer count, wherein the target layer count is any natural number within [1, log2(min(width, height))], width is the width of an input image, and height is the height of the input image;
    a first obtaining unit, configured to decompose a first region of the input image to obtain a multi-layer image of decreasing area, wherein the first region consists of a flat region and an edge region of the input image, and the number of layers of the multi-layer image equals the target layer count;
    a second obtaining unit, configured to upsample the image at each scale to obtain high-frequency information images;
    a third obtaining unit, configured to reconstruct all the high-frequency information images layer by layer in order of increasing area to obtain a first image, the area of the first image being equal to the area of the first region of the input image;
    a filtering unit, configured to filter a second region of the input image with an isotropic filter to obtain a second image, wherein the second region is a texture region of the input image; and
    a fusion unit, configured to fuse the first image and the second image to obtain an output image.
  13. The electronic device according to claim 12, further comprising:
    a second determining unit, configured to perform texture analysis on each pixel of the input image to determine a texture feature parameter of each pixel; and
    a third determining unit, configured to determine the flat region, the edge region, and the texture region according to the texture feature parameters, wherein the texture feature parameter of every pixel in the flat region is smaller than a first threshold, the texture feature parameter of every pixel in the edge region is greater than a second threshold, the texture feature parameter of every pixel in the texture region is greater than or equal to the first threshold and smaller than or equal to the second threshold, and the first threshold is smaller than the second threshold.
  14. The electronic device according to claim 13, wherein the second determining unit comprises:
    a first determining module, configured to determine a selected region of a target pixel, wherein the target pixel is any pixel of the input image and the selected region is centered on the target pixel;
    a second determining module, configured to perform singular value decomposition on the selected region of the target pixel to obtain a first eigenvalue S0 and a second eigenvalue S1; and
    a third determining module, configured to compute the texture feature parameter gammaMap of the target pixel according to a first formula;
    the first formula being:
    $$\mathrm{gammaMap}=\left(\frac{S0\cdot S1+\mathrm{lambda}}{\mathrm{kSum}}\right)^{\mathrm{alpha}}$$
    wherein kSum is the area of the selected region of the target pixel, lambda is any constant greater than 0 and smaller than or equal to 1, and alpha is any constant greater than 0 and smaller than or equal to 1.
  15. The electronic device according to claim 14, wherein the fusion unit comprises:
    a fourth determining module, configured to determine a weight according to a second formula;
    the second formula being:
    $$\mathrm{weight}=\begin{cases}1,&\mathrm{gammaMap}\le T1\\ \dfrac{T2-\mathrm{gammaMap}}{T2-T1},&T1<\mathrm{gammaMap}\le T2\\ 0,&T2<\mathrm{gammaMap}\le T3\\ \dfrac{\mathrm{gammaMap}-T3}{T4-T3},&T3<\mathrm{gammaMap}\le T4\\ 1,&\mathrm{gammaMap}>T4\end{cases}$$
    wherein T1, T2, T3 and T4 are successively increasing constants greater than or equal to 0; and
    a fifth determining module, configured to fuse the first image R1 and the second image R2 according to a third formula to obtain the output image R,
    wherein the third formula is: R=weight*R1+(1-weight)*R2.
  16. The electronic device according to any one of claims 12 to 15, wherein the first obtaining unit comprises:
    a first obtaining module, configured to filter all regions I0 of the input image with an edge-preserving filter EPF to obtain a filtered image A0,
    wherein A0=I0⊙EPF, and ⊙ denotes the convolution operation;
    a second obtaining module, configured to perform high-frequency enhancement on the filtered image A0 with a low-pass filter LPF to obtain a high-frequency-enhanced image B0,
    wherein B0=A0+[A0-A0⊙LPF]*sharpenLevel, and sharpenLevel is the strength of the high-frequency enhancement; and
    a third obtaining module, configured to perform low-pass filtering and downsampling on the high-frequency-enhanced image B0 layer by layer to obtain the multi-layer image of decreasing area.
  17. The electronic device according to claim 16, wherein the second obtaining unit is further configured to upsample the image at each scale according to a fourth formula to obtain the high-frequency information images, wherein the fourth formula is Hn=In-U(In); if n equals 1, I1 is the layer-1 image obtained by multi-scale decomposition of the high-frequency-enhanced image B0; if n is greater than 1, In is the layer-n image obtained by multi-scale decomposition of the layer-(n-1) image; Hn is the high-frequency information image of In; and U denotes the upsampling operation; and
    the third obtaining unit comprises:
    a sixth determining module, configured to reconstruct the high-frequency information images Hn of In layer by layer in order of increasing area into a reconstructed image R0 according to a fifth formula, wherein the fifth formula is a recursion: Rn=In; Rn-1=U(Rn)+Hn-1; and
    a seventh determining module, configured to determine the first image, wherein the first image is the image in the reconstructed image R0 that corresponds to the flat region and the edge region of the input image.
  18. The electronic device according to any one of claims 12 to 15, wherein the first obtaining unit comprises:
    a fourth obtaining module, configured to filter the first region Im with an edge-preserving filter EPF to obtain a filtered image A0`,
    wherein A0`=Im⊙EPF, and ⊙ denotes the convolution operation;
    a fifth obtaining module, configured to perform high-frequency enhancement on the filtered image A0` with a low-pass filter LPF to obtain a high-frequency-enhanced image B0`,
    wherein B0`=A0`+[A0`-A0`⊙LPF]*sharpenLevel, and sharpenLevel is the strength of the high-frequency enhancement; and
    a sixth obtaining module, configured to perform low-pass filtering and downsampling on the high-frequency-enhanced image B0` layer by layer to obtain the multi-layer image of decreasing area.
  19. The electronic device according to claim 18, wherein the second obtaining unit is further configured to upsample the image at each scale according to a sixth formula to obtain the high-frequency information images, the sixth formula being Hm=Im-U(Im), wherein if m equals 1, I1 is the layer-1 image obtained by multi-scale decomposition of the first region Im of the input image; if m is greater than 1, Im is the layer-m image obtained by multi-scale decomposition of the layer-(m-1) image; Hm is the high-frequency information image of Im; and U denotes the upsampling operation; and
    the third obtaining unit comprises:
    an eighth determining module, configured to reconstruct the high-frequency information images Hm of Im layer by layer in order of increasing area into a reconstructed image R0` according to a seventh formula, wherein the seventh formula is a recursion: Rm=Im; Rm-1=U(Rm)+Hm-1; and
    a ninth determining module, configured to determine the reconstructed image R0` as the first image.
  20. The electronic device according to any one of claims 12 to 15, wherein the filtering unit comprises:
    a seventh obtaining module, configured to filter all regions I0 of the input image with the isotropic filter LPF to obtain a target image,
    wherein the target image = I0+(I0-I0⊙LPF)*sharpenLevel, ⊙ denotes the convolution operation, and sharpenLevel is the strength of the high-frequency enhancement; and
    a tenth determining module, configured to determine the second image, wherein the second image is the image in the target image that corresponds to the texture region of the input image.
  21. The electronic device according to any one of claims 12 to 15, wherein the filtering unit is further configured to filter the second region M0 of the input image with the isotropic filter LPF to obtain the second image R2,
    wherein R2=M0+(M0-M0⊙LPF)*sharpenLevel, ⊙ denotes the convolution operation, and sharpenLevel is the strength of the high-frequency enhancement.
  22. The electronic device according to any one of claims 16, 18, 20 or 21, further comprising:
    a fourth obtaining unit, configured to obtain the statistical characteristic edge of each pixel of the input image, wherein the statistical characteristic edge of each pixel of the input image is the edge strength of the input image or the strength of the high-frequency information of the input image; and
    a fifth obtaining unit, configured to compute the high-frequency enhancement strength sharpenLevel according to an eighth formula;
    the eighth formula being:
    $$\mathrm{sharpenLevel}=\begin{cases}\mathrm{MinLevel1},&\mathrm{edge}<W1\\ \mathrm{MinLevel1}+(\mathrm{MaxLevel}-\mathrm{MinLevel1})\dfrac{\mathrm{edge}-W1}{W2-W1},&W1\le\mathrm{edge}\le W2\\ \mathrm{MaxLevel},&W2<\mathrm{edge}<W3\\ \mathrm{MaxLevel}-(\mathrm{MaxLevel}-\mathrm{MinLevel2})\dfrac{\mathrm{edge}-W3}{W4-W3},&W3\le\mathrm{edge}\le W4\\ \mathrm{MinLevel2},&\mathrm{edge}>W4\end{cases}$$
    wherein W1, W2, W3 and W4 are successively increasing constants greater than or equal to 0, and MinLevel1 and MinLevel2 are constants smaller than MaxLevel.
  23. An electronic device, comprising a processor, an output unit, and an input unit, wherein
    the processor is configured to obtain an input image through the input unit;
    the processor is further configured to determine a target layer count, wherein the target layer count is any natural number within [1, log2(min(width, height))], width is the width of the input image, and height is the height of the input image;
    the processor is further configured to decompose a first region of the input image to obtain a multi-layer image of decreasing area, wherein the first region consists of a flat region and an edge region of the input image, and the number of layers of the multi-layer image equals the target layer count;
    the processor is further configured to upsample the image at each scale to obtain high-frequency information images;
    the processor is further configured to reconstruct all the high-frequency information images layer by layer in order of increasing area to obtain a first image, the area of the first image being equal to the area of the first region of the input image;
    the processor is further configured to filter a second region of the input image with an isotropic filter to obtain a second image, wherein the second region is a texture region of the input image;
    the processor is further configured to fuse the first image and the second image to obtain an output image; and
    the processor displays the output image through the output unit.
  24. The electronic device according to claim 23, wherein
    the processor is further configured to perform texture analysis on each pixel of the input image to determine a texture feature parameter of each pixel; and
    the processor is further configured to determine the flat region, the edge region, and the texture region according to the texture feature parameters, wherein the texture feature parameter of every pixel in the flat region is smaller than a first threshold, the texture feature parameter of every pixel in the edge region is greater than a second threshold, the texture feature parameter of every pixel in the texture region is greater than or equal to the first threshold and smaller than or equal to the second threshold, and the first threshold is smaller than the second threshold.
  25. The electronic device according to claim 24, wherein
    the processor is further configured to determine a selected region of a target pixel, wherein the target pixel is any pixel of the input image and the selected region is centered on the target pixel;
    the processor is further configured to perform singular value decomposition on the selected region of the target pixel to obtain a first eigenvalue S0 and a second eigenvalue S1; and
    the processor is further configured to compute the texture feature parameter gammaMap of the target pixel according to a first formula;
    the first formula being:
    $$\mathrm{gammaMap}=\left(\frac{S0\cdot S1+\mathrm{lambda}}{\mathrm{kSum}}\right)^{\mathrm{alpha}}$$
    wherein kSum is the area of the selected region of the target pixel, lambda is any constant greater than 0 and smaller than or equal to 1, and alpha is any constant greater than 0 and smaller than or equal to 1.
  26. The electronic device according to claim 25, wherein
    the processor is further configured to determine a weight according to a second formula;
    the second formula being:
    $$\mathrm{weight}=\begin{cases}1,&\mathrm{gammaMap}\le T1\\ \dfrac{T2-\mathrm{gammaMap}}{T2-T1},&T1<\mathrm{gammaMap}\le T2\\ 0,&T2<\mathrm{gammaMap}\le T3\\ \dfrac{\mathrm{gammaMap}-T3}{T4-T3},&T3<\mathrm{gammaMap}\le T4\\ 1,&\mathrm{gammaMap}>T4\end{cases}$$
    wherein T1, T2, T3 and T4 are successively increasing constants greater than or equal to 0; and
    the processor is further configured to fuse the first image R1 and the second image R2 according to a third formula to obtain the output image R,
    wherein the third formula is: R=weight*R1+(1-weight)*R2.
  27. The electronic device according to any one of claims 23 to 26, wherein
    the processor is further configured to filter all regions I0 of the input image with an edge-preserving filter EPF to obtain a filtered image A0,
    wherein A0=I0⊙EPF, and ⊙ denotes the convolution operation;
    the processor is further configured to perform high-frequency enhancement on the filtered image A0 with a low-pass filter LPF to obtain a high-frequency-enhanced image B0,
    wherein B0=A0+[A0-A0⊙LPF]*sharpenLevel, and sharpenLevel is the strength of the high-frequency enhancement; and
    the processor is further configured to perform low-pass filtering and downsampling on the high-frequency-enhanced image B0 layer by layer to obtain the multi-layer image of decreasing area.
  28. The electronic device according to claim 27, wherein
    the processor is further configured to upsample the image at each scale according to a fourth formula to obtain the high-frequency information images, wherein the fourth formula is Hn=In-U(In); if n equals 1, I1 is the layer-1 image obtained by multi-scale decomposition of the high-frequency-enhanced image B0; if n is greater than 1, In is the layer-n image obtained by multi-scale decomposition of the layer-(n-1) image; Hn is the high-frequency information image of In; and U denotes the upsampling operation;
    the processor is further configured to reconstruct the high-frequency information images Hn of In layer by layer in order of increasing area into a reconstructed image R0 according to a fifth formula, wherein the fifth formula is a recursion: Rn=In; Rn-1=U(Rn)+Hn-1; and
    the processor is further configured to determine the first image, wherein the first image is the image in the reconstructed image R0 that corresponds to the flat region and the edge region of the input image.
  29. The electronic device according to any one of claims 23 to 26, wherein
    the processor is further configured to filter the first region Im with an edge-preserving filter EPF to obtain a filtered image A0`,
    wherein A0`=Im⊙EPF, and ⊙ denotes the convolution operation;
    the processor is further configured to perform high-frequency enhancement on the filtered image A0` with a low-pass filter LPF to obtain a high-frequency-enhanced image B0`,
    wherein B0`=A0`+[A0`-A0`⊙LPF]*sharpenLevel, and sharpenLevel is the strength of the high-frequency enhancement; and
    the processor is further configured to perform low-pass filtering and downsampling on the high-frequency-enhanced image B0` layer by layer to obtain the multi-layer image of decreasing area.
  30. The electronic device according to claim 29, wherein
    the processor is further configured to upsample the image at each scale according to a sixth formula to obtain the high-frequency information images, the sixth formula being Hm=Im-U(Im), wherein if m equals 1, I1 is the layer-1 image obtained by multi-scale decomposition of the first region Im of the input image; if m is greater than 1, Im is the layer-m image obtained by multi-scale decomposition of the layer-(m-1) image; Hm is the high-frequency information image of Im; and U denotes the upsampling operation;
    the processor is further configured to reconstruct the high-frequency information images Hm of Im layer by layer in order of increasing area into a reconstructed image R0` according to a seventh formula, wherein the seventh formula is a recursion: Rm=Im; Rm-1=U(Rm)+Hm-1; and
    the processor is further configured to determine the reconstructed image R0` as the first image.
  31. The electronic device according to any one of claims 23 to 26, wherein
    the processor is further configured to filter all regions I0 of the input image with the isotropic filter LPF to obtain a target image,
    wherein the target image = I0+(I0-I0⊙LPF)*sharpenLevel, ⊙ denotes the convolution operation, and sharpenLevel is the strength of the high-frequency enhancement; and
    the processor is further configured to determine the second image, wherein the second image is the image in the target image that corresponds to the texture region of the input image.
  32. The electronic device according to any one of claims 23 to 26, wherein
    the processor is further configured to filter the second region M0 of the input image with the isotropic filter LPF to obtain the second image R2,
    wherein R2=M0+(M0-M0⊙LPF)*sharpenLevel, ⊙ denotes the convolution operation, and sharpenLevel is the strength of the high-frequency enhancement.
  33. The electronic device according to any one of claims 27, 29, 31 or 32, wherein
    the processor is further configured to obtain the statistical characteristic edge of each pixel of the input image, wherein the statistical characteristic edge of each pixel of the input image is the edge strength of the input image or the strength of the high-frequency information of the input image; and
    the processor is further configured to compute the high-frequency enhancement strength sharpenLevel according to an eighth formula;
    the eighth formula being:
    $$\mathrm{sharpenLevel}=\begin{cases}\mathrm{MinLevel1},&\mathrm{edge}<W1\\ \mathrm{MinLevel1}+(\mathrm{MaxLevel}-\mathrm{MinLevel1})\dfrac{\mathrm{edge}-W1}{W2-W1},&W1\le\mathrm{edge}\le W2\\ \mathrm{MaxLevel},&W2<\mathrm{edge}<W3\\ \mathrm{MaxLevel}-(\mathrm{MaxLevel}-\mathrm{MinLevel2})\dfrac{\mathrm{edge}-W3}{W4-W3},&W3\le\mathrm{edge}\le W4\\ \mathrm{MinLevel2},&\mathrm{edge}>W4\end{cases}$$
    wherein W1, W2, W3 and W4 are successively increasing constants greater than or equal to 0, and MinLevel1 and MinLevel2 are constants smaller than MaxLevel.
  34. A computer-readable storage medium, configured to store one or more computer programs, the one or more computer programs comprising program code; when the computer programs run on a computer, the program code is configured to perform the image processing method according to any one of claims 1 to 11.
PCT/CN2016/078346 2016-04-01 2016-04-01 Image processing method, electronic device and storage medium WO2017166301A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
PCT/CN2016/078346 WO2017166301A1 (zh) 2016-04-01 2016-04-01 Image processing method, electronic device and storage medium
CN201680051160.6A CN108027962B (zh) 2016-04-01 2016-04-01 Image processing method, electronic device and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2016/078346 WO2017166301A1 (zh) 2016-04-01 2016-04-01 Image processing method, electronic device and storage medium

Publications (1)

Publication Number Publication Date
WO2017166301A1 true WO2017166301A1 (zh) 2017-10-05

Family

ID=59963210

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2016/078346 WO2017166301A1 (zh) 2016-04-01 2016-04-01 一种图像处理方法、电子设备以及存储介质

Country Status (2)

Country Link
CN (1) CN108027962B (zh)
WO (1) WO2017166301A1 (zh)


Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7742652B2 (en) * 2006-12-21 2010-06-22 Sharp Laboratories Of America, Inc. Methods and systems for image noise processing
JP5451782B2 (ja) * 2010-02-12 2014-03-26 Canon Inc. Image processing apparatus and image processing method
CN102637292B (zh) * 2011-02-10 2015-04-08 Siemens AG Image processing method and apparatus
WO2013161839A1 (ja) * 2012-04-26 2013-10-31 NEC Corporation Image processing method and image processing device

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO1999016234A2 (en) * 1997-09-26 1999-04-01 Trident Systems Inc. System, method and medium for increasing compression of an image while minimizing image degradation
CN101587586A (zh) * 2008-05-20 2009-11-25 Ricoh Co., Ltd. Image processing apparatus and image processing method
US20110129164A1 (en) * 2009-12-02 2011-06-02 Micro-Star Int'l Co., Ltd. Forward and backward image resizing method
CN103778606A (zh) * 2014-01-17 2014-05-07 TCL Corporation Image processing method and related apparatus
CN104182939A (zh) * 2014-08-18 2014-12-03 Chengdu Jinpan Electronic Keda Multimedia Technology Co., Ltd. Detail enhancement method for medical images
CN104966092A (zh) * 2015-06-16 2015-10-07 China United Network Communications Group Co., Ltd. Image processing method and apparatus

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11257186B2 (en) * 2016-10-26 2022-02-22 Samsung Electronics Co., Ltd. Image processing apparatus, image processing method, and computer-readable recording medium
CN108876734A (zh) * 2018-05-31 2018-11-23 Shenyang Neusoft Medical Systems Co., Ltd. Image denoising method and apparatus, electronic device, and storage medium
CN108876734B (zh) * 2018-05-31 2022-06-07 Neusoft Medical Systems Co., Ltd. Image denoising method and apparatus, electronic device, and storage medium

Also Published As

Publication number Publication date
CN108027962B (zh) 2020-10-09
CN108027962A (zh) 2018-05-11


Legal Events

Date Code Title Description
NENP Non-entry into the national phase

Ref country code: DE

121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 16896080

Country of ref document: EP

Kind code of ref document: A1

122 Ep: pct application non-entry in european phase

Ref document number: 16896080

Country of ref document: EP

Kind code of ref document: A1