WO2018112979A1 - An image processing method, apparatus and terminal device (一种图像处理方法、装置以及终端设备) - Google Patents


Info

Publication number
WO2018112979A1
Authority
WO
WIPO (PCT)
Prior art keywords
original image
information
image
signal
quadrant
Prior art date
Application number
PCT/CN2016/111922
Other languages
English (en)
French (fr)
Inventor
王亮
张洪波
Original Assignee
华为技术有限公司 (Huawei Technologies Co., Ltd.)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 华为技术有限公司 (Huawei Technologies Co., Ltd.)
Priority to PCT/CN2016/111922 priority Critical patent/WO2018112979A1/zh
Priority to CN201680091872.0A priority patent/CN110140150B/zh
Publication of WO2018112979A1 publication Critical patent/WO2018112979A1/zh

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis

Definitions

  • the present invention relates to the field of computer application technologies, and in particular, to an image processing method, apparatus, and terminal device.
  • Image enhancement is a low-level stage of image processing. Its purpose is to improve the quality of the image itself and its visual effect, making the image more suitable for human-eye observation or for machine analysis and recognition, so that more useful information can be obtained from the image.
  • a conventional image enhancement method may include a homomorphic filtering method based on an illumination-reflection model: the original image is represented as the product of an illumination component and a reflection component, where the illumination component of the image represents the low-frequency spectrum portion of the image and the reflection component represents the high-frequency spectrum portion.
  • the homomorphic filtering method filters the original image through a low-pass filter to estimate the low-frequency spectrum portion of the image, or through a high-pass filter to estimate the high-frequency spectrum portion, in order to enhance the corresponding part of the image.
  • the above-described homomorphic filtering method based on the illumination-reflection model needs a different illumination model configured for each type of picture. Before performing image enhancement on an image, the high- and low-frequency parameters of the segmented image must be tested repeatedly before the illumination model can be configured, so the robustness of the parameter selection of the enhancement algorithm is low.
  • in addition, the above method does not fully consider the local characteristics of the spatial domain of the image: enhancing some pixels of the image may cause other pixels to be excessively enhanced, resulting in poor quality of the enhanced image obtained by image enhancement.
  • the embodiment of the invention provides an image processing method, device and terminal device, which can improve the robustness of the parameter selection of the image enhancement algorithm and improve the image quality of the enhanced image.
  • the first aspect of the present invention provides an image processing method.
  • the terminal device can process the original image to obtain an analytical signal of the original image, obtain the polar coordinate form of the analytical signal, and obtain the phase information of the original image based on that polar coordinate form.
  • based on the phase information, the terminal device obtains the texture information of the original image; it performs histogram equalization processing on the original image to obtain the amplitude information of the original image; and, based on the texture information of the original image and the amplitude information of the original image, it obtains an enhanced image of the original image.
  • the terminal device can acquire the original image, calculate the analytical signal of the acquired original image, and obtain the phase information of the original image from the polar coordinate form of the analytical signal, thereby improving the acquisition efficiency of the phase information.
  • the phase information of the original image includes the texture information of the original image, and the terminal device may perform image enhancement processing on the original image based on the texture information and the amplitude information of the original image to obtain an enhanced image of the original image; this weakens the effect of uneven brightness on the enhanced image and improves its image quality.
  • the terminal device obtains the texture information directly from the original image and does not need to repeatedly tune acquisition parameters in order to establish a texture-information acquisition model, so the technical solution can improve the robustness of the parameter selection of the image enhancement algorithm.
  • the terminal device processes the original image to obtain an analytical signal and the polar coordinate form of that signal; specifically, when the image type of the original image is a color image, the terminal device acquires the component image of the original image in each color space, and for each such component image, processes the component image to obtain its analytical signal and obtains the polar coordinate form of the analytical signal.
  • the terminal device obtains an enhanced image of the original image based on the texture information and the amplitude information of the original image; specifically, for the component image in each color space of the original image, the terminal device acquires the enhanced image of the component image based on the texture information and the amplitude information of that component image, and performs image synthesis on the enhanced images of all component images to obtain the enhanced image of the original image.
  • the terminal device processes the original image to obtain an analytical signal and its polar coordinate form; specifically, it acquires multiple quadrant information of the original image, performs an inverse Fourier transform on each quadrant information to obtain the analytical signal of that quadrant information, and obtains the polar coordinate form of the analytical signal from the analytical signal of each quadrant information.
  • the terminal device obtains the phase information of the original image from the polar coordinate form; specifically, for each of the multiple quadrant information, it obtains the phase information of that quadrant information from the polar coordinate form of its analytical signal, and performs weighted averaging on the phase information of all quadrant information to obtain the phase information of the original image.
  • the terminal device obtains the texture information of the original image based on the phase information; specifically, morphological filtering is performed on the phase information of the original image to obtain a phase image, and the phase image is processed to obtain the texture information of the original image.
  • the terminal device acquires the multiple quadrant information of the original image by treating the original image as a time domain signal, performing a Fourier transform on it to obtain a frequency domain signal, and filtering the frequency domain signal with preset filters to obtain the quadrant information corresponding to each preset filter.
  • the terminal device processes the original image to obtain its analytical signal by treating the original image as a time domain signal and performing a Hilbert transform on the time domain signal to obtain the analytical signal of the original image.
  • the terminal device obtains the texture information of the original image based on the phase information; it may obtain the polar coordinate form of the texture information from the phase information of the original image and the amplitude information of the analytical signal, obtain the analytical signal of the texture information from that polar coordinate form, and use the real part of that analytical signal as the texture information.
  • the terminal device obtains an enhanced image of the original image based on the texture information and the amplitude information of the original image: the texture information of the original image is normalized, the amplitude information of the original image is normalized, and weighted averaging is performed on the normalized texture information and the normalized amplitude information to obtain the enhanced image.
  • a second aspect of the present invention provides a computer storage medium storing a program, the program including all or part of the steps of the image processing method provided by the first aspect of the embodiment of the present invention.
  • a third aspect of the present invention provides an image processing apparatus including a module for performing the image processing method disclosed in the first aspect of the embodiment of the present invention.
  • a fourth aspect of the present invention provides a terminal device, including a processor and a memory, wherein the memory stores a set of program codes, and the processor calls the program code stored in the memory to perform the following operations:
  • An enhanced image of the original image is acquired based on the texture information of the original image and the amplitude information of the original image.
  • the processor processes the original image to obtain an analytical signal of the original image, and obtains a polar coordinate form of the parsed signal based on the parsed signal, which may be:
  • the component image is processed to obtain an analytical signal of the component image, and the polar coordinate form of the analytical signal is obtained based on the analytical signal.
  • the processor obtains the enhanced image of the original image based on the texture information of the original image and the amplitude information of the original image, which may be:
  • Image synthesis is performed on the enhanced image of each component image to obtain an enhanced image of the original image.
  • the processor processes the original image to obtain an analytical signal of the original image, and obtains a polar coordinate form of the parsed signal based on the parsed signal, which may be:
  • the polar coordinate form of the analytical signal is obtained based on the analytical signal of each quadrant information.
  • the processor obtains phase information of the original image based on the polar coordinate form, which may be:
  • phase information of each quadrant information is subjected to weighted averaging processing to obtain phase information of the original image.
  • the processor obtains texture information of the original image based on the phase information, which may be:
  • phase information of the original image is morphologically filtered to obtain a phase image.
  • the phase image is processed to obtain texture information of the original image.
  • the processor obtains multiple quadrant information of the original image, which may be:
  • the frequency domain signal is filtered by a preset filter to obtain quadrant information corresponding to the preset filter.
  • the processor processes the original image to obtain an analytical signal of the original image, which may be:
  • the original image is used as a time domain signal, and the Hilbert transform is performed on the time domain signal to obtain an analytical signal of the original image.
  • the processor obtains texture information of the original image based on the phase information, which may be:
  • the real part of the analytical signal is used as the texture information.
  • the processor obtains an enhanced image of the original image based on the texture information of the original image and the amplitude information of the original image, which may be:
  • the weighted average processing is performed on the normalized texture information and the normalized amplitude information to obtain an enhanced image.
  • FIG. 1 is a schematic flowchart of an image processing method according to an embodiment of the present invention.
  • FIG. 2 is a schematic flowchart of an image processing method according to another embodiment of the present invention.
  • FIG. 3 is a schematic structural diagram of an image processing apparatus according to an embodiment of the present disclosure.
  • FIG. 4 is a schematic structural diagram of a terminal device according to an embodiment of the present invention.
  • FIG. 5 is a schematic diagram of an interface of a cosine signal according to an embodiment of the present invention.
  • FIG. 6 is a schematic diagram of an interface for enhancing an image according to an embodiment of the present invention.
  • the image processing method mentioned in the embodiment of the present invention can be run on a personal computer, a smart phone (such as an Android mobile phone, an iOS mobile phone, etc.), a tablet computer, a palmtop computer, a mobile Internet device (MID, Mobile Internet Devices), or a wearable smart device.
  • the terminal device is not specifically limited by the embodiment of the present invention.
  • FIG. 1 is a schematic flowchart of an image processing method according to an embodiment of the present invention.
  • the image processing method in the embodiment of the present invention may include:
  • the terminal device may define the original image that needs image processing as a two-dimensional signal and obtain the quadrant information of the original image in four quadrants. The terminal device may perform an inverse Fourier transform on the quadrant information of the first quadrant to obtain the analytical signal of the first quadrant information; perform an inverse Fourier transform on the quadrant information of the second quadrant to obtain the analytical signal of the second quadrant information; perform an inverse Fourier transform on the quadrant information of the third quadrant to obtain the analytical signal of the third quadrant information; and perform an inverse Fourier transform on the quadrant information of the fourth quadrant to obtain the analytical signal of the fourth quadrant information.
  • the original image that needs image processing may be an image captured by the terminal device through its camera, an image stored in the memory of the terminal device, an image downloaded through the Internet, an image sent by another terminal device, and so on; this is not specifically limited by the embodiments of the present invention.
  • the terminal device can represent the analytical signal of a one-dimensional real signal as follows:
  • f_A(t) = f(t) + i·H{f(t)}
  • where f(t) is the one-dimensional real signal, f_A(t) is the analytical signal of f(t), i is the imaginary unit, and H{f(t)} is the Hilbert transform of f(t).
  • the terminal device can determine that the phase information of f(t) is expressed as φ(t) = Arg[f_A(t)], where φ(t) is the phase information of f(t) and Arg[·] calculates the phase angle of a complex number. As can be seen from the interface diagram of the cosine signal shown in FIG. 5, analyzing the phase information of a signal is intuitive and convenient, and the terminal device can analyze the phase information of the signal to realize the analysis of the signal.
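  • as an illustrative sketch (not part of the patent text), the one-dimensional analytical signal and its phase can be computed with SciPy's `hilbert`, which returns f(t) + i·H{f(t)} directly:

```python
import numpy as np
from scipy.signal import hilbert

# 1-D real signal: a 5 Hz cosine sampled for one second
t = np.linspace(0.0, 1.0, 1000, endpoint=False)
f = np.cos(2.0 * np.pi * 5.0 * t)

analytic = hilbert(f)       # f(t) + i * H{f(t)}
phase = np.angle(analytic)  # Arg[.] of the analytical signal

# by construction, the real part of the analytical signal is the input itself
assert np.allclose(analytic.real, f)
```

The phase returned by `np.angle` is wrapped to (-π, π], which is why the text's cosine example is convenient to inspect.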
  • the terminal device may acquire a component image of the original image in each color space, and acquire a plurality of quadrant information of the component image for the component image of the original image in each color space. And performing inverse Fourier transform on each quadrant information of the plurality of quadrant information to obtain an analytical signal of the quadrant information.
  • the terminal device may decompose the color image into a plurality of component images.
  • the color image may be decomposed into three component images. For example, the three component images may respectively be the component images of the red (R), green (G), and blue (B) color space; or the component image of luminance (Y) and the component images of chrominance (U) and chrominance (V); or the component images of hue (H), saturation (S), and lightness (V); or the component image of luminance (Y) and the component images of the chrominance offsets.
  • the terminal device can take the grayscale image as a component image.
  • for a grayscale image, the image itself can be used as the single component image, in which the gray value of black is 0 and the gray value of white is 255.
  • the terminal device may acquire multiple quadrant information of the first component image and perform an inverse Fourier transform on each quadrant information of the first component image to obtain the analytical signal of each quadrant information of the first component image.
  • the terminal device may acquire multiple quadrant information of the second component image, and perform inverse Fourier transform on each quadrant information of the second component image to obtain an analysis signal of each quadrant information of the second component image.
  • the terminal device may further acquire a plurality of quadrant information of the third component image, and perform inverse Fourier transform on each quadrant information of the third component image to obtain an analysis signal of each quadrant information of the third component image.
  • the specific manner in which the terminal device obtains multiple quadrant information of the original image may be: using the original image as a time domain signal, and performing Fourier transform on the time domain signal to obtain a frequency domain signal after Fourier transform.
  • the frequency domain signal is filtered by a preset filter to obtain quadrant information corresponding to the preset filter.
  • the terminal device can define the original image that needs image processing as a two-dimensional signal f(x, y), which can be treated as a time domain signal. Its Fourier transform TF[f(x, y)] can be expressed as follows:
  • F(u, v) = ∫∫ f(x, y)·e^(-i2π(ux+vy)) dx dy
  • where f(x, y) represents the original image, F(u, v) represents the frequency domain signal obtained by the Fourier transform of f(x, y), u represents the spatial frequency of the original image in the x direction, v represents the spatial frequency of the original image in the y direction, and i is the imaginary unit.
  • the terminal device may filter the frequency domain signal by using the first preset filter to obtain the quadrant information corresponding to the first preset filter, that is, the first quadrant information of the original image.
  • the terminal device may further filter the frequency domain signal by using the second preset filter to obtain quadrant information corresponding to the second preset filter, that is, the second quadrant information of the original image.
  • the terminal device may further filter the frequency domain signal by using a third preset filter to obtain quadrant information corresponding to the third preset filter, that is, third quadrant information of the original image.
  • the terminal device may further filter the frequency domain signal by using the fourth preset filter to obtain quadrant information corresponding to the fourth preset filter, that is, fourth quadrant information of the original image.
  • the first preset filter may be (1+sign(u))(1+sign(v)), the second preset filter may be (1-sign(u))(1+sign(v)), the third preset filter may be (1-sign(u))(1-sign(v)), and the fourth preset filter may be (1+sign(u))(1-sign(v)). The terminal device may determine that the first quadrant information is (1+sign(u))(1+sign(v))F(u,v), the second quadrant information is (1-sign(u))(1+sign(v))F(u,v), the third quadrant information is (1-sign(u))(1-sign(v))F(u,v), and the fourth quadrant information is (1+sign(u))(1-sign(v))F(u,v), where sign(n) is the sign function (sign(n) = 1 for n > 0, 0 for n = 0, and -1 for n < 0) and n can be u or v.
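  • a minimal NumPy sketch of this quadrant decomposition (an illustration under our own naming, not the patent's implementation): build the four (1±sign(u))(1±sign(v)) masks over the 2-D FFT and inverse-transform each one. Since the four masks sum to 4 at every frequency, the four quadrant signals sum to four times the original image, which gives a quick sanity check:

```python
import numpy as np

def quadrant_analytic_signals(img):
    """Return the four quadrant analytical signals of a 2-D real image."""
    F = np.fft.fft2(img)
    u = np.sign(np.fft.fftfreq(img.shape[0]))[:, None]  # sign of spatial frequency, x direction
    v = np.sign(np.fft.fftfreq(img.shape[1]))[None, :]  # sign of spatial frequency, y direction
    masks = [(1 + u) * (1 + v), (1 - u) * (1 + v),
             (1 - u) * (1 - v), (1 + u) * (1 - v)]
    return [np.fft.ifft2(m * F) for m in masks]

img = np.random.default_rng(0).random((8, 8))
AS = quadrant_analytic_signals(img)
# the four masks sum to 4 everywhere, so the quadrant signals sum to 4 * img
assert np.allclose(sum(AS), 4.0 * img)
```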
  • the inverse Fourier transform TF^(-1)[G(u, v)] of a frequency domain signal G(u, v) can be expressed as follows:
  • g(x, y) = ∫∫ G(u, v)·e^(i2π(ux+vy)) du dv
  • where G(u, v) is the frequency domain signal, g(x, y) represents the inverse Fourier transform of G(u, v), u represents the spatial frequency of the original image in the x direction, v represents the spatial frequency of the original image in the y direction, and i is the imaginary unit.
  • the terminal device may perform inverse Fourier transform on each quadrant information of the original image to obtain an analysis signal of the quadrant information.
  • the analytical signal of the first quadrant information can be expressed as follows:
  • AS1(x, y) = TF^(-1)[(1+sign(u))(1+sign(v))·F(u, v)]
  • where AS1(x, y) represents the analytical signal of the first quadrant information, (1+sign(u))(1+sign(v)) represents the first preset filter, and F(u, v) represents the Fourier transform of the original image.
  • the analytical signal of the second quadrant information can be expressed as follows:
  • AS2(x, y) = TF^(-1)[(1-sign(u))(1+sign(v))·F(u, v)]
  • where AS2(x, y) represents the analytical signal of the second quadrant information, (1-sign(u))(1+sign(v)) represents the second preset filter, and F(u, v) represents the Fourier transform of the original image.
  • the analytical signal of the third quadrant information can be expressed as follows:
  • AS3(x, y) = TF^(-1)[(1-sign(u))(1-sign(v))·F(u, v)]
  • where AS3(x, y) represents the analytical signal of the third quadrant information, (1-sign(u))(1-sign(v)) represents the third preset filter, and F(u, v) represents the Fourier transform of the original image.
  • the analytical signal of the fourth quadrant information can be expressed as follows:
  • AS4(x, y) = TF^(-1)[(1+sign(u))(1-sign(v))·F(u, v)]
  • where AS4(x, y) represents the analytical signal of the fourth quadrant information, (1+sign(u))(1-sign(v)) represents the fourth preset filter, and F(u, v) represents the Fourier transform of the original image.
  • the terminal device can obtain the polar coordinate form of the parsed signal based on the parsed signal of each quadrant information.
  • the polar coordinate form of the analytical signal of the first quadrant information can be expressed as follows:
  • AS1(x, y) = |AS1(x, y)|·e^(iφ1(x, y))
  • where AS1(x, y) represents the analytical signal of the first quadrant information, |AS1(x, y)| represents the amplitude in the polar coordinate form of the first quadrant information, and φ1(x, y) indicates the phase information of the first quadrant information.
  • the polar coordinate form of the analytical signal of the second quadrant information can be expressed as follows:
  • AS2(x, y) = |AS2(x, y)|·e^(iφ2(x, y))
  • where AS2(x, y) represents the analytical signal of the second quadrant information, |AS2(x, y)| represents the amplitude in the polar coordinate form of the second quadrant information, and φ2(x, y) indicates the phase information of the second quadrant information.
  • the polar coordinate form of the analytical signal of the third quadrant information can be expressed as follows:
  • AS3(x, y) = |AS3(x, y)|·e^(iφ3(x, y))
  • where AS3(x, y) represents the analytical signal of the third quadrant information, |AS3(x, y)| represents the amplitude in the polar coordinate form of the third quadrant information, and φ3(x, y) indicates the phase information of the third quadrant information.
  • the polar coordinate form of the analytical signal of the fourth quadrant information can be expressed as follows:
  • AS4(x, y) = |AS4(x, y)|·e^(iφ4(x, y))
  • where AS4(x, y) represents the analytical signal of the fourth quadrant information, |AS4(x, y)| represents the amplitude in the polar coordinate form of the fourth quadrant information, and φ4(x, y) indicates the phase information of the fourth quadrant information.
  • the terminal device may use the exponent of the polar coordinate form of the analytical signal of each quadrant information as the phase information of that quadrant information; for example, the terminal device may use φ1(x, y) as the phase information of the first quadrant information, φ2(x, y) as the phase information of the second quadrant information, φ3(x, y) as the phase information of the third quadrant information, and φ4(x, y) as the phase information of the fourth quadrant information.
  • S104 Perform weighted averaging processing on phase information of each quadrant information to obtain phase information of the original image.
  • the terminal device may perform weighted averaging processing on the phase information of each quadrant information to obtain phase information of the original image.
  • the phase information of the original image can be expressed as follows:
  • φ(x, y) = w1·φ1(x, y) + w2·φ2(x, y) + w3·φ3(x, y) + w4·φ4(x, y)
  • where φ1(x, y) indicates the phase information of the first quadrant information and w1 represents the weight of the phase information of the first quadrant information; φ2(x, y) indicates the phase information of the second quadrant information and w2 represents its weight; φ3(x, y) indicates the phase information of the third quadrant information and w3 represents its weight; and φ4(x, y) indicates the phase information of the fourth quadrant information and w4 represents its weight.
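  • the weighted averaging step can be sketched as follows (the equal weights used here are illustrative placeholders, not values specified by the patent):

```python
import numpy as np

def fuse_phase(phases, weights=(0.25, 0.25, 0.25, 0.25)):
    """Weighted average of the four per-quadrant phase maps."""
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()  # normalize so the weights sum to 1
    return sum(wi * p for wi, p in zip(w, phases))

phases = [np.full((4, 4), a) for a in (0.1, 0.2, 0.3, 0.4)]
fused = fuse_phase(phases)
# equal weights give the plain arithmetic mean of the four maps
assert np.allclose(fused, 0.25)
```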
  • S105 Perform morphological filtering on phase information of the original image to obtain a phase image.
  • S106 Processing the phase image to obtain texture information of the original image.
  • the terminal device may obtain the polar coordinate form of the texture information based on the phase information of the original image and the amplitude information of the parsed signal, and obtain the parsed signal of the texture information based on the polar coordinate form of the texture information, and use the real part of the parsed signal as a texture. information.
  • the terminal device may perform morphological filtering on the phase information of the original image by using a black-hat filter to obtain the texture information of the original image: the portions of the original image whose luminance changes greatly are retained and the portions whose luminance changes little are filtered out, and the terminal device can use the retained portions with large luminance changes as the texture information of the original image.
  • the terminal device may perform the morphological filtering using a circular convolution kernel with a pixel size of 5×5 to obtain the morphological filtering result, that is, the filtered phase information φ(x, y) of the original image. When the amplitude information of the analytical signal is taken as the constant 1, the polar coordinate form of the texture information obtained from the phase information of the original image can be expressed as follows:
  • e^(iφ(x, y)) = cos(φ(x, y)) + i·sin(φ(x, y))
  • the terminal device can take the real part as the texture information of the original image, that is:
  • f1(x, y) = cos(φ(x, y))
  • where f1(x, y) is the texture information of the original image; the real part of this complex number corresponds to the polar coordinate form of the texture information with unit amplitude.
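  • as an illustrative sketch, the black-hat morphological filtering of the phase map followed by taking cos(φ) (the real part of e^(iφ) with unit amplitude) can be written with `scipy.ndimage`; the 5×5 roughly circular footprint follows the text, while the function name is our own:

```python
import numpy as np
from scipy import ndimage

def texture_from_phase(phase):
    """Black-hat filter the phase map, then take Re[e^(i*phase)] = cos(phase)."""
    yy, xx = np.mgrid[-2:3, -2:3]
    footprint = (xx ** 2 + yy ** 2) <= 4          # 5x5 roughly circular kernel
    filtered = ndimage.black_tophat(phase, footprint=footprint)
    return np.cos(filtered)                       # real part of the unit-amplitude signal

phase = np.random.default_rng(1).uniform(-np.pi, np.pi, (16, 16))
texture = texture_from_phase(phase)
assert texture.shape == phase.shape and np.all(np.abs(texture) <= 1.0)
```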
  • the amplitude information of the original image can be expressed as follows:
  • f2(x, y) = HistEq[f(x, y)]
  • where f2(x, y) is the amplitude information of the original image, HistEq[·] is the histogram equalization processing function, and f(x, y) is the original image.
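  • histogram equalization itself can be sketched in plain NumPy by mapping each grey level through the normalized cumulative histogram (a standard textbook formulation; the patent does not spell out an implementation):

```python
import numpy as np

def hist_equalize(img, nbins=256):
    """Map grey levels through the normalized cumulative histogram (output in [0, 1])."""
    hist, edges = np.histogram(img.ravel(), bins=nbins)
    cdf = hist.cumsum().astype(float)
    cdf /= cdf[-1]                                # normalize the CDF to [0, 1]
    centers = (edges[:-1] + edges[1:]) / 2.0
    return np.interp(img.ravel(), centers, cdf).reshape(img.shape)

img = np.random.default_rng(2).random((32, 32))
eq = hist_equalize(img)
assert eq.shape == img.shape
assert eq.min() >= 0.0 and eq.max() <= 1.0
```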
  • the terminal device may normalize the texture information of the original image and normalize the amplitude information of the original image, then perform weighted averaging on the normalized texture information and the normalized amplitude information to obtain an enhanced image of the original image.
  • the enhanced image can be expressed as follows:
  • f_new(x, y) = a1·Norm[f1(x, y)] + a2·Norm[f2(x, y)]
  • where f_new(x, y) represents the enhanced image of the original image, Norm[f1(x, y)] represents the normalization function applied to the texture information of the original image, a1 represents the weight of the normalized texture information, Norm[f2(x, y)] represents the normalization function applied to the amplitude information of the original image, and a2 represents the weight of the normalized amplitude information.
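  • the fusion step reduces to a weighted sum of min-max normalized texture and amplitude maps; a sketch follows, where the weights a1 = a2 = 0.5 are illustrative and not values from the patent:

```python
import numpy as np

def minmax_norm(a):
    """Min-max normalization to [0, 1]."""
    a = np.asarray(a, dtype=float)
    return (a - a.min()) / (a.max() - a.min() + 1e-12)

def enhance(texture, amplitude, a1=0.5, a2=0.5):
    """f_new = a1 * Norm[f1] + a2 * Norm[f2]."""
    return a1 * minmax_norm(texture) + a2 * minmax_norm(amplitude)

rng = np.random.default_rng(3)
f1, f2 = rng.normal(size=(8, 8)), rng.random((8, 8))
out = enhance(f1, f2)
# with weights summing to 1, the fused image stays within [0, 1]
assert out.min() >= 0.0 and out.max() <= 1.0 + 1e-9
```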
  • the terminal device may, for the component image of the original image in each color space, acquire the enhanced image of the component image based on the texture information and the amplitude information of that component image, and perform image synthesis on the enhanced images of all component images to obtain the enhanced image of the original image.
  • the original image may be decomposed into a first component image, a second component image, and a third component image.
  • taking the first component image as an example, the terminal device may acquire multiple quadrant information of the first component image, perform an inverse Fourier transform on each quadrant information to obtain the analytical signal of that quadrant information, obtain the polar coordinate form of each analytical signal, and use the exponent of the polar coordinate form of the analytical signal as the phase information of the corresponding quadrant information.
  • the terminal device may then perform weighted averaging on the phase information of each quadrant information of the first component image to obtain the phase information of the first component image, perform morphological filtering on that phase information to obtain the phase image of the first component image and, from it, the texture information of the first component image, and perform histogram equalization on the first component image to obtain the amplitude information of the first component image.
  • the terminal device may perform image synthesis on the enhanced image of the first component image, the enhanced image of the second component image, and the enhanced image of the third component image to obtain an enhanced image of the original image.
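  • per-component processing of a color image then reduces to applying the single-channel pipeline to each component and restacking the results; a generic sketch, where the `enhance_channel` callback stands in for the single-channel enhancement described in the text:

```python
import numpy as np

def enhance_color(img, enhance_channel):
    """Apply a single-channel enhancement to each color-space component, then restack."""
    channels = [enhance_channel(img[..., c]) for c in range(img.shape[-1])]
    return np.stack(channels, axis=-1)

img = np.random.default_rng(4).random((8, 8, 3))
out = enhance_color(img, lambda ch: ch)  # identity channel op leaves the image unchanged
assert out.shape == img.shape and np.allclose(out, img)
```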
  • the upper area in FIG. 6 shows the original images of four retinal fundus images, and the lower area in FIG. 6 shows the enhanced images corresponding to those original images.
  • a conventional image enhancement method directly processes the pixels of the original image to achieve image enhancement, but differences in pixel-value intensity within image regions of the same type limit the image enhancement effect and subsequent image post-processing.
  • the original image is placed in the polar coordinate form of the two-dimensional analytical signal
  • the phase information is obtained by using the polar coordinate form of the two-dimensional signal
  • the amplitude information after the histogram equalization of the gray image is merged to realize the image reconstruction. , greatly enhance the texture information of the color image, thereby improving the visual effect of the image.
  • In the embodiment of the present invention, the terminal device acquires multiple pieces of quadrant information of the original image, performs an inverse Fourier transform on each piece of quadrant information to obtain the analytic signal of that quadrant information, and obtains the polar-coordinate form of each analytic signal. For each piece of quadrant information, the phase information is obtained from the polar-coordinate form of its analytic signal, and the phase information of all pieces is weighted and averaged to obtain the phase information of the original image. Morphological filtering of the phase information of the original image yields the texture information of the original image, and histogram equalization of the original image yields its amplitude information. An enhanced image of the original image is then obtained based on the texture information of the original image and the amplitude information of the original image. This improves the robustness of parameter selection in the image enhancement algorithm and improves the image quality of the enhanced image.
  • FIG. 2 is a schematic flowchart of an image processing method according to another embodiment of the present invention.
  • the image processing method in the embodiment of the present invention may include:
  • The terminal device may define the original image that needs image processing as a two-dimensional signal, and respectively obtain the analytic signal of the original image in the first quadrant, the analytic signal of the original image in the second quadrant, the analytic signal of the original image in the third quadrant, and the analytic signal of the original image in the fourth quadrant.
  • The analytic signal of the original image in each quadrant can be expressed as follows:

AS1(x, y) = f(x, y) − H{f(x, y)} + i·(Hx{f(x, y)} + Hy{f(x, y)})

AS2(x, y) = f(x, y) + H{f(x, y)} + i·(Hy{f(x, y)} − Hx{f(x, y)})

AS3(x, y) = f(x, y) − H{f(x, y)} − i·(Hx{f(x, y)} + Hy{f(x, y)})

AS4(x, y) = f(x, y) + H{f(x, y)} + i·(Hx{f(x, y)} − Hy{f(x, y)})
  • where AS1(x, y) denotes the analytic signal of the original image in the first quadrant, f(x, y) denotes the original image, H{f(x, y)} denotes the full Hilbert transform of f(x, y), and Hx{f(x, y)} and Hy{f(x, y)} denote the partial Hilbert transforms of f(x, y) in the x and y directions:

H{f(x, y)} = f(x, y) ** (1/(π²xy))

Hx{f(x, y)} = f(x, y) ** (δ(y)/(πx))

Hy{f(x, y)} = f(x, y) ** (δ(x)/(πy))

  • Here δ(x) and δ(y) can be Dirac delta functions, 1/(πm) is the one-dimensional Hilbert kernel in which m can be x or y, and "**" denotes two-dimensional convolution.
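The four single-quadrant analytic signals above can be computed directly in the frequency domain, since the Hilbert-transform combinations correspond to masking the 2-D FFT with (1 ± sign(u))(1 ± sign(v)). Below is a minimal sketch assuming NumPy; the function name and the random test image are illustrative, not from the patent:

```python
import numpy as np

def quadrant_analytic_signals(f):
    """Return the four single-quadrant analytic signals of a 2-D real image.

    Each signal is the inverse FFT of F(u, v) masked by
    (1 +/- sign(u))(1 +/- sign(v)), one sign pattern per quadrant."""
    F = np.fft.fft2(f)
    u = np.fft.fftfreq(f.shape[0])[:, None]  # spatial frequencies along x
    v = np.fft.fftfreq(f.shape[1])[None, :]  # spatial frequencies along y
    su, sv = np.sign(u), np.sign(v)
    masks = [(1 + su) * (1 + sv),   # first quadrant: u > 0, v > 0
             (1 - su) * (1 + sv),   # second quadrant: u < 0, v > 0
             (1 - su) * (1 - sv),   # third quadrant: u < 0, v < 0
             (1 + su) * (1 - sv)]   # fourth quadrant: u > 0, v < 0
    return [np.fft.ifft2(mask * F) for mask in masks]

rng = np.random.default_rng(0)
img = rng.random((64, 64))
AS = quadrant_analytic_signals(img)
# The four masks sum to 4 everywhere, so averaging the four analytic
# signals recovers the original image.
assert np.allclose(sum(AS) / 4, img)
```

A convenient sanity check of the sign patterns is exactly the assertion above: since the four masks sum to the constant 4, the average of the four quadrant signals reproduces f(x, y).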
  • When the image type of the original image is a color image, the terminal device may acquire the component image of the original image in each color space and perform a Hilbert transform on each component image to obtain the analytic signal of each quadrant of that component image. For example, the terminal device may perform a Hilbert transform on the first component image to obtain the analytic signal of each quadrant of the first component image, and likewise on the second and third component images; if the color space yields a fourth component image, the terminal device may further perform a Hilbert transform on it in the same way to obtain the analytic signal of each of its quadrants.
  • The terminal device can obtain the polar-coordinate form of each analytic signal based on the analytic signals of the respective quadrants. The polar-coordinate form of the analytic signal of each quadrant can be expressed as follows:

AS1(x, y) = |AS1(x, y)|·e^(iθ1(x, y))

AS2(x, y) = |AS2(x, y)|·e^(iθ2(x, y))

AS3(x, y) = |AS3(x, y)|·e^(iθ3(x, y))

AS4(x, y) = |AS4(x, y)|·e^(iθ4(x, y))

  • where ASn(x, y) denotes the analytic signal of the n-th quadrant, |ASn(x, y)|·e^(iθn(x, y)) is its polar-coordinate form, and θn(x, y) denotes the phase information of the n-th quadrant. The terminal device may use the exponent of the polar-coordinate form of each quadrant's analytic signal as the phase information of that quadrant: θ1(x, y) as the phase information of the first quadrant, θ2(x, y) of the second quadrant, θ3(x, y) of the third quadrant, and θ4(x, y) of the fourth quadrant.
  • The terminal device may then perform weighted averaging on the phase information of the pieces of quadrant information to obtain the phase information of the original image, which can be expressed as follows:

θ(x, y) = w1·θ1(x, y) + w2·θ2(x, y) + w3·θ3(x, y) + w4·θ4(x, y)

  • where θ1(x, y) denotes the phase information of the first quadrant information and w1 the weight of that phase information, θ2(x, y) and w2 denote the phase information of the second quadrant information and its weight, θ3(x, y) and w3 those of the third quadrant information, and θ4(x, y) and w4 those of the fourth quadrant information.
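A sketch of the phase-extraction and weighted-averaging step, assuming NumPy. np.angle returns exactly the exponent θ of the polar form r·e^(iθ); the uniform weights are an illustrative choice, not a value prescribed by the patent:

```python
import numpy as np

# np.angle returns the exponent theta of the polar form r * exp(i*theta).
z = 10 * np.exp(1j * 0.5)
assert np.isclose(np.angle(z), 0.5)

def fused_phase(quadrant_signals, weights=(0.25, 0.25, 0.25, 0.25)):
    """Weighted average of the four quadrant phases (weights sum to 1)."""
    phases = [np.angle(s) for s in quadrant_signals]
    return sum(w * p for w, p in zip(weights, phases))

# Four toy "analytic signals" with known phases 0.1, 0.2, 0.3, 0.4:
signals = [np.exp(1j * t) for t in (0.1, 0.2, 0.3, 0.4)]
assert np.isclose(fused_phase(signals), 0.25)  # mean of the four phases
```

With equal weights this is a plain average of the four quadrant phases; unequal weights let one quadrant's orientation content dominate.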
  • S205 Perform morphological filtering on phase information of the original image to obtain a phase image.
  • The terminal device may obtain the polar-coordinate form of the texture information based on the phase information of the original image and the amplitude information of the analytic signal, obtain the analytic signal of the texture information from that polar-coordinate form, and use the real part of that analytic signal as the texture information.
  • Specifically, the terminal device can apply a black-hat filter to the phase information of the original image to perform the morphological filtering. This extracts the portions of the original image where the brightness changes strongly and filters out the portions where it changes little, so the terminal device can work with the brightness-change content of the original image. For example, the terminal device may perform the morphological filtering with a circular convolution kernel of 5 × 5 pixels to obtain the morphological filtering result, i.e., the filtered phase information of the original image. When the amplitude information of the analytic signal is taken as the constant 1, the polar-coordinate form of the texture information obtained by the terminal device based on the phase information of the original image and the amplitude information of the analytic signal can be expressed as follows:

1·e^(iθm(x, y))

  • where θm(x, y) denotes the morphologically filtered phase information of the original image. The terminal device can take the real part of this expression as the texture information of the original image; that is, the texture information of the original image can be expressed as follows:

f1(x, y) = cos(θm(x, y))

  • where f1(x, y) is the texture information of the original image, i.e., the real part of the complex number given by the polar-coordinate form of the texture information.
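The texture step above, black-hat filtering of the phase with a 5 × 5 circular kernel followed by taking the cosine of the filtered phase as the real part of the unit-amplitude polar form, can be sketched with SciPy's `ndimage.black_tophat`. Function and variable names are illustrative:

```python
import numpy as np
from scipy import ndimage

def texture_from_phase(phase):
    """Black-hat filter the phase with a 5x5 circular kernel, then take
    cos(filtered phase), i.e. the real part of exp(i * filtered phase)."""
    yy, xx = np.mgrid[-2:3, -2:3]
    disk = (xx**2 + yy**2) <= 4              # 5x5 circular footprint
    filtered = ndimage.black_tophat(phase, footprint=disk)
    return np.cos(filtered)                  # real part of 1 * e^(i*filtered)

# A constant phase has no local dips: black-hat is 0 and cos(0) = 1.
flat = np.full((32, 32), 0.7)
assert np.allclose(texture_from_phase(flat), 1.0)
```

The black-hat transform (closing minus input) responds to small dark structures, which is why a constant input maps to zero and, after the cosine, to a featureless texture of ones.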
  • The amplitude information of the original image can be expressed as follows:

f2(x, y) = HistEq[f(x, y)]

  • where f2(x, y) is the amplitude information of the original image, HistEq[·] is the histogram equalization processing function, and f(x, y) is the original image.
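A minimal histogram-equalization sketch for the amplitude information f2(x, y) = HistEq[f(x, y)], implemented with NumPy's histogram and cumulative distribution rather than any particular library routine (an illustrative choice; the patent does not name an implementation):

```python
import numpy as np

def hist_eq(img, bins=256):
    """Histogram-equalize an image into [0, 1] via its normalized CDF."""
    hist, edges = np.histogram(img.ravel(), bins=bins)
    cdf = hist.cumsum().astype(float)
    cdf /= cdf[-1]                           # normalize CDF to [0, 1]
    # Map each pixel through the CDF value of the bin it falls into.
    idx = np.clip(np.digitize(img.ravel(), edges[1:-1]), 0, bins - 1)
    return cdf[idx].reshape(img.shape)

rng = np.random.default_rng(1)
img = rng.random((32, 32)) ** 3              # skewed toward dark values
eq = hist_eq(img)
assert eq.shape == img.shape
assert eq.min() >= 0.0 and eq.max() <= 1.0
```

Mapping pixels through the normalized CDF stretches the crowded dark range of the skewed input across the full output range, which is the effect the amplitude term contributes to the fusion.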
  • The terminal device may normalize the texture information of the original image and the amplitude information of the original image, and perform weighted averaging on the normalized texture information and the normalized amplitude information to obtain an enhanced image of the original image. The enhanced image can be expressed as follows:

fnew(x, y) = a1·Norm[f1(x, y)] + a2·Norm[f2(x, y)]

  • where fnew(x, y) denotes the enhanced image of the original image, Norm[f1(x, y)] denotes the normalization function of the texture information of the original image and a1 the weight of that normalization function, and Norm[f2(x, y)] denotes the normalization function of the amplitude information of the original image and a2 the weight of that normalization function.
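The fusion formula fnew = a1·Norm[f1] + a2·Norm[f2] can be sketched as follows. Min-max normalization is assumed for Norm[·], since the patent does not fix the normalization; all names are illustrative:

```python
import numpy as np

def norm01(x):
    """Min-max normalize to [0, 1] (assumes x is not constant)."""
    return (x - x.min()) / (x.max() - x.min())

def fuse(texture, amplitude, a1=0.5, a2=0.5):
    """f_new = a1 * Norm[f1] + a2 * Norm[f2]; with a1 + a2 = 1 the
    result stays inside [0, 1]."""
    return a1 * norm01(texture) + a2 * norm01(amplitude)

rng = np.random.default_rng(2)
f1, f2 = rng.random((16, 16)), rng.random((16, 16))
out = fuse(f1, f2)
assert out.min() >= 0.0 and out.max() <= 1.0
```

Shifting weight toward a1 emphasizes the phase-derived texture; shifting toward a2 emphasizes the globally equalized contrast.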
  • When the image type of the original image is a color image, for the component image in each color space, the terminal device may obtain an enhanced image of the component image based on the texture information of the component image and the amplitude information of the component image, and perform image synthesis on the enhanced images of the component images to obtain an enhanced image of the original image.
  • For example, when the original image is decomposed into a first component image, a second component image, and a third component image, the terminal device may perform a Hilbert transform on the first component image to obtain the analytic signal of each of its quadrants, process these signals to obtain the phase information of the first component image, perform morphological filtering on that phase information to obtain the phase image of the first component image, and process the phase image to obtain the texture information of the first component image. Histogram equalization of the first component image yields its amplitude information, and an enhanced image of the first component image is obtained based on the texture information of the first component image and the amplitude information of the first component image. The second and third component images are processed in the same way.
  • The terminal device may then perform image synthesis on the enhanced image of the first component image, the enhanced image of the second component image, and the enhanced image of the third component image to obtain an enhanced image of the original image.
  • In the embodiment of the present invention, the terminal device performs a Hilbert transform on the original image to obtain the analytic signal of each quadrant of the original image, obtains the polar-coordinate form of each quadrant's analytic signal based on the analytic signal of that quadrant, and, for each quadrant, uses the exponent of the polar-coordinate form of its analytic signal as the phase information of that quadrant. The phase information of the quadrants is weighted and averaged to obtain the phase information of the original image, which is morphologically filtered to obtain the texture information of the original image; histogram equalization of the original image yields the amplitude information of the original image. An enhanced image of the original image is obtained based on the texture information of the original image and the amplitude information of the original image, which can improve the robustness of parameter selection in the image enhancement algorithm and improve the image quality of the enhanced image.
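Putting the steps of this embodiment together, here is a grayscale-only, end-to-end sketch (NumPy/SciPy assumed; the weights, kernel size, bin count, and min-max normalization are illustrative defaults, not values fixed by the patent):

```python
import numpy as np
from scipy import ndimage

def enhance(img, weights=(0.25,) * 4, a1=0.5, a2=0.5):
    """Sketch: quadrant analytic signals -> averaged phase ->
    black-hat texture -> equalized amplitude -> weighted fusion."""
    F = np.fft.fft2(img)
    u = np.fft.fftfreq(img.shape[0])[:, None]
    v = np.fft.fftfreq(img.shape[1])[None, :]
    su, sv = np.sign(u), np.sign(v)
    masks = [(1 + su) * (1 + sv), (1 - su) * (1 + sv),
             (1 - su) * (1 - sv), (1 + su) * (1 - sv)]
    # Weighted average of the four quadrant phases.
    phase = sum(w * np.angle(np.fft.ifft2(m * F))
                for w, m in zip(weights, masks))
    # Texture: cosine of the black-hat filtered phase (5x5 disk).
    yy, xx = np.mgrid[-2:3, -2:3]
    disk = (xx**2 + yy**2) <= 4
    texture = np.cos(ndimage.black_tophat(phase, footprint=disk))
    # Amplitude: histogram equalization of the original image.
    hist, edges = np.histogram(img.ravel(), bins=256)
    cdf = hist.cumsum().astype(float)
    cdf /= cdf[-1]
    idx = np.clip(np.digitize(img.ravel(), edges[1:-1]), 0, 255)
    amplitude = cdf[idx].reshape(img.shape)
    # Weighted fusion of the normalized texture and amplitude.
    norm = lambda x: (x - x.min()) / (x.max() - x.min())
    return a1 * norm(texture) + a2 * norm(amplitude)

rng = np.random.default_rng(3)
out = enhance(rng.random((64, 64)))
assert out.shape == (64, 64)
assert out.min() >= 0.0 and out.max() <= 1.0
```

For a color image, this function would be applied to each component image and the results synthesized back into one image, as the embodiment describes.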
  • The embodiment of the present invention further provides a computer storage medium, where the computer storage medium can store a program, and the program, when executed, includes some or all of the steps of the method embodiments shown in FIG. 1 and FIG. 2.
  • FIG. 3 is a schematic structural diagram of an image processing apparatus according to an embodiment of the present invention.
  • The image processing apparatus may be used to implement part or all of the method embodiments shown in FIG. 1 and FIG. 2.
  • the image processing apparatus may include at least an analysis signal acquisition module 301, a phase information determination module 302, a texture information acquisition module 303, an amplitude information acquisition module 304, and an image enhancement module 305, where:
  • the analysis signal acquisition module 301 is configured to process the original image to obtain an analysis signal of the original image, and obtain a polar coordinate form of the analysis signal based on the analysis signal.
  • the phase information determining module 302 is configured to acquire phase information of the original image based on the polar coordinate form.
  • the texture information obtaining module 303 is configured to obtain texture information of the original image based on the phase information.
  • the amplitude information obtaining module 304 is configured to perform histogram equalization processing on the original image to obtain amplitude information of the original image.
  • the image enhancement module 305 is configured to acquire an enhanced image of the original image based on the texture information of the original image and the amplitude information of the original image.
  • Optionally, when the image type of the original image is a color image, the analysis signal acquisition module 301 is specifically configured to:
  • the component images of the original image in the respective color spaces are acquired.
  • the component image is processed to obtain an analytical signal of the component image, and the polar coordinate form of the analytical signal is obtained based on the analytical signal.
  • the image enhancement module 305 is specifically configured to:
  • an enhanced image of the component image is obtained.
  • Image synthesis is performed on the enhanced image of each component image to obtain an enhanced image of the original image.
  • Optionally, the analysis signal acquisition module 301 is specifically configured to: acquire multiple pieces of quadrant information of the original image, perform an inverse Fourier transform on each piece of quadrant information to obtain the analytic signal of that quadrant information, and obtain the polar-coordinate form of each analytic signal based on the analytic signal of each piece of quadrant information.
  • phase information determining module 302 is specifically configured to:
  • the phase information of the quadrant information is acquired based on the polar coordinate form of the parsing signal of the quadrant information.
  • phase information of each quadrant information is subjected to weighted averaging processing to obtain phase information of the original image.
  • the texture information obtaining module 303 is specifically configured to:
  • phase information of the original image is morphologically filtered to obtain a phase image.
  • the phase image is processed to obtain texture information of the original image.
  • The analysis signal acquisition module 301 acquires the multiple pieces of quadrant information of the original image specifically by:
  • the original image is used as a time domain signal, and the time domain signal is Fourier transformed to obtain a Fourier transformed frequency domain signal.
  • the frequency domain signal is filtered by a preset filter to obtain quadrant information corresponding to the preset filter.
  • The analysis signal acquisition module 301 processes the original image to obtain the analytic signal of the original image specifically by:
  • the original image is used as a time domain signal, and the Hilbert transform is performed on the time domain signal to obtain an analytical signal of the original image.
  • the texture information obtaining module 303 is specifically configured to:
  • a polar coordinate form of the texture information is obtained.
  • the analytical signal of the texture information is obtained based on the polar coordinate form of the texture information.
  • the real part of the analytic signal of the texture information is used as the texture information.
  • the image enhancement module 305 is specifically configured to:
  • the texture information of the original image is normalized, and the amplitude information of the original image is normalized.
  • the weighted average processing is performed on the normalized texture information and the normalized amplitude information to obtain an enhanced image.
  • In the embodiment of the present invention, the analysis signal acquisition module 301 processes the original image to obtain the analytic signal of the original image and obtains the polar-coordinate form of the analytic signal; the phase information determination module 302 acquires the phase information of the original image based on the polar-coordinate form; the texture information acquisition module 303 obtains the texture information of the original image based on the phase information; the amplitude information acquisition module 304 performs histogram equalization on the original image to obtain the amplitude information of the original image; and the image enhancement module 305 obtains an enhanced image of the original image based on the texture information of the original image and the amplitude information of the original image, which can improve the robustness of parameter selection in the image enhancement algorithm and improve the image quality of the enhanced image.
  • FIG. 4 is a schematic structural diagram of a terminal device according to a first embodiment of the present invention.
  • The terminal device provided by the embodiment of the present invention may be used to implement the embodiments of the present invention shown in FIG. 1 and FIG. 2. For convenience of description, only the parts related to the embodiments of the present invention are shown; for specific technical details that are not disclosed, please refer to the embodiments of the present invention shown in FIG. 1 and FIG. 2.
  • the terminal device comprises: at least one processor 401, such as a CPU, at least one input device 403, at least one output device 404, a memory 405, and at least one communication bus 402.
  • the communication bus 402 is used to implement connection communication between these components.
  • the input device 403 can optionally include a camera for acquiring an original image.
  • the output device 404 can optionally include a display screen for displaying the enhanced image.
  • the memory 405 may include a high speed RAM memory, and may also include a non-volatile memory such as at least one disk memory.
  • the memory 405 can optionally include at least one storage device located remotely from the aforementioned processor 401.
  • a set of program codes is stored in the memory 405, and the processor 401 calls the program code stored in the memory 405 for performing the following operations:
  • the original image is processed to obtain an analytical signal of the original image, and a polar coordinate form of the analytical signal is obtained based on the analytical signal.
  • the phase information of the original image is obtained based on the polar coordinate form.
  • the texture information of the original image is obtained based on the phase information.
  • Histogram equalization processing is performed on the original image to obtain amplitude information of the original image.
  • An enhanced image of the original image is obtained based on the texture information of the original image and the amplitude information of the original image.
  • the processor 401 processes the original image to obtain an analytical signal of the original image, and obtains a polar coordinate form of the parsed signal based on the parsing signal, which may be:
  • the component images of the original image in the respective color spaces are acquired.
  • the component image is processed to obtain an analytical signal of the component image, and the polar coordinate form of the analytical signal is obtained based on the analytical signal.
  • the processor 401 obtains an enhanced image of the original image based on the texture information of the original image and the amplitude information of the original image, and specifically:
  • an enhanced image of the component image is acquired.
  • Image synthesis is performed on the enhanced image of each component image to obtain an enhanced image of the original image.
  • Optionally, the processor 401 processes the original image to obtain the analytic signal of the original image and obtains the polar-coordinate form of the analytic signal, which may specifically be: acquiring multiple pieces of quadrant information of the original image, performing an inverse Fourier transform on each piece of quadrant information to obtain the analytic signal of that quadrant information, and obtaining the polar-coordinate form of each analytic signal based on the analytic signal of each piece of quadrant information.
  • the processor 401 obtains phase information of the original image according to the polar coordinate form, which may be specifically:
  • the phase information of the quadrant information is acquired based on the polar coordinate form of the parsing signal of the quadrant information.
  • phase information of each quadrant information is subjected to weighted averaging processing to obtain phase information of the original image.
  • the processor 401 obtains texture information of the original image based on the phase information, and specifically:
  • phase information of the original image is morphologically filtered to obtain a phase image.
  • the phase image is processed to obtain texture information of the original image.
  • the processor 401 obtains multiple quadrant information of the original image, which may be specifically:
  • the original image is used as a time domain signal, and the time domain signal is Fourier transformed to obtain a Fourier transformed frequency domain signal.
  • the frequency domain signal is filtered by a preset filter to obtain quadrant information corresponding to the preset filter.
  • the processor 401 processes the original image to obtain an analysis signal of the original image, which may be:
  • the original image is used as a time domain signal, and the Hilbert transform is performed on the time domain signal to obtain an analytical signal of the original image.
  • the processor 401 obtains texture information of the original image based on the phase information, and specifically:
  • a polar coordinate form of the texture information is obtained.
  • the analytical signal of the texture information is obtained based on the polar coordinate form of the texture information.
  • the real part of the analytic signal of the texture information is used as the texture information.
  • the processor 401 obtains an enhanced image of the original image based on the texture information of the original image and the amplitude information of the original image, which may be specifically:
  • the texture information of the original image is normalized, and the amplitude information of the original image is normalized.
  • the weighted average processing is performed on the normalized texture information and the normalized amplitude information to obtain an enhanced image.
  • The terminal introduced in the embodiment of the present invention may be used to implement some or all of the processes in the method embodiments introduced in conjunction with FIG. 1 and FIG. 2.
  • In addition, the terms "first" and "second" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. A feature defined with "first" or "second" may explicitly or implicitly include at least one such feature. Unless specifically defined otherwise, "a plurality" means at least two, for example two or three.
  • a "computer-readable medium” can be any apparatus that can contain, store, communicate, propagate, or transport a program for use in an instruction execution system, apparatus, or device, or in conjunction with the instruction execution system, apparatus, or device.
  • More specific examples (a non-exhaustive list) of computer-readable media include: an electrical connection (electronic device) having one or more wires, a portable computer disk cartridge (magnetic device), random access memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM or flash memory), a fiber-optic device, and a portable compact disc read-only memory (CD-ROM). The computer-readable medium may even be paper or another suitable medium on which the program can be printed, since the program can be obtained electronically, for example by optically scanning the paper or other medium and then editing, interpreting, or otherwise processing it in a suitable manner if necessary, and then stored in a computer memory.
  • portions of the invention may be implemented in hardware, software, firmware or a combination thereof.
  • multiple steps or methods may be implemented in software or firmware stored in a memory and executed by a suitable instruction execution system.
  • For example, if implemented in hardware, any one or a combination of the following techniques known in the art may be used: discrete logic circuits having logic gates for implementing logic functions on data signals, application-specific integrated circuits with suitable combinational logic gates, programmable gate arrays (PGA), field-programmable gate arrays (FPGA), and the like.
  • each functional unit in each embodiment of the present invention may be integrated into one processing module, or each unit may exist physically separately, or two or more units may be integrated into one module.
  • the above integrated modules can be implemented in the form of hardware or in the form of software functional modules.
  • the integrated modules, if implemented in the form of software functional modules and sold or used as stand-alone products, may also be stored in a computer readable storage medium.
  • the above mentioned storage medium may be a read only memory, a magnetic disk or an optical disk or the like.

Abstract

The embodiments of the present invention disclose an image processing method, apparatus, and terminal device. The image processing method includes: processing an original image to obtain an analytic signal of the original image, and obtaining the polar-coordinate form of the analytic signal based on the analytic signal; acquiring phase information of the original image based on the polar-coordinate form; obtaining texture information of the original image based on the phase information; performing histogram equalization on the original image to obtain amplitude information of the original image; and obtaining an enhanced image of the original image based on the texture information of the original image and the amplitude information of the original image. The embodiments of the present invention can improve the robustness of parameter selection in image enhancement algorithms and improve the image quality of the enhanced image.

Description

Image processing method, apparatus, and terminal device

Technical Field

The present invention relates to the field of computer application technologies, and in particular to an image processing method, apparatus, and terminal device.

Background

As a means of conveying and acquiring information, images can provide information more intuitively than text or sound. Image enhancement is a low-level stage of image processing whose purpose is to improve the quality of the image itself, improving its visual effect so that it is better suited to observation by the human eye or to analysis and recognition by machines, so that more useful information can be obtained from the image. Conventional image enhancement methods include the homomorphic filtering method based on the illumination-reflection model, in which the original image is expressed as the product of an illumination component and a reflection component; the illumination component of the image represents its low-frequency spectral part, and the reflection component represents its high-frequency spectral part. The homomorphic filtering method filters the original image with a low-pass filter to estimate the high-frequency spectral part of the image, or filters it with a high-pass filter to estimate the low-frequency spectral part, so as to enhance the local image. However, this homomorphic filtering method requires different illumination models to be configured for different types of images, so before image enhancement one must experiment repeatedly to obtain the parameters that separate the high and low frequencies of the image and then configure the illumination model; the robustness of parameter selection in the enhancement algorithm is therefore low. In addition, the above method does not fully consider the local characteristics of the image in the spatial domain: enhancing some pixels of the image causes other pixels to be over-enhanced, so the quality of the resulting enhanced image is poor.
Summary of the Invention

The embodiments of the present invention provide an image processing method, apparatus, and terminal device, which can improve the robustness of parameter selection in image enhancement algorithms and improve the image quality of the enhanced image.

A first aspect of the present invention provides an image processing method: a terminal device may process an original image to obtain an analytic signal of the original image and obtain the polar-coordinate form of the analytic signal based on the analytic signal, acquire phase information of the original image based on the polar-coordinate form of the analytic signal, obtain texture information of the original image based on the phase information, perform histogram equalization on the original image to obtain amplitude information of the original image, and obtain an enhanced image of the original image based on the texture information of the original image and the amplitude information of the original image.

In this technical solution, the terminal device may acquire the original image and compute the analytic signal of the acquired original image, and acquire the phase information of the original image through the polar-coordinate form of that analytic signal, which improves the efficiency of phase-information acquisition. In addition, the phase information of the original image contains the texture information of the original image; the terminal device may perform image enhancement on the original image based on its texture information and amplitude information to obtain an enhanced image, which weakens the influence of uneven brightness on the enhanced image and improves the image quality of the enhanced image. Furthermore, the terminal device obtains the texture information from the original image itself, so in acquiring the texture information there is no need to experiment repeatedly to obtain texture-acquisition parameters and build a texture-acquisition model; this technical solution improves the robustness of parameter selection in the image enhancement algorithm.
Optionally, the terminal device processes the original image to obtain its analytic signal and obtains the polar-coordinate form of the analytic signal specifically as follows: when the image type of the original image is a color image, the component image of the original image in each color space is acquired, and for each such component image, the component image is processed to obtain the analytic signal of that component image and the polar-coordinate form of that analytic signal is obtained.

Optionally, the terminal device obtains the enhanced image of the original image based on the texture information and the amplitude information of the original image specifically as follows: for the component image of the original image in each color space, an enhanced image of the component image is acquired based on the texture information and the amplitude information of the component image, and the enhanced images of the component images are synthesized to obtain an enhanced image of the original image.

Optionally, the terminal device processes the original image to obtain its analytic signal and the polar-coordinate form thereof specifically as follows: multiple pieces of quadrant information of the original image are acquired, an inverse Fourier transform is performed on each piece of quadrant information to obtain the analytic signal of that quadrant information, and the polar-coordinate form of each analytic signal is obtained based on the analytic signal of each piece of quadrant information.

Optionally, the terminal device acquires the phase information of the original image based on the polar-coordinate form specifically as follows: for each piece of quadrant information, the phase information of that quadrant information is acquired based on the polar-coordinate form of its analytic signal, and the phase information of the pieces of quadrant information is weighted and averaged to obtain the phase information of the original image.

Optionally, the terminal device obtains the texture information of the original image based on the phase information specifically as follows: the phase information of the original image is morphologically filtered to obtain a phase image, and the phase image is processed to obtain the texture information of the original image.

Optionally, the terminal device acquires the multiple pieces of quadrant information of the original image specifically as follows: the original image is taken as a time-domain signal, a Fourier transform is performed on the time-domain signal to obtain a Fourier-transformed frequency-domain signal, and the frequency-domain signal is filtered by preset filters to obtain the quadrant information corresponding to each preset filter.

Optionally, the terminal device processes the original image to obtain its analytic signal specifically as follows: the original image is taken as a time-domain signal, and a Hilbert transform is performed on the time-domain signal to obtain the analytic signal of the original image.

Optionally, the terminal device obtains the texture information of the original image based on the phase information specifically as follows: the polar-coordinate form of the texture information is obtained based on the phase information of the original image and the amplitude information of the analytic signal, the analytic signal of the texture information is obtained based on that polar-coordinate form, and the real part of that analytic signal is taken as the texture information.

Optionally, the terminal device obtains the enhanced image of the original image based on the texture information and the amplitude information of the original image specifically as follows: the texture information of the original image is normalized and the amplitude information of the original image is normalized, and the normalized texture information and the normalized amplitude information are weighted and averaged to obtain the enhanced image.
A second aspect of the present invention provides a computer storage medium storing a program which, when executed, includes all or part of the steps of the image processing method provided by the first aspect of the embodiments of the present invention.

A third aspect of the present invention provides an image processing apparatus comprising modules for executing the image processing method disclosed in the first aspect of the embodiments of the present invention.
A fourth aspect of the present invention provides a terminal device comprising a processor and a memory, where the memory stores a set of program codes and the processor calls the program codes stored in the memory to perform the following operations:

processing an original image to obtain an analytic signal of the original image, and obtaining the polar-coordinate form of the analytic signal based on the analytic signal;

acquiring phase information of the original image based on the polar-coordinate form;

obtaining texture information of the original image based on the phase information;

performing histogram equalization on the original image to obtain amplitude information of the original image;

acquiring an enhanced image of the original image based on the texture information of the original image and the amplitude information of the original image.

Optionally, the processor processes the original image to obtain its analytic signal and obtains the polar-coordinate form of the analytic signal specifically by:

when the image type of the original image is a color image, acquiring the component image of the original image in each color space;

for the component image of the original image in each color space, processing the component image to obtain the analytic signal of that component image, and obtaining the polar-coordinate form of that analytic signal.

Optionally, the processor acquires the enhanced image of the original image based on the texture information and the amplitude information of the original image specifically by:

for the component image of the original image in each color space, obtaining an enhanced image of the component image based on the texture information and the amplitude information of the component image;

performing image synthesis on the enhanced images of the component images to obtain an enhanced image of the original image.

Optionally, the processor processes the original image to obtain its analytic signal and obtains the polar-coordinate form of the analytic signal specifically by:

acquiring multiple pieces of quadrant information of the original image, and performing an inverse Fourier transform on each piece of quadrant information to obtain the analytic signal of that quadrant information;

obtaining the polar-coordinate form of each analytic signal based on the analytic signal of each piece of quadrant information.

Optionally, the processor acquires the phase information of the original image based on the polar-coordinate form specifically by:

for each piece of quadrant information, acquiring the phase information of that quadrant information based on the polar-coordinate form of its analytic signal;

performing weighted averaging on the phase information of the pieces of quadrant information to obtain the phase information of the original image.

Optionally, the processor obtains the texture information of the original image based on the phase information specifically by:

performing morphological filtering on the phase information of the original image to obtain a phase image;

processing the phase image to obtain the texture information of the original image.

Optionally, the processor acquires the multiple pieces of quadrant information of the original image specifically by:

taking the original image as a time-domain signal, and performing a Fourier transform on the time-domain signal to obtain a Fourier-transformed frequency-domain signal;

filtering the frequency-domain signal with preset filters to obtain the quadrant information corresponding to each preset filter.

Optionally, the processor processes the original image to obtain the analytic signal of the original image specifically by:

taking the original image as a time-domain signal, and performing a Hilbert transform on the time-domain signal to obtain the analytic signal of the original image.

Optionally, the processor obtains the texture information of the original image based on the phase information specifically by:

obtaining the polar-coordinate form of the texture information based on the phase information of the original image and the amplitude information of the analytic signal;

obtaining the analytic signal of the texture information based on the polar-coordinate form of the texture information;

taking the real part of that analytic signal as the texture information.

Optionally, the processor obtains the enhanced image of the original image based on the texture information and the amplitude information of the original image specifically by:

normalizing the texture information of the original image, and normalizing the amplitude information of the original image;

performing weighted averaging on the normalized texture information and the normalized amplitude information to obtain the enhanced image.
Brief Description of the Drawings

To describe the technical solutions in the embodiments of the present invention or in the prior art more clearly, the accompanying drawings required for the embodiments are briefly introduced below. Obviously, the drawings described below are only some embodiments of the present invention, and a person of ordinary skill in the art may derive other drawings from these drawings without creative effort.

FIG. 1 is a schematic flowchart of an image processing method according to an embodiment of the present invention;

FIG. 2 is a schematic flowchart of an image processing method according to another embodiment of the present invention;

FIG. 3 is a schematic structural diagram of an image processing apparatus according to an embodiment of the present invention;

FIG. 4 is a schematic structural diagram of a terminal device according to an embodiment of the present invention;

FIG. 5 is a schematic interface diagram of a cosine signal according to an embodiment of the present invention;

FIG. 6 is a schematic interface diagram of an enhanced image according to an embodiment of the present invention.
Detailed Description

The technical solutions in the embodiments of the present invention will be described below clearly and completely with reference to the accompanying drawings. Obviously, the described embodiments are only some rather than all of the embodiments of the present invention. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present invention without creative effort shall fall within the protection scope of the present invention.

The image processing method mentioned in the embodiments of the present invention may run on terminal devices such as personal computers, smartphones (e.g., Android phones or iOS phones), tablet computers, palmtop computers, mobile Internet devices (MID), or wearable smart devices, which is not limited by the embodiments of the present invention.
Referring to FIG. 1, FIG. 1 is a schematic flowchart of an image processing method according to an embodiment of the present invention. As shown in the figure, the image processing method in this embodiment of the present invention may include:

S101: Acquire multiple pieces of quadrant information of an original image, and perform an inverse Fourier transform on each piece of quadrant information to obtain the analytic signal of that quadrant information.

For example, the terminal device may define the original image that needs image processing as a two-dimensional signal and acquire the quadrant information of that original image in the four quadrants. The terminal device may perform an inverse Fourier transform on the quadrant information of the first quadrant to obtain the analytic signal of the first-quadrant information, on the quadrant information of the second quadrant to obtain the analytic signal of the second-quadrant information, on the quadrant information of the third quadrant to obtain the analytic signal of the third-quadrant information, and on the quadrant information of the fourth quadrant to obtain the analytic signal of the fourth-quadrant information. The original image that needs image processing may be an image captured by the terminal device's camera, an image obtained from the terminal device's memory, an image downloaded from the Internet, an image sent by another terminal device, and so on, which is not limited by the embodiments of the present invention.
Taking the schematic interface diagram of the cosine signal shown in FIG. 5 as an example, the terminal device may express the analytic signal of a one-dimensional real signal as follows:

φ(t) = f(t) + i·H{f(t)}

where f(t) is the one-dimensional real signal, φ(t) is the analytic signal of f(t), i is the imaginary unit, and H{f(t)} is the Hilbert transform of f(t).

For example, when f(t) = 10cos(4πt), H{f(t)} = 10sin(4πt), so φ(t) = f(t) + i·H{f(t)} = 10cos(4πt) + i·10sin(4πt) = 10e^(i4πt); that is, the polar-coordinate form of the analytic signal is φ(t) = 10e^(i4πt). Based on this polar-coordinate form, the terminal device can express the phase information of f(t) as:

θ(t) = Arg[φ(t)] = 4πt

where θ(t) is the phase information of f(t), and Arg[·] computes the argument (phase angle) of a complex number. As can be seen from the schematic interface diagram of the cosine signal shown in FIG. 5, analyzing the phase information of a signal is more intuitive and convenient, and the terminal device can analyze a signal by analyzing its phase information.
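The one-dimensional example above can be checked numerically with scipy.signal.hilbert, which returns the analytic signal f(t) + i·H{f(t)}. The sample count is an illustrative choice that makes the cosine span whole periods, so the FFT-based transform is essentially exact:

```python
import numpy as np
from scipy.signal import hilbert

t = np.arange(1024) / 1024.0          # two full periods of cos(4*pi*t)
f = 10 * np.cos(4 * np.pi * t)
phi = hilbert(f)                      # analytic signal f + i*H{f}

# Envelope |phi| = 10 and phase Arg[phi] = 4*pi*t, as in the example.
assert np.allclose(np.abs(phi), 10.0)
assert np.allclose(np.unwrap(np.angle(phi)), 4 * np.pi * t, atol=1e-8)
```

np.angle returns the phase wrapped to (-π, π]; np.unwrap restores the continuous linear ramp 4πt, which is what makes the phase representation convenient to analyze.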
可选的,当原始图像的图像类型为彩色图像时,终端设备可以获取原始图像在各个色彩空间的分量图像,针对原始图像在每一个色彩空间的分量图像,获取该分量图像的多个象限信息,并对多个象限信息中的每一个象限信息进行傅里叶逆变换,得到象限信息的解析信号。
具体实现中,当原始图像的图像类型为彩色图像时,终端设备可以将彩色图像分解成多个分量图像。示例性的,彩色图像可以分解为三个分量图像,例如三个分量图像可以分别包括在色彩空间为红色(Red,R)的分量图像、在色彩空间为绿色(Green,G)的分量图像以及在色彩空间为蓝色(Blue,B)的分量图像,又如三个分量图像可以分别包括在色彩空间为亮度(Y)的分量图像、在色彩空间为色度(U)的分量图像以及在色彩空间为浓度(V)的分量图像,又如三个分量图像可以分别包括在色彩空间为色调(Hue,H)的分量图像、在色彩空间为饱和度(Saturation,S)的分量图像以及在色彩空间为明度(Value,V)的分量图像,又如三个分量图像可以分别包括在色彩空间为亮度(Y)的分量图像、在色彩空间为蓝色的浓度偏移量(Cb)的分量图像以及在色彩空间为红色的浓度偏移量(Cr)的分量图像等等。当原始图像的图像类型为灰度图像时,终端设备可以将灰度图像作为一个分量图像。示例性的,灰度图像可以作为唯一分量图像,其中黑色的灰度值为0,白色的灰度值为255。
进一步的,当终端设备将原始图像分解为第一分量图像、第二分量图像以及第三分量图像时,终端设备可以获取第一分量图像的多个象限信息,并对第一分量图像的各个象限信息进行傅里叶逆变换,得到第一分量图像的各个象限信息的解析信号。同理,终端设备可以获取第二分量图像的多个象限信息,并对第二分量图像的各个象限信息进行傅里叶逆变换,得到第二分量图像的各个象限信息的解析信号。终端设备还可以获取第三分量图像的多个象限信息,并对第三分量图像的各个象限信息进行傅里叶逆变换,得到第三分量图像的各个象限信息的解析信号。
可选的,终端设备获取原始图像的多个象限信息的具体方式可以为:将原始图像作为时域信号,并对时域信号进行傅里叶变换,得到傅里叶变换后的频域信号,通过预置滤波器对频域信号进行滤波,得到预置滤波器对应的象限信息。
示例性的,终端设备可以把需要进行图像处理的原始图像定义为一个二维信号f(x,y),f(x,y)可以为时域信号,f(x,y)的傅里叶变换TF[f(x,y)]可以表示如下:
F(u,v)=TF[f(x,y)]=∫∫f(x,y)e^{-i(2πux+2πvy)}dxdy
其中,f(x,y)表示原始图像,F(u,v)表示对f(x,y)进行傅里叶变换得到的频域信号,u表示原始图像在x方向上的空间频率,v表示原始图像在y方向上的空间频率,i为虚数单位。
进一步的,终端设备可以通过第一预置滤波器对频域信号进行滤波,得到第一预置滤波器对应的象限信息,即原始图像的第一象限信息。终端设备还可以通过第二预置滤波器对频域信号进行滤波,得到第二预置滤波器对应的象限信息,即原始图像的第二象限信息。终端设备还可以通过第三预置滤波器对频域信号进行滤波,得到第三预置滤波器对应的象限信息,即原始图像的第三象限信息。终端设备还可以通过第四预置滤波器对频域信号进行滤波,得到第四预置滤波器对应的象限信息,即原始图像的第四象限信息。
示例性的,第一预置滤波器可以为(1+sign(u))(1+sign(v)),第二预置滤波器可以为(1-sign(u))(1+sign(v)),第三预置滤波器可以为(1-sign(u))(1-sign(v)),第四预置滤波器可以为(1+sign(u))(1-sign(v)),则终端设备可以确定第一象限信息为(1+sign(u))(1+sign(v))F(u,v),第二象限信息为(1-sign(u))(1+sign(v))F(u,v),第三象限信息为(1-sign(u))(1-sign(v))F(u,v),第四象限信息为(1+sign(u))(1-sign(v))F(u,v)。其中,sign(u)以及sign(v)可以为符号函数,其定义为:sign(n)=1(n>0);sign(n)=0(n=0);sign(n)=-1(n<0),n可以为u或者v。
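四个预置滤波器之和恒等于常数4((1+sign(u))(1+sign(v))+(1-sign(u))(1+sign(v))+(1-sign(u))(1-sign(v))+(1+sign(u))(1-sign(v))=4),因此四个象限解析信号的平均应恢复原始实图像。下面的假设性NumPy示例用随机矩阵代替原始图像验证这一性质(fft2、ifft2、fftfreq均为NumPy标准接口,图像尺寸64×64为任意选取):

```python
import numpy as np

rng = np.random.default_rng(0)
f = rng.random((64, 64))               # 用随机矩阵代替原始图像f(x,y)

F = np.fft.fft2(f)
u = np.fft.fftfreq(64)[:, None]        # x方向的空间频率
v = np.fft.fftfreq(64)[None, :]        # y方向的空间频率
su, sv = np.sign(u), np.sign(v)

masks = [(1 + su) * (1 + sv),          # 第一预置滤波器
         (1 - su) * (1 + sv),          # 第二预置滤波器
         (1 - su) * (1 - sv),          # 第三预置滤波器
         (1 + su) * (1 - sv)]          # 第四预置滤波器

AS = [np.fft.ifft2(m * F) for m in masks]   # 四个象限信息的解析信号

# 四个滤波器之和恒为4,故四个解析信号的平均恢复原始实图像
recon = sum(AS) / 4
```

该性质说明象限分解没有丢失原始图像的信息。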
示例性的,对于频域信号G(u,v)的傅里叶逆变换TF^{-1}[G(u,v)]可以表示如下:
g(x,y)=TF^{-1}[G(u,v)]=∫∫G(u,v)e^{i(2πux+2πvy)}dudv
其中,G(u,v)为频域信号,g(x,y)表示G(u,v)的傅里叶逆变换,u表示原始图像在x方向上的空间频率,v表示原始图像在y方向上的空间频率,i为虚数单位。
进一步的,终端设备可以对原始图像的各个象限信息进行傅里叶逆变换,得到该象限信息的解析信号。
示例性的,第一象限信息的解析信号可以表示如下:
AS1(x,y)=TF^{-1}[(1+sign(u))(1+sign(v))F(u,v)]
其中,AS1(x,y)表示第一象限信息的解析信号,(1+sign(u))(1+sign(v))表示第一预置滤波器,F(u,v)表示原始图像的傅里叶变换。
第二象限信息的解析信号可以表示如下:
AS2(x,y)=TF^{-1}[(1-sign(u))(1+sign(v))F(u,v)]
其中,AS2(x,y)表示第二象限信息的解析信号,(1-sign(u))(1+sign(v))表示第二预置滤波器,F(u,v)表示原始图像的傅里叶变换。
第三象限信息的解析信号可以表示如下:
AS3(x,y)=TF^{-1}[(1-sign(u))(1-sign(v))F(u,v)]
其中,AS3(x,y)表示第三象限信息的解析信号,(1-sign(u))(1-sign(v))表示第三预置滤波器,F(u,v)表示原始图像的傅里叶变换。
第四象限信息的解析信号可以表示如下:
AS4(x,y)=TF^{-1}[(1+sign(u))(1-sign(v))F(u,v)]
其中,AS4(x,y)表示第四象限信息的解析信号,(1+sign(u))(1-sign(v))表示第四预置滤波器,F(u,v)表示原始图像的傅里叶变换。
S102,基于各个象限信息的解析信号得到该解析信号的极坐标形式。
终端设备可以基于各个象限信息的解析信号得到该解析信号的极坐标形式。示例性的,第一象限信息的解析信号的极坐标形式可以表示如下:
AS1(x,y)=TF^{-1}[(1+sign(u))(1+sign(v))F(u,v)]=|AS1(x,y)|·e^{iφ1(x,y)}
其中,AS1(x,y)表示第一象限信息的解析信号,(1+sign(u))(1+sign(v))表示第一预置滤波器,F(u,v)表示原始图像的傅里叶变换,|AS1(x,y)|·e^{iφ1(x,y)}表示第一象限信息的极坐标形式,φ1(x,y)表示第一象限信息的相位信息。
第二象限信息的解析信号的极坐标形式可以表示如下:
AS2(x,y)=TF^{-1}[(1-sign(u))(1+sign(v))F(u,v)]=|AS2(x,y)|·e^{iφ2(x,y)}
其中,AS2(x,y)表示第二象限信息的解析信号,(1-sign(u))(1+sign(v))表示第二预置滤波器,F(u,v)表示原始图像的傅里叶变换,|AS2(x,y)|·e^{iφ2(x,y)}表示第二象限信息的极坐标形式,φ2(x,y)表示第二象限信息的相位信息。
第三象限信息的解析信号的极坐标形式可以表示如下:
AS3(x,y)=TF^{-1}[(1-sign(u))(1-sign(v))F(u,v)]=|AS3(x,y)|·e^{iφ3(x,y)}
其中,AS3(x,y)表示第三象限信息的解析信号,(1-sign(u))(1-sign(v))表示第三预置滤波器,F(u,v)表示原始图像的傅里叶变换,|AS3(x,y)|·e^{iφ3(x,y)}表示第三象限信息的极坐标形式,φ3(x,y)表示第三象限信息的相位信息。
第四象限信息的解析信号的极坐标形式可以表示如下:
AS4(x,y)=TF^{-1}[(1+sign(u))(1-sign(v))F(u,v)]=|AS4(x,y)|·e^{iφ4(x,y)}
其中,AS4(x,y)表示第四象限信息的解析信号,(1+sign(u))(1-sign(v))表示第四预置滤波器,F(u,v)表示原始图像的傅里叶变换,|AS4(x,y)|·e^{iφ4(x,y)}表示第四象限信息的极坐标形式,φ4(x,y)表示第四象限信息的相位信息。
S103,针对多个象限信息中的每一个象限信息,基于该象限信息的解析信号的极坐标形式,获取该象限信息的相位信息。
终端设备可以将各个象限信息的解析信号的极坐标形式的指数作为该象限信息的相位信息,例如终端设备可以将φ1(x,y)作为第一象限信息的相位信息,φ2(x,y)作为第二象限信息的相位信息,φ3(x,y)作为第三象限信息的相位信息,φ4(x,y)作为第四象限信息的相位信息。
S104,对各个象限信息的相位信息进行加权平均处理,得到原始图像的相位信息。
终端设备获取到各个象限信息的相位信息之后,可以将各个象限信息的相位信息进行加权平均处理,得到原始图像的相位信息。
示例性的,原始图像的相位信息可以表示如下:
φ(x,y)=(w1·φ1(x,y)+w2·φ2(x,y)+w3·φ3(x,y)+w4·φ4(x,y))/(w1+w2+w3+w4)
其中,φ(x,y)表示原始图像的相位信息,φ1(x,y)表示第一象限信息的相位信息,w1表示第一象限信息的相位信息的权重,φ2(x,y)表示第二象限信息的相位信息,w2表示第二象限信息的相位信息的权重,φ3(x,y)表示第三象限信息的相位信息,w3表示第三象限信息的相位信息的权重,φ4(x,y)表示第四象限信息的相位信息,w4表示第四象限信息的相位信息的权重。例如,当w1=w2=w3=w4时,
φ(x,y)=(φ1(x,y)+φ2(x,y)+φ3(x,y)+φ4(x,y))/4
S105,对原始图像的相位信息进行形态学滤波,得到相位图像。
S106,对相位图像进行处理,得到原始图像的纹理信息。
可选的,终端设备可以基于原始图像的相位信息以及解析信号的幅度信息,得到纹理信息的极坐标形式,基于纹理信息的极坐标形式得到纹理信息的解析信号,将解析信号的实数部分作为纹理信息。
举例来说,终端设备可以使用黑帽(black-hat)滤波器对原始图像的相位信息进行形态学滤波,得到原始图像的纹理信息,可获取原始图像中亮度变化较大的部分,并过滤原始图像中变化较小的部分,则终端设备可以将获取到的原始图像中亮度变化较大的部分作为原始图像的纹理信息。
示例性的,对于尺寸为256×256的原始图像,终端设备可以使用像素尺寸为5×5的圆形卷积核对该原始图像进行形态学滤波,得到形态学滤波结果φ(x,y),即原始图像的相位信息,当解析信号的幅度信息为恒定量1时,终端设备基于原始图像的相位信息以及解析信号的幅度信息得到的纹理信息的极坐标形式可以表示如下:
1·e^{iφ(x,y)}
其中,1表示解析信号的幅度信息,i为虚数单位,φ(x,y)表示原始图像的相位信息。
由于e^{iφ(x,y)}=cos(φ(x,y))+i·sin(φ(x,y)),则终端设备可以将实数部分cos(φ(x,y))作为原始图像的纹理信息,即原始图像的纹理信息可以表示如下:
f1(x,y)=cos(φ(x,y))
其中,f1(x,y)为原始图像的纹理信息,cos(φ(x,y))为纹理信息的极坐标形式对应复数中的实数部分。
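上述“black-hat形态学滤波+取实数部分”的纹理提取步骤可以草拟如下(假设性示例:用SciPy的灰度闭运算实现black-hat滤波,并以5×5方形结构元素近似文中的5×5圆形卷积核,相位图以随机数据代替;SciPy的可用性也是一个假设):

```python
import numpy as np
from scipy import ndimage  # 假设环境中已安装SciPy

rng = np.random.default_rng(1)
phase = rng.uniform(-np.pi, np.pi, size=(256, 256))  # 用随机数据代替相位信息φ(x,y)

# black-hat滤波:形态学闭运算减去原输入,保留亮度变化较大的局部细节
# (此处用5×5方形结构元素近似文中的5×5圆形卷积核)
black_hat = ndimage.grey_closing(phase, size=(5, 5)) - phase

# 纹理信息取1·e^{iφ}的实数部分:f1(x,y)=cos(φ(x,y))
f1 = np.cos(black_hat)
```

由于闭运算具有扩张性(闭运算结果逐点不小于输入),black-hat结果非负,因而f1取值落在[cos(black_hat的最大值), 1]内。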
S107,对原始图像进行直方图均衡化处理,得到原始图像的幅度信息。
示例性的,原始图像的幅度信息可以表示如下:
f2(x,y)=HistEq[f(x,y)]
其中,f2(x,y)为原始图像的幅度信息,HistEq[f(x,y)]为直方图均衡化处理函数,f(x,y)为原始图像。
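直方图均衡化函数HistEq[·]的一种常见实现可以草拟如下(假设性示例:假定输入为8位灰度图像,hist_eq为说明用途的假设函数名;做法是用累计分布函数CDF构造灰度查找表):

```python
import numpy as np

def hist_eq(img):
    """8位灰度图像的直方图均衡化:用累计分布函数(CDF)构造查找表。"""
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = np.cumsum(hist)
    cdf_min = cdf[np.nonzero(hist)[0][0]]      # 最暗的存在灰度级对应的CDF值
    lut = np.clip(np.round((cdf - cdf_min) / (cdf[-1] - cdf_min) * 255), 0, 255)
    return lut.astype(np.uint8)[img]

rng = np.random.default_rng(0)
f = rng.integers(80, 160, size=(64, 64), dtype=np.uint8)  # 低对比度示例图像
f2 = hist_eq(f)                                           # f2(x,y)=HistEq[f(x,y)]
```

均衡化后,存在的最暗灰度级被映射为0、最亮灰度级被映射为255,动态范围被拉满。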
S108,基于原始图像的纹理信息和原始图像的幅度信息,得到原始图像的增强图像。
可选的,终端设备可以对原始图像的纹理信息进行归一化处理,并对原始图像的幅度信息进行归一化处理,对归一化处理后的纹理信息和归一化处理后的幅度信息进行加权平均处理,得到原始图像的增强图像。
示例性的,增强图像可以表示如下:
fnew(x,y)=(a1·Norm[f1(x,y)]+a2·Norm[f2(x,y)])/(a1+a2)
其中,fnew(x,y)表示原始图像的增强图像,Norm[f1(x,y)]表示原始图像的纹理信息的归一化函数,
Norm[f1(x,y)]=(f1(x,y)-min(f1(x,y)))/(max(f1(x,y))-min(f1(x,y))),
a1表示原始图像的纹理信息的归一化函数的权重,Norm[f2(x,y)]表示原始图像的幅度信息的归一化函数,
Norm[f2(x,y)]=(f2(x,y)-min(f2(x,y)))/(max(f2(x,y))-min(f2(x,y))),
a2表示原始图像的幅度信息的归一化函数的权重。示例性的,当a1=a2=1时,
fnew(x,y)=(Norm[f1(x,y)]+Norm[f2(x,y)])/2
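S108中“归一化+加权平均”的融合步骤可以草拟如下(假设性示例:假定Norm[·]为最小-最大归一化,并按权重之和归一化的加权平均;f1、f2以随机数据代替,norm01为说明用途的假设函数名):

```python
import numpy as np

def norm01(x):
    """最小-最大归一化到[0, 1]区间(假设的Norm[·]实现)。"""
    x = np.asarray(x, dtype=np.float64)
    return (x - x.min()) / (x.max() - x.min())

rng = np.random.default_rng(0)
f1 = np.cos(rng.uniform(-np.pi, np.pi, (64, 64)))        # 代替纹理信息
f2 = rng.integers(0, 256, (64, 64)).astype(np.float64)   # 代替幅度信息

a1 = a2 = 1.0                                            # 对应文中a1=a2=1的情形
f_new = (a1 * norm01(f1) + a2 * norm01(f2)) / (a1 + a2)  # 增强图像
```

由于两路输入都先被归一化到[0,1],融合结果f_new也落在[0,1]内,便于后续显示或量化。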
可选的,终端设备可以针对原始图像在每一个色彩空间的分量图像,基于分量图像的纹理信息和分量图像的幅度信息,获取分量图像的增强图像,并对各个分量图像的增强图像进行图像合成,得到原始图像的增强图像。
例如,原始图像可以分解为第一分量图像、第二分量图像以及第三分量图像,则终端设备可以获取第一分量图像的多个象限信息,并对第一分量图像的多个象限信息中的每一个象限信息进行傅里叶逆变换,得到该象限信息的解析信号,基于各个象限信息的解析信号得到该解析信号的极坐标形式,将该解析信号的极坐标形式的指数作为该象限信息的相位信息,对第一分量图像的各个象限信息的相位信息进行加权平均处理,得到第一分量图像的相位信息,对第一分量图像的相位信息进行形态学滤波,得到第一分量图像的相位图像,对第一分量图像的相位图像进行处理得到第一分量图像的纹理信息,对第一分量图像进行直方图均衡化处理,得到第一分量图像的幅度信息,基于第一分量图像的纹理信息和第一分量图像的幅度信息对第一分量图像进行图像增强处理,得到第一分量图像的增强图像。同理,终端设备还可以通过上述方法获取第二分量图像的增强图像以及第三分量图像的增强图像。
进一步的,终端设备可以对第一分量图像的增强图像、第二分量图像的增强图像以及第三分量图像的增强图像进行图像合成,得到原始图像的增强图像。
以图6所示的增强图像的界面示意图为例,图6中上方区域显示的是四个视网膜眼底的原始图像,图6中下方区域显示的是各个原始图像对应的增强图像。传统的图像增强方法直接对原始图像的像素进行处理来实现图像增强,然而相同类型的图像区域由于像素值强度不同,限制了图像增强效果以及图像后处理。本发明实施例通过把原始图像放在二维解析信号的极坐标形式框架下,利用二维信号的极坐标形式得到相位信息,再融合灰度图像直方图均衡化后的幅度信息,实现图像重建,显著增强了彩色图像的纹理信息,从而改善图像视觉效果。
在图1所示的图像处理方法中,终端设备获取原始图像的多个象限信息, 并对多个象限信息中的每一个象限信息进行傅里叶逆变换,得到象限信息的解析信号,基于各个象限信息的解析信号得到该解析信号的极坐标形式,针对多个象限信息中的每一个象限信息,基于该象限信息的解析信号的极坐标形式获取该象限信息的相位信息,对各个象限信息的相位信息进行加权平均处理,得到原始图像的相位信息,对原始图像的相位信息进行形态学滤波,得到原始图像的纹理信息,对原始图像进行直方图均衡化处理,得到原始图像的幅度信息,基于原始图像的纹理信息和原始图像的幅度信息,得到原始图像的增强图像,可提高图像增强算法的参数选择方面的鲁棒性,并提高增强图像的图像质量。
请参见图2,图2为本发明另一实施例中提供的一种图像处理方法的流程示意图,如图所示本发明实施例中的图像处理方法可以包括:
S201,对原始图像进行希尔伯特变换,得到原始图像的各个象限的解析信号。
例如,终端设备可以把需要进行图像处理的原始图像定义为一个二维信号,分别获取原始图像在第一象限的解析信号,原始图像在第二象限的解析信号,原始图像在第三象限的解析信号,原始图像在第四象限的解析信号。
示例性的,原始图像在第一象限的解析信号可以表示如下:
AS1(x,y)=(f(x,y)-H{f(x,y)})+i(Hx{f(x,y)}+Hy{f(x,y)})
原始图像在第二象限的解析信号可以表示如下:
AS2(x,y)=(f(x,y)+H{f(x,y)})-i(Hx{f(x,y)}-Hy{f(x,y)})
原始图像在第三象限的解析信号可以表示如下:
AS3(x,y)=(f(x,y)-H{f(x,y)})-i(Hx{f(x,y)}+Hy{f(x,y)})
原始图像在第四象限的解析信号可以表示如下:
AS4(x,y)=(f(x,y)+H{f(x,y)})+i(Hx{f(x,y)}-Hy{f(x,y)})
其中,AS1(x,y)表示原始图像在第一象限的解析信号,f(x,y)表示原始图像,H{f(x,y)}表示f(x,y)的全希尔伯特变换,Hx{f(x,y)}表示f(x,y)在x方向上的部分希尔伯特变换,Hy{f(x,y)}表示f(x,y)在y方向上的部分希尔伯特变换。
H{f(x,y)}=f(x,y)**(1/(π²xy))
Hx{f(x,y)}=f(x,y)**(δ(y)/(πx))
Hy{f(x,y)}=f(x,y)**(δ(x)/(πy))
其中,δ(x)以及δ(y)可以为狄拉克函数,“**”表示二维卷积。
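在理想的频域实现下,本实施例与图1所示实施例构造的象限解析信号是一致的:Hx、Hy、H分别对应频率响应-i·sign(u)、-i·sign(v)、-sign(u)·sign(v),代入AS1=(f-H)+i(Hx+Hy)可得其频谱为(1+sign(u))(1+sign(v))F(u,v),正是第一预置滤波器的输出。下面的假设性NumPy示例对此做数值验证(随机矩阵代替图像,取奇数尺寸63×63以避开Nyquist频率):

```python
import numpy as np

rng = np.random.default_rng(0)
f = rng.random((63, 63))                    # 用随机矩阵代替原始图像
F = np.fft.fft2(f)
su = np.sign(np.fft.fftfreq(63))[:, None]   # sign(u)
sv = np.sign(np.fft.fftfreq(63))[None, :]   # sign(v)

# 频域实现的部分/全希尔伯特变换
Hx = np.fft.ifft2(-1j * su * F).real        # x方向部分希尔伯特变换
Hy = np.fft.ifft2(-1j * sv * F).real        # y方向部分希尔伯特变换
H = np.fft.ifft2(-su * sv * F).real         # 全希尔伯特变换

# S201中的第一象限解析信号
AS1 = (f - H) + 1j * (Hx + Hy)
# 图1所示实施例中用第一预置滤波器得到的解析信号
AS1_ref = np.fft.ifft2((1 + su) * (1 + sv) * F)
```

两者应逐点相等,说明两个实施例得到的象限解析信号一致。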
可选的,当原始图像的图像类型为彩色图像时,终端设备可以获取原始图像在各个色彩空间的分量图像,针对原始图像在每一个色彩空间的分量图像,终端设备可以对该分量图像进行希尔伯特变换,得到该分量图像的各个象限的解析信号。
例如,当终端设备将原始图像分解为第一分量图像、第二分量图像以及第三分量图像时,终端设备可以对第一分量图像进行希尔伯特变换,得到第一分量图像的各个象限的解析信号。同理,终端设备可以对第二分量图像进行希尔伯特变换,得到第二分量图像的各个象限的解析信号。终端设备还可以对第三分量图像进行希尔伯特变换,得到第三分量图像的各个象限的解析信号。
S202,基于各个象限的解析信号得到该解析信号的极坐标形式。
终端设备可以基于各个象限的解析信号得到该解析信号的极坐标形式。示例性的,第一象限的解析信号的极坐标形式可以表示如下:
AS1(x,y)=|AS1(x,y)|·e^{iφ1(x,y)}
第二象限的解析信号的极坐标形式可以表示如下:
AS2(x,y)=|AS2(x,y)|·e^{iφ2(x,y)}
第三象限的解析信号的极坐标形式可以表示如下:
AS3(x,y)=|AS3(x,y)|·e^{iφ3(x,y)}
第四象限的解析信号的极坐标形式可以表示如下:
AS4(x,y)=|AS4(x,y)|·e^{iφ4(x,y)}
其中,AS1(x,y)表示第一象限的解析信号,|AS1(x,y)|·e^{iφ1(x,y)}表示第一象限的解析信号的极坐标形式,φ1(x,y)表示第一象限的相位信息。AS2(x,y)表示第二象限的解析信号,|AS2(x,y)|·e^{iφ2(x,y)}表示第二象限的解析信号的极坐标形式,φ2(x,y)表示第二象限的相位信息。AS3(x,y)表示第三象限的解析信号,|AS3(x,y)|·e^{iφ3(x,y)}表示第三象限的解析信号的极坐标形式,φ3(x,y)表示第三象限的相位信息。AS4(x,y)表示第四象限的解析信号,|AS4(x,y)|·e^{iφ4(x,y)}表示第四象限的解析信号的极坐标形式,φ4(x,y)表示第四象限的相位信息。
S203,针对多个象限中的每一个象限,基于该象限的解析信号的极坐标形式获取该象限的相位信息。
终端设备可以将各个象限的解析信号的极坐标形式的指数作为该象限的相位信息,例如终端设备可以将φ1(x,y)作为第一象限的相位信息,φ2(x,y)作为第二象限的相位信息,φ3(x,y)作为第三象限的相位信息,φ4(x,y)作为第四象限的相位信息。
S204,对各个象限的相位信息进行加权平均处理,得到原始图像的相位信息。
终端设备获取到各个象限信息的相位信息之后,可以将各个象限信息的相位信息进行加权平均处理,得到原始图像的相位信息。
示例性的,原始图像的相位信息可以表示如下:
φ(x,y)=(w1·φ1(x,y)+w2·φ2(x,y)+w3·φ3(x,y)+w4·φ4(x,y))/(w1+w2+w3+w4)
其中,φ(x,y)表示原始图像的相位信息,φ1(x,y)表示第一象限信息的相位信息,w1表示第一象限信息的相位信息的权重,φ2(x,y)表示第二象限信息的相位信息,w2表示第二象限信息的相位信息的权重,φ3(x,y)表示第三象限信息的相位信息,w3表示第三象限信息的相位信息的权重,φ4(x,y)表示第四象限信息的相位信息,w4表示第四象限信息的相位信息的权重。例如,当w1=w2=w3=w4时,
φ(x,y)=(φ1(x,y)+φ2(x,y)+φ3(x,y)+φ4(x,y))/4
S205,对原始图像的相位信息进行形态学滤波,得到相位图像。
S206,对相位图像进行处理,得到原始图像的纹理信息。
可选的,终端设备可以基于原始图像的相位信息以及解析信号的幅度信息,得到纹理信息的极坐标形式,基于纹理信息的极坐标形式得到纹理信息的解析信号,并将解析信号的实数部分作为纹理信息。
举例来说,终端设备可以使用black-hat滤波器对原始图像的相位信息进行形态学滤波,得到原始图像的纹理信息,可获取原始图像中亮度变化较大的部分,并过滤原始图像中变化较小的部分,则终端设备可以将获取到的原始图像中亮度变化较大的部分作为原始图像的纹理信息。
示例性的,对于尺寸为256×256的原始图像,终端设备可以使用像素尺寸为5×5的圆形卷积核对该原始图像进行形态学滤波,得到形态学滤波结果φ(x,y),即原始图像的相位信息,当解析信号的幅度信息为恒定量1时,终端设备基于原始图像的相位信息以及解析信号的幅度信息得到的纹理信息的极坐标形式可以表示如下:
1·e^{iφ(x,y)}
其中,1表示解析信号的幅度信息,i为虚数单位,φ(x,y)表示原始图像的相位信息。
由于e^{iφ(x,y)}=cos(φ(x,y))+i·sin(φ(x,y)),则终端设备可以将实数部分cos(φ(x,y))作为原始图像的纹理信息,即原始图像的纹理信息可以表示如下:
f1(x,y)=cos(φ(x,y))
其中,f1(x,y)为原始图像的纹理信息,cos(φ(x,y))为纹理信息的极坐标形式对应复数中的实数部分。
S207,对原始图像进行直方图均衡化处理,得到原始图像的幅度信息。
示例性的,原始图像的幅度信息可以表示如下:
f2(x,y)=HistEq[f(x,y)]
其中,f2(x,y)为原始图像的幅度信息,HistEq[f(x,y)]为直方图均衡化处理函数,f(x,y)为原始图像。
S208,基于原始图像的纹理信息和原始图像的幅度信息,得到原始图像的增强图像。
可选的,终端设备可以对原始图像的纹理信息进行归一化处理,并对原始图像的幅度信息进行归一化处理,对归一化处理后的纹理信息和归一化处理后的幅度信息进行加权平均处理,得到原始图像的增强图像。
示例性的,增强图像可以表示如下:
fnew(x,y)=(a1·Norm[f1(x,y)]+a2·Norm[f2(x,y)])/(a1+a2)
其中,fnew(x,y)表示原始图像的增强图像,Norm[f1(x,y)]表示原始图像的纹理信息的归一化函数,
Norm[f1(x,y)]=(f1(x,y)-min(f1(x,y)))/(max(f1(x,y))-min(f1(x,y))),
a1表示原始图像的纹理信息的归一化函数的权重,Norm[f2(x,y)]表示原始图像的幅度信息的归一化函数,
Norm[f2(x,y)]=(f2(x,y)-min(f2(x,y)))/(max(f2(x,y))-min(f2(x,y))),
a2表示原始图像的幅度信息的归一化函数的权重。示例性的,当a1=a2=1时,
fnew(x,y)=(Norm[f1(x,y)]+Norm[f2(x,y)])/2
可选的,针对原始图像在每一个色彩空间的分量图像,终端设备可以基于该分量图像的纹理信息和该分量图像的幅度信息,获取该分量图像的增强图像,并对各个分量图像的增强图像进行图像合成,得到原始图像的增强图像。
例如,原始图像可以分解为第一分量图像、第二分量图像以及第三分量图像,则终端设备可以对第一分量图像进行希尔伯特变换,得到第一分量图像的各个象限的解析信号,基于各个象限的解析信号得到该象限的解析信号的极坐标形式,将该象限的解析信号的极坐标形式的指数作为该象限的相位信息,对第一分量图像的各个象限的相位信息进行加权平均处理,得到第一分量图像的相位信息,对第一分量图像的相位信息进行形态学滤波,得到第一分量图像的相位图像,对第一分量图像的相位图像进行处理得到第一分量图像的纹理信息,对第一分量图像进行直方图均衡化处理,得到第一分量图像的幅度信息,基于第一分量图像的纹理信息和第一分量图像的幅度信息,得到第一分量图像的增强图像。同理,终端设备还可以通过上述方法获取第二分量图像的增强图像以及第三分量图像的增强图像。
进一步的,终端设备可以对第一分量图像的增强图像、第二分量图像的增强图像以及第三分量图像的增强图像进行图像合成,得到原始图像的增强图像。
在图2所示的图像处理方法中,终端设备对原始图像进行希尔伯特变换,得到原始图像的各个象限的解析信号,基于各个象限的解析信号得到该象限的解析信号的极坐标形式,针对多个象限中的每一个象限,将该象限的解析信号 的极坐标形式的指数作为该象限的相位信息,对各个象限的相位信息进行加权平均处理,得到原始图像的相位信息,对原始图像的相位信息进行形态学滤波,得到原始图像的纹理信息,对原始图像进行直方图均衡化处理,得到原始图像的幅度信息,基于原始图像的纹理信息和原始图像的幅度信息,得到原始图像的增强图像,可提高图像增强算法的参数选择方面的鲁棒性,并提高增强图像的图像质量。
本发明实施例还提供了一种计算机存储介质,其中,所述计算机存储介质可存储有程序,该程序执行时包括上述图1、图2所示的方法实施例中的部分或全部步骤。
请参见图3,图3为本发明实施例中提供的一种图像处理装置的结构示意图,所述图像处理装置可以用于实施结合图1、图2所示的方法实施例中的部分或全部步骤,所述图像处理装置至少可以包括解析信号获取模块301、相位信息确定模块302、纹理信息获取模块303、幅度信息获取模块304以及图像增强模块305,其中:
解析信号获取模块301,用于对原始图像进行处理得到原始图像的解析信号,并基于解析信号得到解析信号的极坐标形式。
相位信息确定模块302,用于基于极坐标形式获取原始图像的相位信息。
纹理信息获取模块303,用于基于相位信息得到原始图像的纹理信息。
幅度信息获取模块304,用于对原始图像进行直方图均衡化处理,得到原始图像的幅度信息。
图像增强模块305,用于基于原始图像的纹理信息和原始图像的幅度信息,获取原始图像的增强图像。
可选的,解析信号获取模块301,具体用于:
当原始图像的图像类型为彩色图像时,获取原始图像在各个色彩空间的分量图像。
针对原始图像在每一个色彩空间的分量图像,对分量图像进行处理得到分量图像的解析信号,并基于解析信号得到解析信号的极坐标形式。
可选的,图像增强模块305,具体用于:
针对原始图像在每一个色彩空间的分量图像,基于分量图像的纹理信息和分量图像的幅度信息,得到分量图像的增强图像。
对各个分量图像的增强图像进行图像合成,得到原始图像的增强图像。
可选的,解析信号获取模块301,具体用于:
获取原始图像的多个象限信息,并对多个象限信息中的每一个象限信息进行傅里叶逆变换,得到象限信息的解析信号。
基于各个象限信息的解析信号得到解析信号的极坐标形式。
可选的,相位信息确定模块302,具体用于:
针对多个象限信息中的每一个象限信息,基于象限信息的解析信号的极坐标形式,获取象限信息的相位信息。
对各个象限信息的相位信息进行加权平均处理,得到原始图像的相位信息。
可选的,纹理信息获取模块303,具体用于:
对原始图像的相位信息进行形态学滤波,得到相位图像。
对相位图像进行处理,得到原始图像的纹理信息。
可选的,解析信号获取模块301获取原始图像的多个象限信息,具体用于:
将原始图像作为时域信号,并对时域信号进行傅里叶变换,得到傅里叶变换后的频域信号。
通过预置滤波器对频域信号进行滤波,得到预置滤波器对应的象限信息。
可选的,解析信号获取模块301对原始图像进行处理得到原始图像的解析信号,具体用于:
将原始图像作为时域信号,对时域信号进行希尔伯特变换,得到原始图像的解析信号。
可选的,纹理信息获取模块303,具体用于:
基于原始图像的相位信息以及解析信号的幅度信息,得到纹理信息的极坐标形式。
基于纹理信息的极坐标形式得到纹理信息的解析信号。
将解析信号的实数部分作为纹理信息。
可选的,图像增强模块305,具体用于:
对原始图像的纹理信息进行归一化处理,并对原始图像的幅度信息进行归一化处理。
对归一化处理后的纹理信息和归一化处理后的幅度信息进行加权平均处理,得到增强图像。
在图3所示的图像处理装置中,解析信号获取模块301对原始图像进行处理得到原始图像的解析信号,并基于解析信号得到解析信号的极坐标形式,相位信息确定模块302基于极坐标形式得到原始图像的相位信息,纹理信息获取模块303基于相位信息得到原始图像的纹理信息,幅度信息获取模块304对原始图像进行直方图均衡化处理,得到原始图像的幅度信息,图像增强模块305基于原始图像的纹理信息和原始图像的幅度信息,得到原始图像的增强图像,可提高图像增强算法的参数选择方面的鲁棒性,并提高增强图像的图像质量。
请参见图4,图4为本发明第一实施例提供的一种终端设备的结构示意图,本发明实施例提供的终端设备可以用于实施上述图1、图2所示的本发明各实施例实现的方法,为了便于说明,仅示出了与本发明实施例相关的部分,具体技术细节未揭示的,请参照图1、图2所示的本发明各实施例。
如图4所示,该终端设备包括:至少一个处理器401,例如CPU,至少一个输入装置403,至少一个输出装置404,存储器405,至少一个通信总线402。其中,通信总线402用于实现这些组件之间的连接通信。其中,输入装置403可选的可以包括摄像头,用于采集原始图像。输出装置404可选的可以包括显示屏幕,用于显示增强图像。其中,存储器405可能包含高速RAM存储器,也可能还包括非不稳定的存储器(non-volatile memory),例如至少一个磁盘存储器。存储器405可选的可以包含至少一个位于远离前述处理器401的存储装置。存储器405中存储一组程序代码,且处理器401调用存储器405中存储的程序代码,用于执行以下操作:
对原始图像进行处理得到原始图像的解析信号,并基于解析信号得到解析信号的极坐标形式。
基于极坐标形式获取原始图像的相位信息。
基于相位信息得到原始图像的纹理信息。
对原始图像进行直方图均衡化处理,得到原始图像的幅度信息。
基于原始图像的纹理信息和原始图像的幅度信息,得到原始图像的增强图像。
可选的,处理器401对原始图像进行处理得到原始图像的解析信号,并基于解析信号得到解析信号的极坐标形式,具体可以为:
当原始图像的图像类型为彩色图像时,获取原始图像在各个色彩空间的分量图像。
针对原始图像在每一个色彩空间的分量图像,对所述分量图像进行处理得到分量图像的解析信号,并基于解析信号得到解析信号的极坐标形式。
可选的,处理器401基于原始图像的纹理信息和原始图像的幅度信息,获取原始图像的增强图像,具体可以为:
针对原始图像在每一个色彩空间的分量图像,基于分量图像的纹理信息和分量图像的幅度信息,获取分量图像的增强图像。
对各个分量图像的增强图像进行图像合成,得到原始图像的增强图像。
可选的,处理器401对原始图像进行处理得到原始图像的解析信号,并基于解析信号得到解析信号的极坐标形式,具体可以为:
获取原始图像的多个象限信息,并对多个象限信息中的每一个象限信息进行傅里叶逆变换,得到象限信息的解析信号。
基于各个象限信息的解析信号得到解析信号的极坐标形式。
可选的,处理器401基于极坐标形式获取原始图像的相位信息,具体可以为:
针对多个象限信息中的每一个象限信息,基于象限信息的解析信号的极坐标形式,获取象限信息的相位信息。
对各个象限信息的相位信息进行加权平均处理,得到原始图像的相位信息。
可选的,处理器401基于相位信息得到原始图像的纹理信息,具体可以为:
对原始图像的相位信息进行形态学滤波,得到相位图像。
对相位图像进行处理,得到原始图像的纹理信息。
可选的,处理器401获取原始图像的多个象限信息,具体可以为:
将原始图像作为时域信号,并对时域信号进行傅里叶变换,得到傅里叶变换后的频域信号。
通过预置滤波器对频域信号进行滤波,得到预置滤波器对应的象限信息。
可选的,处理器401对原始图像进行处理得到原始图像的解析信号,具体可以为:
将原始图像作为时域信号,对时域信号进行希尔伯特变换,得到原始图像的解析信号。
可选的,处理器401基于相位信息得到原始图像的纹理信息,具体可以为:
基于原始图像的相位信息以及解析信号的幅度信息,得到纹理信息的极坐标形式。
基于纹理信息的极坐标形式得到纹理信息的解析信号。
将解析信号的实数部分作为纹理信息。
可选的,处理器401基于原始图像的纹理信息和原始图像的幅度信息,得到原始图像的增强图像,具体可以为:
对原始图像的纹理信息进行归一化处理,并对原始图像的幅度信息进行归一化处理。
对归一化处理后的纹理信息和归一化处理后的幅度信息进行加权平均处理,得到增强图像。
具体的,本发明实施例中介绍的终端可以用以实施本发明结合图1、图2介绍的方法实施例中的部分或全部流程。
在本说明书的描述中,参考术语“一个实施例”、“一些实施例”、“示例”、“具体示例”、或“一些示例”等的描述意指结合该实施例或示例描述的具体特征、结构、材料或者特点包含于本发明的至少一个实施例或示例中。在本说明书中,对上述术语的示意性表述不是必须针对相同的实施例或示例。而且,描述的具体特征、结构、材料或者特点可以在任何一个或多个实施例或示例中以合适的方式结合。此外,在不相互矛盾的情况下,本领域的技术人员可以将本说明书中描述的不同实施例或示例以及不同实施例或示例的特征进行结合和组合。
此外,术语“第一”、“第二”仅用于描述目的,而不能理解为指示或暗示相对重要性或者隐含指明所指示的技术特征的数量。由此,限定有“第一”、“第二”的特征可以明示或者隐含地包括至少一个该特征。在本发明的描述中,“多个”的含义是至少两个,例如两个,三个等,除非另有明确具体的限定。
流程图中或在此以其他方式描述的任何过程或方法描述可以被理解为,表示包括一个或多个用于实现特定逻辑功能或过程的步骤的可执行指令的代码的模块、片段或部分,并且本发明的优选实施方式的范围包括另外的实现,其中可以不按所示出或讨论的顺序,包括根据所涉及的功能按基本同时的方式或按相反的顺序,来执行功能,这应被本发明的实施例所属技术领域的技术人员所理解。
在流程图中表示或在此以其他方式描述的逻辑和/或步骤,例如,可以被认为是用于实现逻辑功能的可执行指令的程序列表,可以具体实现在任何计算机可读介质中,以供指令执行系统、装置或设备(如基于计算机的系统、包括处理器的系统或其他可以从指令执行系统、装置或设备取指令并执行指令的系统)使用,或结合这些指令执行系统、装置或设备而使用。就本说明书而言,"计算机可读介质"可以是任何可以包含、存储、通信、传播或传输程序以供指令执行系统、装置或设备或结合这些指令执行系统、装置或设备而使用的装置。计算机可读介质的更具体的示例(非穷尽性列表)包括以下:具有一个或多个布线的电连接部(电子装置),便携式计算机盘盒(磁装置),随机存取存储器(RAM),只读存储器(ROM),可擦除可编辑只读存储器(EPROM或闪速存储器),光纤装置,以及便携式光盘只读存储器(CDROM)。另外,计算机可读介质甚至可以是可在其上打印所述程序的纸或其他合适的介质,因为可以例如通过对纸或其他介质进行光学扫描,接着进行编辑、解译或必要时以其他合适方式进行处理来以电子方式获得所述程序,然后将其存储在计算机存储器中。
应当理解,本发明的各部分可以用硬件、软件、固件或它们的组合来实现。在上述实施方式中,多个步骤或方法可以用存储在存储器中且由合适的指令执行系统执行的软件或固件来实现。例如,如果用硬件来实现,和在另一实施方式中一样,可用本领域公知的下列技术中的任一项或它们的组合来实现:具有用于对数据信号实现逻辑功能的逻辑门电路的离散逻辑电路,具有合适的组合逻辑门电路的专用集成电路,可编程门阵列(PGA),现场可编程门阵列(FPGA)等。
本技术领域的普通技术人员可以理解实现上述实施例方法携带的全部或部分步骤是可以通过程序来指令相关的硬件完成,所述的程序可以存储于一种计算机可读存储介质中,该程序在执行时,包括方法实施例的步骤之一或其组合。
此外,在本发明各个实施例中的各功能单元可以集成在一个处理模块中,也可以是各个单元单独物理存在,也可以两个或两个以上单元集成在一个模块中。上述集成的模块既可以采用硬件的形式实现,也可以采用软件功能模块的形式实现。所述集成的模块如果以软件功能模块的形式实现并作为独立的产品销售或使用时,也可以存储在一个计算机可读取存储介质中。
上述提到的存储介质可以是只读存储器,磁盘或光盘等。尽管上面已经示出和描述了本发明的实施例,可以理解的是,上述实施例是示例性的,不能理解为对本发明的限制,本领域的普通技术人员在本发明的范围内可以对上述实施例进行变化、修改、替换和变型。

Claims (21)

  1. 一种图像处理方法,其特征在于,所述方法包括:
    对原始图像进行处理得到所述原始图像的解析信号,并基于所述解析信号得到所述解析信号的极坐标形式;
    基于所述极坐标形式获取所述原始图像的相位信息;
    基于所述相位信息得到所述原始图像的纹理信息;
    对所述原始图像进行直方图均衡化处理,得到所述原始图像的幅度信息;
    基于所述原始图像的纹理信息和所述原始图像的幅度信息,获取所述原始图像的增强图像。
  2. 如权利要求1所述的方法,其特征在于,所述对原始图像进行处理得到所述原始图像的解析信号,并基于所述解析信号得到所述解析信号的极坐标形式,包括:
    当所述原始图像的图像类型为彩色图像时,获取所述原始图像在各个色彩空间的分量图像;
    针对所述原始图像在每一个色彩空间的分量图像,对所述分量图像进行处理得到所述分量图像的解析信号,并基于所述解析信号得到所述解析信号的极坐标形式。
  3. 如权利要求2所述的方法,其特征在于,所述基于所述原始图像的纹理信息和所述原始图像的幅度信息,获取所述原始图像的增强图像,包括:
    针对所述原始图像在每一个色彩空间的分量图像,基于所述分量图像的纹理信息和所述分量图像的幅度信息,获取所述分量图像的增强图像;
    对各个所述分量图像的增强图像进行图像合成,得到所述原始图像的增强图像。
  4. 如权利要求1所述的方法,其特征在于,所述对原始图像进行处理得到所述原始图像的解析信号,并基于所述解析信号得到所述解析信号的极坐标形式,包括:
    获取所述原始图像的多个象限信息,并对所述多个象限信息中的每一个象限信息进行傅里叶逆变换,得到所述象限信息的解析信号;
    基于各个所述象限信息的解析信号得到所述解析信号的极坐标形式。
  5. 如权利要求4所述的方法,其特征在于,所述基于所述极坐标形式获取所述原始图像的相位信息,包括:
    针对所述多个象限信息中的每一个象限信息,基于所述象限信息的解析信号的极坐标形式,获取所述象限信息的相位信息;
    对各个所述象限信息的相位信息进行加权平均处理,得到所述原始图像的相位信息。
  6. 如权利要求5所述的方法,其特征在于,所述基于所述相位信息得到所述原始图像的纹理信息,包括:
    对所述原始图像的相位信息进行形态学滤波,得到相位图像;
    对所述相位图像进行处理,得到所述原始图像的纹理信息。
  7. 如权利要求4所述的方法,其特征在于,所述获取所述原始图像的多个象限信息,包括:
    将所述原始图像作为时域信号,并对所述时域信号进行傅里叶变换,得到傅里叶变换后的频域信号;
    通过预置滤波器对所述频域信号进行滤波,得到所述预置滤波器对应的象限信息。
  8. 如权利要求1所述的方法,其特征在于,所述对原始图像进行处理得到所述原始图像的解析信号,包括:
    将所述原始图像作为时域信号,对所述时域信号进行希尔伯特变换,得到所述原始图像的解析信号。
  9. 如权利要求1所述的方法,其特征在于,所述基于所述相位信息得到所述原始图像的纹理信息,包括:
    基于所述原始图像的相位信息以及所述解析信号的幅度信息,得到所述纹理信息的极坐标形式;
    基于所述纹理信息的极坐标形式得到所述纹理信息的解析信号;
    将所述解析信号的实数部分作为所述纹理信息。
  10. 如权利要求1所述的方法,其特征在于,所述基于所述原始图像的纹理信息和所述原始图像的幅度信息,得到所述原始图像的增强图像,包括:
    对所述原始图像的纹理信息进行归一化处理,并对所述原始图像的幅度信息进行归一化处理;
    对归一化处理后的纹理信息和归一化处理后的幅度信息进行加权平均处理,得到所述增强图像。
  11. 一种图像处理装置,其特征在于,所述装置包括:
    解析信号获取模块,用于对原始图像进行处理得到所述原始图像的解析信号,并基于所述解析信号得到所述解析信号的极坐标形式;
    相位信息确定模块,用于基于所述极坐标形式获取所述原始图像的相位信息;
    纹理信息获取模块,用于基于所述相位信息得到所述原始图像的纹理信息;
    幅度信息获取模块,用于对所述原始图像进行直方图均衡化处理,得到所述原始图像的幅度信息;
    图像增强模块,用于基于所述原始图像的纹理信息和所述原始图像的幅度信息,获取所述原始图像的增强图像。
  12. 如权利要求11所述的装置,其特征在于,所述解析信号获取模块,具体用于:
    当所述原始图像的图像类型为彩色图像时,获取所述原始图像在各个色彩空间的分量图像;
    针对所述原始图像在每一个色彩空间的分量图像,对所述分量图像进行处理得到所述分量图像的解析信号,并基于所述解析信号得到所述解析信号的极坐标形式。
  13. 如权利要求12所述的装置,其特征在于,所述图像增强模块,具体用于:
    针对所述原始图像在每一个色彩空间的分量图像,基于所述分量图像的纹理信息和所述分量图像的幅度信息,获取所述分量图像的增强图像;
    对各个所述分量图像的增强图像进行图像合成,得到所述原始图像的增强图像。
  14. 如权利要求11所述的装置,其特征在于,所述解析信号获取模块,具体用于:
    获取所述原始图像的多个象限信息,并对所述多个象限信息中的每一个象限信息进行傅里叶逆变换,得到所述象限信息的解析信号;
    基于各个所述象限信息的解析信号得到所述解析信号的极坐标形式。
  15. 如权利要求14所述的装置,其特征在于,所述相位信息确定模块,具体用于:
    针对所述多个象限信息中的每一个象限信息,基于所述象限信息的解析信号的极坐标形式,获取所述象限信息的相位信息;
    对各个所述象限信息的相位信息进行加权平均处理,得到所述原始图像的相位信息。
  16. 如权利要求15所述的装置,其特征在于,所述纹理信息获取模块,具体用于:
    对所述原始图像的相位信息进行形态学滤波,得到相位图像;
    对所述相位图像进行处理,得到所述原始图像的纹理信息。
  17. 如权利要求14所述的装置,其特征在于,所述解析信号获取模块获取所述原始图像的多个象限信息,具体用于:
    将所述原始图像作为时域信号,并对所述时域信号进行傅里叶变换,得到傅里叶变换后的频域信号;
    通过预置滤波器对所述频域信号进行滤波,得到所述预置滤波器对应的象限信息。
  18. 如权利要求11所述的装置,其特征在于,所述解析信号获取模块对原始图像进行处理得到所述原始图像的解析信号,具体用于:
    将所述原始图像作为时域信号,对所述时域信号进行希尔伯特变换,得到所述原始图像的解析信号。
  19. 如权利要求11所述的装置,其特征在于,所述纹理信息获取模块,具体用于:
    基于所述原始图像的相位信息以及所述解析信号的幅度信息,得到所述纹理信息的极坐标形式;
    基于所述纹理信息的极坐标形式得到所述纹理信息的解析信号;
    将所述解析信号的实数部分作为所述纹理信息。
  20. 如权利要求11所述的装置,其特征在于,所述图像增强模块,具体用于:
    对所述原始图像的纹理信息进行归一化处理,并对所述原始图像的幅度信息进行归一化处理;
    对归一化处理后的纹理信息和归一化处理后的幅度信息进行加权平均处理,得到所述增强图像。
  21. 一种终端设备,其特征在于,包括处理器以及存储器,所述存储器中存储一组程序代码,且所述处理器调用所述存储器中存储的程序代码,用于执行以下操作:
    对原始图像进行处理得到所述原始图像的解析信号,并基于所述解析信号得到所述解析信号的极坐标形式;
    基于所述极坐标形式获取所述原始图像的相位信息;
    基于所述相位信息得到所述原始图像的纹理信息;
    对所述原始图像进行直方图均衡化处理,得到所述原始图像的幅度信息;
    基于所述原始图像的纹理信息和所述原始图像的幅度信息,得到所述原始图像的增强图像。
PCT/CN2016/111922 2016-12-24 2016-12-24 一种图像处理方法、装置以及终端设备 WO2018112979A1 (zh)

优先权申请:PCT/CN2016/111922(申请日 2016-12-24);CN201680091872.0A(对应公开号 CN110140150B)。
公开文献:WO2018112979A1,公开日 2018-06-28;同族ID:62624299。


Also Published As

Publication number Publication date
CN110140150B (zh) 2021-10-26
CN110140150A (zh) 2019-08-16


法律事件:
121:EPO已被WIPO通知本申请指定EP(文献号 16924451,文献类别 A1)。
NENP:未进入国家阶段(国家代码 DE)。
122:PCT申请未进入欧洲阶段(文献号 16924451,文献类别 A1)。