WO2016045242A1 - Image enlargement method, image enlargement device and display device - Google Patents

Image enlargement method, image enlargement device and display device

Info

Publication number
WO2016045242A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
pixel
interpolation algorithm
interpolation
interpolated
Prior art date
Application number
PCT/CN2015/070053
Other languages
English (en)
French (fr)
Inventor
张丽杰 (Zhang Lijie)
Original Assignee
BOE Technology Group Co., Ltd. (京东方科技集团股份有限公司)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by BOE Technology Group Co., Ltd. (京东方科技集团股份有限公司)
Priority to EP15794806.8A (EP3200147B1)
Priority to US14/771,340 (US9824424B2)
Publication of WO2016045242A1

Classifications

    • G06T 3/40: Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T 3/4007: Scaling of whole images or parts thereof based on interpolation, e.g. bilinear interpolation
    • G06T 3/4053: Scaling based on super-resolution, i.e. the output image resolution being higher than the sensor resolution
    • G06T 5/50: Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • H04N 1/3876: Recombination of partial images to recreate the original image
    • H04N 1/6008: Colour corrections within particular colour systems with primary colour signals, e.g. RGB or CMY(K)
    • G06T 2207/10024: Color image
    • G06T 2207/20064: Wavelet transform [DWT]
    • G06T 2207/20221: Image fusion; image merging

Definitions

  • the present disclosure relates to an image enlargement method, an image enlargement device, and a display device.
  • the purpose of image enlargement is to increase the resolution of the enlarged image to meet visual requirements or practical application requirements.
  • Image enlargement has very important applications in the fields of high definition television and medical images.
  • an image includes a high frequency component and a low frequency component: the high frequency component is mainly distributed along the edge contours and detailed parts of the objects in the image, while the low frequency component is mainly distributed in the non-edge regions of those objects.
  • the interpolation algorithm is the most commonly used approach to image enlargement; nearest neighbor interpolation, bilinear interpolation, and cubic convolution interpolation are all widely used.
  • the nearest neighbor interpolation algorithm is the simplest, but it is also the most likely to produce discontinuous pixel values, resulting in blockiness and in turn image blurring; the image quality after enlargement is generally not ideal.
  • the bilinear interpolation algorithm is more complicated. It does not produce discontinuous pixel values, and the image quality after enlargement is higher; however, because bilinear interpolation has the character of a low-pass filter, the high-frequency components are damaged, so the edge contours and details of the objects in the image may be blurred to some extent.
  • the cubic convolution interpolation algorithm is complex but retains relatively clear edge contours and details; it can reduce or avoid aliasing of the edge contours and comb artifacts in the detailed parts of the enlarged image. The interpolation result is relatively faithful, so the image quality after enlargement is more complete.
  • in a technique known to the inventors, a single interpolation algorithm is used to enlarge the whole image. Because the gray values of the pixels in the low-frequency component of an image are small, a simple algorithm and a complex algorithm have a similar effect on the low-frequency component, whereas only a complex algorithm achieves a good image effect on the high-frequency component. If a complex algorithm is applied to both the high-frequency and low-frequency components, it adds extra computation; it is therefore impossible to balance the computation cost of the enlargement processing against the image quality after enlargement.
  • Embodiments of the present disclosure provide an image enlargement method, an image enlargement device, and a display device, which are capable of ensuring image quality after enlargement while reducing the amount of calculation.
  • an image enlargement method comprising the steps of:
  • the image magnifying device acquires a high frequency component and a low frequency component of the source image
  • the image amplifying device performs pixel interpolation on the low frequency component of the source image by using a first interpolation algorithm to obtain a low frequency sub image;
  • the image magnifying device performs pixel interpolation on a high frequency component of the source image by a second interpolation algorithm to obtain a high frequency sub-image
  • the image amplifying device fuses the low frequency sub-image and the high frequency sub-image to obtain a fusion result image
  • the first interpolation algorithm and the second interpolation algorithm use different algorithms.
  • the image amplifying device acquires a high frequency component and a low frequency component of the source image by a wavelet packet decomposition method.
  • the image amplifying device fuses the low frequency sub image and the high frequency sub image by wavelet packet inverse transform to obtain a fusion result image.
  • before the image amplifying device acquires the high frequency component and the low frequency component of the source image, the method further includes:
  • the image magnifying device performs RGB-YUV space conversion on the source image.
  • the method further includes:
  • the image magnifying device performs a YUV-RGB spatial inverse transform on the fused result image.
  • the first interpolation algorithm includes: a nearest neighbor interpolation algorithm, a bilinear interpolation algorithm, and a cubic convolution interpolation algorithm;
  • the second interpolation algorithm includes: a nearest neighbor interpolation algorithm and a cubic convolution interpolation algorithm.
  • the first interpolation algorithm is a bilinear interpolation algorithm
  • the image amplifying device performs pixel interpolation on the low frequency component of the source image by using a first interpolation algorithm
  • the step of acquiring the low frequency sub image includes:
  • the image magnifying device selects four pixel points adjacent to the pixel to be interpolated among the low frequency components of the source image
  • the image magnifying device acquires, according to the pixel gray level difference in the horizontal direction, the pixel gray level difference in the vertical direction, and the pixel gray level difference in the diagonal direction, the distance from the pixel to be interpolated to the four pixel points adjacent to it;
  • the image magnifying device sets a weighting factor of four pixel points adjacent to the pixel to be interpolated according to a distance from the pixel to be interpolated to four pixel points adjacent to the pixel to be interpolated;
  • the image amplifying device performs pixel interpolation on the pixel to be interpolated by a bilinear interpolation algorithm according to the weighting factors, and acquires the interpolated low frequency sub-image.
  • an image magnifying apparatus including:
  • An image decomposition unit configured to acquire a high frequency component and a low frequency component of the source image
  • An image interpolation unit configured to perform pixel interpolation on a low-frequency component of the source image acquired by the image decomposition unit by using a first interpolation algorithm to obtain a low-frequency sub-image
  • the image interpolation unit is further configured to perform pixel interpolation on a high-frequency component of the source image acquired by the image decomposition unit by using a second interpolation algorithm to obtain a high-frequency sub-image;
  • An image fusion unit configured to fuse the low-frequency sub-image obtained by the image interpolation unit and the high-frequency sub-image to obtain a fusion result image
  • the first interpolation algorithm and the second interpolation algorithm use different algorithms.
  • the image decomposition unit is configured to acquire high frequency components and low frequency components of the source image by a method of wavelet packet decomposition.
  • the image fusion unit is configured to fuse the low-frequency sub-image and the high-frequency sub-image acquired by the image interpolation unit by wavelet packet inverse transform to obtain a fusion result image.
  • the apparatus further includes: a transforming unit, configured to perform RGB-YUV spatial conversion on the source image.
  • a transforming unit configured to perform RGB-YUV spatial conversion on the source image.
  • the transforming unit is further configured to perform a YUV-RGB spatial inverse transform on the fused result image.
  • the first interpolation algorithm includes: a nearest neighbor interpolation algorithm, a bilinear interpolation algorithm, and a cubic convolution interpolation algorithm;
  • the second interpolation algorithm includes: a nearest neighbor interpolation algorithm and a cubic convolution interpolation algorithm.
  • the first interpolation algorithm is a bilinear interpolation algorithm
  • the image interpolation unit includes:
  • a sampling subunit configured to select four pixel points adjacent to the pixel to be interpolated among the low frequency components of the source image
  • a pixel gradation difference obtaining sub-unit configured to acquire, according to the positions and pixel gray values of the four pixel points adjacent to the pixel to be interpolated selected by the sampling subunit, the pixel gray level difference of the adjacent four pixel points in the horizontal direction, the pixel gray level difference in the vertical direction, and the pixel gray level difference in the diagonal direction;
  • a distance obtaining sub-unit configured to acquire, according to the pixel gray level differences in the horizontal, vertical, and diagonal directions acquired by the pixel gradation difference obtaining subunit, the distance from the pixel to be interpolated to the four pixel points adjacent to it;
  • a weighting factor setting sub-unit configured to set the weighting factors of the four pixel points adjacent to the pixel to be interpolated according to the distances, obtained by the distance acquiring sub-unit, from the pixel to be interpolated to those four pixel points;
  • an image interpolation sub-unit configured to perform pixel interpolation on the pixel to be interpolated by a bilinear interpolation algorithm according to the weighting factors set by the weighting factor setting sub-unit, and to obtain the interpolated low-frequency sub-image.
  • a display device comprising any of the image magnifying devices described above.
  • the image enlargement method, image enlargement device, and display device provided by the embodiments of the present disclosure first acquire the high frequency and low frequency components of the source image, then interpolate the low frequency component with a first interpolation algorithm to obtain a low frequency sub-image and the high frequency component with a second interpolation algorithm to obtain a high frequency sub-image, and finally fuse the two sub-images to obtain the fusion result image. Since the high-frequency and low-frequency components of the source image are interpolated by different interpolation algorithms, the amount of interpolation calculation can be reduced while the image quality after enlargement is guaranteed.
  • FIG. 1 is a schematic flowchart of an image enlargement method according to an embodiment of the present disclosure
  • FIG. 2 is a schematic flowchart of an image enlargement method according to another embodiment of the present disclosure.
  • FIG. 3 is a schematic flowchart of decomposing a source image of a YUV space by using wavelet packet decomposition according to an embodiment of the present disclosure
  • FIG. 4 is a schematic diagram of a principle for decomposing a source image of a YUV space by using wavelet packet decomposition according to an embodiment of the present disclosure
  • FIG. 5a is a schematic diagram of the principle of bilinear interpolation according to an embodiment of the present disclosure;
  • FIG. 5b is a schematic flowchart of a method for performing pixel interpolation on a low frequency component by a bilinear interpolation method according to an embodiment of the present disclosure;
  • FIG. 6 is a schematic flowchart of merging the low frequency sub-image and the high frequency sub-image by wavelet packet inverse transform according to an embodiment of the present disclosure
  • FIG. 7 is a schematic structural diagram of an image enlargement apparatus according to an embodiment of the present disclosure.
  • FIG. 8 is a schematic structural diagram of an image enlargement device according to another embodiment of the present disclosure.
  • an embodiment of the present disclosure provides an image enlargement method, including:
  • the image amplifying device acquires a high frequency component and a low frequency component of the source image
  • a high-frequency component and a low-frequency component of the source image may be acquired by using a wavelet packet decomposition method; optionally, step 101 is: the image amplifying device acquires a high-frequency component and a low-frequency component of the source image by using a wavelet packet decomposition method;
  • wavelet packet decomposition is an extension of wavelet decomposition.
  • Wavelet packet decomposition not only decomposes the low-frequency part of the image, but also further decomposes the high-frequency part of the image.
  • wavelet packet decomposition can also adaptively select the corresponding frequency bands to match the image spectrum according to the image signal characteristics and analysis requirements; it is a finer decomposition method than wavelet decomposition and has a more accurate analysis capability.
  • the image amplifying device performs pixel interpolation on the low frequency component by using a first interpolation algorithm to obtain a low frequency sub-image.
  • the first interpolation algorithm used in step 102 may include: a nearest neighbor interpolation algorithm, a bilinear interpolation algorithm, and a cubic convolution interpolation algorithm.
  • the image amplifying device performs pixel interpolation on the high frequency component by using a second interpolation algorithm to obtain a high frequency sub-image;
  • the second interpolation algorithm used in step 103 may include: a nearest neighbor interpolation algorithm and a cubic convolution interpolation algorithm.
  • the image amplifying device combines the low frequency sub-image and the high frequency sub-image to obtain a fusion result image.
  • the low frequency sub-image and the high frequency sub-image may be fused by wavelet packet inverse transform.
  • the first interpolation algorithm is different from the second interpolation algorithm.
  • the high frequency component and the low frequency component of the source image are first acquired; then the low frequency component is interpolated by the first interpolation algorithm to obtain the low frequency sub-image, and the high frequency component is interpolated by the second interpolation algorithm to obtain the high frequency sub-image; finally the two sub-images are fused to obtain the fusion result image.
  • the image enlargement method and apparatus provided by the embodiments of the present disclosure can ensure the image quality after amplification while reducing the amount of interpolation calculation.
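The decompose/interpolate/fuse pipeline described above can be sketched in a few lines of NumPy. This is a simplified illustration, not the patent's implementation: a single-level 2D Haar transform stands in for wavelet packet decomposition, and a nearest-neighbor upscale (`nn2x`) stands in for the first and second interpolation algorithms; all function names are illustrative.

```python
import numpy as np

def haar2d(img):
    """One-level 2D Haar analysis: split an image (even height/width)
    into a low-frequency band (LL) and high-frequency bands (LH, HL, HH)."""
    a = (img[0::2, :] + img[1::2, :]) / 2.0   # vertical average
    d = (img[0::2, :] - img[1::2, :]) / 2.0   # vertical difference
    LL = (a[:, 0::2] + a[:, 1::2]) / 2.0
    LH = (a[:, 0::2] - a[:, 1::2]) / 2.0
    HL = (d[:, 0::2] + d[:, 1::2]) / 2.0
    HH = (d[:, 0::2] - d[:, 1::2]) / 2.0
    return LL, LH, HL, HH

def ihaar2d(LL, LH, HL, HH):
    """Inverse of haar2d; fuses the four bands back into one image."""
    h, w = LL.shape
    a = np.empty((h, 2 * w)); d = np.empty((h, 2 * w))
    a[:, 0::2] = LL + LH; a[:, 1::2] = LL - LH
    d[:, 0::2] = HL + HH; d[:, 1::2] = HL - HH
    img = np.empty((2 * h, 2 * w))
    img[0::2, :] = a + d
    img[1::2, :] = a - d
    return img

def nn2x(band):
    """Nearest-neighbor 2x upscale of one band (placeholder for the
    first/second interpolation algorithms of the method)."""
    return np.kron(band, np.ones((2, 2)))

def enlarge2x(img, interp_low=nn2x, interp_high=nn2x):
    """Decompose, interpolate low and high bands separately, then fuse
    by the inverse transform; the result is twice the source size."""
    LL, LH, HL, HH = haar2d(img)
    return ihaar2d(interp_low(LL), interp_high(LH),
                   interp_high(HL), interp_high(HH))
```

Because `ihaar2d` exactly inverts `haar2d`, passing the bands through unchanged reproduces the source image, which is a convenient correctness check before swapping real bilinear and cubic interpolators into `interp_low` and `interp_high`.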
  • Another embodiment of the present disclosure provides an image enlargement method, as shown in FIG. 2, including:
  • the image amplifying device performs RGB-YUV space conversion on the source image
  • the RGB-YUV space conversion can convert the source image of the RGB color space to the source image of the YUV color space;
  • RGB is an industry color standard in which a variety of colors are obtained by varying and superposing the red (R), green (G), and blue (B) color channels;
  • YUV is the color coding method used by PAL (Phase Alternating Line), in which the luminance signal Y is separated from the chrominance signals U and V. Since human vision is more sensitive to the luminance signal than to the chrominance signals, the RGB-YUV space conversion keeps the luminance signal of the source image stable during the subsequent interpolation, thereby ensuring that the enlarged image has a good visual effect.
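As a concrete reference for the color-space steps, the RGB-YUV conversion and its inverse can be written as a pair of 3x3 matrix multiplications. The BT.601 analog matrix below is one common choice and an assumption on my part; the text does not specify which conversion matrix the method uses.

```python
import numpy as np

# BT.601 RGB -> YUV matrix (an assumed, commonly used variant).
RGB2YUV = np.array([[ 0.299,    0.587,    0.114  ],
                    [-0.14713, -0.28886,  0.436  ],
                    [ 0.615,   -0.51499, -0.10001]])
YUV2RGB = np.linalg.inv(RGB2YUV)  # exact inverse transform

def rgb_to_yuv(img):
    """img: float array of shape (H, W, 3) with RGB in the last axis."""
    return img @ RGB2YUV.T

def yuv_to_rgb(img):
    """YUV-RGB inverse conversion; recovers the RGB image."""
    return img @ YUV2RGB.T
```

Working in YUV lets the enlargement treat the perceptually dominant Y channel carefully while handling U and V more cheaply, which is the sensitivity argument made above.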
  • the image amplifying device acquires a high frequency component and a low frequency component of the source image by using a wavelet packet decomposition method
  • in step S202, the high-frequency component and the low-frequency component of the source image are acquired from the source image in the YUV color space.
  • step S202 includes the following steps:
  • the image magnifying device loads a source image of the YUV space
  • the image amplifying device performs single-scale one-dimensional discrete wavelet packet decomposition on the source image of the YUV space;
  • the image amplifying device decomposes the source image of the YUV space into a low frequency a1 and a high frequency d1 by the wavelet packet decomposition method, wherein the low frequency a1 is represented by the corresponding low frequency coefficients cA1 and the high frequency d1 by the corresponding high frequency coefficients cD1; cA1 and cD1 are wavelet coefficients generated during the wavelet packet decomposition process, and the information lost from the low frequency a1 is captured by the high frequency d1. In the next decomposition layer, a1 is decomposed into a low frequency a1a2 and a high frequency a1d2, generating the corresponding low frequency coefficients cA1A2 and high frequency coefficients cA1D2, and d1 is decomposed into a low frequency d1a2 and a high frequency d1d2, generating the corresponding low frequency coefficients cD1A2 and high frequency coefficients cD1D2; the decomposition proceeds in this way down to layer N.
  • the value of N can be adjusted according to the quality of the source image and the target image. For example, an energy-logarithm cost function M of the wavelet coefficients x_n can be used to characterize the correlation between the coefficients of the source image of the YUV space; the value of n that minimizes M gives the optimal basis for wavelet packet decomposition, that is, the current value of N.
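The energy-logarithm cost is not spelled out in this text; a common log-energy entropy used for wavelet-packet best-basis selection is M = Σ_n log(x_n²), which is smaller when the signal energy is concentrated in few coefficients. A hedged sketch under that assumption:

```python
import numpy as np

def log_energy_cost(coeffs, eps=1e-12):
    """Log-energy entropy M = sum_n log(x_n^2) of wavelet coefficients.
    eps guards log(0); the exact cost function of the method may differ."""
    c = np.asarray(coeffs, dtype=float).ravel()
    return float(np.sum(np.log(c * c + eps)))
```

During a best-basis search, a parent band is kept split only if its children's total cost is lower than the parent's own cost; the decomposition depth N that minimizes the total cost is selected.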
  • the image amplifying device reconstructs the low-frequency component by the low-frequency coefficient obtained by the single-scale one-dimensional discrete wavelet packet decomposition, and reconstructs the high-frequency component by the high-frequency coefficient obtained by the single-scale one-dimensional discrete wavelet packet decomposition;
  • the image amplifying means acquires the high frequency component and the low frequency component of the source image.
  • the image amplifying device performs pixel interpolation on the low frequency component by using the first interpolation algorithm to obtain a low frequency sub-image.
  • step S203 is:
  • the image amplifying device selects four pixel points A, B, C, D adjacent to the pixel to be interpolated among the low frequency components of the source image;
  • the point to be interpolated is a virtual point. Because the pixels in the image are arranged in rows and columns, any interpolation point has four adjacent pixels; thus in step 501 the four pixel points A, B, C, and D are the four neighbours of whichever pixel is currently being interpolated, among all the interpolated pixels needed to achieve the magnification ratio.
  • the image amplifying device acquires, according to the positions and pixel gray values of the four pixel points adjacent to the pixel to be interpolated, the pixel gray level difference of the adjacent four pixel points in the horizontal direction, the pixel gray level difference in the vertical direction, and the pixel gray level difference in the diagonal direction;
  • the image amplifying device acquires, according to the pixel gray level difference in the horizontal direction, the pixel gray level difference in the vertical direction, and the pixel gray level difference in the diagonal direction, the distance from the pixel to be interpolated to the four pixel points adjacent to it;
  • the image amplifying device sets, according to a distance from the pixel to be interpolated to four pixel points adjacent to the pixel to be interpolated, a weighting factor of four pixel points adjacent to the pixel to be interpolated;
  • in step 504, according to the distances between the pixel to be interpolated and the four pixel points adjacent to it, the image magnifying device acquires the horizontal displacement and the vertical displacement of the pixel to be interpolated, normalizes these displacements to obtain dy and dx respectively, and sets the weighting factors of the four adjacent pixel points according to dx and dy;
  • in step 502, dy denotes the pixel gray level difference in the horizontal direction, and dx denotes the pixel gray level difference in the vertical direction;
  • the pixel gray level difference between pixel point A and pixel point C gives the difference in the first diagonal direction, and the pixel gray level difference between pixel point B and pixel point D gives the difference in the second diagonal direction; together they give the total pixel gray level difference along the diagonals of the rectangle formed by the four pixels shown in FIG. 5a, from which the distance from the pixel to be interpolated in the diagonal direction of pixel B and pixel D is obtained;
  • f(m, n) is the gray value function of each pixel, where m is the horizontal coordinate and n is the vertical coordinate;
  • the magnitude of the pixel gray level difference indicates the degree of correlation in that direction: the smaller the difference, the stronger the correlation and the more likely the pixel to be interpolated lies along that direction; conversely, a large difference means the pixel to be interpolated deviates from that direction.
  • the image amplifying device performs pixel interpolation on the pixel to be interpolated by a bilinear interpolation algorithm according to the weighting factor to obtain a low frequency sub-pixel image.
  • in step 505, the direction used by the bilinear interpolation algorithm is first determined according to the absolute values D_Val of the differences between C(d_H), C(d_V), and C(d_D);
  • the weighting factors are then substituted as direction coefficients into the corresponding bilinear interpolation formula to obtain the gray level of the pixel to be interpolated.
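The final combination of the four neighbours with the weighting factors can be sketched with the standard weighted bilinear formula. The direction-adaptive adjustment of dx and dy via C(d_H), C(d_V), C(d_D) and D_Val is omitted here because its exact formulas are not reproduced in this text, so dx and dy are taken as given:

```python
def bilinear_sample(A, B, C, D, dx, dy):
    """Weighted bilinear interpolation from four neighbours:
    A top-left, B top-right, C bottom-left, D bottom-right;
    dx, dy in [0, 1] are the normalized horizontal/vertical
    displacements of the pixel to be interpolated."""
    wA = (1 - dx) * (1 - dy)   # the four weighting factors sum to 1
    wB = dx * (1 - dy)
    wC = (1 - dx) * dy
    wD = dx * dy
    return wA * A + wB * B + wC * C + wD * D
```

Each weighting factor shrinks as the interpolated point moves away from the corresponding neighbour, which is exactly the distance-based weighting described in steps 503 and 504.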
  • the image amplifying device performs pixel interpolation on the high frequency component by using a second interpolation algorithm to obtain a high frequency sub-image.
  • in step S204, when the first interpolation algorithm adopts the bilinear interpolation algorithm, the second interpolation algorithm needs to adopt an interpolation algorithm different from it, such as a cubic convolution interpolation algorithm. Performing pixel interpolation on the image with a cubic convolution interpolation algorithm is a technique known to the inventors and will not be described here.
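For reference, cubic convolution interpolation typically uses the Keys kernel with a = -0.5; that parameter choice and the function names below are illustrative, not taken from this text. The one-dimensional sketch interpolates from four surrounding samples, and the 2D version applies it separably along rows and columns:

```python
def keys_kernel(t, a=-0.5):
    """Cubic convolution kernel (Keys); a = -0.5 is the usual choice."""
    t = abs(t)
    if t < 1:
        return (a + 2) * t**3 - (a + 3) * t**2 + 1
    if t < 2:
        return a * t**3 - 5 * a * t**2 + 8 * a * t - 4 * a
    return 0.0

def cubic_interp_1d(p, x):
    """Interpolate at fractional position x (1 <= x < 2) from the four
    consecutive samples p[0..3]; p[1] and p[2] bracket the target."""
    return sum(p[i] * keys_kernel(x - i) for i in range(4))
```

Unlike the bilinear weights, this kernel takes small negative values on its outer lobes, which is what lets cubic convolution preserve edge sharpness in the high-frequency component.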
  • the image amplifying device fuses the low frequency sub-image and the high frequency sub-image by inverse wavelet transform to obtain a fusion result image.
  • step S205 includes:
  • the recovered low frequency component can be defined as a low frequency fusion sub-image; the restored high frequency component can be defined as a high frequency fusion sub-image; the process is the inverse process of FIG. 4 and is a technique known to the inventors and will not be described herein.
  • the image amplifying device performs inverse YUV-RGB spatial transformation on the fused result image.
  • the interpolation algorithm in S203 may be: a nearest neighbor interpolation algorithm, a bilinear interpolation algorithm, and a cubic convolution interpolation algorithm; the interpolation algorithm in S204 may be a nearest neighbor interpolation algorithm and a cubic convolution interpolation algorithm.
  • the high frequency component and the low frequency component of the source image are first acquired; then the low frequency component is interpolated by the first interpolation algorithm to obtain the low frequency sub-image, and the high frequency component is interpolated by the second interpolation algorithm to obtain the high frequency sub-image; finally the two sub-images are fused to obtain the fusion result image.
  • an image enlargement apparatus 700 including:
  • the image decomposition unit 710 is configured to acquire high frequency components and low frequency components of the source image
  • the image interpolation unit 720 is configured to perform pixel interpolation on the low-frequency component of the source image acquired by the image decomposition unit 710 by using a first interpolation algorithm to obtain a low-frequency sub-image;
  • the image interpolation unit 720 is further configured to perform pixel interpolation on the high-frequency component of the source image acquired by the image decomposition unit 710 by using a second interpolation algorithm to obtain a high-frequency sub-image;
  • the image fusion unit 730 is configured to fuse the low-frequency sub-image and the high-frequency sub-image acquired by the image interpolation unit 720 to obtain a fusion result image;
  • the first interpolation algorithm and the second interpolation algorithm use different algorithms.
  • the image amplifying device provided by the embodiment of the present disclosure first acquires the high frequency component and the low frequency component of the source image, then interpolates the low frequency component with the first interpolation algorithm to obtain a low frequency sub-image and the high frequency component with the second interpolation algorithm to obtain a high frequency sub-image, and finally fuses the two sub-images to obtain the fusion result image. Since the high-frequency and low-frequency components of the source image are interpolated by different interpolation algorithms, the amount of interpolation calculation can be reduced while the image quality after enlargement is guaranteed.
  • the image decomposition unit 710 is configured to acquire high frequency components and low frequency components of the source image by a method of wavelet packet decomposition.
  • the image fusion unit 730 is configured to fuse the low-frequency sub-image and the high-frequency sub-image acquired by the image interpolation unit by wavelet packet inverse transform to obtain a fusion result image.
  • the apparatus further includes: a transforming unit 740, configured to perform RGB-YUV spatial conversion on the source image.
  • a transforming unit 740 configured to perform RGB-YUV spatial conversion on the source image.
  • the RGB-YUV space conversion converts the source image from the RGB color space to the YUV color space, and the YUV-RGB inverse conversion converts it back from the YUV color space to the RGB color space.
  • since human vision is more sensitive to the luminance signal than to the chrominance signal, the luminance signal of the source image can be kept stable during the subsequent interpolation, ensuring that the enlarged image has a good visual effect; the YUV-RGB inverse conversion then allows the enlarged image to remain an RGB color space image.
  • the first interpolation algorithm includes: a nearest neighbor interpolation algorithm, a bilinear interpolation algorithm, and a cubic convolution interpolation algorithm;
  • the second interpolation algorithm includes: a nearest neighbor interpolation algorithm and a cubic convolution interpolation algorithm.
  • the first interpolation algorithm is a bilinear interpolation algorithm
  • the image interpolation unit 720 includes:
  • a sampling sub-unit 721, configured to select the four pixel points adjacent to the pixel to be interpolated among the low-frequency components of the source image;
  • a pixel gradation difference acquisition sub-unit 722, configured to acquire, according to the positions and pixel gray levels of the four pixel points adjacent to the pixel to be interpolated selected by the sampling sub-unit 721, the pixel gray difference of the four adjacent pixel points in the horizontal direction, in the vertical direction and in the diagonal direction;
  • a distance acquisition sub-unit 723, configured to acquire, according to the pixel gray differences in the horizontal, vertical and diagonal directions acquired by the pixel gradation difference acquisition sub-unit 722, the distances from the pixel to be interpolated to its four adjacent pixel points;
  • a weighting factor setting sub-unit 724, configured to set the weighting factors of the four adjacent pixel points according to the distances acquired by the distance acquisition sub-unit 723; and
  • an image interpolation sub-unit 725, configured to perform pixel interpolation on the pixel to be interpolated by a bilinear interpolation algorithm according to the weighting factors set by the weighting factor setting sub-unit 724, obtaining the interpolated low-frequency sub-pixel image.
  • the image amplifying device provided by the embodiment of the present disclosure first acquires the high-frequency components and low-frequency components of the source image, then interpolates the low-frequency components with a first interpolation algorithm to obtain a low-frequency sub-image and interpolates the high-frequency components with a second interpolation algorithm to obtain a high-frequency sub-image, and finally fuses the low-frequency sub-image and the high-frequency sub-image to obtain the fusion result image; since the high-frequency and low-frequency components of the source image are interpolated separately by different interpolation algorithms, the amount of interpolation computation can be reduced while the quality of the magnified image is ensured.
  • the display device provided by the embodiment of the present disclosure first acquires the high-frequency components and low-frequency components of the source image, then interpolates the low-frequency components with a first interpolation algorithm to obtain a low-frequency sub-image and interpolates the high-frequency components with a second interpolation algorithm to obtain a high-frequency sub-image, and finally fuses the low-frequency sub-image and the high-frequency sub-image to obtain the fusion result image; since the high-frequency and low-frequency components of the source image are interpolated separately by different interpolation algorithms, the amount of interpolation computation can be reduced while the quality of the magnified image is ensured.
  • the disclosed methods, apparatus, and devices may be implemented in other ways.
  • the device embodiments described above are merely illustrative; the division into units is only a division by logical function, and in actual implementation there may be other ways of division: for example, multiple units or components may be combined or integrated into another device, or some features may be ignored or not executed.


Abstract

Embodiments of the present disclosure provide an image magnification method, an image magnification device and a display apparatus, relating to the field of image processing. The method comprises: an image magnification device acquiring high-frequency components and low-frequency components of a source image; the image magnification device performing pixel interpolation on the low-frequency components of the source image with a first interpolation algorithm to obtain a low-frequency sub-image; the image magnification device performing pixel interpolation on the high-frequency components of the source image with a second interpolation algorithm to obtain a high-frequency sub-image; and the image magnification device fusing the low-frequency sub-image and the high-frequency sub-image to obtain a fusion result image; wherein the first interpolation algorithm and the second interpolation algorithm are different algorithms, so that the quality of the magnified image is ensured while the amount of computation is reduced. Embodiments of the present disclosure are used for image magnification.

Description

Image Magnification Method, Image Magnification Device and Display Apparatus
Technical Field
The present disclosure relates to an image magnification method, an image magnification device and a display apparatus.
Background Art
The purpose of image magnification is to increase the resolution of the magnified image so as to meet visual requirements or the requirements of practical applications. Image magnification has very important applications in fields such as high-definition television and medical imaging.
An image contains high-frequency components and low-frequency components; the high-frequency components are mainly distributed over the edge contours and detail parts of the subjects in the image, while the low-frequency components are mainly distributed over the non-edge-contour parts of the subjects.
At present, the same algorithm is usually used to magnify both the high-frequency and the low-frequency components of an image. Interpolation is the most commonly used class of image magnification algorithm, and nearest neighbor interpolation, bilinear interpolation and cubic convolution interpolation are widely adopted. Nearest neighbor interpolation is the simplest, but it is also the most prone to discontinuous pixel values, which cause blocking effects and hence blurred images, so the quality of the magnified image is generally unsatisfactory. Bilinear interpolation is more complex; it does not produce discontinuous pixel values and yields a magnified image of higher quality, but because bilinear interpolation has the character of a low-pass filter, the high-frequency components are attenuated, so the edge contours and details of the subjects in the image may be blurred to some extent. Cubic convolution interpolation is complex; it preserves relatively sharp edge contours and details, can reduce or avoid jagged edge contours and comb-like artifacts in the details of the magnified image, and gives a relatively faithful interpolation result, further improving the quality of the magnified image.
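The trade-offs among the three interpolation algorithms described above can be made concrete with a small one-dimensional sketch. This is an illustrative example, not code from the patent; the kernel parameter a = -0.5 used for cubic convolution is a conventional choice and an assumption here.

```python
# Illustrative sketch (not from the patent): the three classic 1-D
# interpolation kernels, evaluated at an offset t in [0, 1) between
# sample i and sample i+1.

def nearest(samples, i, t):
    # Nearest neighbor: pick the closer sample; cheapest, but piecewise
    # constant, which causes the blocking effect described above.
    return samples[i] if t < 0.5 else samples[i + 1]

def bilinear(samples, i, t):
    # Linear: continuous, but behaves like a low-pass filter that blurs edges.
    return (1 - t) * samples[i] + t * samples[i + 1]

def cubic_kernel(x, a=-0.5):
    # Cubic convolution kernel (a = -0.5 is a common conventional choice).
    x = abs(x)
    if x < 1:
        return (a + 2) * x**3 - (a + 3) * x**2 + 1
    if x < 2:
        return a * x**3 - 5 * a * x**2 + 8 * a * x - 4 * a
    return 0.0

def cubic(samples, i, t):
    # Cubic convolution: uses 4 samples and preserves edges better.
    return sum(samples[i + k] * cubic_kernel(t - k) for k in (-1, 0, 1, 2))

ramp = [0.0, 1.0, 2.0, 3.0]
print(nearest(ramp, 1, 0.25))   # 1.0
print(bilinear(ramp, 1, 0.25))  # 1.25
print(cubic(ramp, 1, 0.25))     # 1.25 (the kernel reproduces linear ramps)
```

On a linear ramp all three agree up to the nearest-neighbor rounding; their differences only show up around edges and fine detail, which is exactly where the high-frequency components live.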
In the technology known to the inventors, a single magnification algorithm is applied to the whole image. However, because the pixel gray values in the low-frequency components of an image vary little, a simple algorithm and a complex algorithm magnify the low-frequency components equally well, whereas a relatively complex algorithm applied to the high-frequency components is what achieves a good image effect. Since applying a complex algorithm to either the high-frequency or the low-frequency components adds extra computation, the computational load of the magnification and the quality of the magnified image cannot both be taken into account at the same time.
Summary
Embodiments of the present disclosure provide an image magnification method, an image magnification device and a display apparatus that can reduce the amount of computation while ensuring the quality of the magnified image.
In a first aspect, an image magnification method is provided, comprising the following steps:
an image magnification device acquiring high-frequency components and low-frequency components of a source image;
the image magnification device performing pixel interpolation on the low-frequency components of the source image with a first interpolation algorithm to obtain a low-frequency sub-image;
the image magnification device performing pixel interpolation on the high-frequency components of the source image with a second interpolation algorithm to obtain a high-frequency sub-image; and
the image magnification device fusing the low-frequency sub-image and the high-frequency sub-image to obtain a fusion result image;
wherein the first interpolation algorithm and the second interpolation algorithm are different algorithms.
Optionally, the image magnification device acquires the high-frequency components and low-frequency components of the source image by wavelet packet decomposition.
Optionally, the image magnification device fuses the low-frequency sub-image and the high-frequency sub-image by an inverse wavelet packet transform to obtain the fusion result image.
Optionally, before the image magnification device acquires the high-frequency components and low-frequency components of the source image, the method further comprises:
the image magnification device performing RGB-YUV space conversion on the source image.
Optionally, after the image magnification device fuses the low-frequency sub-image and the high-frequency sub-image to obtain the fusion result image, the method further comprises:
the image magnification device performing YUV-RGB inverse space conversion on the fusion result image.
Optionally, the first interpolation algorithm comprises: a nearest neighbor interpolation algorithm, a bilinear interpolation algorithm, or a cubic convolution interpolation algorithm;
and the second interpolation algorithm comprises: a nearest neighbor interpolation algorithm or a cubic convolution interpolation algorithm.
Optionally, the first interpolation algorithm is a bilinear interpolation algorithm, and the step of the image magnification device performing pixel interpolation on the low-frequency components of the source image with the first interpolation algorithm to obtain the low-frequency sub-image comprises:
the image magnification device selecting, among the low-frequency components of the source image, the four pixel points adjacent to the pixel to be interpolated;
the image magnification device acquiring, according to the positions and pixel gray levels of the four pixel points adjacent to the pixel to be interpolated, the pixel gray difference of the four adjacent pixel points in the horizontal direction, in the vertical direction and in the diagonal direction;
the image magnification device acquiring, according to the pixel gray differences in the horizontal, vertical and diagonal directions, the distances from the pixel to be interpolated to its four adjacent pixel points;
the image magnification device setting the weighting factors of the four adjacent pixel points according to the distances from the pixel to be interpolated to its four adjacent pixel points; and
the image magnification device performing pixel interpolation on the pixel to be interpolated by the bilinear interpolation algorithm according to the weighting factors, obtaining the interpolated low-frequency sub-pixel image.
In a second aspect, an image magnification device is provided, comprising:
an image decomposition unit, configured to acquire high-frequency components and low-frequency components of a source image;
an image interpolation unit, configured to perform pixel interpolation on the low-frequency components of the source image acquired by the image decomposition unit with a first interpolation algorithm, obtaining a low-frequency sub-image;
the image interpolation unit being further configured to perform pixel interpolation on the high-frequency components of the source image acquired by the image decomposition unit with a second interpolation algorithm, obtaining a high-frequency sub-image; and
an image fusion unit, configured to fuse the low-frequency sub-image and the high-frequency sub-image obtained by the image interpolation unit, obtaining a fusion result image;
wherein the first interpolation algorithm and the second interpolation algorithm are different algorithms.
Optionally, the image decomposition unit is configured to acquire the high-frequency components and low-frequency components of the source image by wavelet packet decomposition.
Optionally, the image fusion unit is configured to fuse the low-frequency sub-image and the high-frequency sub-image obtained by the image interpolation unit by an inverse wavelet packet transform, obtaining the fusion result image.
Optionally, the device further comprises: a transforming unit, configured to perform RGB-YUV space conversion on the source image.
Optionally, the transforming unit is further configured to perform YUV-RGB inverse space conversion on the fusion result image.
Optionally, the first interpolation algorithm comprises: a nearest neighbor interpolation algorithm, a bilinear interpolation algorithm, or a cubic convolution interpolation algorithm;
and the second interpolation algorithm comprises: a nearest neighbor interpolation algorithm or a cubic convolution interpolation algorithm.
Optionally, the first interpolation algorithm is a bilinear interpolation algorithm, and the image interpolation unit comprises:
a sampling sub-unit, configured to select, among the low-frequency components of the source image, the four pixel points adjacent to the pixel to be interpolated;
a pixel gray difference acquisition sub-unit, configured to acquire, according to the positions and pixel gray levels of the four pixel points adjacent to the pixel to be interpolated selected by the sampling sub-unit, the pixel gray difference of the four adjacent pixel points in the horizontal direction, in the vertical direction and in the diagonal direction;
a distance acquisition sub-unit, configured to acquire, according to the pixel gray differences in the horizontal, vertical and diagonal directions acquired by the pixel gray difference acquisition sub-unit, the distances from the pixel to be interpolated to its four adjacent pixel points;
a weighting factor setting sub-unit, configured to set the weighting factors of the four adjacent pixel points according to the distances acquired by the distance acquisition sub-unit; and
an image interpolation sub-unit, configured to perform pixel interpolation on the pixel to be interpolated by the bilinear interpolation algorithm according to the weighting factors set by the weighting factor setting sub-unit, obtaining the interpolated low-frequency sub-pixel image.
In a third aspect, a display apparatus is provided, comprising any one of the image magnification devices described above.
With the image magnification method, image magnification device and display apparatus provided by the embodiments of the present disclosure, the high-frequency components and low-frequency components of the source image are first acquired; the low-frequency components are then interpolated with a first interpolation algorithm to obtain a low-frequency sub-image, and the high-frequency components are interpolated with a second interpolation algorithm to obtain a high-frequency sub-image; finally the low-frequency sub-image and the high-frequency sub-image are fused to obtain a fusion result image. Since the high-frequency and low-frequency components of the source image are interpolated separately by different interpolation algorithms, the amount of interpolation computation can be reduced while the quality of the magnified image is ensured.
Brief Description of the Drawings
Fig. 1 is a schematic flowchart of an image magnification method provided by an embodiment of the present disclosure;
Fig. 2 is a schematic flowchart of an image magnification method provided by another embodiment of the present disclosure;
Fig. 3 is a schematic flowchart of decomposing a source image in YUV space by wavelet packet decomposition, provided by an embodiment of the present disclosure;
Fig. 4 is a schematic diagram of the principle of decomposing a source image in YUV space by wavelet packet decomposition, provided by an embodiment of the present disclosure;
Fig. 5a is a schematic diagram of the principle of the bilinear interpolation method, provided by an embodiment of the present disclosure;
Fig. 5b is a schematic flowchart of a method of performing pixel interpolation on the low-frequency components based on the bilinear interpolation method, provided by an embodiment of the present disclosure;
Fig. 6 is a schematic flowchart of fusing the low-frequency sub-image and the high-frequency sub-image by an inverse wavelet packet transform, provided by an embodiment of the present disclosure;
Fig. 7 is a schematic structural diagram of an image magnification device provided by an embodiment of the present disclosure;
Fig. 8 is a schematic structural diagram of an image magnification device provided by another embodiment of the present disclosure.
Detailed Description
The image magnification method and device provided by the embodiments of the present disclosure are described in detail below with reference to the accompanying drawings, in which the same reference numerals denote the same elements. In the following description, numerous details are given for ease of explanation, in order to provide a thorough understanding of one or more embodiments. It is evident, however, that the embodiments can also be implemented without these details. In other instances, well-known structures and devices are shown in block diagram form to facilitate the description of one or more embodiments.
Referring to Fig. 1, an embodiment of the present disclosure provides an image magnification method, comprising:
101. An image magnification device acquires high-frequency components and low-frequency components of a source image.
Optionally, in step 101, wavelet packet decomposition can be used to acquire the high-frequency and low-frequency components of the source image; optional step 101 is: the image magnification device acquires the high-frequency components and low-frequency components of the source image by wavelet packet decomposition.
Wavelet packet decomposition is an extension of wavelet decomposition: it further decomposes not only the low-frequency part of the image but also its high-frequency part, and it can adaptively select the frequency bands that match the image spectrum according to the characteristics of the image signal and the requirements of the analysis. It is therefore a finer decomposition method than wavelet decomposition, with more precise analytical capability.
102. The image magnification device performs pixel interpolation on the low-frequency components with a first interpolation algorithm to obtain a low-frequency sub-image.
The first interpolation algorithm used in step 102 may include: a nearest neighbor interpolation algorithm, a bilinear interpolation algorithm, or a cubic convolution interpolation algorithm.
103. The image magnification device performs pixel interpolation on the high-frequency components with a second interpolation algorithm to obtain a high-frequency sub-image.
The second interpolation algorithm used in step 103 may include: a nearest neighbor interpolation algorithm or a cubic convolution interpolation algorithm.
104. The image magnification device fuses the low-frequency sub-image and the high-frequency sub-image to obtain a fusion result image.
Optionally, in step 104 an inverse wavelet packet transform can be used to fuse the low-frequency sub-image and the high-frequency sub-image.
Here the first interpolation algorithm and the second interpolation algorithm are different.
With the image magnification method provided by the embodiment of the present disclosure, the high-frequency and low-frequency components of the source image are first acquired, the low-frequency components are interpolated with a first interpolation algorithm to obtain a low-frequency sub-image, the high-frequency components are interpolated with a second interpolation algorithm to obtain a high-frequency sub-image, and finally the two sub-images are fused to obtain the fusion result image. Since different interpolation algorithms are used for the high-frequency and low-frequency components of the source image, the method and device provided by the embodiments of the present disclosure can reduce the amount of interpolation computation while ensuring the quality of the magnified image.
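Steps 101-104 can be sketched in one dimension. This is a minimal illustration under assumptions not taken from the patent: a single-level Haar transform stands in for the wavelet packet decomposition, linear upsampling plays the role of the first interpolation algorithm, nearest-neighbor the second, and the inverse Haar step plays the fusion; all function names are hypothetical.

```python
# Minimal 1-D sketch of steps 101-104 (assumption: a single-level Haar
# transform replaces the wavelet packet decomposition; 2x upscaling).

def haar_decompose(signal):
    # Step 101: split into low-frequency (pairwise averages) and
    # high-frequency (pairwise half-differences) components.
    low = [(signal[i] + signal[i + 1]) / 2 for i in range(0, len(signal), 2)]
    high = [(signal[i] - signal[i + 1]) / 2 for i in range(0, len(signal), 2)]
    return low, high

def linear_upsample(c):
    # Step 102: first interpolation algorithm (linear) for the low band.
    out = []
    for i in range(len(c)):
        nxt = c[min(i + 1, len(c) - 1)]
        out += [c[i], (c[i] + nxt) / 2]
    return out

def nearest_upsample(c):
    # Step 103: second, cheaper algorithm (nearest neighbor) for the high band.
    out = []
    for v in c:
        out += [v, v]
    return out

def haar_fuse(low, high):
    # Step 104: the inverse Haar transform fuses the two sub-signals.
    out = []
    for l, h in zip(low, high):
        out += [l + h, l - h]
    return out

low, high = haar_decompose([1.0, 3.0, 5.0, 7.0])
big = haar_fuse(linear_upsample(low), nearest_upsample(high))
print(big)  # [1.0, 3.0, 3.0, 5.0, 5.0, 7.0, 5.0, 7.0]
```

The cheap nearest-neighbor pass touches only the small high-frequency band, which is the source of the computational saving claimed above.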
An embodiment of the present disclosure provides another implementation of the image magnification method, as shown in Fig. 2, comprising:
S201. The image magnification device performs RGB-YUV space conversion on the source image.
The RGB-YUV space conversion converts the source image from the RGB color space to the YUV color space. RGB is an industry color standard in which a wide variety of colors is obtained by varying the red (R), green (G) and blue (B) color channels and superposing them on one another. YUV is a PAL (Phase Alteration Line) color encoding method in which the luminance signal Y is separated from the chrominance signals U and V. Because human vision is more sensitive to the luminance signal than to the chrominance signals, the RGB-YUV space conversion keeps the luminance signal of the source image stable during the subsequent interpolation of the source image, thereby ensuring that the magnified image has a good visual effect.
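The conversion described above can be sketched as follows. The patent does not fix the conversion coefficients; the BT.601 full-range coefficients below are a standard choice assumed for illustration.

```python
# RGB <-> YUV conversion sketch (assumption: BT.601 full-range coefficients;
# the patent itself does not specify a particular matrix).

def rgb_to_yuv(r, g, b):
    # Y carries luminance; U and V carry chrominance.
    y = 0.299 * r + 0.587 * g + 0.114 * b
    u = -0.14713 * r - 0.28886 * g + 0.436 * b
    v = 0.615 * r - 0.51499 * g - 0.10001 * b
    return y, u, v

def yuv_to_rgb(y, u, v):
    # Inverse conversion used after fusion (step S206).
    r = y + 1.13983 * v
    g = y - 0.39465 * u - 0.58060 * v
    b = y + 2.03211 * u
    return r, g, b

y, u, v = rgb_to_yuv(120, 60, 200)
r, g, b = yuv_to_rgb(y, u, v)
# Round trip recovers the original RGB values to within rounding error.
```

Interpolating in YUV lets the luminance channel, to which the eye is most sensitive, be handled with full care while the chrominance channels tolerate cheaper processing.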
S202. The image magnification device acquires the high-frequency components and low-frequency components of the source image by wavelet packet decomposition.
In step S202, the high-frequency and low-frequency components are acquired from the source image in the YUV color space. Referring to Fig. 3, step S202 comprises the following steps:
301. The image magnification device loads the source image in YUV space.
302. The image magnification device performs single-scale one-dimensional discrete wavelet packet decomposition on the source image in YUV space.
Referring to Fig. 4, the image magnification device first decomposes the source image in YUV space into a low-frequency part a1 and a high-frequency part d1 by wavelet packet decomposition, where the low-frequency part a1 is represented by the corresponding low-frequency coefficients cA1 and the high-frequency part d1 by the corresponding high-frequency coefficients cD1; cA1 and cD1 are both wavelet coefficients generated in the wavelet packet decomposition, and the information lost from the low-frequency part a1 in the decomposition is captured by the high-frequency part d1. At the next level, a1 is in turn decomposed into a low-frequency part a1a2 and a high-frequency part a1d2, producing corresponding low-frequency coefficients cA1A2 and high-frequency coefficients cA1D2, and d1 is decomposed into a low-frequency part d1a2 and a high-frequency part d1d2, producing corresponding low-frequency coefficients cD1A2 and high-frequency coefficients cD1D2; the information lost from a1a2 is captured by a1d2, and the information lost from d1a2 is captured by d1d2. Continuing in this way, a decomposition of level N can be performed. The value of N can be adjusted according to the quality of the source image and of the target image. For example, the energy logarithm
M = Σ_n log(x_n²)
is used as the cost function characterizing the correlation between the coefficients of the source image in YUV space; the value of n corresponding to the minimum M gives the optimal basis of the wavelet packet decomposition, that is, the current value of N, where x_n are the wavelet coefficients.
303. The image magnification device reconstructs the low-frequency components from the low-frequency coefficients obtained by the single-scale one-dimensional discrete wavelet packet decomposition, and reconstructs the high-frequency components from the high-frequency coefficients so obtained.
When N = 1 above, i.e. when only the low-frequency coefficients cA1 and high-frequency coefficients cD1 are obtained, in step 303 the low-frequency components are reconstructed from cA1 and the high-frequency components from cD1. When N = 2, i.e. when the low-frequency coefficients cA1A2, cD1A2 and the high-frequency coefficients cA1D2, cD1D2 are obtained, in step 303 the low-frequency components are reconstructed from cA1A2 and cD1A2, and the high-frequency components from cA1D2 and cD1D2.
Through the above steps 301-303, the image magnification device acquires the high-frequency components and low-frequency components of the source image.
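The log-energy cost used above to pick the decomposition depth can be sketched as follows. This is an illustrative example with an assumed single-level Haar analysis step; the eps guard and the function names are not from the patent.

```python
# Sketch of the log-energy cost M = sum_n log(x_n^2) over wavelet
# coefficients x_n: the decomposition minimising M is preferred.
import math

def log_energy_cost(coeffs, eps=1e-12):
    # eps guards against log(0) for exactly-zero coefficients (assumption).
    return sum(math.log(c * c + eps) for c in coeffs)

def haar_step(signal):
    # One level of Haar analysis: low-pass (averages) and high-pass (details).
    low = [(signal[i] + signal[i + 1]) / 2 for i in range(0, len(signal), 2)]
    high = [(signal[i] - signal[i + 1]) / 2 for i in range(0, len(signal), 2)]
    return low, high

signal = [1.0, 1.1, 0.9, 1.0, 5.0, 5.1, 4.9, 5.0]
low, high = haar_step(signal)
# A smooth signal concentrates its energy in few coefficients, so the
# decomposed representation has a lower (more negative) cost than the raw one.
print(log_energy_cost(low + high) < log_energy_cost(signal))
```

Comparing the cost at successive depths and stopping when it no longer decreases is one simple way to realise the adaptive choice of N described above.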
S203. The image magnification device performs pixel interpolation on the low-frequency components with the first interpolation algorithm to obtain a low-frequency sub-image.
In the following, the case in which the first interpolation algorithm is a bilinear interpolation algorithm is taken as an example. Referring to Figs. 5a and 5b, step S203 is:
501. The image magnification device selects, among the low-frequency components of the source image, the four pixel points A, B, C and D adjacent to the pixel to be interpolated.
The pixel to be interpolated is a virtual point. Since the pixels of an image are arranged in rows and columns, any interpolation point has four adjacent pixel points, so in step 501 the four pixel points A, B, C and D are the pixel points corresponding to an arbitrary one of all the interpolation pixel points needed to realize the magnification ratio.
502. The image magnification device acquires, according to the positions and pixel gray levels of the four pixel points adjacent to the pixel to be interpolated, the pixel gray difference of the four adjacent pixel points in the horizontal direction, in the vertical direction and in the diagonal direction.
503. The image magnification device acquires, according to the pixel gray differences in the horizontal, vertical and diagonal directions, the distances from the pixel to be interpolated to its four adjacent pixel points.
504. The image magnification device sets the weighting factors of the four adjacent pixel points according to the distances from the pixel to be interpolated to its four adjacent pixel points.
Referring to Fig. 5a, in step 504 the image magnification device obtains, from the distances from the pixel to be interpolated to its four adjacent pixel points, the displacement of the pixel to be interpolated in the horizontal direction and its displacement in the vertical direction; it normalizes these displacements, obtaining the normalized horizontal displacement dy and the normalized vertical displacement dx, and sets the weighting factors of the four adjacent pixel points according to dx and dy.
Referring to Fig. 5a, in the above step 502 the pixel gray difference in the horizontal direction, expressed with dy, is:
C(d_H) = dy·|f(x, y) − f(x+1, y)|
Referring to Fig. 5a, in the above step 502 the pixel gray difference in the vertical direction, expressed with dx, is:
C(d_V) = dx·|f(x, y) − f(x, y+1)|
The pixel gray difference between pixel A and pixel C in the first diagonal direction is:
C(d_D2) = d2·|f(x, y) − f(x+1, y+1)|
The pixel gray difference between pixel B and pixel D in the second diagonal direction is:
C(d_D1) = d1·|f(x+1, y) − f(x, y+1)|
The total pixel gray difference in the diagonal directions of the rectangle formed by the four pixel points shown in Fig. 5a is:
C(d_D) = C(d_D1) + C(d_D2)
where the distance from the pixel to be interpolated to the diagonal through pixels B and D is:
d1 = |dx + dy − 1| / √2
and the distance from the pixel to be interpolated to the diagonal through pixels A and C is:
d2 = |dx − dy| / √2
In the above expressions f(m, n) is the gray-value function of the pixel points, m being the horizontal coordinate and n the vertical coordinate.
The magnitude of a pixel gray difference expresses the degree of correlation: the smaller the value, the more likely the pixel to be interpolated lies along that direction; conversely, the larger the value, the further the pixel to be interpolated deviates from that direction.
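The direction costs of step 502 and the diagonal distances of step 503 can be sketched together. The patent gives some of these expressions only as images, so the formulas below are assumptions that follow the pattern of the diagonal terms C(d_D1) and C(d_D2) quoted in the text; the function and argument names are hypothetical.

```python
# Hedged sketch of steps 502-503; the exact per-direction formulas are
# assumptions modelled on the quoted diagonal terms C(d_D1), C(d_D2).
import math

def direction_costs(A, B, C, D, dx, dy):
    # A=f(x,y), B=f(x+1,y), D=f(x,y+1), C=f(x+1,y+1); (dx, dy) is the
    # normalised position of the pixel to be interpolated in the unit square.
    d1 = abs(dx + dy - 1) / math.sqrt(2)     # distance to the B-D diagonal
    d2 = abs(dx - dy) / math.sqrt(2)         # distance to the A-C diagonal
    c_h = dy * abs(A - B)                    # horizontal gray difference
    c_v = dx * abs(A - D)                    # vertical gray difference
    c_d = d1 * abs(B - D) + d2 * abs(A - C)  # total diagonal gray difference
    return c_h, c_v, c_d

# A pixel on a vertical edge: left column bright, right column dark.
c_h, c_v, c_d = direction_costs(A=200, B=50, C=50, D=200, dx=0.5, dy=0.5)
# The vertical cost vanishes, so the pixel to be interpolated is judged to
# lie along the vertical edge (smaller difference = higher correlation).
print(c_v < c_h)
```

The smallest cost marks the direction along which the neighbors are most correlated, which is then given the largest weight.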
505. The image magnification device performs pixel interpolation on the pixel to be interpolated by the bilinear interpolation algorithm according to the weighting factors, obtaining the low-frequency sub-pixel image.
In step 505, the bilinear interpolation algorithm to be used is first determined from the absolute values D_Val of the pairwise differences of C(d_H), C(d_V) and C(d_D), computed as follows:
D_Val_HV = |C(d_H) − C(d_V)|,
D_Val_HD = |C(d_H) − C(d_D)|,
D_Val_VD = |C(d_V) − C(d_D)|.
A threshold T is set. When D_Val ≤ T, the bilinear interpolation algorithm is used;
when D_Val > T, the cases are handled separately:
if D_Val_HV > T or D_Val_HD > T, the bilinear interpolation algorithm is used;
if D_Val_HV > T or D_Val_VD > T, the bilinear interpolation algorithm is used;
if D_Val_HD > T or D_Val_VD > T, the direction-correlation-based bilinear interpolation algorithm is used.
Finally, the weighting factors are substituted into the corresponding bilinear interpolation algorithm as the coefficients of the respective directions, and the gray value of the pixel to be interpolated is computed.
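The selection rule of step 505 can be sketched as follows. The patent's case analysis is terse, so the reading below is an assumption: plain bilinear interpolation is used unless the diagonal cost stands out from both axis costs; the function name and threshold value are hypothetical.

```python
# Sketch of the interpolator selection in step 505 (one plausible reading
# of the patent's case analysis; T is a tunable threshold).

def choose_interpolator(c_h, c_v, c_d, T):
    d_hv = abs(c_h - c_v)
    d_hd = abs(c_h - c_d)
    d_vd = abs(c_v - c_d)
    if max(d_hv, d_hd, d_vd) <= T:
        # All directions are similarly correlated: plain bilinear suffices.
        return "bilinear"
    if d_hd > T and d_vd > T:
        # The diagonal cost stands out from both axis costs: use the
        # direction-correlation-based bilinear variant.
        return "direction-correlated bilinear"
    return "bilinear"

print(choose_interpolator(1.0, 1.1, 0.9, T=0.5))  # bilinear
print(choose_interpolator(1.0, 1.2, 9.0, T=0.5))  # direction-correlated bilinear
```

Gating the more expensive direction-aware variant behind a threshold keeps the common, smooth case on the cheap path.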
S204. The image magnification device performs pixel interpolation on the high-frequency components with the second interpolation algorithm to obtain a high-frequency sub-image.
In the embodiments of the present disclosure, when the first interpolation algorithm is a bilinear interpolation algorithm, the second interpolation algorithm needs to be an interpolation algorithm different from bilinear interpolation, for example a cubic convolution interpolation algorithm. Pixel interpolation of an image with a cubic convolution interpolation algorithm is a technique known to the inventors and is not described further here.
S205. The image magnification device fuses the low-frequency sub-image and the high-frequency sub-image by an inverse wavelet packet transform to obtain a fusion result image.
Referring to Fig. 6, step S205 comprises:
601. Fusing all low-frequency sub-images by the inverse wavelet packet transform to obtain restored low-frequency components.
602. Fusing all high-frequency sub-images by the inverse wavelet packet transform to obtain restored high-frequency components.
The restored low-frequency components may be defined as the low-frequency fused sub-image, and the restored high-frequency components as the high-frequency fused sub-image. This process is the inverse of that of Fig. 4 and is a technique known to the inventors, so it is not described further here.
603. Performing N-level one-dimensional decomposition on the restored low-frequency components and the restored high-frequency components respectively, obtaining restored low-frequency coefficients and restored high-frequency coefficients.
604. Extracting, from the decomposition results, the low-frequency coefficients of the low-frequency part and the high-frequency coefficients of the high-frequency part.
605. Reconstructing the N levels of low-frequency coefficients from the low-frequency coefficients of the low-frequency part, and reconstructing the N levels of high-frequency coefficients from the high-frequency coefficients of the high-frequency part.
606. Reconstructing the fusion result image from the N levels of low-frequency coefficients and the N levels of high-frequency coefficients.
S206. The image magnification device performs YUV-RGB inverse space conversion on the fusion result image.
Optionally, the interpolation algorithm in S203 may be: a nearest neighbor interpolation algorithm, a bilinear interpolation algorithm, or a cubic convolution interpolation algorithm; the interpolation algorithm in S204 may be a nearest neighbor interpolation algorithm or a cubic convolution interpolation algorithm.
With the image magnification method provided by the embodiment of the present disclosure, the high-frequency and low-frequency components of the source image are first acquired, the low-frequency components are interpolated with a first interpolation algorithm to obtain a low-frequency sub-image, the high-frequency components are interpolated with a second interpolation algorithm to obtain a high-frequency sub-image, and finally the two sub-images are fused to obtain the fusion result image. Since different interpolation algorithms are used for the high-frequency and low-frequency components of the source image, the amount of interpolation computation can be reduced while the quality of the magnified image is ensured.
Referring to Fig. 7, a further embodiment of the present disclosure provides an image magnification device 700, comprising:
an image decomposition unit 710, configured to acquire high-frequency components and low-frequency components of a source image;
an image interpolation unit 720, configured to perform pixel interpolation on the low-frequency components of the source image acquired by the image decomposition unit 710 with a first interpolation algorithm, obtaining a low-frequency sub-image;
the image interpolation unit 720 being further configured to perform pixel interpolation on the high-frequency components of the source image acquired by the image decomposition unit 710 with a second interpolation algorithm, obtaining a high-frequency sub-image; and
an image fusion unit 730, configured to fuse the low-frequency sub-image and the high-frequency sub-image obtained by the image interpolation unit 720, obtaining a fusion result image;
wherein the first interpolation algorithm and the second interpolation algorithm are different algorithms.
With the image magnification device provided by the embodiment of the present disclosure, the high-frequency and low-frequency components of the source image are first acquired, the low-frequency components are interpolated with a first interpolation algorithm to obtain a low-frequency sub-image, the high-frequency components are interpolated with a second interpolation algorithm to obtain a high-frequency sub-image, and finally the two sub-images are fused to obtain the fusion result image. Since different interpolation algorithms are used for the high-frequency and low-frequency components of the source image, the amount of interpolation computation can be reduced while the quality of the magnified image is ensured.
Optionally, the image decomposition unit 710 is configured to acquire the high-frequency components and low-frequency components of the source image by wavelet packet decomposition.
Optionally, the image fusion unit 730 is configured to fuse the low-frequency sub-image and the high-frequency sub-image obtained by the image interpolation unit by an inverse wavelet packet transform, obtaining the fusion result image.
The decomposition of the source image by the image decomposition unit 710 may follow the implementation of step S202 above, and the image fusion performed by the image fusion unit 730 may follow the implementation of step S205 above; they are not described again here.
Optionally, the device further comprises: a transforming unit 740, configured to perform RGB-YUV space conversion on the source image.
Optionally, the transforming unit 740 is further configured to perform YUV-RGB inverse space conversion on the fusion result image.
The RGB-YUV space conversion converts the source image from the RGB color space to the YUV color space, and the YUV-RGB inverse space conversion converts the image from the YUV color space back to the RGB color space. Because human vision is more sensitive to the luminance signal than to the chrominance signals, the RGB-YUV space conversion keeps the luminance signal of the source image stable during the subsequent interpolation of the source image, thereby ensuring that the magnified image has a good visual effect; the YUV-RGB inverse space conversion allows the magnified image to remain an image in the RGB color space.
Optionally, the first interpolation algorithm comprises: a nearest neighbor interpolation algorithm, a bilinear interpolation algorithm, or a cubic convolution interpolation algorithm;
and the second interpolation algorithm comprises: a nearest neighbor interpolation algorithm or a cubic convolution interpolation algorithm.
Further optionally, referring to Fig. 8, the first interpolation algorithm is a bilinear interpolation algorithm, and the image interpolation unit 720 comprises:
a sampling sub-unit 721, configured to select, among the low-frequency components of the source image, the four pixel points adjacent to the pixel to be interpolated;
a pixel gray difference acquisition sub-unit 722, configured to acquire, according to the positions and pixel gray levels of the four pixel points adjacent to the pixel to be interpolated selected by the sampling sub-unit 721, the pixel gray difference of the four adjacent pixel points in the horizontal direction, in the vertical direction and in the diagonal direction;
a distance acquisition sub-unit 723, configured to acquire, according to the pixel gray differences in the horizontal, vertical and diagonal directions acquired by the pixel gray difference acquisition sub-unit 722, the distances from the pixel to be interpolated to its four adjacent pixel points;
a weighting factor setting sub-unit 724, configured to set the weighting factors of the four adjacent pixel points according to the distances acquired by the distance acquisition sub-unit 723; and
an image interpolation sub-unit 725, configured to perform pixel interpolation on the pixel to be interpolated by the bilinear interpolation algorithm according to the weighting factors set by the weighting factor setting sub-unit 724, obtaining the interpolated low-frequency sub-pixel image.
With the image magnification device provided by the embodiment of the present disclosure, the high-frequency and low-frequency components of the source image are first acquired, the low-frequency components are interpolated with a first interpolation algorithm to obtain a low-frequency sub-image, the high-frequency components are interpolated with a second interpolation algorithm to obtain a high-frequency sub-image, and finally the two sub-images are fused to obtain the fusion result image. Since different interpolation algorithms are used for the high-frequency and low-frequency components of the source image, the amount of interpolation computation can be reduced while the quality of the magnified image is ensured.
An embodiment of the present disclosure provides a display apparatus comprising any one of the image magnification devices of the above embodiments; the display apparatus may be electronic paper, a mobile phone, a television, a digital photo frame or another display apparatus.
With the display apparatus provided by the embodiment of the present disclosure, the high-frequency and low-frequency components of the source image are first acquired, the low-frequency components are interpolated with a first interpolation algorithm to obtain a low-frequency sub-image, the high-frequency components are interpolated with a second interpolation algorithm to obtain a high-frequency sub-image, and finally the two sub-images are fused to obtain the fusion result image. Since different interpolation algorithms are used for the high-frequency and low-frequency components of the source image, the amount of interpolation computation can be reduced while the quality of the magnified image is ensured.
In the several embodiments provided in this application, it should be understood that the disclosed methods, devices and apparatus may be implemented in other ways. For example, the device embodiments described above are merely illustrative: the division into units is only a division by logical function, and in actual implementation there may be other ways of division; for example, multiple units or components may be combined or integrated into another device, or some features may be ignored or not executed.
The above are only embodiments of the present disclosure, but the protection scope of the present disclosure is not limited thereto. Any person skilled in the art can readily conceive of changes or substitutions within the technical scope disclosed by the present disclosure, and all such changes or substitutions shall be covered by the protection scope of the present disclosure. Therefore, the protection scope of the present disclosure shall be subject to the protection scope of the claims.
This application claims priority to Chinese patent application No. 201410503761.9, filed on September 26, 2014, the entire disclosure of which is incorporated herein by reference as part of this application.

Claims (15)

  1. An image magnification method, comprising the following steps:
    an image magnification device acquiring high-frequency components and low-frequency components of a source image;
    the image magnification device performing pixel interpolation on the low-frequency components of the source image with a first interpolation algorithm to obtain a low-frequency sub-image;
    the image magnification device performing pixel interpolation on the high-frequency components of the source image with a second interpolation algorithm to obtain a high-frequency sub-image; and
    the image magnification device fusing the low-frequency sub-image and the high-frequency sub-image to obtain a fusion result image;
    wherein the first interpolation algorithm and the second interpolation algorithm are different algorithms.
  2. The method according to claim 1, wherein the image magnification device acquires the high-frequency components and low-frequency components of the source image by wavelet packet decomposition.
  3. The method according to claim 1 or 2, wherein the image magnification device fuses the low-frequency sub-image and the high-frequency sub-image by an inverse wavelet packet transform to obtain the fusion result image.
  4. The method according to any one of claims 1 to 3, wherein before the image magnification device acquires the high-frequency components and low-frequency components of the source image, the method further comprises:
    the image magnification device performing RGB-YUV space conversion on the source image.
  5. The method according to claim 4, wherein after the image magnification device fuses the low-frequency sub-image and the high-frequency sub-image to obtain the fusion result image, the method further comprises:
    the image magnification device performing YUV-RGB inverse space conversion on the fusion result image.
  6. The method according to any one of claims 1 to 5, wherein
    the first interpolation algorithm comprises: a nearest neighbor interpolation algorithm, a bilinear interpolation algorithm, or a cubic convolution interpolation algorithm; and
    the second interpolation algorithm comprises: a nearest neighbor interpolation algorithm or a cubic convolution interpolation algorithm.
  7. The method according to any one of claims 1 to 6, wherein the first interpolation algorithm is a bilinear interpolation algorithm, and the step of the image magnification device performing pixel interpolation on the low-frequency components of the source image with the first interpolation algorithm to obtain the low-frequency sub-image comprises:
    the image magnification device selecting, among the low-frequency components of the source image, the four pixel points adjacent to the pixel to be interpolated;
    the image magnification device acquiring, according to the positions and pixel gray levels of the four pixel points adjacent to the pixel to be interpolated, the pixel gray difference of the four adjacent pixel points in the horizontal direction, in the vertical direction and in the diagonal direction;
    the image magnification device acquiring, according to the pixel gray differences in the horizontal, vertical and diagonal directions, the distances from the pixel to be interpolated to its four adjacent pixel points;
    the image magnification device setting the weighting factors of the four adjacent pixel points according to the distances from the pixel to be interpolated to its four adjacent pixel points; and
    the image magnification device performing pixel interpolation on the pixel to be interpolated by the bilinear interpolation algorithm according to the weighting factors, obtaining the interpolated low-frequency sub-pixel image.
  8. An image magnification device, comprising:
    an image decomposition unit, configured to acquire high-frequency components and low-frequency components of a source image;
    an image interpolation unit, configured to perform pixel interpolation on the low-frequency components of the source image acquired by the image decomposition unit with a first interpolation algorithm, obtaining a low-frequency sub-image;
    the image interpolation unit being further configured to perform pixel interpolation on the high-frequency components of the source image acquired by the image decomposition unit with a second interpolation algorithm, obtaining a high-frequency sub-image; and
    an image fusion unit, configured to fuse the low-frequency sub-image and the high-frequency sub-image obtained by the image interpolation unit, obtaining a fusion result image;
    wherein the first interpolation algorithm and the second interpolation algorithm are different algorithms.
  9. The device according to claim 8, wherein the image decomposition unit is configured to acquire the high-frequency components and low-frequency components of the source image by wavelet packet decomposition.
  10. The device according to claim 8 or 9, wherein the image fusion unit is configured to fuse the low-frequency sub-image and the high-frequency sub-image obtained by the image interpolation unit by an inverse wavelet packet transform, obtaining the fusion result image.
  11. The device according to any one of claims 8 to 10, wherein the device further comprises: a transforming unit, configured to perform RGB-YUV space conversion on the source image.
  12. The device according to claim 11, wherein the transforming unit is further configured to perform YUV-RGB inverse space conversion on the fusion result image.
  13. The device according to any one of claims 8 to 12, wherein the first interpolation algorithm comprises: a nearest neighbor interpolation algorithm, a bilinear interpolation algorithm, or a cubic convolution interpolation algorithm; and
    the second interpolation algorithm comprises: a nearest neighbor interpolation algorithm or a cubic convolution interpolation algorithm.
  14. The device according to any one of claims 8 to 13, wherein the first interpolation algorithm is a bilinear interpolation algorithm, and the image interpolation unit comprises:
    a sampling sub-unit, configured to select, among the low-frequency components of the source image, the four pixel points adjacent to the pixel to be interpolated;
    a pixel gray difference acquisition sub-unit, configured to acquire, according to the positions and pixel gray levels of the four pixel points adjacent to the pixel to be interpolated selected by the sampling sub-unit, the pixel gray difference of the four adjacent pixel points in the horizontal direction, in the vertical direction and in the diagonal direction;
    a distance acquisition sub-unit, configured to acquire, according to the pixel gray differences in the horizontal, vertical and diagonal directions acquired by the pixel gray difference acquisition sub-unit, the distances from the pixel to be interpolated to its four adjacent pixel points;
    a weighting factor setting sub-unit, configured to set the weighting factors of the four adjacent pixel points according to the distances acquired by the distance acquisition sub-unit; and
    an image interpolation sub-unit, configured to perform pixel interpolation on the pixel to be interpolated by the bilinear interpolation algorithm according to the weighting factors set by the weighting factor setting sub-unit, obtaining the interpolated low-frequency sub-pixel image.
  15. A display apparatus, comprising the image magnification device according to any one of claims 8 to 14.
PCT/CN2015/070053 2014-09-26 2015-01-04 Image magnification method, image magnification device and display apparatus WO2016045242A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
EP15794806.8A EP3200147B1 (en) 2014-09-26 2015-01-04 Image magnification method, image magnification apparatus and display device
US14/771,340 US9824424B2 (en) 2014-09-26 2015-01-04 Image amplifying method, image amplifying device, and display apparatus

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201410503761.9 2014-09-26
CN201410503761.9A CN104299185A (zh) 2014-09-26 2014-09-26 Image magnification method, image magnification device and display apparatus

Publications (1)

Publication Number Publication Date
WO2016045242A1 true WO2016045242A1 (zh) 2016-03-31

Family

ID=52318906

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2015/070053 WO2016045242A1 (zh) 2014-09-26 2015-01-04 Image magnification method, image magnification device and display apparatus

Country Status (4)

Country Link
US (1) US9824424B2 (zh)
EP (1) EP3200147B1 (zh)
CN (1) CN104299185A (zh)
WO (1) WO2016045242A1 (zh)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106548454A (zh) * 2016-09-08 2017-03-29 清华大学 Method and device for processing medical images
CN111210389A (zh) * 2020-01-10 2020-05-29 北京华捷艾米科技有限公司 Image scaling processing method and device

Families Citing this family (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104794682A (zh) * 2015-05-04 2015-07-22 长沙金定信息技术有限公司 Transform-domain-based image interpolation method and device
CN105096256B (zh) * 2015-08-31 2018-05-04 深圳市博铭维智能科技有限公司 Special robot mobile platform and super-resolution reconstruction method for images captured by it
CN106851399B (zh) 2015-12-03 2021-01-22 阿里巴巴(中国)有限公司 Video resolution enhancement method and device
BR112018013602A2 (pt) 2015-12-31 2019-04-09 Shanghai United Imaging Healthcare Co., Ltd. Image processing methods and systems
CN106778829B (zh) * 2016-11-28 2019-04-30 常熟理工学院 Active-learning image detection method for liver injury categories
CN106600532B (zh) * 2016-12-08 2020-01-10 广东威创视讯科技股份有限公司 Image magnification method and device
US10861131B2 (en) * 2017-12-28 2020-12-08 Samsung Electronics Co., Ltd. Image magnifying apparatus
CN109978766B (zh) * 2019-03-12 2020-10-16 深圳市华星光电技术有限公司 Image magnification method and image magnification device
KR20210017185A (ko) * 2019-08-07 2021-02-17 한국전자통신연구원 Method and apparatus for removing compressed Poisson noise from an image based on a deep neural network
CN111159622B (zh) * 2019-12-10 2023-06-30 北京蛙鸣信息科技发展有限公司 Multi-parameter fusion air quality spatial interpolation method and system for missing data
CN111626935B (zh) * 2020-05-18 2021-01-15 成都乐信圣文科技有限责任公司 Pixel map scaling method, game content generation method and device
CN111899167B (zh) * 2020-06-22 2022-10-11 武汉联影医疗科技有限公司 Interpolation algorithm determination method and device, computer equipment and storage medium
CN112381714A (zh) * 2020-10-30 2021-02-19 南阳柯丽尔科技有限公司 Image processing method and device, storage medium and equipment

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
GB2256335A (en) * 1991-05-31 1992-12-02 Sony Broadcast & Communication Filter system
JP2007148945A (ja) * 2005-11-30 2007-06-14 Tokyo Institute Of Technology Image restoration method
CN101296338A (zh) * 2008-06-11 2008-10-29 四川虹微技术有限公司 Image scaling method and device
CN101609549A (zh) * 2009-07-24 2009-12-23 河海大学常州校区 Multi-scale geometric analysis super-resolution processing method for blurred video images
US20110200270A1 (en) * 2010-02-16 2011-08-18 Hirokazu Kameyama Image processing method, apparatus, program, and recording medium for the same

Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6205259B1 (en) * 1992-04-09 2001-03-20 Olympus Optical Co., Ltd. Image processing apparatus
US7120305B2 (en) * 2002-04-16 2006-10-10 Ricoh, Co., Ltd. Adaptive nonlinear image enlargement using wavelet transform coefficients
US7738739B2 (en) * 2006-01-26 2010-06-15 Vestel Elektronik Sanayi Ve Ticaret A.S. Method and apparatus for adjusting the resolution of a digital image
JP2008015741A (ja) * 2006-07-05 2008-01-24 Konica Minolta Holdings Inc Image processing device, image processing method and imaging device using the same
JP5012333B2 (ja) * 2007-08-30 2012-08-29 コニカミノルタアドバンストレイヤー株式会社 Image processing device, image processing method and imaging device
CN101833754B (zh) * 2010-04-15 2012-05-30 青岛海信网络科技股份有限公司 Image enhancement method and system
CN102156963A (zh) * 2011-01-20 2011-08-17 中山大学 Mixed-noise image denoising method
CN103701468B (zh) * 2013-12-26 2017-04-26 国电南京自动化股份有限公司 Data compression and decompression method based on orthogonal wavelet packet transform and swinging door algorithm


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See also references of EP3200147A4 *

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106548454A (zh) * 2016-09-08 2017-03-29 清华大学 Method and device for processing medical images
CN111210389A (zh) * 2020-01-10 2020-05-29 北京华捷艾米科技有限公司 Image scaling processing method and device
CN111210389B (zh) * 2020-01-10 2023-09-19 北京华捷艾米科技有限公司 Image scaling processing method and device

Also Published As

Publication number Publication date
EP3200147A4 (en) 2018-04-11
US9824424B2 (en) 2017-11-21
CN104299185A (zh) 2015-01-21
EP3200147B1 (en) 2022-03-02
US20160364840A1 (en) 2016-12-15
EP3200147A1 (en) 2017-08-02

Similar Documents

Publication Publication Date Title
WO2016045242A1 (zh) Image magnification method, image magnification device and display device
CN111080724B (zh) Infrared and visible light image fusion method
US9495582B2 (en) Digital makeup
TWI543586B (zh) Image enhancement method and image processing device and computer program product therefor
US10565742B1 (en) Image processing method and apparatus
KR101460688B1 (ko) Image processing apparatus and control method thereof
KR20160102524A (ko) Method for inverse tone mapping of an image
US20130243312A1 (en) Color distance measurement apparatus, color distance measurement method, and program
WO2015161794A1 (zh) Method for obtaining a thumbnail based on image saliency detection
US8712191B2 (en) Method for detecting directions of regularity in a two-dimensional image
JPH09284798A (ja) Signal processing device
CN107220934B (zh) Image reconstruction method and device
Wu et al. Color demosaicking with sparse representations
CN106327428B (zh) Transfer-learning-based image super-resolution method and system
CN113068011A (zh) Image sensor, image processing method and system
US9928577B2 (en) Image correction apparatus and image correction method
JP2007151094A (ja) Image gradation conversion device, program, electronic camera, and method therefor
JP4728411B2 (ja) Method for reducing color fringing artifacts in digital images
CN106887024B (zh) Photo processing method and processing system
CN105469399B (zh) Mixed-noise-oriented face super-resolution reconstruction method and device
WO2005059833A1 (ja) Luminance correction device and luminance correction method
KR102470242B1 (ko) Image processing apparatus, image processing method, and program
Shi et al. Region-adaptive demosaicking with weighted values of multidirectional information
Lin et al. Color image recovery system from printed gray image
JP5494249B2 (ja) Image processing device, imaging device and image processing program

Legal Events

Date Code Title Description
WWE Wipo information: entry into national phase

Ref document number: 14771340

Country of ref document: US

REEP Request for entry into the european phase

Ref document number: 2015794806

Country of ref document: EP

WWE Wipo information: entry into national phase

Ref document number: 2015794806

Country of ref document: EP

121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 15794806

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE