US6236765B1 - DWT-based up-sampling algorithm suitable for image display in an LCD panel - Google Patents

DWT-based up-sampling algorithm suitable for image display in an LCD panel

Info

Publication number
US6236765B1
Authority
US
United States
Prior art keywords
sub
image
dwt
band
bands
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Lifetime
Application number
US09/129,728
Inventor
Tinku Acharya
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Intel Corp
Original Assignee
Intel Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Intel Corp filed Critical Intel Corp
Assigned to INTEL CORPORATION reassignment INTEL CORPORATION ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: ACHARYA, TINKU
Priority to US09/129,728 (US6236765B1)
Priority to GB0102430A (GB2362054B)
Priority to PCT/US1999/017042 (WO2000008592A1)
Priority to KR10-2001-7001534A (KR100380199B1)
Priority to AU52360/99A (AU5236099A)
Priority to JP2000564156A (JP4465112B2)
Priority to TW088113386A (TW451160B)
Publication of US6236765B1
Application granted
Anticipated expiration
Legal status: Expired - Lifetime

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T1/00 General purpose image data processing
    • G06T3/00 Geometric image transformation in the plane of the image
    • G06T3/40 Scaling the whole image or part thereof
    • G06T3/4084 Transform-based scaling, e.g. FFT domain scaling

Definitions

  • the invention relates generally to image processing and computer graphics. More specifically, the invention relates to up-sampling or up-scaling of an image.
  • In the art of imaging, it may be desirable to resize an image. Particularly, it may be desirable to scale up (up-sample) an image to make it larger if the image is too small for its intended use or application. For instance, a digital camera may capture an image in a small size of M pixel rows by N pixel columns. If the image is to be printed, it may be desirable to scale the image to R pixel rows by S pixel columns (R>M and/or S>N) such that the image covers the print area.
  • a display panel such as an LCD (Liquid Crystal Display) is provided so that users can review in a quick fashion the pictures they have already taken or the contents of the picture they are about to take (i.e. what is in the focus area of the camera).
  • the LCD panel, like any CRT (Cathode Ray Tube) monitoring device, has a maximum resolution that it can support, but unlike a CRT, the resolution cannot be modified by the supporting video sub-system (i.e. graphics card). For instance, in a CRT device, a maximum resolution of 640 pixels by 480 pixels also implies that lower resolutions could be provided with very little loss in visual quality. However, in an LCD panel, since there are a fixed number of very discretely observable pixels, an attempted change in resolution usually results in a highly blurred image.
  • a bi-linear interpolation would average in two different directions to determine the scaled image data set.
  • the scaled image under a bi-linear interpolation method may consist of:

$$\begin{pmatrix}
X_A & \tfrac{X_A+X_B}{2} & X_B & \tfrac{X_B+X_C}{2} & X_C & \cdots\\
\tfrac{X_A+X_D}{2} & \tfrac{X_A+X_B+X_D+X_E}{4} & \cdots & & &\\
X_D & \tfrac{X_D+X_E}{2} & X_E & & &\\
\vdots & & & & &
\end{pmatrix}$$
  • the scaled image would be of size M*N*4 in terms of total number of pixels in the respective data sets. This and other averaging methods may yield better results than filling, but still result in blurred details and rough edges that are not smoothly contoured on the image.
  • Typical up-sampling techniques are inadequate and lead to poor image quality. Thus, there is a need for an up-sampling technique that better preserves image quality. Further, since lower cost of computation, in terms of complexity, is crucial in devices such as digital cameras, the up-sampling technique should also be compute efficient, so that it can be utilized in such applications.
  • a method comprising constructing virtual Discrete Wavelet Transform (DWT) sub-bands from an image without performing the DWT and then applying an inverse DWT upon the virtual sub-bands, the result of the inverse DWT representing an up-sampled version of the image.
  • an apparatus comprising an interface configured to communicate image data, and an up-sampling unit, the up-sampling unit coupled to the interface to receive the image data, the up-sampling unit configured to construct virtual sub-band input data from the image data, the up-sampling unit configured to perform an inverse DWT upon the input data generating an up-sampled image therefrom.
  • an apparatus comprising a computer readable medium having instructions which when executed perform constructing virtual Discrete Wavelet Transform (DWT) sub-bands from an image without performing the DWT, applying an inverse DWT upon the virtual sub-bands, the result of the inverse DWT representing an up-sampled version of the image.
  • virtual Discrete Wavelet Transform
  • FIG. 1 illustrates the sub-band(s) resulting from a forward DWT operation upon an image.
  • FIG. 2 is a flow diagram of DWT based up-sampling according to one embodiment of the invention.
  • FIG. 3 illustrates DWT based up-sampling according to one embodiment of the invention.
  • FIG. 4 ( a ) shows a basic processing cell for computing a DWT operation.
  • FIG. 4 ( b ) is an architecture for a one-dimensional inverse DWT.
  • FIG. 4 ( c ) is an architecture for a one-dimensional inverse DWT having odd cell output.
  • FIG. 5 is a flow diagram of one embodiment of the invention.
  • FIG. 6 is a block diagram of an image processing apparatus according to an embodiment of the invention.
  • FIG. 7 is a system diagram of one embodiment of the invention.
  • each block within the flowcharts represents both a method step and an apparatus element for performing the method step.
  • the corresponding apparatus element may be configured in hardware, software, firmware or combinations thereof.
  • Up-sampling of an image is achieved, according to one embodiment of the invention, by applying an inverse Discrete Wavelet Transform (DWT) upon an image after approximating values for “virtual sub-bands” belonging to the image.
  • FIG. 1 illustrates the sub-band(s) resulting from a forward DWT operation upon an image.
  • the DWT is a “discrete” algorithm based upon Wavelet theory which utilizes one or more “basis” functions that can be used to analyze a signal, similar to the DCT (Discrete Cosine Transform) based upon sinusoidal basis functions.
  • the DWT is better suited to representing edge features in images since Wavelets by nature are aperiodic and often jagged.
  • the DWT approximates an input signal by discrete samples of a full continuous wavelet.
  • the DWT can be thought of as also being a filtering operation with well-defined coefficients.
  • the Wavelet coefficients can be selected to suit particularly the application, or type of input signal.
  • the DWT chosen in at least one embodiment of the invention for image scaling is known as the 9-7 bi-orthogonal spline filters DWT. Since the DWT is discrete, the DWT can be implemented using digital logic such as Very Large Scale Integration (VLSI) circuits and thus can be integrated on a chip with other digital components.
  • VLSI Very Large Scale Integration
  • an inverse DWT can easily be implemented in the same device with little additional overhead and cost.
  • the ability of the DWT to better approximate the edge features of an image makes it ideal for an up-sampling application. It is advantageous over interpolation-type scaling in that visually essential image features can be better reconstructed in the scaled-up image.
  • an architecture for DWT-based up-sampling can be implemented efficiently for high data throughput, unlike Fourier or averaging techniques which require multiple cycles or iterations to generate a single output datum.
  • the essence of what is known commonly as a two-dimensional DWT is to decompose an input signal into four frequency (frequency refers to the high-pass (H) or low-pass (L) nature of the filtering) sub-bands.
  • the two-dimensional DWT creates four sub-bands, LL, LH, HL and HH, with the number of data values in each one-quarter of the input data.
  • the DWT is a multi-resolution technique such that the result data from one iteration of the DWT may be subjected to another DWT iteration. Each “level” of resolution is obtained by applying the DWT upon the “LL” sub-band generated by the previous level DWT.
  • each sub-band can further be divided into smaller and smaller sub-bands as is desired.
  • Each DWT resolution level k has 4 sub-bands, LL k , HL k , LH k and HH k .
  • the first level of DWT resolution is obtained by performing the DWT upon the entire original image, while further DWT resolution is obtained by performing the DWT on the LL sub-band generated by the previous level k-1.
  • the LL k sub-band contains enough image information to substantially reconstruct a scaled version of the image.
  • the LH k HL k and HH k sub-bands contain high-frequency noise and edge information which is less visually critical than that resolved into and represented in the LL k sub-band.
  • LL k contains a one-quarter scaled version of the LL k-1 sub-band.
  • LH k contains noise and horizontal edge information from the LL k-1 sub-band image.
  • HL k contains noise and vertical edge information from the LL k-1 sub-band image.
  • HH k contains noise and diagonal edge information from the LL k-1 sub-band image.
  • the property of the DWT which allows most information to be preserved in the LL k sub-band can be exploited in performing up-sampling of an image.
  • the DWT is a unitary transformation technique for multi-resolution hierarchical decomposition which permits an input signal that is decomposed by a DWT to be recovered with little loss by performing an inverse of the DWT.
  • the coefficients representing the “forward” DWT described above have a symmetric relationship with coefficients for the inverse DWT.
  • Up-sampling is performed in an embodiment of the invention by considering the original input image that is to be up-sampled as a virtual LL sub-band, approximating values for the other sub-bands and then applying the inverse DWT.
  • FIG. 2 is a flow diagram of DWT based up-sampling according to one embodiment of the invention.
  • the first step is to consider the input image as the LL sub-band (step 200 ). If an image has M rows by N columns of pixels, then these M*N values are considered as a virtual LL sub-band LL v . Then the virtual sub-bands LH v , HL v and HH v , each of which needs to be dimensioned to have M*N values, must be approximated or constructed (step 210 ).
  • One way of constructing these sub-band data values may be to analyze/detect the input image for horizontal, vertical and diagonal (see above which direction of edge is appropriate for each sub-band HL, LH and HH) edges and provide non-zero values where edges occur and approximate all other values to zero.
  • all values in the virtual sub-bands LH v , HL v and HH v are approximated to zero. If the 9-7 bi-orthogonal splines DWT is considered, then an approximation to zero of all other sub-band data values when the original image or higher level LL sub-band is being recovered through an inverse DWT will result in the recovered image or higher level sub-band being nearly identical (from a human vision perspective) with its original state prior to performing the forward DWT operation.
  • an inverse two-dimensional DWT may be performed to generate the up-scaled image (step 220 ).
  • the up-scaled image will exhibit better visual clarity than that available through filling, averaging or interpolation based up-sampling techniques.
  • the up-sampling result will have M*2 rows and N*2 columns of pixel values, which together constitute the up-sampled image.
  • FIG. 3 illustrates DWT based up-sampling according to one embodiment of the invention.
  • An original image I that has M rows by N columns of pixels in its data set can be up-sampled by a factor of two by following the procedure outlined in FIG. 2 .
  • This image is composed of M*N pixels I ij where i, representing the row number of the pixel, ranges from 1 to M and j, representing the column number of the pixel, ranges from 1 to N.
  • the original image has pixels I 1,1, I 1,2, etc. on its first row and pixels I 2,1, I 2,2, etc. on its second row and so on.
  • the entire image data set I will comprise the virtual sub-band LL v .
  • the data values for the other virtual sub-bands may all be approximated to zero.
  • the combined data set for all four sub-bands will have M*2 rows and N*2 columns of pixel values which can then be subjected to a two-dimensional inverse DWT operation.
  • the up-sampled image U will have M*2 rows and N*2 columns of pixels that are virtually identical in terms of perception of the quality and clarity of the image when compared with the original image I.
  • the up-sampled image U is shown to have pixel values of U r,s , resulting from the two-dimensional inverse DWT being applied upon the virtual sub-bands, where r, representing the row number of the pixel, ranges from 1 to M*2 and s, representing the column number of the pixel, ranges from 1 to N*2.
  • the data set of all such values U r,s represents a two-to-one scaled-up version of the input image I.
  • FIG. 4 ( a ) shows a basic processing cell for computing a DWT operation.
  • FIG. 4 ( a ) which shows the basic processing cell D k 400, is first described to aid in understanding the architecture of FIG. 4 ( b ) which computes a one-dimensional inverse DWT.
  • the term q k represents the input virtual sub-band data which is the subject of the inverse DWT
  • the term p k -1 refers to input data propagated from the coupled processing cell on the previous clock cycle and p k to the input data from the current clock cycle, respectively.
  • the input p k is passed through to output p k -1 from a cell D k to the previous cell D k -1 in the array.
  • the terms p k and p k -1 will be referred to hereinafter as “propagated inputs.”
  • the basic processing cell 400 of FIG. 4 ( a ) may be repeatedly built and selectively coupled to perform the inverse DWT computing architecture as shown in FIG. 4 ( b ). This processing cell may be built in hardware by coupling an adder with a multiplier and registers that hold the inverse DWT filter coefficients.
  • FIG. 4 ( b ) is an architecture for a one-dimensional inverse DWT.
  • a forward DWT may be either one-dimensional or two-dimensional in nature.
  • a one-dimensional forward DWT (one that is performed either row by row or column by column only) would result in only two sub-bands—LFS (Low Frequency Sub-band) and HFS (High Frequency Sub-band). If the direction of the one-dimensional forward DWT is row-wise, then two vertical sub-bands that are dimensioned M rows by N/2 columns are created when applied to an image.
  • the LFS sub-band from a row-by-row forward DWT is a tall and skinny distorted version of the original image.
  • if the direction of the one-dimensional forward DWT is column-wise, then two horizontal sub-bands that are dimensioned M/2 rows by N columns are created when applied to an image.
  • the LFS sub-band from a column-by-column forward DWT is a wide and fat distorted version of the original image.
  • a two-dimensional forward DWT is embodied.
  • a row-wise inverse DWT and column-wise inverse DWT can be combined to create a two dimensional inverse DWT, which is desirable in scaling an image proportionally (without axis distortion).
  • the result from a row-wise inverse DWT may be transposed such that another one-dimensional inverse DWT operation upon the result data row-wise will actually be column-wise.
  • the combination of a row-wise and column-wise one-dimensional inverse DWTs will yield a 2 to 1 up-scaled version of the virtual LL sub-band (the original image).
  • a two-dimensional inverse DWT may be built by repeating or re-utilizing one-dimensional inverse DWT modules, such as that shown in FIG. 4 ( b ).
  • an intermediate data set is first generated by applying a one-dimensional inverse DWT module to the virtually constructed sub-band data.
  • a n are the constructed (virtual) LFS data and c n is the constructed (virtual) HFS data.
  • the LFS data may be constructed by concatenating the data in the virtual sub-bands LL v and LH v
  • the HFS data may be constructed by concatenating the HL v and HH v sub-bands.
  • the inverse and forward 9-7 bi-orthogonal splines DWT have certain symmetric properties which allow for efficient and systolic (i.e., parallel and repeated) processing cells such as those shown in FIG. 4 ( a ) to be utilized.
  • the inverse DWT has a set of inverse high-pass filter coefficients ⁇ overscore (g) ⁇ k and a set of inverse low-pass filter coefficients ⁇ overscore (h) ⁇ k .
  • the relation of these coefficients is discussed in a related patent application entitled “An Integrated Systolic Architecture for Decomposition and Reconstruction of Signals Using Wavelet Transforms,” Ser. No. 08/767,976, filed Dec. 17, 1996.
  • the intermediate data set U′ i can be split into two summations: $U'_i(1) = \sum_n \bar{h}_{2n-i}\,a_n$ and $U'_i(2) = \sum_n \bar{g}_{2n-i}\,c_n$.
  • the inverse 9-7 bi-orthogonal splines DWT filter coefficients like their forward counterparts have certain symmetric properties allowing associative grouping.
  • the even-numbered outputs U′ 2j can be computed using only four coefficients, ⁇ overscore (h) ⁇ 0 , ⁇ overscore (h) ⁇ 2 , ⁇ overscore (g) ⁇ 2 and ⁇ overscore (g) ⁇ 4 .
  • for odd-numbered outputs U′ 2j-1 , it can be shown by the same filter properties discussed above that only five coefficients, {overscore (h)} 3 , {overscore (h)} 1 , {overscore (g)} 5 , {overscore (g)} 3 and {overscore (g)} 1 , can be used.
  • the relationships described above show how an inverse DWT may be computed.
  • the architecture in FIG. 4 ( b ) for computing the inverse DWT consists of two input sequences a i and c i , which represent the high-frequency sub-band and the low-frequency sub-band inputs, respectively.
  • the inverse architecture receives two inputs and produces one output.
  • This architecture is not uniform in that the odd-numbered outputs, i.e., U′ 1 , U′ 3 , U′ 5 . . . , require five processing cells, one for each coefficient, whereas the even-numbered outputs, i.e., U′ 0 , U′ 2 , U′ 4 . . . , require only four processing cells.
  • the odd-numbered and even-numbered outputs may be generated on alternating clock cycles, odd and even, respectively.
  • the inverse DWT architecture must be composed of two distinct blocks—an even output generating block 402 and an odd output generating block 450 .
  • Even output generating block 402 is further composed of two sub-circuits—an even high frequency sub-band sub-circuit (HFS) 410 and an even low frequency sub-band sub-circuit (LFS) 420 .
  • HFS sub-circuit 410 consists of two processing cells 415 and 417 each of which are composed of a multiplier and an adder.
  • the processing cells 415 , 417 , 425 and 427 operate similarly to the basic processing cell 400 shown in FIG. 4 ( a ), accepting two inputs, summing them and then multiplying that sum by a coefficient.
  • processing cell 415 outputs a term such that a i is first added to the propagated input from processing cell 417 , with that sum multiplied by ⁇ overscore (h) ⁇ 2 .
  • processing cell 425 outputs a term to adder/controller 430 which is the product of ⁇ overscore (g) ⁇ 4 and the sum of the input c i and the propagated input from processing cell 427 .
  • Processing cell 417 receives as one input 0 and as the other input a i-1 since delay element 412 holds the value given it on the previous clock, transmitting it on the next clock cycle.
  • cell 415 generates the term ⁇ overscore (h) ⁇ 2 a 1 and cell 417 generates ⁇ overscore (h) ⁇ 0 a 0 .
  • cells 425 and 427 generate the terms c 1 ⁇ overscore (g) ⁇ 4 and c 0 ⁇ overscore (g) ⁇ 2 , respectively, these terms are ignored (cleared) by adder/controller 430 since according to the relationship defined above, the first output U′ 0 utilizes the C 2 input data.
  • cell 427 generates the term (c 1 +c 0 ) ⁇ overscore (g) ⁇ 2 .
  • Cell 425 generates c 2 ⁇ overscore (g) ⁇ 4 .
  • adder/controller 430 receives the current outputs of sub-circuit 420 and adds them to the previous clock's outputs from sub-circuit 410 . Additionally, adder/controller 430 receives the current outputs of sub-circuit 410 , holding them until the next clock cycle.
  • FIG. 4 ( c ) shows an odd output generating block 450 which requires five processing cells— 465 , 467 , 475 , 477 and 479 .
  • the processing cells 465 , 467 , 475 , 477 and 479 operate similarly to the processing cell 400 shown in FIG. 4 ( a ).
  • the delay elements 462 , 464 , 472 and 474 hold their inputs for one clock cycle and release them on the next clock cycle.
  • Each cell has an adder and multiplier and receives a propagated input from the cell to which it is connected.
  • cell 465 generates a term a 2 ⁇ overscore (h) ⁇ 3 and cell 467 generates (a 1 +a 0 ) ⁇ overscore (h) ⁇ 1 .
  • These outputs are sent to adder/controller 480 but are held for one clock cycle before being summed with the outputs of cells 475 , 477 and 479 .
  • the outputs of cells 475 , 477 and 479 are ignored by adder/controller 480 .
  • cell 475 receives c 2 from the delay 472
  • cell 479 receives as propagated input c 1
  • cell 477 receives as its propagated input, c 0 .
  • cell 475 generates the term c 3 ⁇ overscore (g) ⁇ 5
  • cell 477 generates the term (c 0 +c 2 )* ⁇ overscore (g) ⁇ 3
  • cell 479 generates c 1 ⁇ overscore (g) ⁇ 1 .
  • adder/controller 480 receives the current outputs of cells 475 , 477 and 479 and adds them to the previous clock cycle's outputs of cells 465 and 467 . Additionally, adder/controller 480 receives the current clock's outputs of cells 465 and 467 , holding them until the next clock cycle. With a set of result data U′ i thus obtained, these values can be transposed and fed back in as inputs for another iteration of the one-dimensional inverse DWT.
  • the intermediate output U′ 0 corresponds to data result positioned at row 1 and column 1 in the up-scaled image space
  • U′ 1 corresponds to a data result positioned at row 1 and column 2
  • the last entry of the first row in the up-scaled image space would be U′ N*2-1 while U′ N*2 is positioned in the second row and first column of the M*2 row, N*2 column up-scaled image space.
  • FIG. 5 is a flow diagram of one embodiment of the invention.
  • the methodology for discrete wavelet transform (DWT) based up-sampling of an image involves a step-wise application of the inverse DWT.
  • the first step in implementing a two-dimensional inverse DWT is to consider the LL v and HL v sub-bands as the LFS data, and the LH v and HH v sub-bands as the HFS data for the one-dimensional inverse DWT (block 505 ).
  • An inverse DWT in one dimension is applied row-wise to the LFS and HFS data (block 510 ).
  • the M*N*4 outputs resulting from this first iteration of the inverse DWT (labeled U′ i in the figures) are stored as intermediate output data.
  • the outputs of the row-wise DWT are transposed so that columns become rows and rows become columns in the intermediate output data U′ i (block 520 ). This transposing may be performed simultaneously with the storing of the intermediate output result U′ i .
  • the transposed data is subjected to the one-dimensional inverse DWT of block 510 but will operate in a column-wise fashion since the data has been transposed.
  • a row-wise DWT on the transposed data is essentially column-wise.
  • the resulting data set U i from block 530 are the pixel values of a 2:1 up-scaled version of the original image.
  • This data set U i may be stored or displayed as an up-scaled image (block 540 ).
  • normalization may be needed to map larger data values that can occur in the inverse DWT operation back into the desired output range. Normalization of the data result U i may be achieved by the following formula: (U i − min)/(max − min)*K, where min is the minimum result value, max the maximum result value and K the maximum desired normalized value. For instance, if an 8-bit value is desired, K may be set to 255 (a small sketch of this normalization appears just after this list).
  • FIG. 6 is a block diagram of an image processing apparatus according to an embodiment of the invention.
  • FIG. 6 is a block diagram of internal image processing components of an imaging device incorporating at least one embodiment of the invention including a DWT-based up-sampling unit.
  • a sensor 600 generates pixel components which are color/intensity values from some scene/environment.
  • the n-bit pixel values generated by sensor 600 are sent to a capture interface 610 .
  • Sensor 600 in the context relating to the invention will typically sense one of either R, G, or B components from one “sense” of an area or location.
  • the intensity value of each pixel is associated with only one of three (or four if G1 and G2 are considered separately) color planes and may form together a Bayer pattern raw image.
  • Capture interface 610 resolves the image generated by the sensor and assigns intensity values to the individual pixels. The set of all such pixels for the entire image is in a Bayer pattern in accordance with typical industry implementation of digital camera sensors.
  • a RAM 616 consists of the row and column indices of the dead pixels, which are supplied by the sensor. This RAM 616 helps to identify the location of dead pixels in relation to the captured image.
  • a primary compressor 628 receives companded sensor image data and performs image compression such as the DWT based compression, JPEG (Joint Photographic Experts Group) or Differential Pulse Code Modulation.
  • a RAM 629 can be used to store DWT coefficients both forward and inverse.
  • Primary compressor 628 can be designed to provide compressed channel by channel outputs to an encoding/storage unit 630 .
  • Encoding/storage unit 630 can be equipped to carry out a variety of binary encoding schemes, such as Modified Huffman Coding (using tables stored in RAM 631 ) or may store directly the compressed image to storage arrays 640 .
  • An image up-sampling unit 670 can be coupled through bus 660 to the storage arrays 640 to up-sample the image, compressed or direct from the sensor for display or other purposes.
  • Image up-sampling unit 670 can be designed to include DWT based up-sampling as described above and may incorporate the modules such as the architecture shown in FIGS. 4 ( b )- 4 ( c ) and a transpose circuit.
  • the compressor unit 628 is designed for DWT based image compression
  • an integrated forward and inverse DWT architecture may be incorporated in the compressor 628 such that up-sampling unit 670 initiates the inverse mode of the architecture and sends virtually constructed sub-band data from storage arrays 640 to compressor unit 628 .
  • the imaging unit may contain an on-board display sub-system 680 such as an LCD panel coupled to bus 660 .
  • One application of up-sampling unit 670 is to provide up-sampled image data to display sub-system 680 .
  • the up-sampling unit 670 may receive data from any stage of the image processing flow, even directly from the sensor prior to image compression if desired.
  • the up-sampled image will exhibit a very sharp and clear version of the compressed image.
  • Each of the RAM tables 616 , 629 and 631 can directly communicate with a bus 660 so that their data can be loaded and then later, if desired, modified. Further, those RAM tables and other RAM tables may be used to store intermediate result data as needed. When the data in storage arrays 640 or from up-sampling unit 670 is ready to be transferred external to the imaging apparatus of FIG. 6 it may be placed upon bus 660 for transfer. Bus 660 also facilitates the update of RAM tables 616 , 629 and 631 as desired.
  • FIG. 7 is a system diagram of one embodiment of the invention. Illustrated is a computer system 710 , which may be any general or special purpose computing or data processing machine such as a PC (personal computer), coupled to a camera 730 .
  • Camera 730 may be a digital camera, digital video camera, or any image capture device or imaging system, or combination thereof and is utilized to capture a sensor image of a scene 740 .
  • captured images are processed by an image processing circuit 732 so that they can be efficiently stored in an image memory unit 734 , which may be a ROM, RAM or other storage device such as a fixed disk.
  • image data in image memory unit 734 that is destined for computer system 710 , even if up-sampled, is enhanced in that the loss of image features due to traditional up-sampling techniques is greatly mitigated by better preserving edge features in the DWT based up-sampling process.
  • images are stored first and downloaded later. This allows the camera 730 to capture the next object/scene quickly without additional delay.
  • digital video camera especially one used for live videoconferencing, it is important that images not only be quickly captured, but quickly processed and transmitted out of camera 730 .
  • the invention in various embodiments, particularly in scaling operation, is well-suited to providing good fast throughput to other parts of the image processing circuit 732 so that the overall speed of transmitting image frames is increased.
  • Image up-sampling is carried out within the image processing circuit 732 in this embodiment of the invention. After the image is up-sampled, it may also be passed to a display system on the camera 730 such as an LCD panel, or to a display adapter 716 on the computer system. The procedure of DWT based up-sampling may be applied to any image whether captured by camera 730 or obtained elsewhere. Since the inverse and forward DWT are essentially filtering operations, one of ordinary skill in the art may program computer system 710 to perform DWT based up-sampling. This may be achieved using a processor 712 such as the Pentium® processor (a product of Intel Corporation) and a memory 711 , such as RAM, which is used to store/load instructions, addresses and result data as needed.
  • a processor 712 such as the Pentium® processor (a product of Intel Corporation)
  • a memory 711 such as RAM, which is used to store/load instructions, addresses and result data as needed.
  • up-sampling may be achieved in a software application running on computer system 710 rather than directly in hardware.
  • the application(s) used to generate scaled-up image pixels after download from camera 730 may be from an executable compiled from source code written in a language such as C++.
  • the instructions of that executable file, which correspond with instructions necessary to scale the image may be stored to a disk 718 , such as a floppy drive, hard drive or CD-ROM, or memory 711 .
  • the various embodiments of the invention may be implemented onto a video display adapter or graphics processing unit that provides up-sampling or image zooming.
  • Computer system 710 has a system bus 713 which facilitates information transfer to/from the processor 712 and memory 711 and a bridge 714 which couples to an I/O bus 715 .
  • I/O bus 715 connects various I/O devices such as a display adapter 716 , disk 718 and an I/O port 717 , such as a serial port.
  • I/O devices, buses and bridges can be utilized with the invention and the combination shown is merely illustrative of one such possible combination.
  • Image processing circuit 732 consists of ICs and other components which can execute, among other functions, the scaling up of the captured or compressed image.
  • the scaling operation may utilize image memory unit to store the original image of the scene 740 captured by the camera 730 . Further, this same memory unit can be used to store the up-sampled image data.
  • the scaled (and/or compressed) images stored in the image memory unit are transferred from image memory unit 734 to the I/O port 717 .
  • I/O port 717 uses the bus-bridge hierarchy shown (I/O bus 715 to bridge 714 to system bus 713 ) to temporarily store the scaled and compressed image data into memory 711 or, optionally, disk 718 .
  • the images are displayed on computer system 710 by suitable application software (or hardware), which may utilize processor 712 for its processing.
  • the image data may then be rendered visually using a display adapter 716 into a rendered/scaled image 750 .
  • the scaled image is shown as being twice the size of the original captured scene. This is desirable in many image applications where the original sensor capture size of a scene is smaller than the desired display or output size.
  • the image data in its compressed and scaled form may be communicated over a network or communication system to another node or computer system in addition to or exclusive of computer system 710 so that a videoconferencing session may take place.
  • a communication port in camera 730 that allows the image data to be transported directly to the other node(s) in a videoconferencing session.
  • image data that is scaled-up may be sent both to computer system 710 and transported over a network to other nodes.
  • the scaled image will have more visually accurate edge features than typical in scaling operations due to the enhancement in the scaling process by specifically and carefully selecting the DWT coefficients.
  • the end result will be a higher quality rendered up-sampled image 750 that is displayed on monitor 720 or other nodes in a videoconferencing session as compared with even typical up-sampling methods.
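As a footnote to the normalization formula mentioned in the list above, the mapping (U i − min)/(max − min)*K can be sketched in a few lines. This snippet is illustrative only (NumPy is assumed) and is not part of the patent text:

```python
import numpy as np

def normalize(u, k=255):
    """Map inverse-DWT result values into [0, k] using (u - min)/(max - min) * k."""
    u = np.asarray(u, dtype=float)
    lo, hi = u.min(), u.max()
    return (u - lo) / (hi - lo) * k        # assumes hi > lo; k=255 for 8-bit output

print(normalize([-1.0, 0.0, 3.0]))         # -> [  0.    63.75 255.  ]
```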

Abstract

A method comprising constructing virtual Discrete Wavelet Transform (DWT) sub-bands from an image without performing the DWT and then applying an inverse DWT upon the virtual sub-bands, the result of the inverse DWT representing an up-sampled version of the image. Alternatively, an apparatus comprising an interface configured to communicate image data, and an up-sampling unit, the up-sampling unit coupled to the interface to receive the image data, the up-sampling unit configured to construct virtual sub-band input data from the image data, the up-sampling unit configured to perform an inverse DWT upon the input data generating an up-sampled image therefrom. In an alternate embodiment, an apparatus comprising a computer readable medium having instructions which when executed perform constructing virtual Discrete Wavelet Transform (DWT) sub-bands from an image without performing the DWT, applying an inverse DWT upon the virtual sub-bands, the result of the inverse DWT representing an up-sampled version of the image.

Description

BACKGROUND OF THE INVENTION
1. Field of the Invention
The invention relates generally to image processing and computer graphics. More specifically, the invention relates to up-sampling or up-scaling of an image.
2. Description of the Related Art
In the art of imaging, it may be desirable to resize an image. Particularly, it may be desirable to scale up (up-sample) an image to make it larger if the image is too small for its intended use or application. For instance, a digital camera may capture an image in a small size of M pixel rows by N pixel columns. If the image is to be printed, it may be desirable to scale the image to R pixel rows by S pixel columns (R>M and/or S>N) such that the image covers the print area. In some digital cameras, a display panel such as an LCD (Liquid Crystal Display) is provided so that users can review in a quick fashion the pictures they have already taken or the contents of the picture they are about to take (i.e. what is in the focus area of the camera). The LCD panel, like any CRT (Cathode Ray Tube) monitoring device, has a maximum resolution that it can support, but unlike a CRT, the resolution cannot be modified by the supporting video sub-system (i.e. graphics card). For instance, in a CRT device, a maximum resolution of 640 pixels by 480 pixels also implies that lower resolutions could be provided with very little loss in visual quality. However, in an LCD panel, since there are a fixed number of very discretely observable pixels, an attempted change in resolution usually results in a highly blurred image.
When an image needs to be scaled-up or up-sampled, where the image size is increased, the degradation due to blurring and blocking in a LCD panel is severe. For instance, consider an image that is 100 by 100 pixels in size. If an LCD panel on which the image is to be displayed is 200 by 200 in its screen size in terms of fixed pixels available, then only one-quarter of the available screen is being utilized. If it is desired that the entire screen be utilized for displaying the image, then the image needs to be scaled-up by a 2:1 ratio.
One simple and traditional way of up-sampling is to merely “duplicate” pixels as needed. In this case, since a 2:1 up-sampling is desired, each pixel could be repeated three additional times such that the information that occupied one pixel, now occupies four pixels in a two-by-two block. This “fill” approach has clear speed advantages over any other method of up-sampling since no computation or image processing is involved. However, this approach guarantees that the resulting scaled image is fuzzier, less sharp and “blocky” where individual pixel squares are more readily discernible to the eye. Importantly, the scaled result will have edge features, which are critical to human perception of any image, that are also more blocky and less sharp.
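The “fill” approach just described amounts to repeating each pixel along both axes. A minimal illustrative sketch (not from the patent, assuming NumPy):

```python
import numpy as np

def replicate_upsample_2x(img):
    """2:1 up-sampling by pixel duplication: every pixel becomes a 2x2 block."""
    return np.repeat(np.repeat(img, 2, axis=0), 2, axis=1)

tiny = np.arange(6, dtype=float).reshape(2, 3)    # a toy 2x3 "image"
print(replicate_upsample_2x(tiny).shape)          # (4, 6)
```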
One traditional means of increasing the quality of the scaled-up image has been to use bi-linear interpolation. If a two to one up-sampling is desired then each pixel in the original image should be replaced by a block of four pixels. Consider, for example, the following original image:

$$\begin{pmatrix} X_A & X_B & X_C \\ X_D & X_E & X_F \end{pmatrix}$$
A bi-linear interpolation would average in two different directions to determine the scaled image data set. The scaled image under a bi-linear interpolation method may consist of:

$$\begin{pmatrix}
X_A & \tfrac{X_A+X_B}{2} & X_B & \tfrac{X_B+X_C}{2} & X_C & \cdots\\
\tfrac{X_A+X_D}{2} & \tfrac{X_A+X_B+X_D+X_E}{4} & \cdots & & &\\
X_D & \tfrac{X_D+X_E}{2} & X_E & & &\\
\vdots & & & & &
\end{pmatrix}$$
If the original image is a size M×N, then the scaled image would be of size M*N*4 in terms of total number of pixels in the respective data sets. This and other averaging methods may yield better results than filling, but still result in blurred details and rough edges that are not smoothly contoured on the image.
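For comparison, the bi-linear scheme shown in the matrix above keeps the original samples on even grid positions and fills the in-between positions with two- and four-neighbour averages. The sketch below is an illustrative rendering of that pattern; handling of the last row and column by simple replication is an assumption of this sketch, not something the patent specifies:

```python
import numpy as np

def bilinear_upsample_2x(img):
    """2:1 bi-linear up-sampling: originals on even positions, averages between."""
    m, n = img.shape
    out = np.zeros((2 * m, 2 * n), dtype=float)
    out[::2, ::2] = img                                        # original pixels
    out[::2, 1:-1:2] = (img[:, :-1] + img[:, 1:]) / 2.0        # horizontal averages
    out[1:-1:2, ::2] = (img[:-1, :] + img[1:, :]) / 2.0        # vertical averages
    out[1:-1:2, 1:-1:2] = (img[:-1, :-1] + img[:-1, 1:] +
                           img[1:, :-1] + img[1:, 1:]) / 4.0   # four-neighbour averages
    out[:, -1] = out[:, -2]                                    # replicate last column
    out[-1, :] = out[-2, :]                                    # replicate last row
    return out
```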
Typical up-sampling techniques are inadequate and lead to poor image quality. Thus, there is a need for an up-sampling technique that better preserves image quality. Further, since lower cost of computation, in terms of complexity, is crucial in devices such as digital cameras, the up-sampling technique should also be compute efficient, so that it can be utilized in such applications.
SUMMARY OF THE INVENTION
What is disclosed is a method comprising constructing virtual Discrete Wavelet Transform (DWT) sub-bands from an image without performing the DWT and then applying an inverse DWT upon the virtual sub-bands, the result of the inverse DWT representing an up-sampled version of the image. Alternatively, disclosed is an apparatus comprising an interface configured to communicate image data, and an up-sampling unit, the up-sampling unit coupled to the interface to receive the image data, the up-sampling unit configured to construct virtual sub-band input data from the image data, the up-sampling unit configured to perform an inverse DWT upon the input data generating an up-sampled image therefrom. In an alternate embodiment, what is disclosed is an apparatus comprising a computer readable medium having instructions which when executed perform constructing virtual Discrete Wavelet Transform (DWT) sub-bands from an image without performing the DWT, applying an inverse DWT upon the virtual sub-bands, the result of the inverse DWT representing an up-sampled version of the image.
BRIEF DESCRIPTION OF THE DRAWINGS
The objects, features and advantages of the method and apparatus for the present invention will be apparent from the following description in which:
FIG. 1 illustrates the sub-band(s) resulting from a forward DWT operation upon an image.
FIG. 2 is a flow diagram of DWT based up-sampling according to one embodiment of the invention.
FIG. 3 illustrates DWT based up-sampling according to one embodiment of the invention.
FIG. 4(a) shows a basic processing cell for computing a DWT operation.
FIG. 4(b) is an architecture for a one-dimensional inverse DWT.
FIG. 4(c) is an architecture for a one-dimensional inverse DWT having odd cell output.
FIG. 5 is a flow diagram of one embodiment of the invention.
FIG. 6 is a block diagram of an image processing apparatus according to an embodiment of the invention.
FIG. 7 is a system diagram of one embodiment of the invention.
DETAILED DESCRIPTION OF THE INVENTION
Referring to the figures, exemplary embodiments of the invention will now be described. The exemplary embodiments are provided to illustrate aspects of the invention and should not be construed as limiting the scope of the invention. The exemplary embodiments are primarily described with reference to block diagrams or flowcharts. As to the flowcharts, each block within the flowcharts represents both a method step and an apparatus element for performing the method step. Depending upon the implementation, the corresponding apparatus element may be configured in hardware, software, firmware or combinations thereof.
Up-sampling of an image is achieved, according to one embodiment of the invention, by applying an inverse Discrete Wavelet Transform (DWT) upon an image after approximating values for “virtual sub-bands” belonging to the image. FIG. 1 illustrates the sub-band(s) resulting from a forward DWT operation upon an image. The DWT is a “discrete” algorithm based upon Wavelet theory which utilizes one or more “basis” functions that can be used to analyze a signal, similar to the DCT (Discrete Cosine Transform) based upon sinusoidal basis functions. Advantageously, the DWT is better suited to representing edge features in images since Wavelets by nature are aperiodic and often jagged. The DWT approximates an input signal by discrete samples of a full continuous wavelet. Thus, with these discrete sample points, the DWT can be thought of as also being a filtering operation with well-defined coefficients. Unlike the Fourier transforms or averaging filters, the Wavelet coefficients can be selected to suit particularly the application, or type of input signal. The DWT chosen in at least one embodiment of the invention for image scaling is known as the 9-7 bi-orthogonal spline filters DWT. Since the DWT is discrete, the DWT can be implemented using digital logic such as Very Large Scale Integration (VLSI) circuits and thus can be integrated on a chip with other digital components. Where an imaging device or camera already utilizes a DWT for image compression, an inverse DWT can easily be implemented in the same device with little additional overhead and cost. The ability of the DWT to better approximate the edge features of an image makes it ideal for an up-sampling application. It is advantageous over interpolation-type scaling in that visually essential image features can be better reconstructed in the scaled-up image. Further, as shown and described below, an architecture for DWT-based up-sampling can be implemented efficiently for high data throughput, unlike Fourier or averaging techniques which require multiple cycles or iterations to generate a single output datum.
The essence of what is known commonly as a two-dimensional DWT is to decompose an input signal into four frequency (frequency refers to the high-pass (H) or low-pass (L) nature of the filtering) sub-bands. The two-dimensional DWT creates four sub-bands, LL, LH, HL and HH, with the number of data values in each being one-quarter of the input data. The DWT is a multi-resolution technique such that the result data from one iteration of the DWT may be subjected to another DWT iteration. Each “level” of resolution is obtained by applying the DWT upon the “LL” sub-band generated by the previous level DWT. This is possible because the LL sub-band contains almost all essential image features and can be considered a one-quarter scaled version of the previous level LL sub-band (or the original full image in a one-level DWT). With this capability of the DWT, each sub-band can further be divided into smaller and smaller sub-bands as is desired.
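For a concrete feel of this decomposition, the four sub-bands can be generated with an off-the-shelf wavelet library. The sketch below is illustrative only; it assumes the PyWavelets package, whose "bior4.4" wavelet is commonly used as the 9-7 (CDF 9/7) bi-orthogonal spline filter pair, and the "periodization" mode so that each sub-band holds exactly one-quarter of the data:

```python
import numpy as np
import pywt

image = np.random.rand(256, 256)     # stand-in for an M x N image
# pywt returns (approximation, (horizontal, vertical, diagonal detail));
# the names below follow the patent's LL/LH/HL/HH convention loosely.
LL, (LH, HL, HH) = pywt.dwt2(image, "bior4.4", mode="periodization")
print(LL.shape, LH.shape, HL.shape, HH.shape)    # each (128, 128)
```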
Each DWT resolution level k has 4 sub-bands, LLk, HLk, LHk and HHk. The first level of DWT resolution is obtained by performing the DWT upon the entire original image, while further DWT resolution is obtained by performing the DWT on the LL sub-band generated by the previous level k-1. The LLk sub-band contains enough image information to substantially reconstruct a scaled version of the image. The LHk, HLk and HHk sub-bands contain high-frequency noise and edge information which is less visually critical than that resolved into and represented in the LLk sub-band.
In general, level k sub-band(s) have the following properties (if k=1, LL0 refers to the original image):
LLk—contains a one-quarter scaled version of the LLk-1 sub-band.
LHk—contains noise and horizontal edge information from the LLk-1 sub-band image.
HLk—contains noise and vertical edge information from the LLk-1 sub-band image.
HHk—contains noise and diagonal edge information from the LLk-1 sub-band image.
The property of the DWT which allows most information to be preserved in the LLk sub-band can be exploited in performing up-sampling of an image. The DWT is a unitary transformation technique for multi-resolution hierarchical decomposition which permits an input signal that is decomposed by a DWT to be recovered with little loss by performing an inverse of the DWT. The coefficients representing the “forward” DWT described above have a symmetric relationship with coefficients for the inverse DWT. Up-sampling is performed in an embodiment of the invention by considering the original input image that is to be up-sampled as a virtual LL sub-band, approximating values for the other sub-bands and then applying the inverse DWT. This is based upon the premise that an inverse DWT, when applied to the four sub-bands generated at any level, will result in the recovery of the LL sub-band of the higher level from which it was resolved, or in the case of level 1 sub-bands will result in recovery of the original image. The nature of these coefficients and an architecture for carrying out an inverse DWT in order to achieve up-sampling are discussed below.
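The near-lossless recovery that this paragraph relies on can be checked numerically. Again, the choice of PyWavelets and its "bior4.4" (CDF 9/7) wavelet is an assumption made for illustration, not something mandated by the patent:

```python
import numpy as np
import pywt

image = np.random.rand(128, 128)
coeffs = pywt.dwt2(image, "bior4.4", mode="periodization")       # forward 2-D DWT
restored = pywt.idwt2(coeffs, "bior4.4", mode="periodization")   # inverse 2-D DWT
print(np.max(np.abs(restored - image)))   # on the order of floating-point round-off
```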
FIG. 2 is a flow diagram of DWT based up-sampling according to one embodiment of the invention.
In DWT based up-sampling, the first step is to consider the input image as the LL sub-band (step 200). If an image has M rows by N columns of pixels, then these M*N values are considered as a virtual LL sub-band LLv. Then the virtual sub-bands LHv, HLv and HHv, each of which needs to be dimensioned to have M*N values, must be approximated or constructed (step 210). One way of constructing these sub-band data values may be to analyze/detect the input image for horizontal, vertical and diagonal edges (see above which direction of edge is appropriate for each sub-band HL, LH and HH) and provide non-zero values where edges occur and approximate all other values to zero. In one embodiment of the invention, instead of performing any analysis or detection, all values in the virtual sub-bands LHv, HLv and HHv are approximated to zero. If the 9-7 bi-orthogonal splines DWT is considered, then an approximation to zero of all other sub-band data values when the original image or higher level LL sub-band is being recovered through an inverse DWT will result in the recovered image or higher level sub-band being nearly identical (from a human vision perspective) with its original state prior to performing the forward DWT operation. With all of the sub-bands, LLv, LHv, HLv and HHv, thus constructed, an inverse two-dimensional DWT may be performed to generate the up-scaled image (step 220). The up-scaled image will exhibit better visual clarity than that available through filling, averaging or interpolation based up-sampling techniques. The up-sampling result will have M*2 rows and N*2 columns of pixel values, which together constitute the up-sampled image.
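A compact software rendering of the three steps of FIG. 2 is to pass the original image to an inverse 2-D DWT as the LL band together with zero-valued detail bands. The sketch assumes PyWavelets and its "bior4.4" (CDF 9/7) wavelet; intensity normalization (discussed later in this document) is omitted:

```python
import numpy as np
import pywt

def dwt_upsample_2x(img, wavelet="bior4.4"):
    """Treat img as the virtual LL band (step 200), set LH, HL and HH to
    zero (step 210), then apply the inverse 2-D DWT (step 220)."""
    zeros = np.zeros_like(img, dtype=float)
    coeffs = (img.astype(float), (zeros, zeros, zeros))
    return pywt.idwt2(coeffs, wavelet, mode="periodization")

img = np.random.rand(100, 100)      # M x N input
print(dwt_upsample_2x(img).shape)   # (200, 200): M*2 rows by N*2 columns
```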
FIG. 3 illustrates DWT based up-sampling according to one embodiment of the invention.
An original image I that has M rows by N columns of pixels in its data set can be up-sampled by a factor of two by following the procedure outlined in FIG. 2. This image is composed of M*N pixels Iij where i, representing the row number of the pixel, ranges from 1 to M and j, representing the column number of the pixel, ranges from 1 to N. Thus, the original image has pixels I1,1, I1,2, etc. on its first row and pixels I2,1, I2,2, etc. on its second row and so on. The entire image data set I, according to an embodiment of the invention, will comprise the virtual sub-band LLv. Since data for the other virtual sub-bands that need to be constructed is unavailable (since no real DWT operation was performed), this data must be approximated or constructed. According to an embodiment of the invention, the data values for the other virtual sub-bands may all be approximated to zero. The combined data set for all four sub-bands will have M*2 rows and N*2 columns of pixel values which can then be subjected to a two-dimensional inverse DWT operation. The up-sampled image U will have M*2 rows and N*2 columns of pixels that are virtually identical in terms of perception of the quality and clarity of the image when compared with the original image I. The up-sampled image U is shown to have pixel values of Ur,s, resulting from the two-dimensional inverse DWT being applied upon the virtual sub-bands, where r, representing the row number of the pixel, ranges from 1 to M*2 and s, representing the column number of the pixel, ranges from 1 to N*2. The data set of all such values Ur,s represents a two-to-one scaled-up version of the input image I.
FIG. 4(a) shows a basic processing cell for computing a DWT operation. FIG. 4(a), which shows the basic processing cell Dk 400, is first described to aid in understanding the architecture of FIG. 4(b) which computes a one-dimensional inverse DWT. Referring to FIG. 4(a), given a filter coefficient c (either high-pass or low-pass), an intermediate output Lk is determined by the following expression: Lk=(pk+qk)*c. In the expression for Lk, the term qk represents the input virtual sub-band data which is the subject of the inverse DWT, while the term pk-1 refers to input data propagated from the coupled processing cell on the previous clock cycle and pk to the input data from the current clock cycle, respectively. The input pk is passed through to output pk-1 from a cell Dk to the previous cell Dk-1 in the array. Thus the terms pk and pk-1 will be referred to hereinafter as “propagated inputs.” The basic processing cell 400 of FIG. 4(a) may be repeatedly built and selectively coupled to perform the inverse DWT computing architecture as shown in FIG. 4(b). This processing cell may be built in hardware by coupling an adder with a multiplier and registers that hold the inverse DWT filter coefficients.
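The behaviour of the basic cell can be restated as a two-line function. This is only a software analogue of the hardware cell described above, with names chosen here for illustration:

```python
def processing_cell(q_k, p_k, c):
    """Basic cell D_k: emit L_k = (p_k + q_k) * c and propagate p_k onward."""
    L_k = (p_k + q_k) * c
    return L_k, p_k   # (intermediate output, value passed to the neighbouring cell D_{k-1})
```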
FIG. 4(b) is an architecture for a one-dimensional inverse DWT.
A forward DWT may be either one-dimensional or two-dimensional in nature. A one-dimensional forward DWT (one that is performed either row by row or column by column only) would result in only two sub-bands—LFS (Low Frequency Sub-band) and HFS (High Frequency Sub-band). If the direction of the one-dimensional forward DWT is row-wise, then two vertical sub-bands that are dimensioned M rows by N/2 columns are created when applied to an image. The LFS sub-band from a row-by-row forward DWT is a tall and skinny distorted version of the original image. If the direction of the one-dimensional forward DWT is column-wise, then two horizontal sub-bands that are dimensioned M/2 rows by N columns are created when applied to an image. The LFS sub-band from a column-by-column forward DWT is a wide and fat distorted version of the original image. When these two one-dimensional forward DWT processes are combined, a two-dimensional forward DWT is embodied. Likewise, when performing the inverse DWT, a row-wise inverse DWT and column-wise inverse DWT can be combined to create a two-dimensional inverse DWT, which is desirable in scaling an image proportionally (without axis distortion). Thus, as shown in FIG. 5, the result from a row-wise inverse DWT may be transposed such that another one-dimensional inverse DWT operation upon the result data row-wise will actually be column-wise. The combination of a row-wise and column-wise one-dimensional inverse DWTs will yield a 2 to 1 up-scaled version of the virtual LL sub-band (the original image). Thus, a two-dimensional inverse DWT may be built by repeating or re-utilizing one-dimensional inverse DWT modules, such as that shown in FIG. 4(b).
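The row-wise pass / transpose / second row-wise pass construction can be mirrored directly in software. The helper below applies a 1-D inverse DWT to each row and is then reused on the transposed intermediate result; PyWavelets and "bior4.4" are illustrative assumptions, and which detail band pairs with which pass is immaterial for the virtual-sub-band case since those bands are all zero:

```python
import numpy as np
import pywt

def idwt_rows(lfs, hfs, wavelet="bior4.4"):
    """Row-wise 1-D inverse DWT: each output row is twice the length of its inputs."""
    return np.array([pywt.idwt(a, c, wavelet, mode="periodization")
                     for a, c in zip(lfs, hfs)])

def idwt2_separable(LL, LH, HL, HH, wavelet="bior4.4"):
    """2-D inverse DWT from two 1-D passes: row-wise, transpose, row-wise again."""
    low = idwt_rows(LL, LH, wavelet)             # first (row-wise) pass, low half
    high = idwt_rows(HL, HH, wavelet)            # first (row-wise) pass, high half
    return idwt_rows(low.T, high.T, wavelet).T   # second pass acts column-wise via the transpose
```

For the up-sampling case, LL is the original M by N image and the other three arguments are zero arrays of the same shape, giving an M*2 by N*2 result.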
To obtain the up-sampled image data Ur,s, an intermediate data set U′i, according to an embodiment of the invention, is first generated by applying a one-dimensional inverse DWT module to the virtually constructed sub-band data. This intermediate data set U′i is represented by the expression

$$U'_i = \sum_n \left[\bar{h}_{2n-i}\,a_n + \bar{g}_{2n-i}\,c_n\right] = \sum_n \bar{h}_{2n-i}\,a_n + \sum_n \bar{g}_{2n-i}\,c_n,$$
where an are the constructed (virtual) LFS data and cn are the constructed (virtual) HFS data. The LFS data may be constructed by concatenating the data in the virtual sub-bands LLv and LHv, while the HFS data may be constructed by concatenating the HLv and HHv sub-bands. The inverse and forward 9-7 bi-orthogonal splines DWT have certain symmetric properties which allow for efficient and systolic (i.e., parallel and repeated) processing cells such as those shown in FIG. 4(a) to be utilized. The inverse DWT has a set of inverse high-pass filter coefficients {overscore (g)}k and a set of inverse low-pass filter coefficients {overscore (h)}k. The relation of these coefficients is discussed in a related patent application entitled “An Integrated Systolic Architecture for Decomposition and Reconstruction of Signals Using Wavelet Transforms,” Ser. No. 08/767,976, filed Dec. 17, 1996. We can split the U′i representation into a sum of two summations:

$$U'_i(1) = \sum_n \bar{h}_{2n-i}\,a_n, \qquad U'_i(2) = \sum_n \bar{g}_{2n-i}\,c_n.$$
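Written out directly, the two summations above can be evaluated with a brute-force loop. The dictionaries of non-zero filter taps are placeholders into which the actual 9-7 reconstruction coefficients would be substituted; the coefficient values themselves are not reproduced here:

```python
def inverse_dwt_1d_direct(a, c, hbar, gbar):
    """Direct evaluation of U'_i = sum_n hbar[2n-i]*a[n] + sum_n gbar[2n-i]*c[n].
    hbar and gbar map the (few) non-zero filter indices to coefficient values;
    indices missing from the dictionaries contribute nothing."""
    out = []
    for i in range(2 * len(a)):                # the 1-D inverse doubles the length
        s = sum(hbar.get(2 * n - i, 0.0) * a_n for n, a_n in enumerate(a))
        s += sum(gbar.get(2 * n - i, 0.0) * c_n for n, c_n in enumerate(c))
        out.append(s)
    return out
```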
The even terms of U′i(1) and U′i(2), i.e., U′2j(1) and U′2j(2) for j=0, 1, . . . , n/2−1, using these filter coefficients are expanded as:

$$\begin{aligned}
U'_0(1) &= \bar{h}_0 a_0 + \bar{h}_2 a_1, & U'_0(2) &= \bar{g}_0 c_0 + \bar{g}_2 c_1 + \bar{g}_4 c_2,\\
U'_2(1) &= \bar{h}_{-2} a_0 + \bar{h}_0 a_1 + \bar{h}_2 a_2, & U'_2(2) &= \bar{g}_{-2} c_0 + \bar{g}_0 c_1 + \bar{g}_2 c_2 + \bar{g}_4 c_3,\\
U'_4(1) &= \bar{h}_{-2} a_1 + \bar{h}_0 a_2 + \bar{h}_2 a_3, & U'_4(2) &= \bar{g}_{-2} c_1 + \bar{g}_0 c_2 + \bar{g}_2 c_3 + \bar{g}_4 c_4,\\
&\;\;\vdots & &\;\;\vdots\\
U'_{n-6}(1) &= \bar{h}_{-2} a_{n/2-4} + \bar{h}_0 a_{n/2-3} + \bar{h}_2 a_{n/2-2}, & U'_{n-6}(2) &= \bar{g}_{-2} c_{n/2-4} + \bar{g}_0 c_{n/2-3} + \bar{g}_2 c_{n/2-2} + \bar{g}_4 c_{n/2-1},\\
U'_{n-4}(1) &= \bar{h}_{-2} a_{n/2-3} + \bar{h}_0 a_{n/2-2} + \bar{h}_2 a_{n/2-1}, & U'_{n-4}(2) &= \bar{g}_{-2} c_{n/2-3} + \bar{g}_0 c_{n/2-2} + \bar{g}_2 c_{n/2-1},\\
U'_{n-2}(1) &= \bar{h}_{-2} a_{n/2-2} + \bar{h}_0 a_{n/2-1}, & U'_{n-2}(2) &= \bar{g}_{-2} c_{n/2-2} + \bar{g}_0 c_{n/2-1}.
\end{aligned}$$
The inverse 9-7 bi-orthogonal splines DWT filter coefficients like their forward counterparts have certain symmetric properties allowing associative grouping. One property of the inverse high-pass coefficients is $\bar{g}_n=(-1)^n h_{1-n}$, where $h_k$ are forward DWT coefficients. Additionally, since the forward coefficients have the property $h_n=h_{-n}$, the inverse high-pass coefficients also have a property such that $\bar{g}_n=(-1)^n h_{n-1}$. Thus, for n=0, $\bar{g}_0=h_{-1}$. For n=2, since $\bar{g}_2=h_1$ and $h_{-1}=\bar{g}_0$, then $\bar{g}_2=\bar{g}_0$. Likewise, for n=4, $\bar{g}_4=h_3=\bar{g}_{-2}$. The inverse low-pass coefficients have the property $\bar{h}_n=\bar{h}_{-n}$, such that $\bar{h}_2=\bar{h}_{-2}$. Thus, the even-numbered outputs U′2j can be computed using only four coefficients, $\bar{h}_0$, $\bar{h}_2$, $\bar{g}_2$ and $\bar{g}_4$. Similarly, for odd-numbered outputs U′2j-1, it can be shown by the same filter properties discussed above that only five coefficients, $\bar{h}_3$, $\bar{h}_1$, $\bar{g}_5$, $\bar{g}_3$ and $\bar{g}_1$, can be used.
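To make the four-coefficient grouping of the even outputs concrete, the expansion above can be coded directly. The coefficient values below are placeholders only (not the real 9-7 taps), chosen merely to respect the stated symmetries, and boundary outputs such as U′0 simply drop the out-of-range samples:

```python
# Placeholder values only -- substitute the real 9-7 reconstruction taps.
hbar = {-2: 0.1, 0: 0.8, 2: 0.1}              # hbar[-2] == hbar[2]
gbar = {-2: 0.2, 0: 0.3, 2: 0.3, 4: 0.2}      # gbar[0] == gbar[2], gbar[-2] == gbar[4]

def even_output(j, a, c):
    """U'_{2j}, assembled from the four distinct coefficient values used by
    the even output generating block of FIG. 4(b)."""
    def at(seq, k):                            # out-of-range samples contribute zero
        return seq[k] if 0 <= k < len(seq) else 0.0
    u1 = hbar[-2] * at(a, j - 1) + hbar[0] * at(a, j) + hbar[2] * at(a, j + 1)
    u2 = (gbar[-2] * at(c, j - 1) + gbar[0] * at(c, j) +
          gbar[2] * at(c, j + 1) + gbar[4] * at(c, j + 2))
    return u1 + u2
```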
The relationships described above show how an inverse DWT may be computed. The architecture in FIG. 4(b) for computing the inverse DWT takes two input sequences ai and ci, which represent the low-frequency sub-band and the high-frequency sub-band inputs, respectively. The inverse architecture receives two inputs and produces one output. This architecture is not uniform in that the odd-numbered outputs, i.e., U′1, U′3, U′5 . . . , require five processing cells, one for each coefficient, whereas the even-numbered outputs, i.e., U′0, U′2, U′4 . . . , require only four processing cells. The odd-numbered and even-numbered outputs may be generated on alternating clock cycles, odd and even, respectively.
Thus, the inverse DWT architecture is composed of two distinct blocks—an even output generating block 402 and an odd output generating block 450. Even output generating block 402 is further composed of two sub-circuits—an even low frequency sub-band (LFS) sub-circuit 410 and an even high frequency sub-band (HFS) sub-circuit 420. Even sub-circuit 410 consists of two processing cells 415 and 417, each of which is composed of a multiplier and an adder. The processing cells 415, 417, 425 and 427 operate similarly to the basic processing cell 400 shown in FIG. 4(a), accepting two inputs, summing them and then multiplying that sum by a coefficient. For instance, processing cell 415 outputs a term such that ai is first added to the propagated input from processing cell 417, with that sum multiplied by {overscore (h)}2. Likewise, for sub-circuit 420, processing cell 425 outputs a term to adder/controller 430 which is the product of {overscore (g)}4 and the sum of the input ci and the propagated input from processing cell 427. Processing cell 417 receives 0 as one input and ai-1 as the other input, since delay element 412 holds the value given to it on one clock cycle and transmits it on the next clock cycle.
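As a functional reference (not the hardware itself), the behavior of such a processing cell can be sketched as follows; the name and signature are illustrative only:

```python
def processing_cell(q_in, p_in, coeff):
    """Basic cell of FIG. 4(a): add the direct input and the propagated
    input, then multiply the sum by the cell's filter coefficient."""
    # e.g., cell 415 at i=1: processing_cell(a1, 0.0, h_bar_2) -> h_bar_2 * a1
    return coeff * (q_in + p_in)
```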
Even output generating block 402 operates as follows. At i=0, a0 is propagated to delay 412, and c0 to delay 422. Though a0 and c0 are also input to cells 415 and 425, respectively, adder/controller 430 waits until the third clock cycle (i=2), when non-zero propagated inputs are available, to output U′0. At i=0, cells 417 and 427 have outputs of 0, since the initial values released by delays 412, 424 and 422 are set to zero. At i=1, delay 412 releases a0 to the pi input of cell 417, while a1 is held at delay 412 and input to cell 415. As a result, cell 415 generates the term {overscore (h)}2a1 and cell 417 generates {overscore (h)}0a0. These outputs are sent to adder/controller 430 but are held (latched) until the next clock cycle, i=2. At i=1, though cells 425 and 427 generate the terms c1{overscore (g)}4 and c0{overscore (g)}2, respectively, these terms are ignored (cleared) by adder/controller 430 since, according to the relationship defined above, the first output U′0 utilizes the c2 input data.
At i=2, the third clock cycle, delay 424 releases c0 to the pi (propagated) input of cell 427 and delay 422 releases c1 to the qi input of cell 427 (for a description of qi and pi, see FIG. 4(a) and associated text). Thus, cell 427 generates the term (c1+c0){overscore (g)}2. Cell 425 generates c2{overscore (g)}4. As described earlier, the outputs of cells 415 and 417 from the previous clock were held at adder/controller 430 and are now summed, at i=2, with the terms generated by cells 425 and 427. Again, at i=2, even though cells 415 and 417 generate the terms (a0+a2){overscore (h)}2 and a1{overscore (h)}0, respectively, these terms are held one clock cycle. Instead, the i=2 outputs of cells 425 and 427, which are c2{overscore (g)}4 and (c0+c1){overscore (g)}2, respectively, are summed with the i=1 outputs of cells 415 and 417, which are {overscore (h)}0a0 and {overscore (h)}2a1. Hence, adder/controller 430 generates the first output U′0={overscore (h)}0a0+{overscore (h)}2a1+c0{overscore (g)}2+c1{overscore (g)}2+c2{overscore (g)}4.
Thus, for each clock cycle i after i=2 (the third clock cycle), adder/controller 430 receives the current outputs of sub-circuit 420 and adds them to the previous clock cycle's outputs from sub-circuit 410. Additionally, adder/controller 430 receives the current outputs of sub-circuit 410, holding them until the next clock cycle. FIG. 4(c) shows an odd output generating block 450 which requires five processing cells—465, 467, 475, 477 and 479. The processing cells 465, 467, 475, 477 and 479 operate similarly to the processing cell 400 shown in FIG. 4(a). The delay elements 462, 464, 472 and 474 hold their inputs for one clock cycle and release them on the next clock cycle. Each cell has an adder and a multiplier and receives a propagated input from the cell to which it is connected.
Odd output generating block 450 operates as follows. At i=0, a0 is propagated to cell 465 and is held for one clock cycle at delay 462, while cell 475 receives c0. At i=1, delay 462 releases a0 to cell 467, while delay 472 releases c0 to cell 477. Also at i=1, a1 and c1 are input to cells 465 and 475, respectively. At i=2, cell 465 receives a2, and cell 467 receives a1 as its qi input and a0 as its pi input. Thus, cell 465 generates the term a2{overscore (h)}3 and cell 467 generates (a1+a0){overscore (h)}1. These outputs are sent to adder/controller 480 but are held for one clock cycle before being summed with the outputs of cells 475, 477 and 479. At i=2, the outputs of cells 475, 477 and 479 are ignored by adder/controller 480.
At i=3, c3 is input to cell 475, cell 477 receives c2 from delay 472, cell 479 receives c1 as its propagated input, and cell 477 receives c0 as its propagated input. Thus, cell 475 generates the term c3{overscore (g)}5, cell 477 generates the term (c0+c2){overscore (g)}3, and cell 479 generates c1{overscore (g)}1. These outputs are received by adder/controller 480, which adds the i=3 outputs of cells 475, 477 and 479 to the latched i=2 outputs of cells 465 and 467 from the previous clock cycle. Hence, adder/controller 480 generates the second output (the first odd output) U′1={overscore (h)}1a0+{overscore (h)}1a1+{overscore (h)}3a2+{overscore (g)}3c0+{overscore (g)}3c2+{overscore (g)}5c3+{overscore (g)}1c1.
Thus, for each clock cycle i after i=3 (the fourth clock cycle), adder/controller 480 receives the current outputs of cells 475, 477 and 479 and adds them to the previous clock cycle's outputs of cells 465 and 467. Additionally, adder/controller 480 receives the current clock cycle's outputs of cells 465 and 467, holding them until the next clock cycle. With a set of result data U′i thus obtained, these values can be transposed and fed back in as inputs for another iteration of the one-dimensional inverse DWT. The intermediate output U′0 corresponds to a data result positioned at row 1 and column 1 in the up-scaled image space, while U′1 corresponds to a data result positioned at row 1 and column 2, since the input data from the LFS and HFS are fed into the architecture of FIG. 4(b) in a row-wise manner. The last entry of the first row in the up-scaled image space would be U′N*2-1, while U′N*2 is positioned in the second row and first column of the M*2-row by N*2-column up-scaled image space.
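For clarity, the row-wise placement just described can be written as a small index-mapping sketch (illustrative only; the function name is not from the patent):

```python
def upscaled_position(i, n_up):
    """Map intermediate result index i to its (row, column), both 1-indexed,
    in an up-scaled image space whose rows hold n_up = N * 2 entries.
    U'_0 -> (1, 1), U'_1 -> (1, 2), U'_{N*2-1} -> (1, N*2), U'_{N*2} -> (2, 1).
    """
    return i // n_up + 1, i % n_up + 1
```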
Once all the intermediate data U′i, where i ranges from 0 to M*2*N*2−1, are generated, these values may be transposed by a matrix transpose circuit, which one of ordinary skill in the art can readily implement, such that the row-wise data becomes column-wise. The transposed data set TU′i is then used as the input data for another iteration of the one-dimensional inverse DWT, fed back into a module with an architecture identical or similar to that of FIG. 4(b). The result of applying the one-dimensional inverse DWT to the data set TU′i is Ui (or Ur,s in row-column representation), the up-scaled image data. This process is summarized in FIG. 5.
FIG. 5 is a flow diagram of one embodiment of the invention.
The methodology for discrete wavelet transform (DWT) based up-sampling of an image involves a step-wise application of the inverse DWT. Once the four virtual sub-bands are constructed, the first step in implementing a two-dimensional inverse DWT is to consider the LLv and HLv sub-bands as the LFS data, and the LHv and HHv sub-bands as the HFS data for the one-dimensional inverse DWT (block 505). An inverse DWT in one dimension is applied row-wise to the LFS and HFS data (block 510). The M*N*4 outputs resulting from this first iteration of the inverse DWT (labeled U′i in the FIGS. 4(b)-4(c) example) may be stored into an image array which may be a memory or other storage means. Next, the outputs of the row-wise DWT are transposed so that columns become rows and rows become columns in the intermediate output data U′i (block 520). This transposing may be performed simultaneously with the storing of the intermediate output result U′i. Next, the transposed data is subjected to the one-dimensional inverse DWT of block 510, which now operates in a column-wise fashion since the data has been transposed (block 530); a row-wise DWT on the transposed data is essentially column-wise. The resulting data set Ui from block 530 contains the pixel values of a 2:1 up-scaled version of the original image. This data set Ui may be stored or displayed as an up-scaled image (block 540). In some instances, normalization may be needed to map the larger data values that can occur in the inverse DWT operation back into the desired pixel range. Normalization of the data result Ui may be achieved by the following formula: (Ui−min)/(max−min)*K, where min is the minimum result value, max the maximum result value and K the maximum desired normalized value. For instance, if an 8-bit value is desired, K may be set to 255.
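The flow of FIG. 5 can be sketched in software as follows. This is a minimal illustration, not the patent's hardware: the filter taps are approximate, commonly cited 9-7 bi-orthogonal synthesis values, out-of-range filter indices are simply dropped rather than handled by symmetric extension, and all function names are illustrative. Because the virtual LH, HL and HH sub-bands are zero, the high-frequency input of each one-dimensional pass is simply a zero vector.

```python
import numpy as np

# Approximate 9-7 bi-orthogonal (CDF 9/7) taps standing in for the patent's
# h_bar (inverse low-pass) and g_bar (inverse high-pass); conventions vary.
H_BAR = {-3: -0.091271763, -2: -0.057543526, -1: 0.591271763, 0: 1.115087052,
          1: 0.591271763,  2: -0.057543526,  3: -0.091271763}
H_FWD = {-4: 0.026748757, -3: -0.016864118, -2: -0.078223267, -1: 0.266864118,
          0: 0.602949018,  1: 0.266864118,   2: -0.078223267,  3: -0.016864118,
          4: 0.026748757}
G_BAR = {n: (-1) ** n * H_FWD[n - 1] for n in range(-3, 6)}  # g_bar_n = (-1)^n h_{n-1}

def inverse_dwt_1d(a, c):
    """U'_i = sum_n h_bar[2n-i]*a_n + sum_n g_bar[2n-i]*c_n (zero-padded ends)."""
    half = len(a)
    out = np.zeros(2 * half)
    for i in range(2 * half):
        out[i] = sum(H_BAR.get(2 * n - i, 0.0) * a[n] +
                     G_BAR.get(2 * n - i, 0.0) * c[n] for n in range(half))
    return out

def dwt_upsample_2x(image):
    """Blocks 505-540: virtual sub-bands, row-wise pass, transpose, second pass."""
    m, n = image.shape
    zeros_row, zeros_col = np.zeros(n), np.zeros(m)
    # Row-wise pass: LLv rows are the low-frequency input, zero rows the high.
    inter = np.array([inverse_dwt_1d(image[r], zeros_row) for r in range(m)])  # M x 2N
    # Transpose and apply the same one-dimensional inverse DWT again.
    up = np.array([inverse_dwt_1d(inter[:, k], zeros_col) for k in range(2 * n)]).T  # 2M x 2N
    # Normalization: (U_i - min) / (max - min) * K with K = 255.
    return (up - up.min()) / (up.max() - up.min()) * 255.0

demo = dwt_upsample_2x(np.arange(16.0).reshape(4, 4))
print(demo.shape)  # (8, 8): a 2:1 up-scaled version of the 4x4 input
```

Because the high-frequency inputs are all zero, each pass reduces to filtering the LL data with the symmetric inverse low-pass taps, which is why the up-scaled result avoids the blockiness of simple pixel replication.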
FIG. 6 is a block diagram of an image processing apparatus according to an embodiment of the invention.
FIG. 6 is a block diagram of the internal image processing components of an imaging device incorporating at least one embodiment of the invention, including a DWT-based up-sampling unit. In the exemplary circuit of FIG. 6, a sensor 600 generates pixel components which are color/intensity values from some scene/environment. The n-bit pixel values generated by sensor 600 are sent to a capture interface 610. Sensor 600, in the context relating to the invention, will typically sense one of either R, G, or B components from one “sense” of an area or location. Thus, the intensity value of each pixel is associated with only one of three (or four, if G1 and G2 are considered separately) color planes, and the pixels may together form a Bayer pattern raw image. Capture interface 610 resolves the image generated by the sensor and assigns intensity values to the individual pixels. The set of all such pixels for the entire image is in a Bayer pattern in accordance with typical industry implementation of digital camera sensors.
It is typical in any sensor device that some of the pixel cells in the sensor plane may not respond properly to the lighting condition in the scene/environment. As a result, the pixel values generated from these cells may be defective. These pixels are called “dead pixels.” The “pixel substitution” unit 615 replaces each dead pixel with the immediately preceding valid pixel in the row. A RAM 616 holds the row and column indices of the dead pixels, which are supplied by the sensor. This RAM 616 helps to identify the location of dead pixels in relation to the captured image.
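A minimal software sketch of this substitution rule (names are illustrative; the dead-pixel list plays the role of the indices held in RAM 616):

```python
import numpy as np

def substitute_dead_pixels(raw, dead_pixels):
    """Replace each dead pixel with the nearest valid pixel to its left
    in the same row, mirroring the pixel substitution unit described above."""
    out = raw.copy()
    for r, c in sorted(dead_pixels):   # left-to-right so runs of dead pixels propagate
        if c > 0:
            out[r, c] = out[r, c - 1]
    return out

# Example: a 3x4 raw fragment with two dead cells marked as 0.
raw = np.array([[10, 12,  0, 14],
                [11,  0, 13, 15],
                [10, 12, 14, 16]])
print(substitute_dead_pixels(raw, [(0, 2), (1, 1)]))
```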
A primary compressor 628 receives companded sensor image data and performs image compression such as DWT-based compression, JPEG (Joint Photographic Experts Group) compression or Differential Pulse Code Modulation. A RAM 629 can be used to store DWT coefficients, both forward and inverse. Primary compressor 628 can be designed to provide compressed channel-by-channel outputs to an encoding/storage unit 630. Encoding/storage unit 630 can be equipped to carry out a variety of binary encoding schemes, such as Modified Huffman Coding (using tables stored in RAM 631), or may store the compressed image directly to storage arrays 640.
An image up-sampling unit 670 can be coupled through bus 660 to the storage arrays 640 to up-sample the image, whether compressed or direct from the sensor, for display or other purposes. Image up-sampling unit 670 can be designed to include DWT-based up-sampling as described above and may incorporate modules such as the architecture shown in FIGS. 4(b)-4(c) and a transpose circuit. Alternatively, where the compressor unit 628 is designed for DWT-based image compression, an integrated forward and inverse DWT architecture may be incorporated in the compressor 628 such that up-sampling unit 670 initiates the inverse mode of the architecture and sends virtually constructed sub-band data from storage arrays 640 to compressor unit 628. The imaging unit may contain an on-board display sub-system 680, such as an LCD panel, coupled to bus 660. One application of up-sampling unit 670 is to provide up-sampled image data to display sub-system 680. The up-sampling unit 670 may receive data from any stage of the image processing flow, even directly from the sensor prior to image compression if desired. Advantageously, the up-sampled image will exhibit a very sharp and clear version of the compressed image.
Each of the RAM tables 616, 629 and 631 can directly communicate with a bus 660 so that their data can be loaded and then later, if desired, modified. Further, those RAM tables and other RAM tables may be used to store intermediate result data as needed. When the data in storage arrays 640 or from up-sampling unit 670 is ready to be transferred external to the imaging apparatus of FIG. 6, it may be placed upon bus 660 for transfer. Bus 660 also facilitates the update of RAM tables 616, 629 and 631 as desired.
FIG. 7 is a system diagram of one embodiment of the invention. Illustrated is a computer system 710, which may be any general or special purpose computing or data processing machine such as a PC (personal computer), coupled to a camera 730. Camera 730 may be a digital camera, digital video camera, or any image capture device or imaging system, or combination thereof, and is utilized to capture a sensor image of a scene 740. Essentially, captured images are processed by an image processing circuit 732 so that they can be efficiently stored in an image memory unit 734, which may be a ROM, RAM or other storage device such as a fixed disk. The image contained within image memory unit 734 that is destined for computer system 710, even if up-sampled, is enhanced in that the loss of image features associated with traditional up-sampling techniques is greatly mitigated by better preserving edge features in the DWT-based up-sampling process. In most digital cameras that can perform still imaging, images are stored first and downloaded later. This allows the camera 730 to capture the next object/scene quickly without additional delay. However, in the case of a digital video camera, especially one used for live videoconferencing, it is important that images not only be quickly captured, but quickly processed and transmitted out of camera 730. The invention, in its various embodiments, particularly in the scaling operation, is well suited to providing fast throughput to other parts of the image processing circuit 732 so that the overall speed of transmitting image frames is increased. Image up-sampling is carried out within the image processing circuit 732 in this embodiment of the invention. After the image is up-sampled, it may also be passed to a display system on the camera 730, such as an LCD panel, or to a display adapter 716 on the computer system. The procedure of DWT-based up-sampling may be applied to any image, whether captured by camera 730 or obtained elsewhere. Since the inverse and forward DWT are essentially filtering operations, one of ordinary skill in the art may program computer system 710 to perform DWT-based up-sampling. This may be achieved using a processor 712 such as the Pentium® processor (a product of Intel Corporation) and a memory 711, such as RAM, which is used to store/load instructions, addresses and result data as needed. Thus, in an alternative embodiment, up-sampling may be achieved in a software application running on computer system 710 rather than directly in hardware. The application(s) used to generate scaled-up image pixels after download from camera 730 may be an executable compiled from source code written in a language such as C++. The instructions of that executable file, which correspond with the instructions necessary to scale the image, may be stored to a disk 718, such as a floppy drive, hard drive or CD-ROM, or to memory 711. Further, the various embodiments of the invention may be implemented in a video display adapter or graphics processing unit that provides up-sampling or image zooming.
Computer system 710 has a system bus 713 which facilitates information transfer to/from the processor 712 and memory 711 and a bridge 714 which couples to an I/O bus 715. I/O bus 715 connects various I/O devices such as a display adapter 716, disk 718 and an I/O port 717, such as a serial port. Many such combinations of I/O devices, buses and bridges can be utilized with the invention and the combination shown is merely illustrative of one such possible combination.
When an image, such as an image of a scene 740, is captured by camera 730, it is sent to the image processing circuit 732. Image processing circuit 732 consists of ICs and other components which can execute, among other functions, the scaling up of the captured or compressed image. The scaling operation, as described earlier, may utilize image memory unit 734 to store the original image of the scene 740 captured by the camera 730. Further, this same memory unit can be used to store the up-sampled image data. When the user or application desires/requests a download of images, the scaled (and/or compressed) images stored in the image memory unit are transferred from image memory unit 734 to the I/O port 717. I/O port 717 uses the bus-bridge hierarchy shown (I/O bus 715 to bridge 714 to system bus 713) to temporarily store the scaled and compressed image data into memory 711 or, optionally, disk 718.
The images are displayed on computer system 710 by suitable application software (or hardware), which may utilize processor 712 for its processing. The image data may then be rendered visually using a display adapter 716 into a rendered/scaled image 750. The scaled image is shown as being twice the size of the original captured scene. This is desirable in many imaging applications where the original sensor capture size of a scene is smaller than the desired display size. In a videoconferencing application, the image data in its compressed and scaled form may be communicated over a network or communication system to another node or computer system, in addition to or exclusive of computer system 710, so that a videoconferencing session may take place. Since up-sampling and compression are already achieved on-camera in one embodiment of the invention, it may be possible to implement a communication port in camera 730 that allows the image data to be transported directly to the other node(s) in a videoconferencing session. Where a user of computer system 710 also desires to see his own scene on monitor 720, image data that is scaled up may be sent both to computer system 710 and transported over a network to other nodes. As discussed earlier, the scaled image will have more visually accurate edge features than is typical of scaling operations, due to the enhancement of the scaling process by specifically and carefully selecting the DWT coefficients. The end result is a higher quality rendered up-sampled image 750 displayed on monitor 720 or at other nodes in a videoconferencing session, as compared with typical up-sampling methods.
The exemplary embodiments described herein are provided merely to illustrate the principles of the invention and should not be construed as limiting the scope of the invention. Rather, the principles of the invention may be applied to a wide range of systems to achieve the advantages described herein and to achieve other advantages or to satisfy other objectives as well.

Claims (28)

What is claimed is:
1. A method comprising:
constructing virtual Discrete Wavelet Transform (DWT) sub-bands from an image by approximation without performing a DWT,
wherein a virtual LL sub-band is approximated to an original image pixel width and height dimensions without reduction; applying an inverse DWT upon said virtual sub-bands, the result of said inverse DWT representing an up-sampled version of said image, said up-sampled version of said image having increased pixel width and height dimensions of said original image.
2. A method according to claim 1 wherein said virtual sub-bands include said LL sub-band, an HL sub-band, an LH sub-band and an HH sub-band.
3. A method according to claim 2 wherein said LL sub-band has the same dimension as said image.
4. A method according to claim 2 wherein said HL sub-bands, said LH sub-bands and said HH sub-bands are approximated by zero pixel values.
5. A method according to claim 2 wherein said inverse DWT performed is a two-dimensional inverse DWT.
6. A method according to claim 5 wherein performing said two-dimensional inverse DWT comprises:
applying a one-dimensional inverse DWT to said virtual sub-bands in a row-wise fashion; transposing said data resulting from said first DWT such that columns become rows and rows become columns; and
applying again said one-dimensional inverse DWT to said transposed data, the result of applying said one-dimensional inverse DWT to said transpose data being up-scaled image data.
7. A method according to claim 2 wherein said image is doubled in size in up-sampled version.
8. An apparatus comprising:
an interface to communicate image data; and
an up-sampling unit, said up-sampling unit coupled to said interface to receive said image data, said up-sampling unit constructing virtual sub-band input data from said image data by approximation, wherein an LL sub-band is approximated to original image pixel width and height dimensions without reduction, said up-sampling unit performing an inverse Discrete Wavelet Transform (DWT) upon said input data generating an up-sampled image therefrom, said up-sampled image having increased pixel width and height dimensions of said original image; and
an adder coupled to a processing cell to receive and selectively add said intermediate output.
9. An apparatus according to claim 8 wherein said up-sampling unit comprises:
a first up-sampled data output generation; and
a second up-sampled data output generation, said first and second output generators providing their respective outputs in an alternating fashion.
10. An apparatus according to claim 9 wherein said first generation and said second generation each comprise:
a plurality of processing cells, each processing cell capable of generating an intermediate inverse DWT output from said input data; and
an adder coupled to said processing cell to receive and selectively add said intermediate outputs.
11. An apparatus according to claim 10 wherein each said output generation further comprises:
delay elements selectively coupled to said processing cells, said delay elements delaying selectively outputs to said processing cells.
12. An apparatus according to claim 10 wherein each said processing cell is configured to multiply an inverse DWT coefficient by a sum of selected ones of said input data.
13. An apparatus according to claim 9 configured to couple to an imaging system.
14. An apparatus according to claim 13 wherein said imaging system is a digital camera.
15. An article comprising:
a computer readable medium having instructions stored thereon, which when executed perform:
constructing virtual Discrete Wavelet Transform (DWT) sub-bands from an image by approximation without performing a DWT, wherein a virtual LL sub-band is approximated to an original image pixel width and height dimensions without reduction;
applying an inverse DWT upon said virtual sub-bands, the result of said inverse DWT representing an up-sampled version of said image, said up-sampled version of said image having increased pixel width and height dimensions of said original image.
16. An article according to claim 15, wherein virtual sub-bands include said LL sub-band, an HL sub-band, an LH sub-band and an HH sub-band.
17. An article according to claim 16, wherein said LL sub-band has the same dimension of pixels as said image.
18. An article according to claim 16 wherein said HL sub-bands, said LH sub-bands and said HH sub-bands are approximated by zero pixel values.
19. An article according to claim 16 wherein said image is doubled in size in an up-sampled version.
20. An article according to claim 16 wherein said inverse DWT performed is a two-dimensional inverse DWT.
21. A method comprising:
constructing virtual Discrete Wavelet Transform (DWT) sub-bands from an image by approximation without performing a DWT, wherein a virtual LL sub-band is approximated to original image dimension in pixels without reduction, wherein said virtual sub-bands include said LL sub-band, an HL sub-band, an LH sub-band and an HH sub-band; and
applying an inverse DWT upon said virtual sub-bands, the result of said inverse DWT representing an up-sampled version of said image, wherein said inverse DWT performed is a two-dimensional inverse DWT, performing said two-dimensional inverse DWT further comprises:
applying a one-dimensional inverse DWT to said virtual sub-bands in a row-wise fashion;
transposing said data resulting from said first DWT such that columns become rows and rows become columns; and
applying again said one-dimensional inverse DWT to said transposed data, the result of applying said one-dimensional inverse DWT to said transpose data being up-scaled image data.
22. A method according to claim 21 wherein said LL sub-band has the same dimension as said image.
23. A method according to claim 21 wherein said HL sub-bands, said LH sub-bands and said HH sub-bands are approximated by zero pixel values.
24. A method according to claim 21, wherein said image is doubled in size in up-sampled version.
25. An apparatus comprising:
an interface communicating image data; and
an up-sampling unit, said up-sampling unit coupled to said interface to receive said image data, said up-sampling unit constructing virtual sub-band input data from said image data by approximation, wherein an LL sub-band is approximated to original image dimension in pixels without reduction, said up-sampling unit configured to perform an inverse Discrete Wavelet Transform (DWT) upon said input data generating an up-sampled image therefrom; and
an adder coupled to a processing cell to receive and selectively add intermediate outputs; wherein said up-sampling unit comprises:
a first up-sampled data output generation; and
a second up-sampled data output generation, said first and second output generators providing their respective outputs in an alternating fashion, said first generation and said second generation each comprise:
a plurality of processing cells, each processing cell capable of generating an intermediate inverse DWT output from said input data; and
an adder coupled to said processing cell to receive and selectively add said intermediate outputs, each said output generation comprises:
delay elements selectively coupled to said processing cells, said delay elements delaying selectively outputs to said processing cells.
26. An apparatus according to claim 25 wherein each said processing cell multiplies an inverse DWT coefficient by a sum of selected ones of said input data.
27. An apparatus according to claim 25 coupled to an imaging system.
28. An apparatus according to claim 26 wherein said imaging system is a digital camera.
US09/129,728 1998-08-05 1998-08-05 DWT-based up-sampling algorithm suitable for image display in an LCD panel Expired - Lifetime US6236765B1 (en)

Priority Applications (7)

Application Number Priority Date Filing Date Title
US09/129,728 US6236765B1 (en) 1998-08-05 1998-08-05 DWT-based up-sampling algorithm suitable for image display in an LCD panel
AU52360/99A AU5236099A (en) 1998-08-05 1999-07-27 A dwt-based up-sampling algorithm suitable for image display in an lcd panel
PCT/US1999/017042 WO2000008592A1 (en) 1998-08-05 1999-07-27 A dwt-based up-sampling algorithm suitable for image display in an lcd panel
KR10-2001-7001534A KR100380199B1 (en) 1998-08-05 1999-07-27 A dwt-based up-sampling algorithm suitable for image display in an lcd panel
GB0102430A GB2362054B (en) 1998-08-05 1999-07-27 A dwt-based up-sampling algorithm suitable for image display in an lcd panel
JP2000564156A JP4465112B2 (en) 1998-08-05 1999-07-27 Upsampling algorithm based on DWT suitable for image display on LCD panel
TW088113386A TW451160B (en) 1998-08-05 1999-08-05 A DWT-based up-sampling algorithm suitable for image display in an LCD panel

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
US09/129,728 US6236765B1 (en) 1998-08-05 1998-08-05 DWT-based up-sampling algorithm suitable for image display in an LCD panel

Publications (1)

Publication Number Publication Date
US6236765B1 true US6236765B1 (en) 2001-05-22

Family

ID=22441326

Family Applications (1)

Application Number Title Priority Date Filing Date
US09/129,728 Expired - Lifetime US6236765B1 (en) 1998-08-05 1998-08-05 DWT-based up-sampling algorithm suitable for image display in an LCD panel

Country Status (7)

Country Link
US (1) US6236765B1 (en)
JP (1) JP4465112B2 (en)
KR (1) KR100380199B1 (en)
AU (1) AU5236099A (en)
GB (1) GB2362054B (en)
TW (1) TW451160B (en)
WO (1) WO2000008592A1 (en)

Cited By (41)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20020118746A1 (en) * 2001-01-03 2002-08-29 Kim Hyun Mun Method of performing video encoding rate control using motion estimation
US20020161807A1 (en) * 2001-03-30 2002-10-31 Tinku Acharya Two-dimensional pyramid filter architecture
US20020184276A1 (en) * 2001-03-30 2002-12-05 Tinku Acharya Two-dimensional pyramid filter architecture
US20030018818A1 (en) * 2001-06-27 2003-01-23 Martin Boliek JPEG 2000 for efficent imaging in a client/server environment
US20030053717A1 (en) * 2001-08-28 2003-03-20 Akhan Mehmet Bilgay Image enhancement and data loss recovery using wavelet transforms
US20030076999A1 (en) * 2001-03-30 2003-04-24 Schwartz Edward L. Method and apparatus for block sequential processing
US20030118241A1 (en) * 1994-09-21 2003-06-26 Ricoh Company, Ltd. Method and apparatus for compression using reversible wavelet transforms and an embedded codestream
US20030138156A1 (en) * 1994-09-21 2003-07-24 Schwartz Edward L. Decoding with storage of less bits for less important data
US20030147560A1 (en) * 2001-03-30 2003-08-07 Schwartz Edward L. Method and apparatus for storing bitplanes of coefficients in a reduced size memory
US20030174077A1 (en) * 2000-10-31 2003-09-18 Tinku Acharya Method of performing huffman decoding
US6625308B1 (en) 1999-09-10 2003-09-23 Intel Corporation Fuzzy distinction based thresholding technique for image segmentation
US6628827B1 (en) 1999-12-14 2003-09-30 Intel Corporation Method of upscaling a color image
US20030194150A1 (en) * 2002-04-16 2003-10-16 Kathrin Berkner Adaptive nonlinear image enlargement using wavelet transform coefficients
US6636167B1 (en) 2000-10-31 2003-10-21 Intel Corporation Method of generating Huffman code length information
US20030206661A1 (en) * 2001-02-15 2003-11-06 Schwartz Edward L. Method and apparatus for clipping coefficient values after application of each wavelet transform
US6658399B1 (en) 1999-09-10 2003-12-02 Intel Corporation Fuzzy based thresholding technique for image segmentation
US20040017952A1 (en) * 1999-10-01 2004-01-29 Tinku Acharya Color video coding scheme
US6697534B1 (en) 1999-06-09 2004-02-24 Intel Corporation Method and apparatus for adaptively sharpening local image content of an image
US20040042551A1 (en) * 2002-09-04 2004-03-04 Tinku Acharya Motion estimation
US20040047422A1 (en) * 2002-09-04 2004-03-11 Tinku Acharya Motion estimation using logarithmic search
US20040057626A1 (en) * 2002-09-23 2004-03-25 Tinku Acharya Motion estimation using a context adaptive search
US20040071350A1 (en) * 2000-06-19 2004-04-15 Tinku Acharya Method of compressing an image
US6748118B1 (en) 2000-02-18 2004-06-08 Intel Corporation Method of quantizing signal samples of an image during same
US6766286B2 (en) 2001-03-28 2004-07-20 Intel Corporation Pyramid filter
US6775413B1 (en) 2000-09-18 2004-08-10 Intel Corporation Techniques to implement one-dimensional compression
US20040169749A1 (en) * 2003-02-28 2004-09-02 Tinku Acharya Four-color mosaic pattern for depth and image capture
US20040169748A1 (en) * 2003-02-28 2004-09-02 Tinku Acharya Sub-sampled infrared sensor for use in a digital image capture device
US20040174446A1 (en) * 2003-02-28 2004-09-09 Tinku Acharya Four-color mosaic pattern for depth and image capture
US20050185851A1 (en) * 2001-03-30 2005-08-25 Yutaka Satoh 5,3 wavelet filter
US6961472B1 (en) 2000-02-18 2005-11-01 Intel Corporation Method of inverse quantized signal samples of an image during image decompression
US20050254718A1 (en) * 2002-03-04 2005-11-17 Ryozo Setoguchi Web-oriented image database building/control method
US7046728B1 (en) 2000-06-30 2006-05-16 Intel Corporation Method of video coding the movement of a human face from a sequence of images
US7053944B1 (en) 1999-10-01 2006-05-30 Intel Corporation Method of using hue to interpolate color pixel signals
US7095164B1 (en) 1999-05-25 2006-08-22 Intel Corporation Display screen
US20060222254A1 (en) * 1994-09-21 2006-10-05 Ahmad Zandi Method and apparatus for compression using reversible wavelet transforms and an embedded codestream
US20060284891A1 (en) * 2003-08-28 2006-12-21 Koninklijke Philips Electronics N.V. Method for spatial up-scaling of video frames
US7158178B1 (en) 1999-12-14 2007-01-02 Intel Corporation Method of converting a sub-sampled color image
US20080062312A1 (en) * 2006-09-13 2008-03-13 Jiliang Song Methods and Devices of Using a 26 MHz Clock to Encode Videos
US20080062311A1 (en) * 2006-09-13 2008-03-13 Jiliang Song Methods and Devices to Use Two Different Clocks in a Television Digital Encoder
US20110032413A1 (en) * 2009-08-04 2011-02-10 Aptina Imaging Corporation Auto-focusing techniques based on statistical blur estimation and associated systems and methods
CN102844786A (en) * 2010-03-01 2012-12-26 夏普株式会社 Image enlargement device, image enlargement program, memory medium on which image enlargement program is stored, and display device

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2002359846A (en) * 2001-05-31 2002-12-13 Sanyo Electric Co Ltd Method and device for decoding image
KR100816187B1 (en) 2006-11-21 2008-03-21 삼성에스디아이 주식회사 Plasma display device and image processing method thereof
JP5452337B2 (en) * 2010-04-21 2014-03-26 日本放送協会 Image coding apparatus and program
JP5419795B2 (en) * 2010-04-30 2014-02-19 日本放送協会 Image coding apparatus and program

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5014134A (en) * 1989-09-11 1991-05-07 Aware, Inc. Image compression method and apparatus
US5321776A (en) * 1992-02-26 1994-06-14 General Electric Company Data compression system including successive approximation quantizer
US5392255A (en) * 1992-10-15 1995-02-21 Western Atlas International Wavelet transform method for downward continuation in seismic data migration
US5491561A (en) * 1992-07-21 1996-02-13 Matsushita Electric Industrial Co., Ltd. Image processor for processing high definition image signal and standard image signal and disk apparatus therefor
US5602589A (en) * 1994-08-19 1997-02-11 Xerox Corporation Video image compression using weighted wavelet hierarchical vector quantization
US5706220A (en) * 1996-05-14 1998-01-06 Lsi Logic Corporation System and method for implementing the fast wavelet transform
US5737448A (en) * 1995-06-15 1998-04-07 Intel Corporation Method and apparatus for low bit rate image compression

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
GB621372A (en) * 1947-02-24 1949-04-07 Ernest John Munday A permutation switch for the ignition system of a motor vehicle and/or a magnetic lock

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5014134A (en) * 1989-09-11 1991-05-07 Aware, Inc. Image compression method and apparatus
US5321776A (en) * 1992-02-26 1994-06-14 General Electric Company Data compression system including successive approximation quantizer
US5491561A (en) * 1992-07-21 1996-02-13 Matsushita Electric Industrial Co., Ltd. Image processor for processing high definition image signal and standard image signal and disk apparatus therefor
US5392255A (en) * 1992-10-15 1995-02-21 Western Atlas International Wavelet transform method for downward continuation in seismic data migration
US5602589A (en) * 1994-08-19 1997-02-11 Xerox Corporation Video image compression using weighted wavelet hierarchical vector quantization
US5737448A (en) * 1995-06-15 1998-04-07 Intel Corporation Method and apparatus for low bit rate image compression
US5706220A (en) * 1996-05-14 1998-01-06 Lsi Logic Corporation System and method for implementing the fast wavelet transform

Cited By (67)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20030138157A1 (en) * 1994-09-21 2003-07-24 Schwartz Edward L. Reversible embedded wavelet system implementaion
US8565298B2 (en) 1994-09-21 2013-10-22 Ricoh Co., Ltd. Encoder rate control
US20080063070A1 (en) * 1994-09-21 2008-03-13 Schwartz Edward L Encoder rate control
US20030142874A1 (en) * 1994-09-21 2003-07-31 Schwartz Edward L. Context generation
US20060222254A1 (en) * 1994-09-21 2006-10-05 Ahmad Zandi Method and apparatus for compression using reversible wavelet transforms and an embedded codestream
US20030118241A1 (en) * 1994-09-21 2003-06-26 Ricoh Company, Ltd. Method and apparatus for compression using reversible wavelet transforms and an embedded codestream
US20030123743A1 (en) * 1994-09-21 2003-07-03 Ricoh Company, Ltd. Method and apparatus for compression using reversible wavelet transforms and an embedded codestream
US20030138156A1 (en) * 1994-09-21 2003-07-24 Schwartz Edward L. Decoding with storage of less bits for less important data
US7095164B1 (en) 1999-05-25 2006-08-22 Intel Corporation Display screen
US6697534B1 (en) 1999-06-09 2004-02-24 Intel Corporation Method and apparatus for adaptively sharpening local image content of an image
US6625308B1 (en) 1999-09-10 2003-09-23 Intel Corporation Fuzzy distinction based thresholding technique for image segmentation
US6658399B1 (en) 1999-09-10 2003-12-02 Intel Corporation Fuzzy based thresholding technique for image segmentation
US7053944B1 (en) 1999-10-01 2006-05-30 Intel Corporation Method of using hue to interpolate color pixel signals
US7106910B2 (en) 1999-10-01 2006-09-12 Intel Corporation Color video coding scheme
US20040017952A1 (en) * 1999-10-01 2004-01-29 Tinku Acharya Color video coding scheme
US6628827B1 (en) 1999-12-14 2003-09-30 Intel Corporation Method of upscaling a color image
US7158178B1 (en) 1999-12-14 2007-01-02 Intel Corporation Method of converting a sub-sampled color image
US6748118B1 (en) 2000-02-18 2004-06-08 Intel Corporation Method of quantizing signal samples of an image during same
US6961472B1 (en) 2000-02-18 2005-11-01 Intel Corporation Method of inverse quantized signal samples of an image during image decompression
US6738520B1 (en) 2000-06-19 2004-05-18 Intel Corporation Method of compressing an image
US20040071350A1 (en) * 2000-06-19 2004-04-15 Tinku Acharya Method of compressing an image
US7046728B1 (en) 2000-06-30 2006-05-16 Intel Corporation Method of video coding the movement of a human face from a sequence of images
US6775413B1 (en) 2000-09-18 2004-08-10 Intel Corporation Techniques to implement one-dimensional compression
US7190287B2 (en) 2000-10-31 2007-03-13 Intel Corporation Method of generating Huffman code length information
US20030210164A1 (en) * 2000-10-31 2003-11-13 Tinku Acharya Method of generating Huffman code length information
US20060087460A1 (en) * 2000-10-31 2006-04-27 Tinku Acharya Method of generating Huffman code length information
US6987469B2 (en) 2000-10-31 2006-01-17 Intel Corporation Method of generating Huffman code length information
US6982661B2 (en) 2000-10-31 2006-01-03 Intel Corporation Method of performing huffman decoding
US6646577B2 (en) 2000-10-31 2003-11-11 Intel Corporation Method of performing Huffman decoding
US6636167B1 (en) 2000-10-31 2003-10-21 Intel Corporation Method of generating Huffman code length information
US20030174077A1 (en) * 2000-10-31 2003-09-18 Tinku Acharya Method of performing huffman decoding
US20020118746A1 (en) * 2001-01-03 2002-08-29 Kim Hyun Mun Method of performing video encoding rate control using motion estimation
US20030206661A1 (en) * 2001-02-15 2003-11-06 Schwartz Edward L. Method and apparatus for clipping coefficient values after application of each wavelet transform
US20030210827A1 (en) * 2001-02-15 2003-11-13 Schwartz Edward L. Method and apparatus for performing scalar quantization with a power of two step size
US20040120585A1 (en) * 2001-02-15 2004-06-24 Schwartz Edward L. Method and apparatus for sending additional sideband information in a codestream
US20030215150A1 (en) * 2001-02-15 2003-11-20 Gormish Michael J. Method and apparatus for performing progressive order conversion
US20040057628A1 (en) * 2001-02-15 2004-03-25 Schwartz Edward L. Method and apparatus for selecting layers for quantization based on sideband information
US6898325B2 (en) * 2001-02-15 2005-05-24 Ricoh Company, Ltd. Method and apparatus for clipping coefficient values after application of each wavelet transform
US6766286B2 (en) 2001-03-28 2004-07-20 Intel Corporation Pyramid filter
US20020184276A1 (en) * 2001-03-30 2002-12-05 Tinku Acharya Two-dimensional pyramid filter architecture
US20050185851A1 (en) * 2001-03-30 2005-08-25 Yutaka Satoh 5,3 wavelet filter
US20030076999A1 (en) * 2001-03-30 2003-04-24 Schwartz Edward L. Method and apparatus for block sequential processing
US6889237B2 (en) 2001-03-30 2005-05-03 Intel Corporation Two-dimensional pyramid filter architecture
US20030147560A1 (en) * 2001-03-30 2003-08-07 Schwartz Edward L. Method and apparatus for storing bitplanes of coefficients in a reduced size memory
US20020161807A1 (en) * 2001-03-30 2002-10-31 Tinku Acharya Two-dimensional pyramid filter architecture
US20030018818A1 (en) * 2001-06-27 2003-01-23 Martin Boliek JPEG 2000 for efficent imaging in a client/server environment
US7085436B2 (en) * 2001-08-28 2006-08-01 Visioprime Image enhancement and data loss recovery using wavelet transforms
US20030053717A1 (en) * 2001-08-28 2003-03-20 Akhan Mehmet Bilgay Image enhancement and data loss recovery using wavelet transforms
US20050254718A1 (en) * 2002-03-04 2005-11-17 Ryozo Setoguchi Web-oriented image database building/control method
EP1610267A3 (en) * 2002-04-16 2006-04-12 Ricoh Company, Ltd. Adaptive nonlinear image enlargement using wavelet transform coefficients
EP1610267A2 (en) * 2002-04-16 2005-12-28 Ricoh Company, Ltd. Adaptive nonlinear image enlargement using wavelet transform coefficients
US20030194150A1 (en) * 2002-04-16 2003-10-16 Kathrin Berkner Adaptive nonlinear image enlargement using wavelet transform coefficients
US20040047422A1 (en) * 2002-09-04 2004-03-11 Tinku Acharya Motion estimation using logarithmic search
US20040042551A1 (en) * 2002-09-04 2004-03-04 Tinku Acharya Motion estimation
US7266151B2 (en) 2002-09-04 2007-09-04 Intel Corporation Method and system for performing motion estimation using logarithmic search
US20040057626A1 (en) * 2002-09-23 2004-03-25 Tinku Acharya Motion estimation using a context adaptive search
US7274393B2 (en) 2003-02-28 2007-09-25 Intel Corporation Four-color mosaic pattern for depth and image capture
US20040174446A1 (en) * 2003-02-28 2004-09-09 Tinku Acharya Four-color mosaic pattern for depth and image capture
US20040169749A1 (en) * 2003-02-28 2004-09-02 Tinku Acharya Four-color mosaic pattern for depth and image capture
US20040169748A1 (en) * 2003-02-28 2004-09-02 Tinku Acharya Sub-sampled infrared sensor for use in a digital image capture device
US20060284891A1 (en) * 2003-08-28 2006-12-21 Koninklijke Philips Electronics N.V. Method for spatial up-scaling of video frames
US20080062312A1 (en) * 2006-09-13 2008-03-13 Jiliang Song Methods and Devices of Using a 26 MHz Clock to Encode Videos
US20080062311A1 (en) * 2006-09-13 2008-03-13 Jiliang Song Methods and Devices to Use Two Different Clocks in a Television Digital Encoder
US20110032413A1 (en) * 2009-08-04 2011-02-10 Aptina Imaging Corporation Auto-focusing techniques based on statistical blur estimation and associated systems and methods
US8294811B2 (en) * 2009-08-04 2012-10-23 Aptina Imaging Corporation Auto-focusing techniques based on statistical blur estimation and associated systems and methods
CN102844786A (en) * 2010-03-01 2012-12-26 夏普株式会社 Image enlargement device, image enlargement program, memory medium on which image enlargement program is stored, and display device
US8897569B2 (en) 2010-03-01 2014-11-25 Sharp Kabushiki Kaisha Image enlargement device, image enlargement program, memory medium on which an image enlargement program is stored, and display device

Also Published As

Publication number Publication date
KR20010072265A (en) 2001-07-31
GB2362054A8 (en) 2002-08-21
GB2362054B (en) 2003-03-26
TW451160B (en) 2001-08-21
KR100380199B1 (en) 2003-04-11
WO2000008592A1 (en) 2000-02-17
JP2002522831A (en) 2002-07-23
GB0102430D0 (en) 2001-03-14
AU5236099A (en) 2000-02-28
JP4465112B2 (en) 2010-05-19
GB2362054A (en) 2001-11-07

Similar Documents

Publication Publication Date Title
US6236765B1 (en) DWT-based up-sampling algorithm suitable for image display in an LCD panel
US6377280B1 (en) Edge enhanced image up-sampling algorithm using discrete wavelet transform
US6215916B1 (en) Efficient algorithm and architecture for image scaling using discrete wavelet transforms
US6937772B2 (en) Multiresolution based method for removing noise from digital images
US5325449A (en) Method for fusing images and apparatus therefor
EP0826195B1 (en) Image noise reduction system using a wiener variant filter in a pyramid image representation
US6389176B1 (en) System, method and medium for increasing compression of an image while minimizing image degradation
US20050147313A1 (en) Image deblurring with a systolic array processor
He et al. FPGA-based real-time super-resolution system for ultra high definition videos
KR20010038010A (en) Method of filtering control of image bilinear interpolation
Witwit et al. Global motion based video super-resolution reconstruction using discrete wavelet transform
JPH08294001A (en) Image processing method and image processing unit
EP2153405B1 (en) Method and device for selecting optimal transform matrices for down-sampling dct image
CN109102463B (en) Super-resolution image reconstruction method and device
JP4019201B2 (en) System and method for tone recovery using discrete cosine transform
Tom et al. Reconstruction of a high resolution image from multiple low resolution images
US6725247B2 (en) Two-dimensional pyramid filter architecture
EP0700016A1 (en) Improvements in and relating to filters
EP2153403B1 (en) Method and device for down-sampling a dct image in the dct domain
Aydin et al. A linear well-posed solution to recover high-frequency information for super resolution image reconstruction
KR100717031B1 (en) 1-D image restoration using a sliding window method
Güngör et al. A transform learning based deconvolution technique with super-resolution and microscanning applications
Fan Super-resolution using regularized orthogonal matching pursuit based on compressed sensing theory in the wavelet domain
KR100300338B1 (en) VLSI Architecture for the 2-D Discrete Wavelet Transform
Li Super-resolution using regularized orthogonal matching pursuit based on compressed sensing theory in the wavelet domain

Legal Events

Date Code Title Description
AS Assignment

Owner name: INTEL CORPORATION, CALIFORNIA

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:ACHARYA, TINKU;REEL/FRAME:009377/0712

Effective date: 19980731

STCF Information on status: patent grant

Free format text: PATENTED CASE

FPAY Fee payment

Year of fee payment: 4

FPAY Fee payment

Year of fee payment: 8

FPAY Fee payment

Year of fee payment: 12