EP1164781A1 - Image processing device, image processing method and recording medium - Google Patents

Image processing device, image processing method and recording medium

Info

Publication number
EP1164781A1
Authority
EP
European Patent Office
Prior art keywords
image
enlarged
images
frequency components
component
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
EP00909658A
Other languages
English (en)
French (fr)
Inventor
Tatsumi Watanabe
Yasuhiro Kuwahara
Akio Kojima
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Panasonic Corp
Original Assignee
Matsushita Electric Industrial Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from JP10470399A (published as patent JP4129097B2)
Priority claimed from JP17499099A (published as patent JP4081926B2)
Application filed by Matsushita Electric Industrial Co Ltd filed Critical Matsushita Electric Industrial Co Ltd
Publication of EP1164781A1
Legal status: Withdrawn

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00 Geometric image transformations in the plane of the image
    • G06T3/40 Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T3/403 Edge-driven scaling; Edge-based scaling
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00 Geometric image transformations in the plane of the image
    • G06T3/40 Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T3/4084 Scaling of whole images or parts thereof, e.g. expanding or contracting in the transform domain, e.g. fast Fourier transform [FFT] domain scaling

Definitions

  • the present invention relates to an image processing device and image processing method for reducing the memory size of image data input by a scanner, digital camera, etc., or of image data sent through communication means such as the Internet, and for outputting that data clearly
  • An original image is read by an image input means 10 formed of CCD elements etc.
  • On the original image, lossy compression (non-reversible compression) by JPEG is carried out by compression means 3900.
  • discrete cosine transform of the original image is performed by discrete cosine transform (DCT) means 3906, whereby the original image is transformed into signals in the frequency space, and the obtained transform coefficients are quantized by quantization means 3907 using a quantization table 3901.
  • the results of this quantization are transformed into a code string by entropy encoding means 3908 on the basis of an entropy encoding table 3902, and this code string is stored on a storage medium 15. This processing is continued until the compression of all the original images is complete.
  • a lossless compression (reversible compression) method in which an image can be restored without distortion is also proposed for the JPEG standard.
  • the compression ratio is defined as the ratio of the original image size to the compressed image size.
  • with lossless compression the compression ratio is very low. Therefore, lossy compression is generally used. But when lossy compression is used, the original image is not exactly reconstructed, because of the quantization error as in FIG. 39 and the rounding error introduced by the DCT. Of these two irreversible factors, the quantization error especially has a bad influence on the quality of the reconstructed image.
  • image processing devices such as the digital camera are generally provided with a display unit such as a liquid crystal display to review a photographed image on the spot and to retrieve images for data editing etc. The resolution of the CCD is high compared with the resolution of the review display unit, so when compressed image data stored on the storage medium 15 is displayed on such a display unit, the compressed image data is expanded and picture elements are thinned out to transform the resolution.
  • expanding means 3903 in FIG. 39 expands compressed image data stored on the storage medium 15.
  • the entropy-encoded image signal is decoded back into the quantized transform coefficients by entropy decoding means 3911, and into DCT coefficients by inverse quantization means 3910.
  • This expanded image data is outputted on the printer, CRT etc. by Original Image (OI) output means 3912 in accordance with an instruction signal inputted by the user through the mouse or the like (not shown).
  • means 3904 for thinning out images removes picture elements from the full image data expanded by expanding means 3903, according to the resolution of means 3905 for displaying reduced images for review, and this thinned-out image data is displayed for review by means 3905 for displaying reduced images.
  • a conventional system example to realize the technique of interpolating between the picture elements in (1) is shown in a block diagram in FIG. 42.
  • values are interpolated linearly between the picture elements in accordance with the following Formula 1.
  • Da is picture element data at point A
  • Db is picture element data at point B
  • Dc is picture element data at point C
  • Dd is picture element data at point D.
  • De is picture element data at point E to be worked out.
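Formula 1 itself is not reproduced above; assuming A to D are the four picture elements surrounding point E (A top-left, B top-right, C bottom-left, D bottom-right) and (dx, dy) is E's fractional offset within that cell, the standard bilinear form can be sketched in Python (names hypothetical):

```python
def bilinear(Da, Db, Dc, Dd, dx, dy):
    """Linearly interpolate picture element data De at fractional offset
    (dx, dy) inside the cell whose corners are A (top-left), B (top-right),
    C (bottom-left) and D (bottom-right)."""
    top = Da * (1 - dx) + Db * dx        # interpolate along the top edge
    bottom = Dc * (1 - dx) + Dd * dx     # interpolate along the bottom edge
    return top * (1 - dy) + bottom * dy  # then interpolate vertically
```

At (dx, dy) = (0, 0) this returns Da exactly, and at the cell centre it returns the average of the four corners, as a linear interpolation should.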
  • edge information is extracted by edge extracting means 4200, and is enlarged to a desired enlargement ratio by edge interpolation means 4201 to obtain an enlarged edge.
  • the interpolated image of the original image produced by picture element interpolating means 1100 is convolved with this enlarged edge information, so that a desired enlarged image is generated and outputted by Enlarged Images (EI) output means 501.
  • interpolation methods such as the nearest neighbor method in which the value of the nearest sample is taken as interpolation value.
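For an integer enlargement factor, the nearest neighbor method amounts to simply repeating each sample; a minimal sketch (function name hypothetical):

```python
import numpy as np

def nearest_neighbor_enlarge(img, s):
    """Enlarge img by integer factor s: each picture element value is
    repeated s times horizontally and s times vertically, i.e. every
    output position takes the value of its nearest input sample."""
    return np.repeat(np.repeat(img, s, axis=0), s, axis=1)
```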
  • the interpolation methods mentioned above have some problems.
  • the frequency characteristics in the pass band are suppressed and the image data is smoothed as if it had been passed through a low pass filter (LPF), so that the image tends to blur with insufficient sharpness and expression of details.
  • the problem with the nearest neighbor method is that many high frequency components are lost, which tends to cause distortion of the enlarged image, for example jaggies in edge areas and blurred mosaic distortion.
  • FIG. 40 is a block diagram of a conventional system example to realize the technique in (2) that uses spatial frequency
  • FIG. 41 schematically shows the processing procedure.
  • an original image (n x n picture elements) in the real space as shown in FIG. 41 (a) is orthogonally transformed into an image (n x n picture elements) in the frequency space as shown in FIG. 41 (b).
  • the image data in this frequency space is expressed in an n x n matrix, and the matrix obtained by this frequency transform shows lower frequency component as the position moves nearer the upper left of the figure and shows higher frequency component as the position moves in the right direction and in the downward direction along the arrows.
  • an s-fold area - an area of sn x sn shown in FIG. 41 (c) - of the transformed image in the frequency space is prepared.
  • the frequency area of n x n shown in FIG. 41 (b) that is obtained by the orthogonal transform is copied, while the remaining part of high frequency component is interpolated with "0".
  • this frequency area of sn x sn is inverse-orthogonally transformed, whereby an s-fold image data in the real space as shown in FIG. 41 (d) is obtained, and an estimated enlarged image is outputted by Estimated Enlarged Image (EsEI) output means 4002 in FIG. 40.
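The procedure of FIG. 41 can be sketched as follows. An orthonormal DCT is assumed as the orthogonal transform (the text allows several), and the embedded coefficients are rescaled by s to preserve mean brightness, which is an implementation choice, not something the text specifies:

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II basis matrix of size n x n."""
    k = np.arange(n)[:, None]
    x = np.arange(n)[None, :]
    c = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * x + 1) * k / (2 * n))
    c[0, :] = np.sqrt(1.0 / n)
    return c

def enlarge_by_zero_padding(img, s):
    """Enlarge an n x n image to sn x sn: forward-transform it, copy the
    n x n coefficients into the low frequency (upper-left) corner of an
    sn x sn coefficient array, leave the high frequency remainder zero,
    then inverse-transform back to real space."""
    n = img.shape[0]
    C = dct_matrix(n)
    F = C @ img @ C.T                  # n x n frequency components
    big = np.zeros((s * n, s * n))
    big[:n, :n] = F * s                # embed; factor s keeps brightness
    Cs = dct_matrix(s * n)
    return Cs.T @ big @ Cs             # inverse orthonormal transform
```

With s = 1 this degenerates to a forward/inverse round trip and returns the original image.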
  • the image data once compressed and stored has to be expanded again; in addition, the compression in JPEG is a lossy compression involving quantization, and for this reason the original image data cannot be reconstructed from the compressed image, so that noise and color differences are often caused.
  • the problem is the processing speed.
  • the arithmetic processing amount is not troublesome if the enlargement ratio s is small, but if s is large, the arithmetic processing amount of the inverse transform, as opposed to that of the forward transform, increases approximately in proportion to s x n. Especially in the two-dimensional processing actually performed, the arithmetic processing amount increases roughly in proportion to the cube of s x n.
  • the enlargement of a plurality of color components will also be necessary, further increasing the processing time. Furthermore, in case the image to be enlarged is low in resolution, the high frequency components will not be restored accurately.
  • the method disclosed in unexamined Japanese patent application 8-294001 takes those problems into consideration and considers the processing time and restoration of the high frequency components.
  • This method involves embedding frequency information obtained on the basis of a prediction rule prepared in advance in the high frequency area of the original image. Therefore, it is necessary to work out a rule between the high frequency components and the other areas on the basis of a large number of picture samples in advance. It takes much labor to prepare a proper rule base, and if a proper rule cannot be made, sufficient effects may not be obtained.
  • the image size is generally arbitrary, and the larger the size put to orthogonal transform, the longer the processing takes. Therefore, it is usual that the whole of an image of a specific size is not put to orthogonal transform at a time; instead, orthogonal transform is performed on blocks of a size of 4 to 16 picture elements. The problem is that discontinuity between the blocks (block distortion) can occur in the border portions of an enlarged image.
  • the present invention has been made in view of the above, and it is an object of the present invention to provide an image processing device and image processing method which permit reduction of the memory size of image data and enlargement of an image to a sharp and high-quality image.
  • an original image obtained from image input means 10 is orthogonally transformed by Original Images (OI) orthogonal transforming means 11 so as to generate the frequency components of the original image, and the low frequency components are extracted from the frequency components of the original image by Low Frequency Components (LFC) extracting means 100.
  • High Frequency Components (HFC) encoding means 13 works out the relation information between the low frequency components and the remaining high frequency components in the frequency components of the original image and encodes that information, and at the same time Codes Synthesizing means 14 synthesizes the low frequency components and the relation information into simplified image data.
  • LFC decoding means 16 extracts low frequency components, and at the same time, High Frequency Components (HFC) decoding means 17 takes out the relation information and decodes the high frequency components on the basis of the low frequency components.
  • Original Images (OI) output means 18 combines the low frequency components and high frequency components and subjects the combination to inverse orthogonal transform to restore the original image.
  • the simplified image data can be input to and processed by means such as a personal computer which can restore the image, and can also first be stored on storage medium 15. Also, within the simplified image data, the data amount of the low frequency components can be further compressed by Low Frequency Components (LFC) compression means 300. In this case, it is desirable that a lossless compression method be used.
  • Reduced Images (RI) generating means 101 can extract the frequency area corresponding to a specified size (preview size, for example) from the low frequency components and generate a reduced image by performing inverse orthogonal transformation on those components.
  • Shortage Components (ShC) estimating means 500 estimates the shortage high frequency components on the basis of the frequency components of the image as shown in FIG. 5.
  • EI output means 501 combines the frequency components of the specific image and the high frequency components obtained by ShC estimating means 500 and subjects the combination to inverse orthogonal transform, thereby outputting an image enlarged to a desired size.
  • OI orthogonal transforming means 11 subjects the image data to orthogonal transform to generate the frequency components of the original image, and from these frequency components Enlarged Frequency (EF) estimating means 800 estimates the frequency components at the time when the original image is enlarged corresponding to a desired enlargement ratio.
  • EF estimating means 800 extracts the frequency component - as basic component - necessary for restoring the specified basic image to a predetermined size, and multiple image encoding means 802 works out each relation information between the basic component and each frequency component corresponding to some estimated enlarged images and encodes the information.
  • the basic component thus obtained and each relation information corresponding to some enlargement ratios are synthesized by Multiple Codes (MC) synthesizing means 803 to generate multiple simplified image data.
  • the basic component and the relation information are extracted, and the image data can be restored on the basis of the basic component and the relation information.
  • the data size can be further reduced by the data size compression of the basic component.
  • inter-picture element interpolating means 1100 performs interpolation between picture elements - according to a desired enlargement ratio - of image data inputted from image input means 10 as shown in FIG. 10.
  • the interpolated enlarged image thus obtained is not sharp in the edge portion, but with convolution means 1101 performing a convolutional calculation to enhance the edge portion on the interpolated enlarged image, an enlarged image with sharp edges is generated without time-consuming frequency transform.
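A minimal sketch of such an edge-enhancing convolution step; the patent does not specify the kernel, so a common sharpening kernel is assumed here:

```python
import numpy as np

# A common edge-enhancing kernel (an assumption; the text names no kernel):
# the centre weight boosts each picture element relative to its neighbours,
# and the weights sum to 1 so flat areas are left unchanged.
SHARPEN = np.array([[ 0, -1,  0],
                    [-1,  5, -1],
                    [ 0, -1,  0]], dtype=float)

def convolve_sharpen(img, kernel=SHARPEN):
    """3x3 convolution with edge replication at the border. The kernel is
    symmetric under 180-degree rotation, so correlation equals convolution."""
    padded = np.pad(img, 1, mode='edge')
    out = np.zeros_like(img, dtype=float)
    h, w = img.shape
    for i in range(h):
        for j in range(w):
            out[i, j] = np.sum(padded[i:i + 3, j:j + 3] * kernel)
    return out
```

Across a step edge the output overshoots on both sides, which is what visually sharpens the interpolated enlargement.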
  • Enlarged Frequency (EF) estimating means 120A estimates the frequency components of an enlarged image on the basis of the frequency components of the original image obtained by OI orthogonal transforming means 11 as shown in FIG. 12. And on the frequency components generated by EF estimating means 120A, inverse orthogonal transform means 1213 performs inverse orthogonal transform corresponding to the enlargement size and obtains an enlarged image data.
  • Estimation of the frequency components by EF estimating means 120A is performed using a linear approximation or radial basis function network.
  • the following method of estimating the high frequency components in an enlarged image is also adopted. That is, from the original image, an edge image is taken out which is thought to contain plenty of high frequency components, and from an enlarged edge image obtained by linear transform, the high frequency components of the enlarged image are estimated using orthogonal transform.
  • the following method of estimating the frequency components of an enlarged edge image with high precision is used. That is, on the basis of the frequency components of the edge image taken out, the frequency components of the enlarged image are estimated with precision using linear approximation or a radial basis function network.
  • block dividing means 2300 divides an inputted image data into a plurality of blocks taking into consideration the processing time required for the orthogonal transform.
  • Enlarged Block Images (BI) frequency estimating means 2302 estimates the frequency components of each enlarged image block for all of the blocks divided from the original image. Then, as shown in FIG. 24, the neighboring blocks are partly overlapped, and on the overlapped part in the enlarged block images, the enlarged block generated later is adopted, thus reducing the block artifacts.
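The overlap rule can be sketched as follows; nearest-neighbour enlargement stands in for the per-block frequency-domain estimation, and the block and overlap sizes are arbitrary assumptions:

```python
import numpy as np

def enlarge_in_overlapping_blocks(img, s=2, block=4, overlap=1):
    """Enlarge img block by block. Neighbouring blocks share `overlap`
    picture elements; writing each enlarged block into the output in scan
    order means the block generated later overwrites the overlapped part,
    which is the seam-reduction rule described in the text.
    Nearest-neighbour enlargement stands in for the per-block method."""
    h, w = img.shape
    out = np.zeros((s * h, s * w))
    step = block - overlap                      # blocks advance by step
    for i in range(0, h, step):
        for j in range(0, w, step):
            b = img[i:i + block, j:j + block]
            eb = np.repeat(np.repeat(b, s, axis=0), s, axis=1)
            out[s * i:s * i + eb.shape[0], s * j:s * j + eb.shape[1]] = eb
    return out
```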
  • the original image is processed by the transform function, thus reducing the discontinuity of an image on the block border - the discontinuity caused by division of an image into blocks.
  • Standard Component (SC) selecting means 2800 selects a color component to be a standard from among the color components making up an inputted color image and generates an enlarged image corresponding to the standard color components as shown in FIG. 28.
  • Shortage Component (ShC) enlarging means 2805 makes an estimation using the transform ratio derived by Transform Ratio (TR) deriving means 2801 from a color original image to the enlarged image of the standard color components, thereby speeding up the processing in generating an enlarged image data of a color image data.
  • Input Images (II) regulating means 12 interpolates or thins out the input image (hereinafter both operations are expressed as "regulate") to Ln/2 picture elements x Lm/2 picture elements as shown in FIG. 29.
  • image enlarging means 290A applies a method based on Wavelet transform and generates an enlarged image.
  • the images - an image regulated according to the number of picture elements of an enlarged image to be obtained, the edge image in the vertical direction, the edge image in the horizontal direction and the edge image in the oblique direction - are regarded as four sub-band images making up a Wavelet transform image. And by performing inverse Wavelet transform on the sub-band images, an enlarged image of a desired picture element size is to be obtained.
  • the relation is found between three edge images obtained from a 1/4 size reduced image in the low frequency area of transform image data, that is, Wavelet transformed original image data and the remaining three sub-band images within the transform image data.
  • the edge image in the vertical direction, the edge image in the horizontal direction, the edge image in the oblique direction of the image regulated according to the number of picture elements of an enlarged image to be obtained are each corrected using the above relation information.
  • the regulated image and the three corrected edge images are regarded as four sub-band images making up the transform image data.
  • an enlarged image of a desired picture element size is obtained.
  • the relation is to be found between one typical edge image data obtained from the 1/4 size reduced image data present in the low frequency area of the transform image data and the remaining three sub-band image data within the transform image data.
  • one typical edge image obtained from the image regulated according to the number of picture elements of an enlarged image data to be obtained is corrected using the relation information.
  • the regulated image data and the three image data obtained by correcting are regarded as four sub-band images making up the transform image data, and inverse Wavelet transform is performed to obtain an enlarged image of a desired picture element size.
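Assuming the Haar wavelet (the text does not name the wavelet used), and treating the regulated image and the three edge images as the LL, HL, LH and HH sub-bands, one level of inverse Wavelet transform can be sketched as follows; the matching forward transform is included only to check the round trip:

```python
import numpy as np

def inverse_haar_2d(LL, HL, LH, HH):
    """One-level 2-D inverse Haar synthesis: the regulated image (LL) and
    the horizontal, vertical and oblique edge images (HL, LH, HH) are
    treated as the four sub-bands, and each 2x2 output cell is
    reconstructed from one coefficient of each band."""
    n = LL.shape[0]
    out = np.empty((2 * n, 2 * n))
    out[0::2, 0::2] = (LL + HL + LH + HH) / 2
    out[0::2, 1::2] = (LL - HL + LH - HH) / 2
    out[1::2, 0::2] = (LL + HL - LH - HH) / 2
    out[1::2, 1::2] = (LL - HL - LH + HH) / 2
    return out

def forward_haar_2d(img):
    """Matching one-level analysis, used here to verify the round trip."""
    a = img[0::2, 0::2]; b = img[0::2, 1::2]
    c = img[1::2, 0::2]; d = img[1::2, 1::2]
    return ((a + b + c + d) / 2, (a - b + c - d) / 2,
            (a + b - c - d) / 2, (a - b - c + d) / 2)
```

In the embodiment the three detail bands are not true wavelet coefficients but corrected edge images, so the synthesis produces an estimated enlargement rather than an exact reconstruction.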
  • Enlarging Process (EP) initializing means 3700 sets an original image as object image for enlargement
  • Object Images (ObI) enlarging means 3701 applies a method based on Wavelet transform to the object image for enlargement to generate an enlarged image having four times as many picture elements.
  • ending judge means 3703 sets the enlarged image obtained by ObI enlarging means 3701 as object image for enlargement and returns the process to ObI enlarging means 3701.
  • Enlarged Images (EI) presenting means 3702 presents visually the enlarged image obtained from ObI enlarging means 3701.
  • image fine-adjustment means 3704 enlarges or reduces the enlarged image presented by EI presenting means 3702.
  • a color component as standard is selected from among the color components making up the color image, and for its standard color components, an enlarged image is generated. And the remaining color components are found by performing linear transform on the enlarged image of the standard color component using a ratio of each remaining color to standard component, thus speeding up the processing in generating an enlarged image of a color image.
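A deliberately simplified sketch of this idea: G is assumed as the standard component, nearest-neighbour enlargement stands in for the real enlargement method, and the transform ratio is reduced to a single global mean ratio per remaining component (the text derives the ratio more carefully):

```python
import numpy as np

def enlarge_color(rgb, s):
    """Enlarge a colour image by enlarging only the standard component
    (G, an assumption) and deriving R and B from the enlarged G by a
    linear transform. The ratio of each remaining component to the
    standard component is taken as a global mean ratio - a crude
    stand-in for the transform ratio derivation in the text. Only one
    full enlargement is performed, which is the claimed speed-up."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    g_big = np.repeat(np.repeat(g, s, axis=0), s, axis=1)
    ratio_r = r.mean() / g.mean()
    ratio_b = b.mean() / g.mean()
    return np.stack([g_big * ratio_r, g_big, g_big * ratio_b], axis=-1)
```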
  • the unit of all coordinate values is identical with the unit of the distance between picture elements.
  • the original image that will be described by way of example is an original image taken in by scanner, digital camera or the like. But it is not restrictive.
  • the original image may be a specific image or the like held on a magnetic disk etc. or an image or the like sent in through communication means like the Internet.
  • an original image of a size N picture elements x N picture elements obtained by image input means 10 like CCD elements is transformed into component data (frequency component data) in frequency space.
  • orthogonal transform is used; among the transform methods are Hadamard transform, fast Fourier transform (FFT), discrete cosine transform (DCT), slant transform, and Haar transform.
  • DCT used here is two-dimensional DCT especially for dealing with images, and the transform formula is given by Formula 2.
  • F (u, v) represents a DCT component at the component position (u, v)
  • D (x, y) represents image data at picture element position (x, y).
  • K _x indicates the number of picture elements in the x direction
  • K _y indicates the number of picture elements in the y direction.
  • LFC extracting means 100 extracts low frequency components of the frequency component data of the original image as shown in FIG. 2 (b).
  • HFC encoding means 13 encodes the high frequency area (e) that is left when the low frequency components (c) are taken out from the frequency components of the original image in FIG. 2 (b).
  • the area of the high frequency components is divided into small blocks H1 to H5 as shown in FIG. 2 (e), for example.
  • the area of low frequency component (c) is divided into blocks L1 to L4 as shown in FIG. 2 (f).
  • frequency components within the respective blocks H1 to H5 of the high frequency components (e) and the respective blocks L1 to L4 of the low frequency components (f) are related to each other.
  • the frequency components within the respective blocks H1 to H5 may be approximated by a multi-dimensional function ⁇ (L1, L2, L3, L4) with the frequency components of the respective blocks L1 to L4 as variables.
  • coefficient matrixes M _C1 to M _C5 are provided, and using them and the low frequency component matrixes M _L1 to M _L4 made up of the frequency components within blocks L1 to L4, it is also possible to express the high frequency component matrixes M _H1 to M _H5 made up of the frequency data within blocks H1 to H5.
  • the high frequency data size can be reduced by formulating the remaining high frequency components in FIG. 2 (e), which express the clearness of the image and the edge information, using the low frequency component data in FIG. 2 (c).
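One way to obtain such coefficient matrixes is a least-squares fit over training samples, so that each M_C maps a low frequency block onto a high frequency block; this is a sketch under that assumption - the patent does not prescribe how M_C1 to M_C5 are derived:

```python
import numpy as np

def fit_relation_matrix(lows, highs):
    """Fit one coefficient matrix M_C so that high ~= M_C @ low for each
    training pair, in the least-squares sense. `lows` and `highs` are
    lists of flattened low / high frequency block coefficients."""
    L = np.stack([l.ravel() for l in lows])    # samples x low-dim
    H = np.stack([h.ravel() for h in highs])   # samples x high-dim
    M, *_ = np.linalg.lstsq(L, H, rcond=None)  # solves L @ M ~= H
    return M.T                                 # so that M_C @ low ~= high

def predict_high(M_C, low):
    """Decode side: reconstruct a high frequency block from a low one."""
    return M_C @ low.ravel()
```

Only M_C and the low frequency coefficients then need to be stored, which is the data-size reduction the text describes.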
  • Codes Synthesizing means 14 synthesizes the code string - of the relation information between the low frequency components and high frequency components obtained from a rule description obtained at HFC encoding means 13, or data such as coefficients of the approximation expression used by HFC encoding means 13 - and the low frequency component data of the original image obtained from LFC extracting means 100. And the Codes Synthesizing means 14 stores the data on the storage medium 15 as simplified image data. Then, to read and restore data, a table is provided within the storage medium 15, and information - the number of stored image data, the sizes of the respective image data, the ratio of low frequency component data in the respective data stored, and the ratio of codes indicating high frequency components - is stored, thus making extraction of the respective data efficient.
  • LFC extracting means 100 extracts low frequency components according to the number of picture elements of Reduced Images (RI) display means 102 that displays thumbnail images.
  • RI generating means 101 performs inverse orthogonal transform on the low frequency components, and thumbnail images are displayed on RI display means 102.
  • the user gives an instruction signal by inputting means such as a keyboard, mouse etc. (not shown) so as to take out desired data from simplified image data stored on the storage medium 15 by the image processing device in Embodiment 1 of the present invention and output the data on a high resolution laser printer or ink jet printer or to edit the data on CRT.
  • LFC decoding means 16 first takes out low frequency components as the main data from the storage medium 15.
  • HFC decoding means 17 decodes the high frequency components from the low frequency components obtained by LFC decoding means 16, according to the relation information between the low frequency components and high frequency components held on the storage medium 15.
  • OI output means 18 combines the low frequency component data and high frequency component data and performs inverse orthogonal transform of the corresponding image size so as to output data to be handled by other image processing devices, that is, to be displayed on CRT, or to be outputted on the printer or the like.
  • the processing steps by OI orthogonal transforming means 11, LFC extracting means 100, HFC encoding means 13, Codes Synthesizing means 14 and the storage medium 15 are the same as in the image processing device of Embodiment 1, and will not be explained.
  • the low frequency component data of the original image obtained by LFC extracting means 100 is large in volume, and in consideration of restrictions such as the capacity of the storage medium, the low frequency component data is compressed by LFC compression means 300.
  • the processing method of LFC compression means 300 is based on differential encoding (Differential PCM; DPCM) and entropy encoding, called the spatial method in JPEG, instead of the usual compression technique using DCT, quantization and entropy encoding (the compression technique also called the baseline technique). It is desirable to use a lossless compression method that will not cause distortion through compression and expansion.
  • prediction values are worked out by a predictor using the neighboring density values, and the prediction value is subtracted from the density value to be encoded.
  • For this predictor, seven kinds of relational expressions are made available as shown in Formula 4.
  • the neighboring three density values are to be called D1, D2, D3 as shown in FIG. 4.
  • the prediction value dDx to be calculated from those three density values is defined by Formula 4. Which expression is used is written in the header information of the compressed image data. In encoding, one prediction expression is selected as shown in FIG. 4 (b), and the difference Ex is worked out. This difference Ex is entropy encoded. Encoding prediction errors makes it possible to compress the low frequency components reversibly.
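Formula 4 is not reproduced above; assuming D1, D2, D3 are the left, above and upper-left neighbouring density values respectively (the classic JPEG spatial-method assignment), the seven predictors and a lossless DPCM round trip over one scan line can be sketched as:

```python
def predict(D1, D2, D3, mode):
    """The seven predictors of the JPEG spatial method. The assignment
    D1 = left, D2 = above, D3 = upper-left is an assumption; the text
    only names three neighbouring density values."""
    return {1: D1, 2: D2, 3: D3,
            4: D1 + D2 - D3,
            5: D1 + (D2 - D3) // 2,
            6: D2 + (D1 - D3) // 2,
            7: (D1 + D2) // 2}[mode]

def dpcm_encode(row_above, row, mode=4):
    """Encode one scan line: store only the prediction errors Ex."""
    errors = []
    for x in range(len(row)):
        D1 = row[x - 1] if x > 0 else 0
        D2 = row_above[x]
        D3 = row_above[x - 1] if x > 0 else 0
        errors.append(row[x] - predict(D1, D2, D3, mode))
    return errors

def dpcm_decode(row_above, errors, mode=4):
    """Invert the encoding exactly - the scheme is lossless."""
    row = []
    for x, e in enumerate(errors):
        D1 = row[x - 1] if x > 0 else 0
        D2 = row_above[x]
        D3 = row_above[x - 1] if x > 0 else 0
        row.append(e + predict(D1, D2, D3, mode))
    return row
```

Because decoding mirrors encoding step by step, the original density values are recovered exactly for every predictor mode, which is what makes the spatial method reversible.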
  • HFC encoding means 13 formulates the remaining high frequency components in a way as shown in Formula 3 as in Embodiment 1.
  • Codes Synthesizing means 14 synthesizes the compressed frequency component data and the code string of the relation information between the low frequency components and high frequency components obtained from the rule description obtained by HFC encoding means 13 or data such as the coefficient etc. of the approximation expression used by HFC encoding means 13, and stores the data on the storage medium 15 as simplified image data.
  • the low frequency component data is a basic image to restore high frequency components which express the sharpness in the details of the image
  • a lossless compression method is applied to the low frequency components of the original image so that a maximally clear image with sharp edges can be output.
  • the usual JPEG baseline method can also be used here. It is a lossy compression, however, and when it is applied some of the clearness of the image will be lost.
  • the data size can be compressed to 1/10 to 1/20; this method is used when the volume of image data held on storage media is to be compacted as far as possible so that much image data can be stored.
  • the relation information between the low frequency components and high frequency components obtained from HFC encoding means 13 in the image processing device of Embodiment 1 and the relation information between the basic components and the high frequency components of the respective enlarged images obtained from multiple image encoding means 802 in the image processing devices in Embodiments 6 and 9 can be compressed by lossless compression methods such as the spatial method.
  • ShC estimating means 500 is provided with a function of estimating the high frequency component data that will be lacking when enlarging the data to a desired size, in case the original image read is displayed on a high-resolution CRT etc.
  • EI output means 501 combines the shortage of high frequency components estimated by ShC estimating means 500 and the frequency components of the original image decoded from the encoded image data stored on the storage medium 15 by LFC decoding means 16 and HFC decoding means 17, and performs inverse orthogonal transform corresponding to the enlargement size, thereby generating an enlarged image of the original image read.
  • ShC estimating means 500 and EI output means 501 process data as shown in FIG. 6.
  • the frequency component data as shown in FIG. 6 (b), obtained by performing orthogonal transform of an original image of a size N picture elements x N picture elements as shown in FIG. 6 (a), are embedded in the low frequency area of the frequency components of an enlarged image having a coefficient size corresponding to a desired enlargement size (sN picture elements x sN picture elements) (FIG. 6 (c)).
  • the shortage components H1 to H3 that occur then are estimated from the frequency components of the original image shown in FIG. 6 (b).
  • There are a number of possible methods of estimation.
  • the ShC estimating means 500 requires a technique that estimates with precision the high frequency components necessary to generate an image with clear details and sharp edges. If this requirement is satisfied, any technique is applicable.
  • EI output means 501 performs inverse orthogonal transform corresponding to the coefficient size of sN x sN as shown in FIG. 6 (d), whereby the frequency components of the enlarged image estimated by ShC estimating means 500 are brought back to real space data and outputted as data to be handled by other image processing devices, to be displayed on a CRT or the like, or to be sent to an output unit such as the printer.
  • the high frequency component data required in outputting an enlarged image using frequency components of the original image are estimated and compensated, thus avoiding a blurred edge and unclear details which are observed in enlargement by the prior art picture element interpolation.
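The embedding step of FIG. 6 can be sketched in one dimension as below. This is an illustrative sketch only, assuming an orthonormal DCT: the shortage components H1 to H3 are simply left at zero here, whereas ShC estimating means 500 estimates them, and the sqrt(m/n) gain keeps the mean density value of the image unchanged. The function names are made up for the example.

```python
import math

def dct(x):
    # orthonormal DCT-II of a 1-D signal
    n = len(x)
    out = []
    for k in range(n):
        c = math.sqrt(1.0 / n) if k == 0 else math.sqrt(2.0 / n)
        out.append(c * sum(x[i] * math.cos(math.pi * (2 * i + 1) * k / (2 * n))
                           for i in range(n)))
    return out

def idct(X):
    # orthonormal DCT-III, the inverse of dct above
    n = len(X)
    out = []
    for i in range(n):
        s = 0.0
        for k in range(n):
            c = math.sqrt(1.0 / n) if k == 0 else math.sqrt(2.0 / n)
            s += c * X[k] * math.cos(math.pi * (2 * i + 1) * k / (2 * n))
        out.append(s)
    return out

def enlarge(x, s):
    # embed the coefficients of the original in the low frequency area of a
    # coefficient array s times as long; the high frequency "shortage"
    # components are left at zero in this sketch
    n = len(x)
    X = dct(x)
    m = n * s
    gain = math.sqrt(m / n)          # keeps the mean density value
    Xbig = [gain * v for v in X] + [0.0] * (m - n)
    return idct(Xbig)
```

For example, `enlarge([0.0, 1.0, 2.0, 3.0], 2)` yields eight samples spanning the same ramp with the same mean density value as the original four.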
  • EF estimating means 800 estimates the frequency component data in enlarging the image to a plurality of image sizes.
  • the estimating method can be the same technique as ShC estimating means 500 of FIG. 5, one of the constituent elements of the image processing device in Embodiment 5 of the present invention.
  • FIG. 8 (a) shows the frequency components of the original image
  • FIG. 8 (b) shows the frequency components of an image produced by enlarging the original image twice vertically and twice horizontally
  • FIG. 8 (c) shows the frequency components of an image produced by enlarging the original image three times vertically and three times horizontally. More image sizes may be provided, and the enlargement ratio of the image size does not have to be an integer.
  • the frequency components in the low frequency area represent the overall features of an image and have a qualitative tendency which is common to the respective enlargement ratios.
  • frequency component L00 in a low frequency area that exhibits a similar tendency is to be taken as a basic component as shown in FIG. 8 (d).
  • the frequency components of the original image shown in FIG. 8 (a) may be regarded as basic component.
  • Multiple encoding means 802 in FIG. 7 relates the basic component L00 and the respective blocks H11 to H13 shown in FIG. 8 (b) and formulates them (FIG. 8 (e-1)) in the same way as HFC encoding means 13 does in the image processing devices of Embodiments 1 and 4 of the present invention.
  • the respective blocks H21 to H25 shown in FIG. 8 (c) also can be related directly to the basic component L00 in the same way.
  • the basic component (frequency component in a low frequency area) L00 and the respective blocks H21 to H25 in FIG. 8 (c) are indirectly related and formulated. That is because less qualitative difference is observed between neighboring blocks in the frequency area, which improves the accuracy of the relating procedure.
  • MC synthesizing means 803 synthesizes the extracted basic component and the relation information between the basic component obtained by multiple image encoding means 802 and the respective frequency components of the enlarged images and stores data on the storage medium 15 as multiple simplified image data in the same way as Codes Synthesizing means 14 does in Embodiments 1 and 4. Then, it is necessary to write the number of enlarged images prepared multiple-wise, the starting flag signals of data of the respective sizes etc. on the headers or the like of the multiple simplified image data.
  • Basic Image (BI) generating means 808 performs inverse orthogonal transform on the basic component extracted by BC extracting means 807.
  • if the size (coefficient size) in the frequency space of the frequency components extracted by BC extracting means 807 is larger than the size of the image for review on the display, a frequency component matched to the resolution will be further extracted from the low frequency area of the basic component and a basic image for display is generated.
  • if the coefficient size of the frequency components extracted by BC extracting means 807 is smaller than the size of the image for review on the display, the "0" component shown in FIG. 41 will be embedded in the area where the coefficients of the basic component are lacking, and a thumbnail image is generated.
  • the user can select an image, an object for outputting and editing, from among the image data stored on the storage medium 15 by the image processing device of Embodiment 6 of the present invention and also can specify the image size.
  • the following methods can be used. That is, selection information according to the image sizes stored on the storage medium 15 is presented, or the user inputs an enlargement ratio with the original image size as basis, or the size is automatically selected in accordance with the resolution of equipment to which the image is outputted.
  • Object Frequency (ObF) decoding means 805 takes out the expression code string of the high frequency component data corresponding to the selected image size, and decodes the high frequency components other than the basic component of the object enlarged image size from the code string and the basic component.
  • Object Images (ObI) output means 806 combines the high frequency components and the basic component, performs inverse orthogonal transform and outputs the desired enlarged image.
  • since the low frequency component that can be regarded as common to the frequency components of a plurality of enlarged sizes is extracted as basic component and the remaining high frequency components of the respective enlarged sizes are encoded on the basis of this basic component, frequency components of images of a plurality of enlarged sizes can be provided. Therefore, when an enlarged image of the needed image size is reproduced according to the user's instruction, the enlarged image can be reproduced at a high speed without estimating the shortage of high frequency components in the desired enlarged size each time an instruction is given.
  • BC compression means 1000 compresses the basic component extracted by BC extracting means 807. This compression step is identical with that by LFC compression means 300 in the image processing device of Embodiment 4 of the present invention and will not be explained.
  • FIG. 10 is a block diagram showing the arrangement of the device.
  • inter-picture element interpolating means 1100 interpolates between the picture elements of an original image read by image input means 10 by an interpolating technique shown in FIG. 43, and generates an enlarged image.
  • convolution means 1101 repeats convolution of the enlarged image for enhancing the picture elements.
  • FIG. 11 shows the convolution process.
  • if the edge is unclear as shown in FIG. 11 (b), image data on the edge cannot be extracted, and it is difficult to enlarge the image by the processing step shown in FIG. 42.
  • the averaged density value can be enhanced without difficulty.
  • the processing by convolution means 1101 produces the same effect on the enlarged image as when the processing by the edge enhancing filter is repeated several times.
  • the conventional processing by edge enhancing filter has problems that the processing has to be repeated many times, or the edge is not enhanced at all unless a suitable filter is selected, but the convolution method has no such problem because in convolution means 1101, convolution is effected by the density value itself. When convolution is performed at point P in FIG. 11, the density value Dp[K + 1] at point P by convolution for the (K + 1)-th time is defined as in Formula 5. This is to find the average value of convolution values near that picture element.
  • Ω represents the range of object picture elements for convolution
  • U_Gaso represents the total number of picture elements within Ω
  • q represents any picture element within Ω.
  • convergence judging means 1102 judges the convergence of this convolution processing. It judges, on the basis of the mean value of the square errors between the density value Dp[K - 1] obtained at the (K - 1)-th convolution and the density value Dp[K] at the K-th convolution as in Formula 6, whether the convergence condition is satisfied. If the mean value is smaller than a preset convergence judgement value Thred, it judges that the convergence of the density value is complete, finishes the convolution, and estimated enlarged image outputting means 1312 outputs the image as an estimated enlarged image.
  • T_Gaso represents the total number of picture elements within Ω.
  • the image processing device of the present embodiment enhances an enlarged image through convolutional arithmetic processing of interpolated image data and can realize an enlarged image in a handy manner without conducting time-consuming frequency transform. Also, it is possible to obtain an enlarged image with edge information without difficulty, even for an original image with an unclear-cut edge, which presents a problem when edge information of the original image is used to keep an interpolated image from blurring.
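Formulas 5 and 6 can be read as the following sketch, where Ω is taken to be the 3 x 3 neighbourhood of each picture element; the function name, the neighbourhood size and the border handling are assumptions of this example, not fixed by the patent.

```python
def converge_smooth(img, thred=1e-4, max_iter=100):
    # img: 2-D list of density values; Omega is the 3x3 neighbourhood.
    # Formula 5: D_p[K+1] = mean of D_q[K] over q in Omega(p)
    # Formula 6: stop when the mean squared change falls below Thred
    h, w = len(img), len(img[0])
    cur = [row[:] for row in img]
    for _ in range(max_iter):
        nxt = [[0.0] * w for _ in range(h)]
        for y in range(h):
            for x in range(w):
                acc, cnt = 0.0, 0
                for dy in (-1, 0, 1):
                    for dx in (-1, 0, 1):
                        yy, xx = y + dy, x + dx
                        if 0 <= yy < h and 0 <= xx < w:
                            acc += cur[yy][xx]
                            cnt += 1
                nxt[y][x] = acc / cnt
        err = sum((nxt[y][x] - cur[y][x]) ** 2
                  for y in range(h) for x in range(w)) / (h * w)
        cur = nxt
        if err < thred:
            break
    return cur
```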
  • FIG. 12 shows the arrangement of the image processing device of Embodiment 11.
  • non-linear estimating means 1201 in EF estimating means 120A estimates the frequency component data of an enlarged image - as will be described later - using the radial basis function network.
  • Image input means 10 reads out data of an original image of a size of n picture elements x n picture elements for enlargement.
  • the original image obtained by image input means 10 is processed in the same way as in Embodiment 1 until the original image is transformed into component data (frequency component data) in frequency area by OI orthogonal transforming means 11, and the processing until that will not be described.
  • FIG. 14 is an example model arrangement of the RBFN (radial basis function network) used here, which is built up from RBF (radial basis function) outputs.
  • VP_i corresponds to the position vector of the frequency component of the original image
  • the number N of the RBF functions corresponds to the number n x n of picture elements of the original image. That is, as many RBF functions as the number of picture elements of the original image are provided, and the frequency component data after enlargement is estimated as the overlapping of these RBF function outputs, each centered at component position vector VP_i.
  • RBF(·) is a function that changes depending on the distance ||VP - VP_i|| between component position vector VP and the center VP_i of the i-th RBF function, and one example is given in Formula 8.
  • Weight coefficient vector Vw = (w_0, w_1, ..., w_{N-1})^T (T: transposition) is decided on as follows.
  • P'_i in the frequency area after enlargement corresponds to P_i in the frequency area of the original image as shown in FIG. 15. It means that, with the frequency component position at P_i as (a_i, b_i), the Vw that minimizes the square error function E(Vw) between frequency component F(a_i, b_i) at P_i and estimated frequency component F'(u_i, v_i) at P'_i should be taken as the optimum weight coefficient vector.
  • estimated components F' (u, v) at (u, v) are arranged from the low frequency area, and the k-th estimated component corresponding to frequency component position (a_i, b_i) of the original image will be given as FF(k).
  • matrix MP made up of frequency component vector Vy of an estimated enlarged image and RBF function is defined as Formula 9.
  • frequency component vector Vy can be rewritten as Formula 10.
  • Vy MP ⁇ Vw
  • Vw (MP T ⁇ MP) -1 MP ⁇ Vf
  • AC deriving means 1200 calculates approximation weight coefficient Vw.
  • non-linear estimating means 1201 estimates DCT component F'(u, v) at (u, v) from Formula 7.
  • inverse orthogonal transform means 1213 performs inverse discrete cosine transform (IDCT), thus restoring the enlarged image as the value in real space.
  • Estimated Enlarged Images (EsEI) output means 1214 outputs the enlarged image data obtained by inverse orthogonal transform means 1213 as data to be handled by other image processing devices, that is, to be displayed on a CRT, to be outputted on the printer or the like.
  • the features of the frequency components of an original image can be approximated with precision, and it is possible to estimate the high frequency components erased in the sampling of the original image in a handy manner with high precision without preparing a rule etc. in advance.
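Under the assumption of a Gaussian RBF (one common choice for Formula 8) and scalar positions standing in for the two-dimensional component position vectors, the determination of Vw and the estimation of Formula 7 can be sketched as follows; all names are illustrative.

```python
import math

def rbf(r, sigma=1.0):
    # Gaussian radial basis function, an assumed instance of Formula 8
    return math.exp(-(r * r) / (2.0 * sigma * sigma))

def solve(A, b):
    # Gaussian elimination with partial pivoting
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def rbfn_fit(centers, values):
    # weight vector Vw minimising the square error E(Vw), i.e.
    # Vw = (MP^T MP)^{-1} MP^T Vf with MP[i][j] = rbf(|P_i - C_j|);
    # here MP is square, so the normal equations reduce to MP Vw = Vf
    MP = [[rbf(abs(p - c)) for c in centers] for p in centers]
    return solve(MP, values)

def rbfn_eval(centers, w, p):
    # Formula 7: estimated component as a weighted sum of RBF outputs
    return sum(wi * rbf(abs(p - ci)) for wi, ci in zip(w, centers))
```

Evaluating the fitted network at the centers reproduces the given component values, and evaluation between centers gives the non-linear estimate.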
  • FIG. 16 shows the arrangement of the image processing device of Embodiment 12, and there will be described the operation of this image processing device.
  • an original image obtained by image input means 10 is transformed into frequency component F(u, v) by OI orthogonal transforming means 11.
  • edge generating means 1600 extracts edges by a Laplacian filter shown in FIG. 18.
  • Laplacian filters shown in FIG. 18 (a) or (b) may be used for the purpose. Some other filter may also be used.
  • the precision of edge extraction can be improved by selecting a filter depending on the features of the original image to be handled.
  • the Laplacian filter shown in FIG. 18 (a) multiplies the density value at the object picture element by eight and subtracts the density values of the eight surrounding picture elements. That is, the difference between the density value of the picture element in the center and the density values of the surrounding picture elements is added to the density value of the picture element in the center, whereby picture elements that change greatly in density relative to their surroundings, as at an edge, are enhanced.
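The filter of FIG. 18 (a) can be sketched as below; the function name is illustrative, and border picture elements are simply left at zero in this sketch.

```python
def laplacian8(img):
    # 8-neighbour Laplacian of FIG. 18(a): 8 * center - sum of the
    # eight surrounding density values
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            neighbours = sum(img[y + dy][x + dx]
                             for dy in (-1, 0, 1)
                             for dx in (-1, 0, 1)) - img[y][x]
            out[y][x] = 8 * img[y][x] - neighbours
    return out
```

A flat area gives zero response, while an isolated bright picture element responds strongly.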
  • Enlarged Edge (EE) approximating means 1601 in Enlarged Edge (EEd) estimating means 120B linearly enlarges an edge image (FIG. 17 (c)) -obtained by edge generating means 1600 as shown in FIG. 17 - to a desired enlargement ratio s, and interpolation picture elements have to be embedded between the existing picture elements as the image is enlarged.
  • Edge Frequency (EF) generating means 1602 in EEd estimating means 120B performs orthogonal transform on an estimated image of an enlarged edge (FIG. 17 (d)) obtained by EEd approximating means 1601 to find frequency components (FIG. 17 (e)). That is done mainly because the clearness of the image and the features of details of the image, and also plenty of high frequency components representing the edge, are contained in such an edge image. In other words, it is based on the idea that because an extracted edge image is lacking in information on the other portions, high frequency components appear, but lower frequency components come out at a low level only. And the frequency components possessed by the enlarged image are estimated by substituting the low frequency area of the frequency components of the enlarged edge image (FIG.
  • the enlarged image can be estimated without difficulty without using an RBFN method as in Embodiment 11. Furthermore, the edge information is handed over in the original condition, and thus the edge can be enhanced without losing the high frequency components of the original image and the blurring of the image can be kept down.
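The substitution of the low frequency area can be sketched in one dimension as follows; this is illustrative only, with the actual processing working on two-dimensional coefficient arrays, and the function name is made up for the example.

```python
def substitute_low_area(edge_spec, orig_spec):
    # the low frequency area of the enlarged edge spectrum is replaced
    # with the frequency components of the original image, so the original
    # content is handed over while the edge image supplies the high end
    n = len(orig_spec)
    return orig_spec[:] + edge_spec[n:]
```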
  • FIG. 19 shows the arrangement of the image processing device of Embodiment 13, and there will be described the operation of this image processing device.
  • DCT transform is applied to an original image (FIG. 20 (a)) obtained by image input means 10 so as to derive frequency component data (FIG. 20 (b)), and at the same time an edge image (FIG. 20 (c)) of the original image is generated by edge generating means 1600 using a Laplacian filter as shown in FIG. 18.
  • Edge Images (EI) orthogonal transforming means 1900 in EEd means 120B performs DCT on the edge image to acquire the frequency component data of the edge image (FIG. 20 (d)).
  • Edge Frequency (EdF) estimating means 1901 in EEd means 120B estimates the frequency component data - to be obtained from the edge portion of an enlarged image (FIG. 20 (e)) - using radial basis function network (RBFN) adopted in Embodiment 11 which is excellent in nonlinear approximation. And the low frequency area of the frequency component data thus estimated is substituted with the frequency components obtained from the original image (FIG. 20 (f)), thereby acquiring the frequency component data of the enlarged image.
  • the enlarged edge information containing plenty of high frequency component data of the enlarged image is estimated from the features of the high frequency components of an image contained mainly in edge information, and by using that, the high frequency components can be compensated well which give clearness to an enlarged image.
  • the frequency component data of an enlarged edge image is estimated from a simple linear-interpolated image of the edge image of an original image. Estimation in the present embodiment is equivalent to non-linear approximation, and it is considered, therefore, that the estimation precision in the present embodiment will be higher than that of the method of simply performing linear interpolation between two samples.
  • This non-linear estimation method is different from that in Embodiment 11 only in that the frequency component data of the edge image obtained from the original image, and not the frequency component data of the original image, is inputted in RBFN of non-linear estimating means 1201 used in Embodiment 11, and will not be explained.
  • EdF estimating means 1901 finds the intermediate value between the DCT components (points n - 3, n - 2, n - 1 in FIG. 21 (a)) of an image sampled from the high frequency side according to the enlargement ratio s as shown in FIG. 21. And the frequency components of the enlarged edge can be estimated by allocating the DCT component positions starting from the head side of the high frequency components (points n - 1 + t - 3, n - 1 + t - 1 in FIG. 21 (b)). That way, the high frequency components can be compensated that give clearness to an enlarged image.
  • technically, it appears that there is not much difference between the interpolation method shown in FIG. 21 and that in Embodiment 12. In Embodiment 12, however, there is a possibility that the features of the edge etc. will change depending on between which picture elements of the edge image the interpolation picture elements are preferentially embedded when an enlarged edge image is prepared, and a proper interpolation order has to be worked out. But in case interpolation is done with the frequency components as shown in FIG. 21, the interpolation values of the DCT components should simply be embedded from the head side of the high frequency components, since the point is to compensate the high frequency components. Then, the frequency components of the enlarged image should be estimated.
  • FIG. 23 shows the arrangement of the image processing device of Embodiment 14.
  • Many of the original images inputted by image input means 10 have hundreds of picture elements x hundreds of picture elements. If orthogonal transform as in Embodiments 11, 12 and 13 is applied to such an original image at a time, it will take a vast amount of time. To avoid that, the data is usually divided into blocks, each of a size of 4 picture elements x 4 picture elements to 16 picture elements x 16 picture elements, and the respective blocks are enlarged to a picture element size according to a desired image enlargement ratio s, and they are put together again. In the case of this method, however, the following problem is pointed out.
  • FIG. 22 (a) schematically shows that problem.
  • ⁇ _i represents the i-th DCT component.
  • since two-dimensional DCT is obtained by expanding one-dimensional DCT in the y direction, that discontinuity will also occur when the one-dimensional case is expanded to two-dimensional DCT.
  • the present embodiment is a process addressing this problem.
  • DCT on block A0 is performed on section [0, n + u] (in this case, it is presupposed that in section [n + u + 1, 2n + 2u] the same data as in section [0, n + u] will be repeated, as shown in dotted line in FIG. 22 (b)).
  • DCT on block A1 is performed on section [n, 2n + u - 1]. That way, it is possible to keep down the gap in density value occurring in the block border when the conventional method is used (see frame R in FIG. 22).
  • since noise N is caused in the end portion of block A0, the data in this portion of block A0 is not adopted but the data in A1 is used.
  • EBI frequency means 2302 enlarges the frequency component data of block Ai to frequency component data of ((n + u) x s) x ((n + u) x s). This enlargement is the same as that in Embodiments 1, 2 and 3 except that the enlargement is carried out block by block in the present embodiment, and will not be explained.
  • Block frequency extracting means 2303 does not use all the frequency component data of block Bi' obtained but takes out data of the required size m x m from the low frequency side, and again processes it to make an enlarged block Ci. On Ci, block inverse orthogonal transform means 2304 effects inverse orthogonal transform, and Enlarged Images (EI) recomposing means 2305 places the image data generated from block Ci at the corresponding position, thus finally obtaining an enlarged image.
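The overlapping division can be sketched in one dimension as follows; the function name is illustrative, and the trailing block may be shorter than n + u at the end of the signal.

```python
def overlapping_blocks(signal, n, u):
    # 1-D sketch of block dividing means 2300: blocks of n + u samples
    # whose starts advance by n, so each block overlaps its neighbour by u;
    # border noise N in one block can then be discarded and covered by
    # data from the next block
    blocks, i = [], 0
    while i < len(signal):
        blocks.append(signal[i:i + n + u])
        i += n
    return blocks
```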
  • FIG. 25 shows the arrangement of the image processing device of Embodiment 15.
  • the present embodiment is identical with Embodiment 14 in arrangement except that the present embodiment drops block frequency extracting means 2303 adopted in the image processing device in Embodiment 14.
  • the original image is divided by block dividing means 2300 into blocks of a little larger size than the size of n x n so that the respective blocks overlap, and frequency component data of a desired size is taken out of an enlarged block obtained and substituted again with the frequency data of the enlarged block whose original size is n x n.
  • FIG. 27 (a) shows the transition of density value D(x, y) in blocks Ai.
  • Δ(x, y_0) = D(0, y_0) + ((D(n - 1, y_0) - D(0, y_0)) / n) × x
  • FIG. 28 shows the arrangement of the image processing device according to Embodiment 16.
  • the present embodiment is an invention to make the processing efficient.
  • SC selecting means 2800 selects a color component to be made a standard.
  • the color original image is made up of three colors: red, green and blue. Considering that green data is strongly reflected in luminance information, it is desirable to select the green component as the standard component.
  • TR deriving means 2801 finds the simple ratio ratio_r of the red component to the green component and the simple ratio ratio_b of the blue component to the green component. There are a variety of methods of finding the simple ratio. Used here in this example are the mean value within the object area of the density ratios of red to green and the mean value within the object area of the density ratios of blue to green, as in Formula 16.
  • r_ij, g_ij, b_ij represent the densities of the red, green and blue components respectively at picture element position (i, j) of an original image.
  • alternatively, a matrix R_r made up of the ratio of the red component to the green component in each picture element and a matrix R_b made up of the ratio of the blue component to the green component in each picture element may be used. This way, it is possible to reproduce the features of the color original image better and enlarge the color image with higher precision than when using one ratio coefficient.
  • SEIF Standard Enlarged Image Frequency
  • SI Standard Inverse
  • ShC enlarging means 2805 multiplies the enlarged green data from SI orthogonal transforming means 2804 by the simple ratios ratio_r and ratio_b, thus producing enlarged data of the red and blue components.
  • EsEI output means 1214 outputs the data to be handled by other image processing devices, that is, to be displayed on CRT, to be outputted on the printer or the like.
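A sketch of this flow in one dimension, with `enlarge_green` standing in for the frequency-space enlargement of the standard component; all names are illustrative, and the simple ratios follow the mean-of-ratios reading of Formula 16.

```python
def enlarge_color(r, g, b, enlarge_green):
    # only the green (standard) component is enlarged; red and blue are
    # rebuilt by the simple ratios ratio_r and ratio_b (Formula 16)
    n = len(g)
    ratio_r = sum(ri / gi for ri, gi in zip(r, g)) / n
    ratio_b = sum(bi / gi for bi, gi in zip(b, g)) / n
    g_big = enlarge_green(g)
    return ([ratio_r * v for v in g_big],
            g_big,
            [ratio_b * v for v in g_big])
```

Since only one of the three components goes through the costly enlargement, the processing becomes roughly three times as efficient as enlarging each color independently.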
  • in Embodiments 17 to 21, there will be explained image processing devices that enlarge an image using Wavelet transform.
  • FIG. 29 shows the arrangement of the image processing device of Embodiment 17.
  • Image input means 10 is the same as those in Embodiment 1, Embodiment 11 and others.
  • the II regulating means 12 interpolates or thins out (hereinafter both expressed as "regulate") the horizontal, vertical picture elements of an original image having n picture elements x n picture elements obtained by image input means 10 to 1/2 of a desired enlarged image size of Ln picture elements x Ln picture elements.
  • Image enlarging means 290A enlarges the image using multiple resolution analysis in Wavelet transform which will be described later.
  • Enlarged Images (EI) regulating means 2913 regulates to a desired image size of Ln picture elements x Ln picture elements an enlarged image which was generated by image enlarging means 290A and has four times as many picture elements as the image regulated by II regulating means 12.
  • EsEI output means 1214 outputs an image data after enlargement - estimated by EI regulating means 2913 - to other devices as for display.
  • Image enlarging means 290A is provided with Vertical Edge (VdE) generating means 2900, which takes out a vertical-direction edge component image from an original image regulated to Ln/2 picture elements x Ln/2 picture elements by II regulating means 12, Horizontal Edge (HEd) generating means 2901, which takes out a horizontal-direction edge component image, and Oblique Edge (OE) generating means 2902, which takes out an oblique-direction edge component image.
  • image enlarging means 290A also has leveling up means 2903, which generates an enlarged image of Ln picture elements x Ln picture elements by inverse Wavelet transform, regarding the above-mentioned three edge component images and the original image regulated to Ln/2 picture elements x Ln/2 picture elements by II regulating means 12 as the four sub-band components making up a transformed image at the time when the enlarged image of Ln picture elements x Ln picture elements was subjected to Wavelet transform.
  • Wavelet transform, which is described in a number of publications including "Wavelet Beginner's Guide," Susumu Sakakibara, Tokyo Electric Engineering College Publication Bureau, is developed and applied in many fields such as signal processing and compression of image data.
  • FIG. 31 shows a layout example of sub-bands of a Wavelet transformed image in which an original image is divided into 10 sub-bands: LL3, HL3, LH3, HH3, HL2, LH2, HH2, HL1, LH1, HH1.
  • FIG. 30 is a diagram in which Wavelet transform as shown in FIG. 31 is illustrated in the form of filter series. That is, the Wavelet transform is performed in three stages - stages I, II, III. In each stage, “Low” processing and “High” processing are performed in the vertical direction (y direction) and horizontal direction (x direction) separately. In the Low processing, low pass filtering and down-sampling (thinning out) to 1/2 are carried out. In the High processing, high pass filtering and down-sampling to 1/2 are conducted.
  • High processing and Low processing are performed on the original image in the horizontal direction. And on the output of the horizontal direction High processing, High processing and Low processing are performed in the vertical direction.
  • the result of the High processing in the vertical direction is the HH1 component, and the result of the Low processing in the vertical direction is HL1.
  • Low processing and High processing are performed in the vertical direction.
  • the result of the vertical-direction Low processing is LL1 and the result of the vertical-direction High processing is LH1. Those are the results obtained in the first Wavelet transforming.
  • Low and High processing in the horizontal direction is applied to the LL1 component. To the output of the horizontal-direction High processing, High processing is applied in the vertical direction, and the result is HH2. To that output, Low processing is also applied in the vertical direction, and the result is HL2. Further, to the output obtained by the Low processing in the horizontal direction, Low processing is applied in the vertical direction, and the result is LL2. To that output, High processing is also applied in the vertical direction, and the result is LH2. Those are the results obtained in the second Wavelet transforming.
  • the LL2 is subjected to horizontal-direction Low processing and High processing separately. In the vertical direction, too, Low processing and High processing are performed separately. Thus obtained are the sub-band components HH3, HL3, LH3, LL3. Those are the results obtained in the third Wavelet transforming.
  • the original image is broken down into four frequency components - LL1, HL1, LH1 and HH1 and down-sampled to 1/2 both in the horizontal and vertical directions. Therefore, the size of the image representing the respective components will be 1/4 of that of the original image.
  • LL1 is a low frequency component extracted from the original image and is a blurred image of the original image. Most of information of the original image is contained in that component. Therefore, LL1 is the object for the second Wavelet transform.
  • the HL1 component obtained by the processing in FIG. 30, represents an image with the high frequency component extracted intensively in the horizontal direction of the original image.
  • the LH1 component represents an image with the high frequency component extracted intensively in the vertical direction of the original image.
  • HH1 represents an image with the high frequency component extracted both in the horizontal and vertical directions. In other words, it can be considered to be an image with the high frequency component extracted in the oblique direction.
  • the HL1 component strongly reflects the area where the density value fluctuates violently in the horizontal direction of the original image (edge information in the vertical direction).
  • LH1 component strongly reflects the area where the density value fluctuates violently in the vertical direction of the original image (edge information in the horizontal direction).
  • the HH1 component strongly reflects the area where the density value fluctuates violently in the horizontal and vertical directions of the original image (edge information in the oblique direction).
  • such characteristics produced by Wavelet transform can also be said of the components LL2, HL2, LH2, HH2 obtained in the second stage of Wavelet transform of LL1.
  • the same is applicable to the components of Wavelet transform with LL2 as object image.
  • the Wavelet transform breaks the LL image with the low frequency component extracted in the sub-band component image of one stage before down into four 1/4 resolution images corresponding to the low frequency component and the frequency components in vertical, horizontal and oblique directions.
  • sub-band component images can be synthesized by filtering to restore an image of one stage before. This will be explained in FIG. 31. Synthesizing four sub-band component images LL3, HL3, LH3, HH3 can restore LL2, and synthesizing LL2, HL2, LH2 and HH2 restores LL1. And the original image can be restored by using LL1, HL1, LH1, HH1.
  • Wavelet transform can express a plurality of sub-band component images with different resolutions simultaneously, it is also called the multiple resolution analysis. And the Wavelet transform attracts attention as technique that can compress data efficiently by compressing the respective sub-band components.
  • the original image is regarded as the low frequency sub-band component LL1 at one stage before.
  • the next step is to estimate the remaining images: image HL1 with the high frequency component strongly extracted in the horizontal direction, image LH1 with the high frequency component strongly extracted in the vertical direction, and image HH1 with the high frequency component strongly extracted in the horizontal and vertical directions, and to obtain an enlarged image four times as large. And this processing is applied to enlarge an original image to a desired size.
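As an illustration, the leveling-up step can be sketched with an orthonormal Haar synthesis; the three detail images are passed in by the caller here (zeros give plain picture element doubling), whereas the patent generates them with means 2900 to 2902. The factor of 2 on the original image is an assumption of the orthonormal normalization used in this sketch, and all names are made up for the example.

```python
def level_up(ll, hl, lh, hh):
    # inverse 2-D Haar step: four sub-band component images of size
    # h x w are synthesized into one image of size 2h x 2w
    h, w = len(ll), len(ll[0])
    out = [[0.0] * (2 * w) for _ in range(2 * h)]
    for y in range(h):
        for x in range(w):
            a, b, c, d = ll[y][x], hl[y][x], lh[y][x], hh[y][x]
            out[2 * y][2 * x]         = (a + b + c + d) / 2
            out[2 * y][2 * x + 1]     = (a - b + c - d) / 2
            out[2 * y + 1][2 * x]     = (a + b - c - d) / 2
            out[2 * y + 1][2 * x + 1] = (a - b - c + d) / 2
    return out

def enlarge_by_wavelet(img, hl, lh, hh):
    # the original image is regarded as the LL1 sub-band of the enlarged
    # image; it is scaled by 2 because the orthonormal LL component
    # carries twice the mean density of the synthesized image
    ll = [[2.0 * v for v in row] for row in img]
    return level_up(ll, hl, lh, hh)
```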
  • image input means 10 reads out an original image of a size n picture elements x n picture elements to be enlarged.
  • the original image read by image input means 10 is regulated in image size by II regulating means 12.
  • As mentioned in the description of the multiple resolution analysis in the Wavelet transform, when an image that is the object of the transform is subjected to one Wavelet transform, the sub-band components after the transform always become 1/2 of the original size in both the number of picture elements in the horizontal direction and the number in the vertical direction.
  • Conversely, the image obtained by inverse Wavelet transform is twice as large as the original sub-band component in both the horizontal and vertical numbers of picture elements; that is, the total number of picture elements becomes four times as many.
  • Therefore, II regulating means 12 first regulates the desired enlarged image size Ln picture elements x Ln picture elements - the enlargement ratio being L - to a multiple of 2, that is, dLn picture elements x dLn picture elements, and then regulates the original image so that its size is dLn/2 picture elements x dLn/2 picture elements.
  • A number of regulating techniques are available; in this embodiment it is realized by interpolating between the picture elements using Formula 1 or by thinning out picture elements in areas where the gradation changes little.
  • The following methods could also be applied: the original image is transformed into frequency space by an orthogonal transform such as the DCT, and the frequency components corresponding to dLn/2 picture elements x dLn/2 picture elements are taken out; or the missing high-frequency components are padded with "0" and the image is regulated by an inverse orthogonal transform corresponding to dLn/2 picture elements x dLn/2 picture elements. Considering the processing efficiency and the Wavelet-based enlargement that follows, however, such complicated processing is not efficient, so simple interpolation between picture elements or thinning out is adopted.
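The size-regulation step can be sketched as follows, assuming that "Formula 1" (not reproduced in this text) denotes ordinary linear interpolation; the function name `regulate` and the toy values of L and n are illustrative only.

```python
def regulate(img, out_h, out_w):
    """Resize a grayscale image to (out_h, out_w) picture elements by
    linear interpolation; a slightly smaller target effectively thins
    out picture elements.  A sketch only - 'regulate' is a name
    invented here for II regulating means 12."""
    in_h, in_w = len(img), len(img[0])
    out = [[0.0] * out_w for _ in range(out_h)]
    for y in range(out_h):
        for x in range(out_w):
            # map the output coordinate back onto the source grid
            sy = y * (in_h - 1) / max(out_h - 1, 1)
            sx = x * (in_w - 1) / max(out_w - 1, 1)
            y0, x0 = int(sy), int(sx)
            y1, x1 = min(y0 + 1, in_h - 1), min(x0 + 1, in_w - 1)
            fy, fx = sy - y0, sx - x0
            top = img[y0][x0] * (1 - fx) + img[y0][x1] * fx
            bot = img[y1][x0] * (1 - fx) + img[y1][x1] * fx
            out[y][x] = top * (1 - fy) + bot * fy
    return out

# regulate a desired enlarged size Ln to a multiple of 2, then halve it
L, n = 3, 100                  # hypothetical enlargement ratio and input size
dLn = (L * n + 1) // 2 * 2     # Ln rounded up to a multiple of 2
half = dLn // 2                # the size the original is regulated to
```

The original of n x n picture elements would then be passed through `regulate(img, half, half)` before the Wavelet-based enlargement.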
  • Image enlarging means 290A enlarges the original image - regulated to dLn/2 picture elements x dLn/2 picture elements by II regulating means 12 - twice in both the horizontal and vertical directions, to an image size close to the desired Ln picture elements x Ln picture elements.
  • For this, the multiple resolution analysis of the Wavelet transform is utilized.
  • The problems with the prior-art method, in which an orthogonal transform is performed and the missing components are compensated in the frequency domain, are the processing time and the occurrence of jaggy noise at block joints, because the image is divided into a plurality of blocks.
  • The Wavelet transform offers the advantage that, because a large image can be handled at one time, no such noise is caused.
  • image enlarging means 290A will have to estimate the images of dLn/2 picture elements x dLn/2 picture elements corresponding to the remaining three sub-bands HL1, LH1, HH1.
  • FIG. 32 schematically shows that procedure.
  • a method is adopted in which the three images of sub-band components HL1, LH1, HH1 are taken as edge images of LL1 in three directions.
  • the component HL1 is to represent an image with the high frequency component strongly extracted in the horizontal direction of an enlarged image (will be named LL0) having four times as many picture elements as the original image regulated to dLn/2 picture elements x dLn/2 picture elements (FIG. 31 (a)), and the sub-band component LH1 is to represent an image with the high frequency component strongly extracted in the vertical direction of sub-band component LL0.
  • the sub-band component HH1 will be an image with the high frequency extracted both in the horizontal and vertical directions.
  • the sub-band component HL1 reflects the area representing a high frequency component in the horizontal direction, that is, edge information in the vertical direction of the image of sub-band component LL0 (FIG. 32 (b)).
  • The sub-band component LH1 reflects the area representing a high frequency component in the vertical direction, that is, edge information in the horizontal direction of the image of sub-band component LL0 (FIG. 32 (c)).
  • the sub-band component HH1 reflects the area representing a high frequency component both in the vertical and horizontal directions, that is, edge information in the oblique direction of the image of sub-band component LL0 (FIG. 32 (d)).
  • This is done by edge generating means 290B of the arrangement shown in FIG. 29, where VEd generating means 2900 extracts the edge component in the vertical direction of the original image regulated to dLn/2 picture elements x dLn/2 picture elements by II regulating means 12 and takes it as the shortage HL1 component.
  • HEd generating means 2901 extracts the edge component in the horizontal direction of the original image regulated to dLn/2 picture elements x dLn/2 picture elements by II regulating means 12 and takes it as shortage LH1 component.
  • OEd generating means 2902 extracts the edge component in the oblique direction of the original image regulated to dLn/2 picture elements x dLn/2 picture elements by II regulating means 12 and takes it as the shortage HH1 component.
  • The edge generating means 2900, 2901 and 2902 use edge detection filters to perform detection in three directions as shown in FIG. 33.
  • An example shown in FIG. 33 (a) is to detect the edge in the horizontal direction using a filter in which weighting increases in the horizontal direction.
  • An example shown in FIG. 33 (b) is to detect the edge in the vertical direction using a filter in which weighting increases in the vertical direction.
  • An example shown in FIG. 33 (c) is to detect the edge in the oblique direction using a filter in which the weighting increases in the oblique direction. These are not the only applicable filters; other filters may also be used.
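In the spirit of FIG. 33 (whose exact coefficients are not reproduced in this text), classic 3x3 line-detection kernels weighted along one direction might look as follows; both the kernels and the `convolve3` helper are illustrative choices, not taken from the patent.

```python
# 3x3 kernels whose weighting runs along one direction; each responds
# strongly where the density changes across that direction
H_EDGE = [[-1, -1, -1],
          [ 2,  2,  2],
          [-1, -1, -1]]   # horizontal edge detector
V_EDGE = [[-1,  2, -1],
          [-1,  2, -1],
          [-1,  2, -1]]   # vertical edge detector
O_EDGE = [[ 2, -1, -1],
          [-1,  2, -1],
          [-1, -1,  2]]   # oblique (diagonal) edge detector

def convolve3(img, k):
    """Apply a 3x3 filter to a grayscale image; border picture
    elements are left at 0 for simplicity."""
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            out[y][x] = sum(img[y + dy - 1][x + dx - 1] * k[dy][dx]
                            for dy in range(3) for dx in range(3))
    return out
```

On an image with a vertical step edge, `V_EDGE` responds while `H_EDGE` stays at zero, which is the directional selectivity the three generating means rely on.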
  • From these components, leveling up means 2903 acquires a clear enlarged image of a size dLn picture elements x dLn picture elements; the processing here may be filter-series processing that does the reverse of the decomposition.
  • EI regulating means 2913 interpolates between picture elements or thins them out to make up for the slight difference. At most one picture element is processed here, and the processing is done in areas where the image changes are small (areas, other than edges, with little gradation change), so its effect is small.
  • The enlarged image obtained by EI regulating means 2913 is handed over to other devices by Enlarged Images (EI) output means 2914, displayed on a CRT, or used in some other way.
  • This way, the blurring of the image can be kept down, unlike the prior-art method of merely interpolating between the picture elements of the original image and the prior-art device in which the missing frequency components are padded with "0" in the frequency domain.
  • The present embodiment also produces a clear enlarged image without causing noise such as jaggies, which is a known problem with the orthogonal transform method.
  • FIG. 34 is a block diagram showing the arrangement of image enlarging means 290A making up the image processing device of Embodiment 18 of the present invention. The operation of this device will now be explained.
  • an original image obtained by image input means 10 is regulated by II regulating means 12 from a desired enlarged image size Ln picture elements x Ln picture elements to dLn/2 picture elements x dLn/2 picture elements, both a multiple of 2, in the horizontal and vertical directions.
  • the original image is regulated to a 1/4 size, that is, dLn/2 picture elements x dLn/2 picture elements both in the horizontal and vertical directions.
  • Input fine-adjustment means 700 fine-adjusts the dLn/2 picture elements x dLn/2 picture elements by up to one picture element to a multiple of 2, that is, ddLn picture elements x ddLn picture elements, so that the sub-band components one level below can be acquired by leveling down means 701 from the regulated original image.
  • And leveling down means 701 performs Wavelet transform on the original image of ddLn picture elements x ddLn picture elements acquired at input fine-adjustment means 700 and generates four sub-band components LL2, HL2, LH2, HH2 with the image size being 1/4 of that of the original image.
  • FIG. 35 schematically shows the outline of the processing at image enlarging means 290A.
  • In Embodiment 17, the edge images in the vertical, horizontal and oblique directions obtained from the current object image LL1 are taken as the sub-band components HL1, LH1, HH1 that are missing from the Wavelet-transformed image of LL0, the image obtained by enlarging the current object image four times. Strictly speaking, however, this does not hold for the filtering of FIG. 30.
  • In the filtering of FIG. 30, high-frequency component data in the horizontal direction and low-frequency component data in the vertical direction are extracted as the HL1 component. The HL1 component therefore contains both the picture elements whose value fluctuates sharply in the horizontal direction (edges extending in the vertical direction, etc.) and the picture elements whose value fluctuates little in the vertical direction. In Embodiment 17 it is assumed that, of these, the picture elements whose value changes greatly in the horizontal direction - that is, the edge portions extending in the vertical direction - have the dominant effect, and edge information in the vertical direction alone is taken as the HL1 component.
  • Edge information in the vertical direction does contain many picture elements whose value changes greatly in the horizontal direction but, strictly speaking, this is not always true. The same applies to the other sub-band components LH1 and HH1.
  • In the present embodiment, therefore, the sub-band components LL2, HL2, LH2, HH2 are prepared by Wavelet-transforming the current object image, thus going down one level.
  • The correction amounts dHL, dLH, dHH for estimating the sub-band components corresponding to the three edge images of the original object image are found from the correlation between the edge images HLe, LHe, HHe in the three directions of the low-frequency component LL2 among those sub-band components and the actual three sub-band components HL2, LH2, HH2.
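The correction step might be sketched as follows for the HL component, assuming dHL is simply the difference HLe - HL2 between the reference edge of LL2 and the actual sub-band, which is then enlarged and subtracted from the edge image of the object image as described later. The sign convention, the nearest-neighbour enlargement standing in for Formula 1, and all function names are assumptions, not the patent's.

```python
def upscale2(img):
    """Nearest-neighbour 2x enlargement, a simplified stand-in for
    the Formula 1 interpolation used to enlarge the correction."""
    out = []
    for row in img:
        wide = [v for v in row for _ in (0, 1)]   # repeat each element
        out.append(wide)
        out.append(list(wide))                    # repeat each row
    return out

def estimate_HL1(edge_LL1, HLe, HL2):
    """Correction dHL = HLe - HL2 (reference edge of LL2 minus the
    actual sub-band), enlarged to the object size and subtracted from
    the vertical edge image of the object image; LH1 and HH1 would be
    estimated the same way from their own reference components."""
    dHL = [[r - a for r, a in zip(rr, aa)] for rr, aa in zip(HLe, HL2)]
    big = upscale2(dHL)
    return [[e - d for e, d in zip(er, dr)] for er, dr in zip(edge_LL1, big)]
```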
  • This is carried out by Reference Components (RC) generating means 70A, correction estimating means 70B, and component estimating means 70C, as shown in FIG. 34.
  • Means 702 for generating the reference HL component detects edge information in the vertical direction using a filter as shown in FIG. 33 (b), with attention paid to the LL2 component, which lies in the low-frequency area and expresses the features of the original image better than the sub-band components HL2, LH2, HH2. That edge information is named the reference HL component HLe.
  • HL correction estimating means 705 checks the correlation between the reference HL component HLe and HL2 obtained by leveling down means 701.
  • Means 703 for generating the reference LH component and means 704 for generating the reference HH component likewise select edge information in the horizontal direction of the LL2 component as the reference LH component LHe and edge information in the oblique direction of the LL2 component as the reference HH component.
  • HL component estimating means 708, LH component estimating means 709 and HH component estimating means 710 firstly enlarge the image of correction components dHL, dLH, and dHH to the components with ddLn picture elements x ddLn picture elements.
  • Next, means 708, 709 and 710 subtract the correction components dHL, dLH, and dHH from the above-mentioned edge components of HL1, LH1, HH1 obtained by VEd generating means 2900, HEd generating means 2901 and OEd generating means 2902 respectively, and estimate the HL1, LH1, HH1 components at the time when the original image regulated to ddLn picture elements x ddLn picture elements by input fine-adjustment means 700 is taken as sub-band component LL1.
  • HL component estimating means 708, LH component estimating means 709 and HH component estimating means 710 do fine-adjustment by interpolating between the respective picture elements in accordance with Formula 1 so that the picture element size of each corrected image will be ddLn picture elements x ddLn picture elements when the above-mentioned correction components dHL, dLH, dHH are used.
  • This is not the only way; other methods can also be applied, including the conventional method of enlarging the image twice in both the horizontal and vertical directions by padding the missing components with 0 in the frequency domain.
  • HL component estimating means 708, LH component estimating means 709 and HH component estimating means 710 are to adopt a linear interpolation method as in Formula 1.
  • Processing step by leveling up means 2903 and after that is the same as in Embodiment 17.
  • The estimation by HL component estimating means 708, LH component estimating means 709 and HH component estimating means 710 is made by adding the difference components between the reference components and the actual components; in addition, the following methods are also suitable.
  • This way, the shortage sub-band components - especially the high-frequency components in Wavelet transform images, which cannot be taken out merely by edge detection in three directions of the regulated original image as in Embodiment 17 - can be estimated with high precision, and thus the blurring of the image can be kept down.
  • Moreover, the Wavelet transform does not require the block division used in orthogonal transforms, so no block distortion arises, which is the problem encountered with the prior-art method using an orthogonal transform.
  • FIG. 36 shows the arrangement of image enlarging means 290A of the image processing device of Embodiment 19; the operation of this image processing device will now be described.
  • Reference Components (RC) generating means 3601 finds the reference components used to determine the correction amounts for the estimation of the HL, LH, HH components - performed by HL component estimating means 708, LH component estimating means 709 and HH component estimating means 710 - from LL2 in the low-frequency area of the sub-band component image obtained by leveling down means 701.
  • Here, a Laplacian filter as shown in FIG. 18 (a)(b) is used, and the typical edge image of LL2 is taken as the reference component image.
  • The Laplacian filter is often used to detect edges without much restriction on direction, rather than edges in a specific direction as with the filters explained in FIG. 33.
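A standard 4-neighbour Laplacian is one common choice for such direction-agnostic edge detection; the coefficients of FIG. 18 are not reproduced in this text, so the kernel and helper below are illustrative, not necessarily the patent's.

```python
# a common 4-neighbour Laplacian kernel: responds to density change
# in any direction, not a specific one
LAPLACIAN = [[ 0, -1,  0],
             [-1,  4, -1],
             [ 0, -1,  0]]

def laplacian_edges(img):
    """Direction-agnostic edge image of a grayscale picture
    (border picture elements left at 0 for simplicity)."""
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            out[y][x] = sum(img[y + dy - 1][x + dx - 1] * LAPLACIAN[dy][dx]
                            for dy in range(3) for dx in range(3))
    return out
```

A flat area yields zero response, while any isolated density change fires regardless of its orientation, which is why one such filtering pass can replace the three directional passes.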
  • Using this, the present embodiment finds the correction amounts as in Embodiment 18. This way, the repetition of the edge detection procedure pointed out in Embodiment 18 is reduced, and the processing is made more efficient.
  • HL correction estimating means 705, LH correction estimating means 706 and HH correction estimating means 707 find the respective difference images dHL2, dLH2, dHH2 between the edge image obtained by RC generating means 3601 and HL2, LH2, HH2 obtained by leveling down means 701 (see FIG. 35 (c)), and each difference image is regulated to an image having ddLn picture elements x ddLn picture elements by linear approximation as in Formula 1.
  • edge generating means 3600 detects an edge image using a Laplacian filter from an original image of ddLn picture elements x ddLn picture elements regulated by II regulating means 12.
  • HL component estimating means 708, LH component estimating means 709 and HH component estimating means 710 add correction images obtained by HL correction estimating means 705, LH correction estimating means 706 and HH correction estimating means 707 to the edge image whereby the respective sub-band components HL1, LH1, HH1 can be estimated with high precision.
  • In HL component estimating means 708, LH component estimating means 709 and HH component estimating means 710, as explained in Embodiment 18, it is also possible to use the correction amounts - obtained by HL correction estimating means 705, LH correction estimating means 706 and HH correction estimating means 707 - multiplied by a certain transform coefficient matrix, or the results of transforming them by a transform function.
  • FIG. 37 shows the arrangement of the image processing device of Embodiment 20.
  • The outline of the present embodiment is as follows.
  • The number of picture elements of the enlarged image is not known in advance; the original image is enlarged twice in both the horizontal and vertical directions in accordance with the multiple resolution analysis of the Wavelet transform, and the enlarged image is shown to the user. This process is repeated until the user finds a desired image.
  • An original image of a size n picture elements x n picture elements inputted by image input means 10 is set as an enlargement object image by EP initializing means 3700.
  • ObI enlarging means 3701 enlarges the original image of a size n picture elements x n picture elements twice in both the horizontal and vertical directions, that is, to four times the number of picture elements.
  • the enlargement object image can always be enlarged to a size of four times as many picture elements by using image enlargement means in the image processing devices described in Embodiments 17, 18 and 19.
  • Enlarged image presenting means 1302 shows the user the current enlarged image obtained by ObI enlarging means 3701 on a CRT etc. Providing a function for moving the viewpoint with a cursor when the resolution of the image exceeds that of the CRT, or a function for cutting out a specific part of the image, would help the user judge whether the displayed enlarged image is the one needed.
  • MP ending judge means 3703 refers the process to image fine-adjustment means 3704 if the image is of the desired size; if an indication is received that the size of the enlarged image is not the desired one, it sets this enlarged image as the next enlargement object image and returns the process to ObI enlarging means 3701.
  • Image fine-adjustment means 3704 asks the user whether fine-adjustment is needed. Since multiple resolution analysis by Wavelet transform is used for image enlargement, the enlarged image is always four times as large as the image before enlargement. The user may find that while the previous image was too small, the enlarged image is too large to display on the CRT at one time. Image fine-adjustment means 3704 therefore asks the user whether the image size should be adjusted to some extent. If the user wants the image enlarged a little, picture element interpolation is performed; if the user wishes the image slightly reduced, picture elements are thinned out. This way, the image size is re-adjusted.
  • For the interpolation, an area other than edges where the density change is small is selected; the same applies to the thinning out.
  • EsEI output means 1214 outputs the enlarged image obtained by image fine-adjustment means 3704, that is, displays it on a CRT etc., prints it out, or refers it to other devices.
  • In this way, a detailed enlarged image is shown to the user, who judges whether the size and resolution are right. Once the user finds a desired image, the series of enlargement steps can be stopped; there is no need to set the enlargement ratio in advance, and an image can simply be enlarged to the size the user desires.
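The control flow of this embodiment can be sketched as follows; every callable is a hypothetical stand-in for the corresponding means in FIG. 37, and for brevity an "image" is represented only by its side length in picture elements.

```python
def interactive_enlarge(image, size_is_enough, enlarge4, fine_adjust):
    """Control-flow sketch of FIG. 37: enlarge 2x per side (4x the
    picture elements) until the user accepts the size, then
    fine-adjust by interpolation or thinning out.  All four arguments
    are hypothetical stand-ins for the means described above."""
    current = image
    while not size_is_enough(current):       # MP ending judge means 3703
        current = enlarge4(current)          # ObI enlarging means 3701
    return fine_adjust(current)              # image fine-adjustment means 3704

# toy run: start at 100 x 100, stop once at least 640 per side,
# then thin out a little so the result fits a 700-wide display
result = interactive_enlarge(
    100,
    lambda side: side >= 640,
    lambda side: side * 2,                   # one Wavelet-based 4x step
    lambda side: min(side, 700))
```

The doubling steps give 100, 200, 400, 800 per side; the fine-adjustment then trims the overshoot, mirroring the interpolation/thinning step of image fine-adjustment means 3704.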
  • The present embodiment relates to the efficient estimation of an enlarged image of a color original image.
  • FIG. 38 is a block diagram showing the arrangement of the image processing device of this embodiment; its operation will now be described.
  • SC selecting means 2800 selects the green component as the standard color component.
  • TR depriving means 2801 finds the simple ratios ratio_r of the red component and ratio_b of the blue component to the green component. This process is the same as that in Embodiment 16 and will not be explained again.
  • Standard Component Image (SCI) regulating means 3802 and Standard Image (SI) enlarging means 3803 enlarge the standard color component, that is, the green component. The enlarged image of the standard color thus obtained is subjected to picture element interpolation or thinning out by Standard Enlarged Image (EI) regulating means 3804 so as to obtain the desired image size Ln picture elements x Ln picture elements. Furthermore, Shortage Components (ShC) enlarging means 3805 multiplies the enlarged green component by the simple ratios ratio_r and ratio_b, thereby preparing the data for the remaining red and blue components.
  • Enlarged Color Image (ECI) recomposing means 3806 combines these three enlarged components into one, thus producing an enlarged image of the color original image.
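The ratio-based color enlargement can be sketched as follows. A nearest-neighbour 2x enlargement stands in for the standard-image enlarging means, the ratio images are enlarged alongside green, and an epsilon guards against division by zero; none of these specifics (nor the function names) are taken from the patent.

```python
def upscale2(img):
    """Nearest-neighbour 2x enlargement, standing in for the
    standard-image enlarging means (simplified for this sketch)."""
    out = []
    for row in img:
        wide = [v for v in row for _ in (0, 1)]
        out.append(wide)
        out.append(list(wide))
    return out

def enlarge_color(r, g, b):
    """Enlarge only the green (standard) component, then rebuild red
    and blue from their per-pixel simple ratios to green, as the
    embodiment describes."""
    eps = 1e-9   # guard against zero green values (an added detail)
    ratio_r = [[rv / (gv + eps) for rv, gv in zip(rr, gr)] for rr, gr in zip(r, g)]
    ratio_b = [[bv / (gv + eps) for bv, gv in zip(br, gr)] for br, gr in zip(b, g)]
    g_big = upscale2(g)
    rat_r = upscale2(ratio_r)
    rat_b = upscale2(ratio_b)
    r_big = [[gv * rv for gv, rv in zip(gr, rr)] for gr, rr in zip(g_big, rat_r)]
    b_big = [[gv * bv for gv, bv in zip(gr, br)] for gr, br in zip(g_big, rat_b)]
    return r_big, g_big, b_big
```

Only one full Wavelet-based enlargement (of green) is needed; red and blue are recovered by per-pixel multiplication, which is the efficiency gain this embodiment claims.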
  • EsEI output means 1214 outputs the enlarged color image thus obtained, that is, displays it on a CRT etc., prints it out, or refers it to other devices.

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Editing Of Facsimile Originals (AREA)
EP00909658A 1999-03-15 2000-03-15 Bildverarbeitungsgerät, bildverarbeitungsmethode und aufnahmemedium Withdrawn EP1164781A1 (de)

Applications Claiming Priority (7)

Application Number Priority Date Filing Date Title
JP6815199 1999-03-15
JP6815199 1999-03-15
JP10470399A JP4129097B2 (ja) 1999-04-13 1999-04-13 画像処理装置及び画像処理方法
JP10470399 1999-04-13
JP17499099 1999-06-22
JP17499099A JP4081926B2 (ja) 1999-06-22 1999-06-22 画像拡大装置
PCT/JP2000/001586 WO2000056060A1 (fr) 1999-03-15 2000-03-15 Dispositif et procede de traitement d'image, et support enregistre

Publications (1)

Publication Number Publication Date
EP1164781A1 true EP1164781A1 (de) 2001-12-19

Family

ID=27299651

Family Applications (1)

Application Number Title Priority Date Filing Date
EP00909658A Withdrawn EP1164781A1 (de) 1999-03-15 2000-03-15 Bildverarbeitungsgerät, bildverarbeitungsmethode und aufnahmemedium

Country Status (2)

Country Link
EP (1) EP1164781A1 (de)
WO (1) WO2000056060A1 (de)

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP1355271A1 (de) * 2002-04-16 2003-10-22 Ricoh Company, Ltd. Anpassungsfähige nichtlineare Bildvergrösserung mittels Wavelet-Transformkoeffizienten
EP1569442A1 (de) * 2002-12-02 2005-08-31 Olympus Corporation Bilderfassungseinrichtung
EP1615168A1 (de) * 2004-07-09 2006-01-11 STMicroelectronics S.r.l. Farbinterpolation im DWT-Bereich
US7068851B1 (en) 1999-12-10 2006-06-27 Ricoh Co., Ltd. Multiscale sharpening and smoothing with wavelets
US8040385B2 (en) 2002-12-02 2011-10-18 Olympus Corporation Image pickup apparatus
US8565298B2 (en) 1994-09-21 2013-10-22 Ricoh Co., Ltd. Encoder rate control
US8897569B2 (en) 2010-03-01 2014-11-25 Sharp Kabushiki Kaisha Image enlargement device, image enlargement program, memory medium on which an image enlargement program is stored, and display device
WO2020081776A1 (en) * 2018-10-18 2020-04-23 Sony Corporation Adjusting sharpness and details in upscaling output

Family Cites Families (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH03204268A (ja) * 1990-01-05 1991-09-05 Dainippon Printing Co Ltd 高画質画像拡大方法
JPH04229382A (ja) * 1990-12-27 1992-08-18 Ricoh Co Ltd ディジタル画像データの解像度交換装置
JP2916607B2 (ja) * 1991-05-10 1999-07-05 三菱電機株式会社 画像拡大装置
JP3152515B2 (ja) * 1992-09-11 2001-04-03 三洋電機株式会社 画像・データ多重化回路
JP3195142B2 (ja) * 1993-10-29 2001-08-06 キヤノン株式会社 画像処理方法及び装置
JP2639323B2 (ja) * 1993-11-29 1997-08-13 日本電気株式会社 画像拡大装置
JPH07203439A (ja) * 1993-12-28 1995-08-04 Nec Corp 画像信号復号化装置
JPH08294001A (ja) * 1995-04-20 1996-11-05 Seiko Epson Corp 画像処理方法および画像処理装置
JPH08315129A (ja) * 1995-05-15 1996-11-29 Sharp Corp 画像拡大方式
JP3706201B2 (ja) * 1996-07-18 2005-10-12 富士写真フイルム株式会社 画像処理方法
JPH1098611A (ja) * 1996-09-20 1998-04-14 Nec Corp 画像サイズ変換方式
JPH11284840A (ja) * 1998-03-26 1999-10-15 Ricoh Co Ltd 画像形成装置

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See references of WO0056060A1 *

Cited By (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8565298B2 (en) 1994-09-21 2013-10-22 Ricoh Co., Ltd. Encoder rate control
US7599570B2 (en) 1999-12-10 2009-10-06 Ricoh Co., Ltd Multiscale sharpening and smoothing with wavelets
US7068851B1 (en) 1999-12-10 2006-06-27 Ricoh Co., Ltd. Multiscale sharpening and smoothing with wavelets
EP1610267A2 (de) * 2002-04-16 2005-12-28 Ricoh Company, Ltd. Anpassungsfähige nichtlineare Bildvergrösserung mittels Wavelet-Transformkoeffizienten
EP1355271A1 (de) * 2002-04-16 2003-10-22 Ricoh Company, Ltd. Anpassungsfähige nichtlineare Bildvergrösserung mittels Wavelet-Transformkoeffizienten
EP1610267A3 (de) * 2002-04-16 2006-04-12 Ricoh Company, Ltd. Anpassungsfähige nichtlineare Bildvergrösserung mittels Wavelet-Transformkoeffizienten
US8040385B2 (en) 2002-12-02 2011-10-18 Olympus Corporation Image pickup apparatus
EP1569442A4 (de) * 2002-12-02 2009-08-12 Olympus Corp Bilderfassungseinrichtung
EP1569442A1 (de) * 2002-12-02 2005-08-31 Olympus Corporation Bilderfassungseinrichtung
EP1615168A1 (de) * 2004-07-09 2006-01-11 STMicroelectronics S.r.l. Farbinterpolation im DWT-Bereich
US8897569B2 (en) 2010-03-01 2014-11-25 Sharp Kabushiki Kaisha Image enlargement device, image enlargement program, memory medium on which an image enlargement program is stored, and display device
WO2020081776A1 (en) * 2018-10-18 2020-04-23 Sony Corporation Adjusting sharpness and details in upscaling output
US11252300B2 (en) 2018-10-18 2022-02-15 Sony Corporation Training and upscaling of large size image
US11252301B2 (en) 2018-10-18 2022-02-15 Sony Corporation Adjusting sharpness and details in upscaling output
US11265446B2 (en) 2018-10-18 2022-03-01 Sony Corporation Frame handling for ML-based upscaling
US11533413B2 (en) 2018-10-18 2022-12-20 Sony Group Corporation Enhanced color reproduction for upscaling

Also Published As

Publication number Publication date
WO2000056060A1 (fr) 2000-09-21

Similar Documents

Publication Publication Date Title
US7155069B2 (en) Image processing apparatus, image processing method, and image processing program
US8743963B2 (en) Image/video quality enhancement and super-resolution using sparse transformations
US20020018072A1 (en) Scalable graphics image drawings on multiresolution image with/without image data re-usage
EP1617643A2 (de) Informationsverarbeitungsgerät, Verfahren und Speichermedium dafür
EP1001374A2 (de) Bildverarbeitungsverfahren und gerät
JP4371457B2 (ja) 画像処理装置、方法及びコンピュータ読み取り可能な記憶媒体
EP2300982B1 (de) Bild-/videoqualitätsverbesserung und superauflösung unter verwendung von dünn besiedelten transformationen
JPH09506193A (ja) 離散的コサイン変換を利用して画像をスケーリング(拡大縮小)しフィルタリングするためのコーディング方法および装置
JP2003348328A (ja) ウェーブレット係数を用いた非線形画像処理方法、装置及びプログラム
KR101348931B1 (ko) 이산 웨이블릿 변환 기반 초고해상도 영상 획득 방법
US6813384B1 (en) Indexing wavelet compressed video for efficient data handling
JPH08294001A (ja) 画像処理方法および画像処理装置
EP1164781A1 (de) Bildverarbeitungsgerät, bildverarbeitungsmethode und aufnahmemedium
Vo et al. Selective data pruning-based compression using high-order edge-directed interpolation
EP1229738B1 (de) Bilddekompression von Transformkoeffizienten
US6856706B2 (en) Image processing method and system, and storage medium
US7630568B2 (en) System and method for low-resolution signal rendering from a hierarchical transform representation
JP4081926B2 (ja) 画像拡大装置
JP4598115B2 (ja) 画像処理方法および装置並びに記録媒体
JP4129097B2 (ja) 画像処理装置及び画像処理方法
JP4267159B2 (ja) 画像処理方法および装置並びに記録媒体
US6640016B1 (en) Method, apparatus and recording medium for image processing
JP3750164B2 (ja) 画像処理方法および画像処理装置
US20020093511A1 (en) Apparatus and method for processing a digital image
Velisavljevic Edge-preservation resolution enhancement with oriented wavelets

Legal Events

Date Code Title Description
PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

17P Request for examination filed

Effective date: 20010928

AK Designated contracting states

Kind code of ref document: A1

Designated state(s): DE GB

RAP1 Party data changed (applicant data changed or rights of an application transferred)

Owner name: PANASONIC CORPORATION

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: THE APPLICATION HAS BEEN WITHDRAWN

18W Application withdrawn

Effective date: 20081212