WO2000075865A1 - Image processing method - Google Patents

Image processing method

Info

Publication number
WO2000075865A1
WO2000075865A1 PCT/JP2000/003640
Authority
WO
WIPO (PCT)
Prior art keywords
edge
function
dimensional
image
sampling function
Prior art date
Application number
PCT/JP2000/003640
Other languages
English (en)
Japanese (ja)
Inventor
Kazuo Toraichi
Kouichi Wada
Masakazu Ohhira
Original Assignee
Fluency Research & Development Co., Ltd.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Fluency Research & Development Co., Ltd. filed Critical Fluency Research & Development Co., Ltd.
Publication of WO2000075865A1 publication Critical patent/WO2000075865A1/fr

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00 Geometric image transformations in the plane of the image
    • G06T3/40 Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T3/403 Edge-driven scaling; Edge-based scaling

Definitions

  • The present invention relates to an image processing method for displaying, with high quality, a scaled image reconstructed on the basis of image data.
  • Image reconstruction is performed by interpolating the density value information of the original image.
  • General image reconstruction methods include (1) nearest-neighbor interpolation, (2) bi-linear interpolation, and (3) cubic convolution interpolation. All of these are based on density-value interpolation.
  • Cubic convolution interpolation, which is said to give the sharpest enlarged image among the conventional methods described above, also performs interpolation based on the sinc function.
  • Because the sinc function is not locally supported, it must be truncated to a finite interval in actual applications, which introduces an error.
  • Because the sampling theorem covers signals that are continuous and differentiable over finite time intervals, the original signal cannot be completely restored if it contains discontinuous points.
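Both limitations of sinc-based reconstruction can be checked numerically. The following NumPy sketch (not part of the patent) interpolates with a full and a truncated sinc kernel: truncating the kernel changes the result even for a smooth signal, and reconstruction around a step-like discontinuity overshoots.

```python
import numpy as np

def sinc_interp(samples, t, support=None):
    """Reconstruct the value at position t from unit-spaced samples by
    sinc interpolation; if `support` is given, the sinc kernel is
    truncated to |t - n| <= support."""
    n = np.arange(len(samples))
    w = np.sinc(t - n)                       # np.sinc(x) = sin(pi x)/(pi x)
    if support is not None:
        w = np.where(np.abs(t - n) <= support, w, 0.0)
    return float(np.dot(samples, w))

# Truncating the kernel introduces an error even for a smooth signal.
smooth = np.cos(2 * np.pi * 0.05 * np.arange(64))
full = sinc_interp(smooth, 20.25)
truncated = sinc_interp(smooth, 20.25, support=3)
truncation_error = abs(full - truncated)
assert truncation_error > 1e-4

# Near a step edge (a discontinuity), the reconstruction overshoots.
step = np.array([0.0] * 8 + [1.0] * 8)
values = [sinc_interp(step, t) for t in np.linspace(4.0, 11.0, 141)]
overshoot = max(values) - 1.0
assert overshoot > 0.05                      # ringing around the discontinuity
```

The overshoot is exactly the kind of reconstruction distortion near edges that the text describes.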
  • An image contains many discontinuities, and such points are important in determining image quality. For example, the vicinity of an edge, where the density value changes rapidly, corresponds to a discontinuous point, and at such points the distortion introduced by reconstruction becomes pronounced.
  • The present invention has been made in view of the above points, and its object is to provide an image processing method capable of obtaining a high-quality reconstructed image, even under enlargement, with a small amount of data processing.
  • The image processing method of the invention enlarges a two-dimensional image using a two-dimensional sampling function in which the influence of a sample value on points equidistant from the sample point is equalized.
  • In a first step, the edges contained in the image are detected; in a second step, an elliptic two-dimensional sampling function is generated by deforming the two-dimensional sampling function to fit the shape of each edge.
  • For the pixels along the edges contained in the image, an interpolation operation that increases the number of pixels is performed using the elliptic two-dimensional sampling function; for the other pixels, the interpolation uses the undeformed two-dimensional sampling function.
  • The two-dimensional sampling function described above is composed of (m-1)-th order piecewise polynomials that are continuously differentiable (m-2) times, and desirably has locally supported function values.
  • By using a continuously differentiable two-dimensional sampling function, a high-quality enlarged image free of unnatural steps can be obtained.
  • By using a two-dimensional sampling function represented by a locally supported piecewise polynomial, the amount of data processing during enlargement can be reduced and truncation errors can be prevented.
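The practical effect of local support can be illustrated with a stand-in kernel. The patent's m3 fluency function is only referenced, not reproduced, in this excerpt, so the sketch below uses the Catmull-Rom cubic (the classic cubic-convolution kernel, which is also a locally supported piecewise polynomial on [-2, 2]) to show that each interpolated value is an exact finite sum over at most four neighbors, with no truncation step:

```python
def keys_cubic(x, a=-0.5):
    """Catmull-Rom cubic: a piecewise-polynomial interpolation kernel
    with local support [-2, 2], standing in for the patent's fluency
    sampling function, whose exact formula is not given in this excerpt."""
    x = abs(x)
    if x < 1:
        return (a + 2) * x**3 - (a + 3) * x**2 + 1
    if x < 2:
        return a * x**3 - 5 * a * x**2 + 8 * a * x - 4 * a
    return 0.0

def interpolate(samples, t):
    """With a locally supported kernel, each output value is a finite sum
    over at most 4 neighbours -- no infinite kernel to truncate."""
    total, taps = 0.0, 0
    for n, s in enumerate(samples):
        w = keys_cubic(t - n)
        if w != 0.0:
            total += s * w
            taps += 1
    return total, taps

samples = [0.0, 1.0, 4.0, 9.0, 16.0, 25.0]      # f(n) = n**2
value, taps = interpolate(samples, 2.5)
assert taps <= 4                                 # bounded per-pixel cost
assert abs(value - 6.25) < 1e-12                 # reproduces f(2.5) here
assert keys_cubic(0.0) == 1.0 and keys_cubic(1.0) == 0.0   # interpolation property
```

Only the four samples inside the kernel's support contribute, so the per-pixel cost is fixed regardless of image size.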
  • The elliptic two-dimensional sampling function described above desirably has an elliptical cross-section, obtained by deforming along the edge direction the circular cross-section passing through points of equal function value, while keeping the cross-sectional area constant.
  • The ratio of the major axis to the minor axis of this elliptical cross-section is desirably set to a large value when another edge continues along the edge of interest, and to a small value when no such edge exists. The longer the edge, the more the two-dimensional sampling function is deformed, so the jaggies that are particularly noticeable on long edges can be reduced.
  • The first step described above desirably includes a process of determining an edge direction based on local information for a minute region containing several pixels of the two-dimensional image, and a process of optimizing the determined edge direction over a region wider than that minute region. To keep the amount of data processing small, it is preferable to detect edges and determine their directions in as small a region as possible, but such a small region cannot reflect the global edge shape. Therefore, after the edge direction is decided for each small region, it is optimized over a wider region, so that edge detection accurately reflecting the global edge shape can be performed while the amount of data processing remains small.

BRIEF DESCRIPTION OF THE FIGURES
  • FIG. 1 is a schematic diagram of the two-dimensional sampling function.
  • FIG. 2 is a diagram illustrating the deformation of the two-dimensional sampling function.
  • FIG. 3 is a diagram for explaining enlargement results obtained with the elliptic two-dimensional sampling function.
  • FIG. 4 is a diagram for explaining the method of calculating a direction vector.
  • FIG. 5 is a configuration diagram of an image processing apparatus according to an embodiment.
  • FIG. 6 is a flowchart showing the operation procedure of the image processing apparatus shown in FIG. 5.

BEST MODE FOR CARRYING OUT THE INVENTION
  • A sampling function more suitable for interpolating image density values is selected from the function system called the fluency function system, which classifies functions with differentiability as a parameter.
  • the image is enlarged and reconstructed by performing interpolation using a sampling function.
  • the system is constructed on the basis of.
  • the functional system corresponding to the impulse response characterizing this series of function spaces is derived by the following theorem.
  • A two-dimensional sampling function based on the m3 function described above is defined as a sampling function suitable for interpolating density values distributed in two dimensions.
  • The two-dimensional m3 function of the variables (x, y) is defined as in equation (7).
  • With this definition, the influence of a sample value on points equidistant from the sample point in the x-y plane is equalized.
  • The distance between pixels is calculated by the geometric (Euclidean) method. This is because, when reconstruction was actually performed, the quality of the obtained image was higher than when the distance was measured in other ways, such as the chessboard distance. This is probably because the spatial spectrum distribution of a typical image is restricted isotropically.
  • Figure 1 shows an outline of the two-dimensional sampling function. Next, a procedure for enlarging an image using this two-dimensional sampling function is described. To simplify the explanation, the image is assumed to be enlarged by an integer factor; the same idea, however, applies to enlargement by an arbitrary real factor.
  • A sampling matrix is generated by expressing the two-dimensional sampling function as a matrix according to the magnification.
  • An enlarged image matrix is also prepared to hold the pixel values after convolution and enlargement.
  • the original image is a color image represented by the density values of the three RGB primary colors.
  • the density value for one color element is represented by the following matrix.
  • the two-dimensional sampling function is represented by a matrix corresponding to the magnification, and is referred to as a sampling matrix.
  • the sampling matrix when the image is enlarged n times can be expressed as follows.
  • The size of this matrix is 4n × 4n.
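As a rough illustration of how such a matrix can be tabulated (the actual matrix is the patent's equation (9), which is not reproduced here; the half-sample grid offset and the cone-shaped radial profile below are assumptions made for the sketch):

```python
import numpy as np

def sampling_matrix(radial_kernel, n):
    """Tabulate a radially symmetric 2-D sampling function on a grid with
    step 1/n covering its support [-2, 2] in each axis, giving a 4n x 4n
    sampling matrix for an n-times enlargement. The half-sample offset of
    the grid is an assumption of this sketch."""
    t = (np.arange(4 * n) - (2 * n - 0.5)) / n      # 4n points inside (-2, 2)
    X, Y = np.meshgrid(t, t)
    return radial_kernel(np.hypot(X, Y))            # equal weight at equal distance

# Stand-in radial profile with support [-2, 2] (not the patent's m3 function).
cone = lambda r: np.maximum(0.0, 1.0 - r / 2.0)

M = sampling_matrix(cone, 3)
assert M.shape == (12, 12)                          # size 4n for n = 3
assert np.allclose(M, M.T)                          # radially symmetric tabulation
```

Because the kernel is evaluated at the Euclidean distance from the sample point, points equidistant from the center receive equal weight, as the text requires.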
  • The two-dimensional sampling function is defined so that a cross-section passing through points of equal function value is circular.
  • By deforming this cross-section, a two-dimensional sampling function with directionality can be generated (see FIG. 2).
  • A two-dimensional sampling function given directionality by such deformation is called an "elliptic two-dimensional sampling function."
  • FIG. 3 is a diagram for explaining enlargement results obtained with the elliptic two-dimensional sampling function.
  • Fig. 3(a) shows the result of enlargement by the conventional nearest-neighbor interpolation method,
  • Fig. 3(b) shows the result of enlargement by the conventional cubic convolution method,
  • Fig. 3(c) shows the result of enlargement by the two-dimensional sampling function, and
  • Fig. 3(d) shows the result of enlargement by the elliptic two-dimensional sampling function (each enlarged 10 times).
  • The edges of natural images have complex structures, such as partial breaks and intersections of multiple edges, so determining the edge direction from local information alone causes distortion in the image.
  • Therefore, an edge direction is first determined from local information and then refined step by step using more global information to obtain a more appropriate edge direction. The procedure is described below.
  • An edge in a digital image is a point where the density value changes abruptly between two pixels; strictly speaking, the edge therefore lies between two pixels.
  • The edge is accordingly represented by one of the two pixels sandwiching it: either the pixel with the higher density value or the pixel with the lower density value.
  • In other words, the two pixels sandwiching the true edge are both treated as edge pixels.
  • Two kinds of edge data are prepared: first-edge data, consisting of the higher-density pixels in contact with the true edge, and second-edge data, consisting of the lower-density pixels. The edge direction is determined for each of them.
  • Jaggies are a problem on gently continuous edges; in areas where the edge shape changes finely, or where edges are dense, the complexity of the edge shape makes their impact on image quality small. Therefore, from the detected edge data, only the gently continuous edges that cause jaggies are selected and used for direction detection.
  • The least squares method used here is a technique for finding the straight line that minimizes the distances between the line and two-dimensionally distributed points.
  • the straight line obtained in this way is called a regression line.
  • The least squares method is applied to the edge pattern, and the direction of the resulting regression line is taken as the edge direction at the center of the region.
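A regression line that minimizes the distances from the points themselves (rather than vertical residuals) can be computed as the principal axis of the edge pixels' covariance matrix. This sketch shows one standard way to do it, which also stays well defined for vertical edges; the exact procedure used by the patent is not spelled out in this excerpt.

```python
import numpy as np

def edge_direction(points):
    """Direction of the regression line through 2-D points, found as the
    principal axis of their covariance (orthogonal least squares).
    Returns a unit vector; the sign of the vector is arbitrary."""
    pts = np.asarray(points, dtype=float)
    centered = pts - pts.mean(axis=0)
    cov = centered.T @ centered            # 2x2 scatter matrix
    w, v = np.linalg.eigh(cov)             # eigenvectors are the columns of v
    return v[:, np.argmax(w)]              # axis of largest spread

# Edge pixels of a 3x3 block lying on a diagonal.
d = edge_direction([(0, 0), (1, 1), (2, 2)])
assert abs(abs(d @ np.array([1, 1]) / np.sqrt(2)) - 1.0) < 1e-9

# A vertical edge, where ordinary y-on-x regression would be degenerate.
d = edge_direction([(1, 0), (1, 1), (1, 2)])
assert abs(abs(d[1]) - 1.0) < 1e-9 and abs(d[0]) < 1e-9
```

The sign ambiguity of the returned vector is exactly the a versus -a issue addressed in the following paragraphs.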
  • A regression line is determined for each small area of 3 × 3 pixels. For some patterns, such as intersecting edges, the regression line is not uniquely determined. In such cases, the direction is calculated from the surrounding edges whose directions have already been determined, using the direction vector of the target pixel.
  • the direction vectors of the edges whose directions have been obtained by the least squares method in the target area are added to obtain the direction vector of the target pixel.
  • What matters is only the direction of the straight line defined by the target point and the direction vector; therefore, a direction vector a and its negative -a are regarded as equivalent.
  • Which of the two representatives to add is determined from the magnitude of the inner product.
  • The vector forming the smaller angle, and therefore having the larger inner product, is selected.
  • That is, of a vector b and its negative, the vector b' having the larger inner product with the vector a is selected. By summing the direction vectors in this way, the direction of an edge that cannot be determined by the least squares method alone can be obtained.
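This sign-disambiguated summation can be sketched as follows: before each direction vector is added, its sign is flipped if needed so that its inner product with the running sum is non-negative, so that a and -a reinforce instead of cancelling.

```python
import numpy as np

def sum_directions(vectors):
    """Add edge direction vectors while treating v and -v as the same
    direction: each vector's sign is chosen to give the larger (i.e.
    non-negative) inner product with the running sum before adding."""
    total = np.zeros(2)
    for v in vectors:
        v = np.asarray(v, dtype=float)
        if total @ v < 0:        # pick the representative with larger inner product
            v = -v
        total += v
    return total

# Two vectors along the same axis but with opposite signs reinforce
# instead of cancelling out.
s = sum_directions([(1.0, 0.0), (-1.0, 0.0)])
assert np.allclose(np.abs(s), [2.0, 0.0])
```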
  • direction optimization is performed based on the obtained local direction information.
  • Direction optimization is realized by summing the direction vectors of the edges within the 5 × 5 pixel area centered on the edge of interest. However, if an edge in this area is not continuous with the edge at the center of the area, its direction information is not added.
  • In this summation, the direction vectors are weighted in inverse proportion to their distance from the center of the area. The above processing is performed for all edges to optimize their directions.
  • The adaptive sampling matrix: to deform the two-dimensional sampling function described above into an elliptic one, the lengths of the major and minor axes are required. However, since the sampling function is deformed so that the cross-sectional area at a given function value remains constant, in practice it suffices to determine the ratio of the major-axis length to the minor-axis length. This ratio is hereafter called the "shape factor."
  • The shape factor is calculated from information about edges lying in the direction of the target edge: if another edge exists ahead in the edge direction and its direction matches, the function is deformed strongly; conversely, if no other edge lies in the edge direction, the deformation is suppressed.
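A minimal sketch of the area-preserving elliptic deformation (the patent's exact mapping is its equation (11), which is not reproduced here; the Gaussian radial profile, the coordinate rotation, and the name `shape_factor` are illustrative assumptions): stretching by the shape factor along the edge direction and shrinking by the same factor across it is a unit-determinant map, so the cross-sectional area is unchanged.

```python
import numpy as np

def elliptic_kernel(radial_profile, x, y, theta, shape_factor):
    """Evaluate a radially symmetric 2-D kernel after an area-preserving
    elliptic deformation: stretch by `shape_factor` along the edge
    direction `theta` and shrink by the same factor across it, so the
    Jacobian determinant is 1 and the cross-sectional area is kept."""
    c, s = np.cos(theta), np.sin(theta)
    u = c * x + s * y                      # coordinate along the edge
    v = -s * x + c * y                     # coordinate across the edge
    r = np.hypot(u / shape_factor, v * shape_factor)
    return radial_profile(r)

gauss = lambda r: np.exp(-r * r)           # stand-in radial profile

# shape_factor 1 gives the circular kernel; 2 elongates it along the edge.
iso    = elliptic_kernel(gauss, 1.0, 0.0, 0.0, 1.0)
along  = elliptic_kernel(gauss, 1.0, 0.0, 0.0, 2.0)   # point on the edge axis
across = elliptic_kernel(gauss, 0.0, 1.0, 0.0, 2.0)   # point across the edge
assert along > iso > across

# The total mass (~ cross-sectional area) is preserved by the deformation.
xs = np.linspace(-6, 6, 241)
X, Y = np.meshgrid(xs, xs)
mass_circular = elliptic_kernel(gauss, X, Y, 0.3, 1.0).sum()
mass_elliptic = elliptic_kernel(gauss, X, Y, 0.3, 2.0).sum()
assert abs(mass_circular - mass_elliptic) / mass_circular < 1e-2
```

Pixels lying along the edge thus receive more weight, and pixels across it less, which is what suppresses jaggies.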
  • The two-dimensional sampling function on this mapped plane is the adaptive sampling function being sought. Convolution is performed using the adaptive sampling matrix obtained for each pixel; for pixels other than edge pixels, the standard two-dimensional sampling function is used. The convolution itself proceeds in the same way as the convolution with the two-dimensional sampling function described above.
  • With the image processing method of the present invention, using the two-dimensional sampling function gives a sharper enlarged image than the conventional methods. Furthermore, by using an enlargement method that interpolates with the two-dimensional sampling function deformed according to the edge shape, jaggies can be prevented from appearing.
  • FIG. 5 is a diagram showing a schematic configuration of an image processing apparatus to which the image processing method of the present invention using an elliptic two-dimensional sampling function is applied.
  • The image processing apparatus shown in FIG. 5 comprises an image data storage unit 10, an edge determination unit 20, a sampling matrix setting unit 30, an interpolation operation unit 40, and an enlarged data storage unit 50.
  • The image data storage unit 10 stores two-dimensional image data consisting of the density values of the pixels constituting the two-dimensional image to be interpolated.
  • The edge determination unit 20 determines whether the target pixel to be interpolated is included in an edge of the two-dimensional image and, if so, determines the status of that target pixel.
  • The status of the target pixel includes the angle θ of the direction of the edge that contains the target pixel and the shape factor ar used when deforming the elliptic two-dimensional sampling function.
  • The sampling matrix setting unit 30 generates the sampling matrix used in the interpolation processing of the target pixel. For example, when the target pixel is not included in an edge, the sampling matrix setting unit 30 generates the sampling matrix [M3] shown in equation (9) above for the interpolation processing. When the target pixel is included in an edge, the sampling matrix setting unit 30 determines, based on the angle θ of the edge direction and the shape factor ar obtained by the edge determination unit 20, the mapping relation expressed by equation (11), transforms the sampling matrix of equation (9) accordingly, and generates an adaptive sampling matrix corresponding to the status of the target pixel.
  • The interpolation operation unit 40 reads the density values of the pixels located within a predetermined range around the target pixel from the image data storage unit 10 and generates the enlarged image matrix [Gn] shown in equation (10) above. By filtering this enlarged image matrix [Gn] with the sampling matrix generated by the sampling matrix setting unit 30, it performs the interpolation operation, that is, the convolution of the original image with the sampling matrix, and calculates the density value (interpolated value) of the target pixel. The density values calculated in this way are sequentially stored in the enlarged data storage unit 50.
  • FIG. 6 is a flowchart showing an operation procedure of the image processing apparatus shown in FIG.
  • First, the interpolation operation unit 40 sets the target pixel to be interpolated (step 100).
  • Next, the sampling matrix setting unit 30 generates a sampling matrix according to the status of the target pixel (step 101). If the target pixel is not included in an edge, the edge determination unit 20 notifies the sampling matrix setting unit 30 to that effect, and the sampling matrix setting unit 30 generates the standard sampling matrix shown in equation (9).
  • If the target pixel is included in an edge, the edge determination unit 20 notifies the sampling matrix setting unit 30 to that effect, together with the angle θ of the edge direction and the shape factor ar.
  • In this case, the sampling matrix setting unit 30 generates an adaptive sampling matrix deformed along the edge direction.
  • Next, the interpolation operation unit 40 reads out the density values of the pixels included in a predetermined range around the target pixel (step 102) and performs an interpolation operation using the read density values and the sampling matrix generated by the sampling matrix setting unit 30 to obtain the density value of the target pixel (step 103).
  • The interpolation operation unit 40 then determines whether the density values of all pixels have been calculated (step 104). If some pixel's density value has not yet been calculated, a negative determination is made and the processing from step 100 onward is repeated. When the density values of all pixels have been calculated, an affirmative determination is made in step 104 and the series of enlargement processes ends.
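The loop of steps 100 to 104 can be sketched as follows; the bilinear kernel, the optional binary edge mask, and the kernel-switching rule below are simplified stand-ins for the sampling matrices and the edge determination unit described above.

```python
import numpy as np

def enlarge(image, n, kernel, edge_mask=None, edge_kernel=None):
    """Sketch of the Fig. 6 procedure: for every pixel of the n-times
    enlarged image, pick the standard kernel off the edges or an
    edge-adapted kernel on them (steps 100-101), read the surrounding
    density values (step 102), and interpolate (step 103). Kernels take
    (dx, dy) offsets in source coordinates and return a weight."""
    h, w = image.shape
    out = np.zeros((h * n, w * n))
    for gy in range(h * n):                        # step 100: target pixel
        for gx in range(w * n):
            sy, sx = gy / n, gx / n                # position in source coordinates
            on_edge = edge_mask is not None and edge_mask[int(sy), int(sx)]
            k = edge_kernel if on_edge else kernel     # step 101: choose matrix
            val = norm = 0.0
            for py in range(int(sy) - 1, int(sy) + 3):  # step 102: neighbourhood
                for px in range(int(sx) - 1, int(sx) + 3):
                    if 0 <= py < h and 0 <= px < w:
                        wgt = k(sx - px, sy - py)
                        val += wgt * image[py, px]
                        norm += wgt
            out[gy, gx] = val / norm if norm else 0.0   # step 103: interpolate
    return out                                     # step 104: loop until done

bilinear = lambda dx, dy: max(0.0, 1 - abs(dx)) * max(0.0, 1 - abs(dy))
img = np.array([[0.0, 10.0], [10.0, 20.0]])
big = enlarge(img, 4, bilinear)
assert big.shape == (8, 8)
assert big[0, 0] == 0.0 and big[4, 4] == 20.0      # source samples are reproduced
```

In the apparatus described above, `kernel` corresponds to the standard sampling matrix of the sampling matrix setting unit 30 and `edge_kernel` to its adaptive sampling matrix.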
  • the interpolation process using the elliptic two-dimensional sampling function adapted to the edge shape is performed.
  • Since the edge direction is optimized over an area wider than the minute area, edge detection that accurately reflects the global edge shape can be performed while the amount of data processing remains small.
  • When an edge portion contained in a two-dimensional image is enlarged, interpolation is performed with the elliptic two-dimensional sampling function adapted to the edge shape; interpolation appropriate to the edge can therefore be carried out, and a high-quality reconstructed image is obtained when the two-dimensional image is enlarged.

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Editing Of Facsimile Originals (AREA)
  • Image Processing (AREA)

Abstract

The invention concerns an image processing method for forming a high-definition reconstructed image even when the image is enlarged with only a small amount of data processing. An enlarged image is created by interpolating density values using a two-dimensional sampling function constructed so that the function values at points equidistant from a sample point of the two-dimensional image are equal to one another. An edge contained in the image is detected, its direction is determined, the two-dimensional sampling function is deformed by adapting it to that direction to create an adaptive sampling function, and interpolation around the edge portion is performed using the adaptive sampling function.
PCT/JP2000/003640 1999-06-03 2000-06-05 Procede de traitement d'image WO2000075865A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP15685299 1999-06-03
JP11/156852 1999-06-03

Publications (1)

Publication Number Publication Date
WO2000075865A1 true WO2000075865A1 (fr) 2000-12-14

Family

ID=15636796

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2000/003640 WO2000075865A1 (fr) 1999-06-03 2000-06-05 Procede de traitement d'image

Country Status (1)

Country Link
WO (1) WO2000075865A1 (fr)

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH05207271A (ja) * 1992-01-24 1993-08-13 Matsushita Electric Ind Co Ltd Image enlargement apparatus
US5294984A (en) * 1988-07-23 1994-03-15 Ryoichi Mori Video signal processing system for producing intermediate pixel data from neighboring pixel data to improve image quality

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2005109340A1 (fr) * 2004-05-12 2005-11-17 Sanyo Electric Co., Ltd. Image enlargement device and program
JP2009140435A (ja) * 2007-12-10 2009-06-25 Sharp Corp Image processing apparatus, image display apparatus, image forming apparatus, image processing method, computer program, and storage medium
JP4510069B2 (ja) * 2007-12-10 2010-07-21 Sharp Corporation Image processing apparatus, image display apparatus, image forming apparatus, image processing method, computer program, and storage medium
US8340472B2 (en) 2008-02-26 2012-12-25 Fujitsu Limited Pixel interpolation apparatus and method
JP2011070595A (ja) * 2009-09-28 2011-04-07 Kyocera Corp Image processing apparatus, image processing method, and image processing program
JP2011070594A (ja) * 2009-09-28 2011-04-07 Kyocera Corp Image processing apparatus, image processing method, and image processing program
GB2487242A (en) * 2011-01-17 2012-07-18 Sony Corp Interpolation Using Shear Transform
US8588554B2 (en) 2011-01-17 2013-11-19 Sony Corporation Interpolation
JP2013225308A (ja) * 2012-04-20 2013-10-31 Canon Inc Image resampling by frequency unwrapping

Similar Documents

Publication Publication Date Title
Parsania et al. A comparative analysis of image interpolation algorithms
JP4150947B2 (ja) Image processing apparatus and method, and recording medium
EP1347410B1 (fr) Edge-based interpolation and enlargement of images
Lee et al. Nonlinear image upsampling method based on radial basis function interpolation
US6816166B2 (en) Image conversion method, image processing apparatus, and image display apparatus
WO2000046740A1 (fr) Linear and non-linear method of image resolution conversion by scaling
KR20130001213A (ko) Method and system for generating an output image of increased pixel resolution from an input image
JP2001512265A (ja) Texture mapping in 3D computer graphics
US20070171287A1 (en) Image enlarging device and program
JP2008512767A (ja) System and method for representing general two-dimensional spatial transformations
JP2002525723A (ja) Method and apparatus for zooming digital images
JP3890174B2 (ja) Image processing method, image processing apparatus, and computer-readable medium
CN101142614A (zh) Single-channel image warping system and method using anisotropic filtering
JPH08294001A (ja) Image processing method and image processing apparatus
WO2000075865A1 (fr) Image processing method
Van De Ville et al. Least-squares spline resampling to a hexagonal lattice
JP5388780B2 (ja) Image processing apparatus, image processing method, and image processing program
JP5388779B2 (ja) Image processing apparatus, image processing method, and image processing program
US7428346B2 (en) Image processing method and image processing device
JP3200351B2 (ja) Image processing apparatus and method
JP3972625B2 (ja) Image processing apparatus and image processing method
JPH11353472A (ja) Image processing apparatus
JP2001043357A (ja) Image processing method, image processing apparatus, and recording medium
JP3655814B2 (ja) Enlarged image generation apparatus and method
JPH11353306A (ja) Two-dimensional data interpolation method

Legal Events

Date Code Title Description
ENP Entry into the national phase

Ref country code: JP

Ref document number: 2001 502064

Kind code of ref document: A

Format of ref document f/p: F

AK Designated states

Kind code of ref document: A1

Designated state(s): CN JP US

AL Designated countries for regional patents

Kind code of ref document: A1

Designated state(s): AT BE CH CY DE DK ES FI FR GB GR IE IT LU MC NL PT SE

121 Ep: the epo has been informed by wipo that ep was designated in this application
122 Ep: pct application non-entry in european phase