WO2000075865A1 - Image processing method - Google Patents

Image processing method

Info

Publication number
WO2000075865A1
WO2000075865A1 (PCT/JP2000/003640)
Authority
WO
WIPO (PCT)
Prior art keywords
edge
function
dimensional
image
sampling function
Prior art date
Application number
PCT/JP2000/003640
Other languages
French (fr)
Japanese (ja)
Inventor
Kazuo Toraichi
Kouichi Wada
Masakazu Ohhira
Original Assignee
Fluency Research & Development Co., Ltd.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Fluency Research & Development Co., Ltd.
Publication of WO2000075865A1

Links

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00Geometric image transformation in the plane of the image
    • G06T3/40Scaling the whole image or part thereof
    • G06T3/403Edge-driven scaling

Definitions

  • the present invention relates to an image processing method for displaying, at high quality, an enlarged image reconstructed on the basis of image data.
  • image reconstruction is performed by interpolating density value information of the original image.
  • General image reconstruction methods include (1) nearest-neighbor interpolation, (2) bi-linear interpolation, and (3) cubic convolution interpolation. All of these are based on interpolation of density values.
  • Interpolation with the sinc function is also used in cubic convolution interpolation, which is said to produce the sharpest enlarged image among the conventional methods described above.
  • because the sinc function is not locally supported, it must be truncated to a finite interval in practical applications, which causes errors.
  • because the sampling theorem applies to signals that are continuous and differentiable over a finite time interval, the original signal cannot be completely restored when it contains discontinuities.
  • an image contains many discontinuities, and such points are important in determining image quality. Specifically, the vicinity of an edge, where the density value changes abruptly, corresponds to a discontinuity, and at such points the distortion introduced by reconstruction becomes noticeable.
  • the present invention has been made in view of the above points, and its object is to provide an image processing method capable of obtaining a high-quality reconstructed image even when enlargement is performed with a small amount of data processing.
  • the image processing method enlarges a two-dimensional image using a two-dimensional sampling function in which sample values influence all points equidistant from the sample point equally.
  • in the first step, the edges contained in the image are detected, and in the second step, an elliptic two-dimensional sampling function is generated by deforming the two-dimensional sampling function to fit the shape of each edge.
  • in the third step, for pixels along an edge contained in the image, an interpolation operation that increases the number of pixels is performed using the elliptic two-dimensional sampling function; for all other pixels, the interpolation uses the ordinary two-dimensional sampling function.
  • the two-dimensional sampling function described above is composed of an (m−1)-th order piecewise polynomial that is continuously differentiable (m−2) times, and desirably has locally supported function values.
  • by using a continuously differentiable two-dimensional sampling function, a high-quality enlarged image without unnatural steps can be obtained.
  • by using a two-dimensional sampling function represented by a locally supported piecewise polynomial, the amount of data processing required for enlargement can be reduced and truncation errors can be avoided.
  • the elliptic two-dimensional sampling function described above desirably has an elliptical cross-section obtained by deforming, along the edge direction, the circular cross-section passing through points of equal function value while preserving the cross-sectional area.
  • the ratio of the major axis to the minor axis of this elliptical cross-section is desirably set to a large value when another edge exists along the edge of interest, and to a small value when no other edge exists along it. The longer the edge, the more strongly the two-dimensional sampling function is deformed, so jaggies, which are particularly noticeable on long edges, can be reduced.
  • the first step described above desirably includes a process of determining an edge direction based on local information for a small region of several pixels in the two-dimensional image, and a process of optimizing the determined edge direction over a region wider than that small region. To keep the amount of data processing low, it is preferable to detect edges and determine their directions in as small a region as possible, but a small region cannot reflect the global edge shape. Therefore, after the edge direction is determined for each small region, it is optimized over a wider region, which allows accurate edge detection that reflects the global edge shape while keeping the amount of data processing small.

BRIEF DESCRIPTION OF THE FIGURES
  • Figure 1 shows a schematic diagram of the two-dimensional sampling function.
  • FIG. 2 is a diagram illustrating a modification of the two-dimensional sampling function
  • FIG. 3 is a diagram for explaining an enlarged result by an ellipse two-dimensional sampling function
  • FIG. 4 is a diagram for explaining a method of calculating a direction vector
  • FIG. 5 is a configuration diagram of an image processing apparatus according to an embodiment
  • FIG. 6 is a flowchart showing an operation procedure of the image processing apparatus shown in FIG. 5.

BEST MODE FOR CARRYING OUT THE INVENTION
  • a sampling function well suited to interpolating image density values is selected from a function system called the fluency function system, which classifies functions using differentiability as a parameter.
  • the image is enlarged and reconstructed by performing interpolation with this sampling function.
  • each function space is constructed on a basis of locally supported piecewise polynomials.
  • the function system corresponding to the impulse response that characterizes this series of function spaces is derived by the following theorem.
  • a two-dimensional sampling function based on the m3 function described above is defined as a sampling function suitable for interpolating density values distributed in two dimensions.
  • the two-dimensional m3 function of the variables (x, y) is defined by equation (7).
  • the influence of a sample value on points equidistant from the sample point in the x–y plane is thereby made equal.
  • the distance between pixels is computed geometrically (Euclidean distance). This is because the image obtained by actual reconstruction was of higher quality than when the distance was measured in other ways, such as the chessboard distance, presumably because the spatial spectrum of a typical image is isotropically band-limited.
  • Figure 1 shows an outline of the two-dimensional sampling function. Next, a procedure for enlarging an image using the above-described two-dimensional sampling function will be described. Here, in order to simplify the explanation, it is assumed that the image is enlarged to an integral multiple. However, the same concept can be applied to the case of a real multiple.
  • a sampling matrix is generated by expressing the two-dimensional sampling function as a matrix according to the magnification.
  • a magnified image matrix to hold the pixel values after convolution and magnification.
  • the original image is a color image represented by the density values of the three RGB primary colors.
  • the density value for one color element is represented by the following matrix.
  • the two-dimensional sampling function is represented by a matrix corresponding to the magnification, and is referred to as a sampling matrix.
  • the sampling matrix when the image is enlarged n times can be expressed as follows.
  • the size of the matrix is 4 n.
  • a two-dimensional sampling function is defined to be circular when viewed in a section passing through points of equal function value.
  • a two-dimensional sampling function with directionality can be generated (see Fig. 2).
  • the two-dimensional sampling function that has directionality by being deformed is called “elliptic two-dimensional sampling function”.
  • FIG. 3 is a diagram for explaining an enlarged result by an elliptic two-dimensional sampling function.
  • Fig. 3 (a) shows the result of enlargement by the conventional method of nearest neighbor interpolation
  • Fig. 3 (b) shows the result of enlargement by the conventional method of third-order convolution
  • Fig. 3 (c) shows the result of enlargement by the two-dimensional sampling function.
  • Fig. 3 (d) shows the result of enlargement by the elliptic two-dimensional sampling function (each panel at 10× magnification).
  • the edges of natural images have complex structures such as partial edge breaks and intersections of multiple edges. Determining the direction using only local information causes distortion in the image.
  • an edge direction is first determined based on local information. Then, by acquiring the general information step by step, a more appropriate edge direction is obtained. The procedure is described below.
  • An edge in a digital image refers to a point at which the density value changes abruptly between two pixels, and thus the edge is actually between two pixels.
  • in ordinary edge detection, either the high-density pixel or the low-density pixel of the two pixels sandwiching this edge is taken to represent the edge.
  • here, both pixels sandwiching the edge are treated as edge pixels.
  • two kinds of edge data are therefore prepared: edge data in which the high-density pixel touching the original edge is taken as the first edge, and edge data in which the low-density pixel is taken as the second edge. The direction is determined separately for each data set.
  • jaggies are a problem on gently continuous edges; in regions where the edge shape changes finely, or where edges are dense, the complexity of the edge shape means that jaggies have little effect on image quality. Therefore, from the detected edge data, only the gently continuous edges on which jaggies matter are selected and used for direction detection.
  • the least squares method is a technique for finding the straight line that minimizes the distances to a set of two-dimensionally distributed points.
  • the straight line obtained in this way is called a regression line.
  • the least squares method is applied to the edge pattern, and the direction of the obtained regression line is set as the edge direction at the center of the region.
  • a regression line is determined for each small area of 3 ⁇ 3 pixels. At this time, there are some patterns where the regression line is not uniquely determined, such as intersection of edges. In such a case, the direction is calculated based on the surrounding edge information whose direction is determined. For this calculation, the direction vector of the target pixel is used.
  • the direction vectors of the edges whose directions have been obtained by the least squares method in the target area are added to obtain the direction vector of the target pixel.
  • the direction to be determined is that of the straight line defined by the target point and the direction vector; therefore, for a direction vector a, a and −a are regarded as equivalent.
  • the determination is made based on the magnitude of the inner product.
  • a vector having a small angle formed by the direction vectors and thus having a large inner product is selected.
  • a vector b 'having a larger inner product with the vector a is selected. In this way, the sum of each direction vector is obtained, and the direction of the edge that cannot be determined by the least squares method can be determined.
  • direction optimization is performed based on the obtained local direction information.
  • the direction optimization is realized by adding the direction vectors of the edges in the 5 ⁇ 5 pixel area centered on the edge. However, if the edge in the target area is not continuous with the edge at the center of the area, the direction information is not added.
  • weighting is performed in inverse proportion to the distance from the center of the area. The above processing is performed for all edges to optimize the direction.
  • to deform the above two-dimensional sampling function into an elliptic one, the lengths of the major and minor axes are required. However, since the sampling function is deformed so that the cross-sectional area at a given function value remains constant, it is in practice sufficient to determine the ratio of the major-axis length to the minor-axis length. Hereinafter, this ratio is referred to as the "shape factor".
  • the shape factor is calculated based on information on edges existing in the direction of the target edge. In other words, if an edge exists ahead of the edge direction, and if the directions match, the shape is greatly changed, and conversely, if there is no other edge in the edge direction, deformation is suppressed. Determine the shape factor.
  • the two-dimensional sampling function on this mapping plane is the adaptive sampling function to be found. Convolution is performed using the adaptive sampling matrix obtained for each pixel. However, a standard two-dimensional sampling function is used for pixels other than edges. The convolution is performed in the same manner as the convolution using the two-dimensional sampling function described above.
  • with the image processing method of the present invention, using the two-dimensional sampling function yields a sharper enlarged image than conventional methods. In addition, by using an enlargement method that interpolates with a two-dimensional sampling function deformed according to the edge shape, jaggies can be prevented from appearing.
  • FIG. 5 is a diagram showing a schematic configuration of an image processing apparatus to which the image processing method of the present invention using an elliptic two-dimensional sampling function is applied.
  • the image processing device shown in FIG. 5 includes an image data storage unit 10, an edge determination unit 20, a sampling matrix setting unit 30, an interpolation operation unit 40, and an enlarged data storage unit 50.
  • the image data storage unit 10 stores a two-dimensional image data consisting of density values corresponding to each pixel constituting a two-dimensional image to be interpolated.
  • the edge determination unit 20 determines whether the pixel of interest (target pixel) to be interpolated is included in an edge of the two-dimensional image and, if so, determines the situation of that pixel.
  • the situation of the target pixel includes the angle θ of the edge direction of the edge containing the target pixel and the shape factor a_r used to deform the elliptic two-dimensional sampling function.
  • the sampling matrix setting unit 30 generates the sampling matrix used for the interpolation of the target pixel. For example, when the target pixel is not included in an edge, the sampling matrix setting unit 30 generates the sampling matrix [M3] of equation (9) for the interpolation. When the target pixel is included in an edge, the sampling matrix setting unit 30 determines the mapping of equation (11) from the edge-direction angle θ and the shape factor a_r obtained by the edge determination unit 20, deforms the sampling matrix of equation (9) accordingly, and generates an adaptive sampling matrix corresponding to the situation of the target pixel.
  • the interpolation operation unit 40 reads the density values of the pixels located within a predetermined range around the target pixel from the image data storage unit 10, generates the enlarged image matrix [Gn] of equation (10), and filters this enlarged image matrix [Gn] with the sampling matrix generated by the sampling matrix setting unit 30, thereby performing interpolation by convolving the original image with the sampling matrix and computing the density value (interpolated value) of the target pixel. The density values of the target pixels computed in this way are stored in order in the enlarged data storage unit 50.
  • FIG. 6 is a flowchart showing an operation procedure of the image processing apparatus shown in FIG.
  • the interpolation calculation unit 40 sets a pixel of interest (pixel of interest) to be interpolated (step 100).
  • the sampling matrix generation unit 30 then generates a sampling matrix according to the situation of the target pixel (step 101). If the target pixel is not included in an edge, the edge determination unit 20 notifies it of that fact, and the sampling matrix generation unit 30 generates the undeformed sampling matrix of equation (9).
  • if the target pixel is included in an edge, the edge determination unit 20 notifies it of that fact together with the edge-direction angle θ and the shape factor a_r.
  • the sampling matrix generation unit 30 then generates an adaptive sampling matrix deformed along the edge direction.
  • the interpolation operation unit 40 reads out the density values of a predetermined number of pixels within a predetermined range around the target pixel (step 102) and performs an interpolation operation using these density values and the sampling matrix generated by the sampling matrix generation unit 30 to obtain the density value of the target pixel (step 103).
  • the interpolation operation unit 40 then determines whether the density value has been computed for all pixels (step 104). If pixels remain for which the density value has not been computed, a negative determination is made and the processing from step 100 onward is repeated. When the density values of all pixels have been computed, an affirmative determination is made in step 104 and the series of enlargement operations ends.
  • interpolation using the elliptic two-dimensional sampling function adapted to the edge shape is thus performed.
  • because the edge direction is optimized over a region wider than the small region in which it was first determined, accurate edge detection that reflects the global edge shape is possible while keeping the amount of data processing small.
  • when an edge portion of a two-dimensional image is enlarged, interpolation using the elliptic two-dimensional sampling function adapted to the edge shape is performed, so smooth interpolation along the edge shape is achieved and a high-quality reconstructed image is obtained when the two-dimensional image is enlarged.

Abstract

An image processing method for forming a high-quality reconstructed image even when image enlargement is performed with a small amount of data processing. An enlarged image is created by performing density-value interpolation using a two-dimensional sampling function constructed so that the function values at points equidistant from a sampling point of the two-dimensional image are equal. An edge included in the image is detected and its direction is determined; the two-dimensional sampling function is deformed to adapt it to that direction, creating an adaptive sampling function; and interpolation around the edge is performed using the adaptive sampling function.

Description

DESCRIPTION

Image Processing Method

Technical Field
The present invention relates to an image processing method for displaying, at high quality, an enlarged image reconstructed on the basis of image data.

Background Art
In recent years, the worldwide development of network infrastructure, centered on the Internet, has made it easy to acquire and provide image data over networks. At the same time, the uses of digital images have expanded, and the need for image processing techniques to manipulate them has grown. Enlargement and reconstruction of digital images is one such technique. It also serves as a resolution conversion technology connecting media of different resolutions, and in a network environment where media of various resolutions coexist, its importance is high.
In general, in geometric transformations such as enlargement, reduction, and rotation, an image is reconstructed by interpolating the density value information of the original image. Typical image reconstruction methods are (1) nearest-neighbor interpolation, (2) bi-linear interpolation, and (3) cubic convolution interpolation. All of these are based on interpolation of density values.
In recent years, new techniques such as super-resolution and high-quality image enlargement using neural networks have also been studied. These techniques aim at high-quality enlargement by restoring the high-frequency components lost during sampling, which cannot be reproduced by the interpolation-based reconstruction methods described above. In the interpolation-based methods, the sinc function derived from the sampling theorem is used as the sampling function in order to perform highly accurate interpolation. Conventional interpolation using the sinc function is guaranteed to interpolate a band-limited signal perfectly, and it has therefore generally been used as the most accurate interpolation method; even cubic convolution interpolation, which is said to give the sharpest enlarged image among the conventional methods above, relies on interpolation with the sinc function. However, because the sinc function is not locally supported, it must be truncated to a finite interval in practical applications, which causes errors. Moreover, the sampling theorem applies to signals that are continuous and differentiable over a finite time interval, so when the target signal contains discontinuities the original signal cannot be restored perfectly. Images generally contain many discontinuities, and such points are important in determining image quality. For example, the neighborhood of an edge, where the density value changes abruptly, corresponds to a discontinuity, and at such points the distortion introduced by reconstruction becomes conspicuous.
Digital images also suffer from the problem that, at high magnification, jaggies become apparent on oblique edges and image quality deteriorates. Furthermore, newly proposed techniques such as the super-resolution method require iterative processing, so the processing time is longer than that of the conventional interpolation-based methods.

Disclosure of the Invention
The present invention has been made in view of these points, and its object is to provide an image processing method capable of obtaining a high-quality reconstructed image even when enlargement is performed with a small amount of data processing.
In the image processing method of the present invention, a two-dimensional image is enlarged using a two-dimensional sampling function in which sample values influence all points equidistant from a sample point equally. In a first step, the edges contained in the two-dimensional image are detected; in a second step, an elliptic two-dimensional sampling function is generated by deforming the two-dimensional sampling function to fit the shape of each edge; and in a third step, for pixels along an edge of the two-dimensional image, an interpolation operation that increases the number of pixels is performed using the elliptic two-dimensional sampling function, while for the remaining pixels the interpolation uses the ordinary two-dimensional sampling function. When an edge portion of the two-dimensional image is enlarged, interpolation is performed with an elliptic two-dimensional sampling function adapted to the edge shape, so smooth interpolation along the edge is possible and a high-quality reconstructed image is obtained when the two-dimensional image is enlarged.
The two-dimensional sampling function described above is preferably composed of an (m−1)-th order piecewise polynomial that is continuously differentiable (m−2) times and has locally supported function values. Using a continuously differentiable two-dimensional sampling function yields a high-quality enlarged image without unnatural steps, and using a locally supported piecewise polynomial reduces the amount of data processing required for enlargement and prevents truncation errors. The elliptic two-dimensional sampling function preferably has an elliptical cross-section obtained by deforming, along the edge direction, the circular cross-section passing through points of equal function value while preserving the cross-sectional area. Using such an elliptic two-dimensional sampling function removes abrupt changes of density value along the edge direction, so jaggies along the edge can be reduced.
The ratio of the major axis to the minor axis of this elliptical cross-section is preferably set to a large value when another edge exists along the edge of interest, and to a small value when no other edge exists along it. The longer the edge, the more strongly the two-dimensional sampling function is deformed, so jaggies, which are particularly noticeable on long edges, can be reduced.
The first step described above preferably includes a process of determining an edge direction from local information for a small region of several pixels in the two-dimensional image, and a process of optimizing the determined edge direction over a region wider than that small region. To keep the amount of data processing low, it is preferable to detect edges and determine their directions in as small a region as possible, but a small region cannot reflect the global edge shape. Therefore, after the edge direction is determined for each small region, it is optimized over a wider region, which makes accurate edge detection reflecting the global edge shape possible while keeping the amount of data processing small.

Brief Description of the Figures
FIG. 1 shows the general shape of the two-dimensional sampling function;
FIG. 2 illustrates the deformation of the two-dimensional sampling function;
FIG. 3 illustrates enlargement results obtained with the elliptic two-dimensional sampling function;
FIG. 4 illustrates the method of computing a direction vector;
FIG. 5 is a configuration diagram of an image processing apparatus according to an embodiment;
FIG. 6 is a flowchart showing an operation procedure of the image processing apparatus shown in FIG. 5.

Best Mode for Carrying Out the Invention
The image processing method of the present invention is described in detail below with reference to the drawings. In this method, a sampling function well suited to interpolating image density values is selected from a function system called the fluency function system, which classifies functions using differentiability as a parameter, and the image is enlarged and reconstructed by interpolation with this sampling function.
Next, the definition of the function spaces spanned by the fluency functions is given, and it is shown that the function system corresponding to the impulse response characterizing these function spaces is derived from the biorthogonal sampling theorem.
(Definition of the function spaces):

Let [¹ψ](t) denote the rectangular function, the unit-width pulse that takes the value 1 on its support and 0 elsewhere (equation (1)), and let ¹S denote the function space of step-like functions generated by its integer shifts { [¹ψ](t − n) } (equation (2)). The function space ᵐS is then defined by equation (3). Equations (1)–(3) are not fully legible in this source.

The function spaces ᵐS (m = 1, 2, ..., ∞) consist of piecewise polynomials of degree at most (m − 1) that are continuously differentiable (m − 2) times, and are constructed on bases of functions that are locally supported in the time domain. With the continuous differentiability m as a parameter, these spaces form a family that connects the space of step functions (m = 1) to the Fourier band-limited function space (m → ∞). The function system corresponding to the impulse response that characterizes this family of function spaces is derived from the following theorem.
(Biorthogonal sampling theorem):

Any function f ∈ ᵐS satisfies, for its sample values f_n = f(n), the relation

    f(t) = Σ_n f_n · [ᵐs](t − n),   (4a)

where the sampling function [ᵐs] is given by an inverse-Fourier-transform expression (equation (5), not fully legible in this source) and [ᵐs*] is the function satisfying the biorthogonality condition

    ∫ [ᵐs](t − n) · [ᵐs*](t − p) dt = 1 if p = n, and 0 if p ≠ n.   (6)

The theorem above states that if the original signal lies in the function space ᵐS, it can be reproduced from its sample values using [ᵐs] as the sampling function. (In this specification, characters that should be printed as left subscripts and left superscripts of a symbol cannot be typeset in that position, so they are written in a shifted position, as in "[ᵐs]".) In the function system [ᵐs] (m ≥ 1) derived in this way, the smaller the parameter m, the more localized the functions of the system; taking the smoothness of the interpolated signal into account as well, the sampling function of the class m = 3, which is continuously differentiable at least once, was selected. Hereafter this function is referred to as the "m3 function".
Next, as a sampling function suitable for interpolating density values distributed in two dimensions, a two-dimensional sampling function based on the m3 function described above is defined.
(Definition of the two-dimensional sampling function):
With [³s](t) denoting the one-dimensional sampling function, the two-dimensional m3 function of the variables (x, y) is defined as the value of [³s] at the Euclidean distance from the sample point,

    ψ(x, y) := [³s](√(x² + y²)).   (7)

Defined in this way, the influence of a sample value is equal at all points of the x–y plane equidistant from the sample point. The distance between pixels is computed geometrically; it was confirmed that the image obtained by actual reconstruction is of higher quality than when the distance is measured in other ways, such as the chessboard distance, presumably because the spatial spectrum of a typical image is isotropically band-limited. FIG. 1 shows the general shape of the two-dimensional sampling function.

Next, the procedure for enlarging an image with the two-dimensional sampling function described above is explained. To simplify the explanation, enlargement by an integer factor is considered, but the same idea applies to enlargement by a real factor.

First, a sampling matrix is generated by expressing the two-dimensional sampling function as a matrix according to the magnification. In addition, an enlarged image matrix is used to hold the pixel values after convolution and enlargement.

The original image is a color image represented by the density values of the three RGB primary colors; when its size is Nx × Ny, the density values of one color component are represented by the matrix [F] = { f(i, j) }, i = 1, ..., Nx, j = 1, ..., Ny (the corresponding equation is not fully legible in this source).

The two-dimensional sampling function is likewise expressed as a matrix according to the magnification, called the sampling matrix. For enlargement by a factor of n, the sampling matrix [M3] is of size 4n, and its entries are the values of the two-dimensional sampling function sampled at intervals of 1/n over its support (equation (9), not fully legible in this source).

Next, a matrix [Gn] is prepared by stretching the matrix of the original image by a factor of n both vertically and horizontally; elements that do not correspond to original pixels are set to 0, i.e. g = f at the original sample positions and g = 0 elsewhere (equation (10), not fully legible in this source).

The convolution of the original image with the sampling matrix is realized by filtering the enlarged image matrix [Gn] with the sampling matrix [M3] described above.
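To make the procedure concrete, here is a minimal sketch in Python/NumPy of the same computation: a two-dimensional radial kernel is tabulated from a one-dimensional sampling function and applied by zero-insertion upsampling followed by convolution. Because the exact m3 function is not legible in this source, a cubic B-spline is used as a stand-in one-dimensional kernel; the function names, the support of 2, and the SciPy-based filtering are illustrative assumptions, not the patent's implementation.

```python
import numpy as np
from scipy.ndimage import convolve

def sample_kernel_1d(t):
    """Placeholder 1D kernel: a cubic B-spline, i.e. a locally supported,
    continuously differentiable piecewise cubic on [-2, 2]. The patent's
    m3 function would be substituted here."""
    t = np.abs(t)
    out = np.zeros_like(t)
    m1 = t < 1
    m2 = (t >= 1) & (t < 2)
    out[m1] = (4 - 6 * t[m1] ** 2 + 3 * t[m1] ** 3) / 6
    out[m2] = (2 - t[m2]) ** 3 / 6
    return out

def radial_kernel_2d(n, support=2.0):
    """2D sampling kernel psi(x, y) = s(sqrt(x^2 + y^2)), tabulated at
    intervals of 1/n over [-support, support] (cf. equations (7) and (9))."""
    coords = np.arange(-support, support + 1e-9, 1.0 / n)
    xx, yy = np.meshgrid(coords, coords)
    return sample_kernel_1d(np.sqrt(xx ** 2 + yy ** 2))

def enlarge(channel, n):
    """Enlarge one color channel n-fold: zero-insertion upsampling
    (the matrix [Gn] of equation (10)) followed by filtering with the
    tabulated sampling kernel (the convolution with [M3])."""
    h, w = channel.shape
    g = np.zeros((h * n, w * n), dtype=float)
    g[::n, ::n] = channel                      # original pixels, zeros elsewhere
    return convolve(g, radial_kernel_2d(n), mode="nearest")
```

For a full RGB image, enlarge would be applied to each of the three color channels independently.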
One of the problems that must be considered when enlarging an image is the appearance of jaggies on edges. Because an ordinary digital image is sampled on a square grid, oblique lines always contain jagged components. Since edge shape strongly affects perceived image quality, this problem cannot be ignored when improving quality. In the image processing method of the present invention, therefore, jaggies are reduced by deforming the two-dimensional sampling function to fit the edge shape.
As described above, the two-dimensional sampling function is defined so that its cross-sections through points of equal function value are circular. By deforming this function so that the cross-sections become elliptical, a two-dimensional sampling function with directionality is obtained (see FIG. 2). This deformed, directional function is referred to here as the "elliptic two-dimensional sampling function". When the deformation direction coincides with the edge direction, the edge shape can be interpolated smoothly.
FIG. 3 illustrates enlargement results obtained with the elliptic two-dimensional sampling function. FIG. 3(a) shows enlargement by the conventional nearest-neighbor interpolation, FIG. 3(b) enlargement by the conventional cubic convolution interpolation, FIG. 3(c) enlargement by the two-dimensional sampling function, and FIG. 3(d) enlargement by the elliptic two-dimensional sampling function (each at 10× magnification).
To apply the deformed sampling function (the elliptic two-dimensional sampling function) to general image enlargement, the edge directions in the image must be determined appropriately. The method of determining edge direction is described next. Edges in natural images have complicated structures such as partial breaks and intersections of multiple edges, and determining direction from local information alone causes distortion in the image. In the present invention, the edge direction is first determined from local information, and more appropriate edge directions are then obtained by incorporating global information step by step. The procedure is described below.
Laplacian filters in eight directions are used for edge detection. An edge in a digital image is a point at which the density value changes abruptly between two pixels, so the edge actually lies between two pixels. In ordinary edge detection, either the high-density pixel or the low-density pixel of the two pixels sandwiching the edge is taken to represent it. In this embodiment, because the goal is to deform the sampling function to fit the edge shape when interpolating around edges, both pixels sandwiching the edge are treated as edge pixels: two kinds of edge data are prepared, one in which the high-density pixel touching the original edge is taken as the first edge and one in which the low-density pixel is taken as the second edge, and the direction is determined separately for each data set. Furthermore, jaggies are a problem only on gently continuous edges; where the edge shape changes finely or edges are dense, the complexity of the edge shape means that jaggies have little effect on image quality. Therefore, from the detected edge data, only the gently continuous edges on which jaggies matter are selected and used for direction detection.
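The eight directional Laplacian masks are not reproduced in the text, so the sketch below substitutes a plain finite-difference gradient detector; its only purpose is to illustrate the data the later steps consume, namely an edge map split into the high-density-side "first edge" and low-density-side "second edge" sets. The threshold value and the neighbourhood-mean test are assumptions made for the sketch.

```python
import numpy as np

def detect_edges(channel, threshold=32.0):
    """Return boolean maps of 'first edge' (high-density side) and
    'second edge' (low-density side) pixels. A plain finite-difference
    gradient stands in for the patent's 8-direction Laplacian filters."""
    gy, gx = np.gradient(channel.astype(float))
    magnitude = np.hypot(gx, gy)
    edge = magnitude > threshold               # pixels adjacent to a sharp density change
    # Split edge pixels by whether they sit on the brighter or darker side of
    # the local change (the patent's first/second edge data sets).
    local_mean = (np.roll(channel, 1, 0) + np.roll(channel, -1, 0) +
                  np.roll(channel, 1, 1) + np.roll(channel, -1, 1)) / 4.0
    first_edge = edge & (channel >= local_mean)    # high-density side
    second_edge = edge & (channel < local_mean)    # low-density side
    return first_edge, second_edge
```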
To determine the direction at each edge, the least squares method based on principal component analysis is used. The least squares method finds, for a set of two-dimensionally distributed points, the straight line that minimizes the distances to those points; the line obtained in this way is called the regression line. Here the least squares method is applied to the edge pattern, and the direction of the resulting regression line is taken as the edge direction at the center of the region. First, a regression line is computed for each small region of 3 × 3 pixels. Some patterns, such as intersecting edges, do not determine the regression line uniquely. In such cases the direction is computed from the surrounding edges whose directions have already been determined, using the direction vector of the target pixel: the direction vectors of the edges in the target region whose directions were obtained by the least squares method are summed to give the direction vector of the target pixel.
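A minimal sketch of this step: the regression (total-least-squares) line through the edge pixels of a 3 × 3 window is the principal eigenvector of the covariance of their coordinates. Returning None when the two eigenvalues nearly coincide models the "regression line not uniquely determined" case mentioned above; the tie tolerance is an assumption.

```python
import numpy as np

def edge_direction_3x3(edge_window):
    """edge_window: 3x3 boolean array of edge pixels. Returns a unit
    direction vector of the regression line through the edge pixels,
    or None if the direction is not uniquely determined."""
    ys, xs = np.nonzero(edge_window)
    if len(xs) < 2:
        return None
    pts = np.stack([xs, ys], axis=1).astype(float)
    pts -= pts.mean(axis=0)                      # centre the pattern
    cov = pts.T @ pts
    eigvals, eigvecs = np.linalg.eigh(cov)       # eigenvalues in ascending order
    if eigvals[1] - eigvals[0] < 1e-9:           # e.g. crossing edges: no unique line
        return None
    direction = eigvecs[:, 1]                    # principal axis = regression line
    return direction / np.linalg.norm(direction)
```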
For the reason given above, it is necessary to decide, before computing the sum of the vectors, which of the two opposite vectors to use. In the present invention this decision is made from the magnitude of the inner product: since the line closer to each of the two candidate lines is the one indicating the desired direction, the vector forming the smaller angle, and hence the larger inner product, is chosen. In the example of FIG. 4, the vector b′, whose inner product with the vector a is larger, is selected. Summing the direction vectors in this way determines the direction of edges that could not be fixed by the least squares method alone.
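Since a direction vector a and its negative −a describe the same line, each contributing vector can be flipped so that its inner product with the running sum is non-negative before it is added, which is the larger-inner-product choice illustrated in FIG. 4. A small sketch of that accumulation:

```python
import numpy as np

def sum_directions(vectors):
    """Sum direction vectors while treating v and -v as the same direction.
    Each vector is flipped so its inner product with the accumulated sum is
    non-negative (the larger-inner-product choice of FIG. 4)."""
    total = np.zeros(2)
    for v in vectors:
        if v is None:
            continue
        if total @ v < 0:                # v points 'backwards' along the line
            v = -v
        total = total + v
    norm = np.linalg.norm(total)
    return total / norm if norm > 0 else None
```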
As noted above, the direction is determined from extremely local information, a 3 × 3 pixel region, so as it stands it is difficult to reflect more global changes of edge shape. The direction is therefore optimized using the local direction information already obtained, as follows. The optimization is realized by summing the direction vectors of the edges within a 5 × 5 pixel region centered on the edge. However, the direction information of an edge in the target region is not added if that edge is not continuous with the edge at the center of the region, and when the vectors are summed they are weighted in inverse proportion to their distance from the center of the region. This processing is applied to every edge to optimize the directions.
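A sketch of the optimization step under the same sign convention as above; the continuity test between a neighbouring edge and the centre edge, which the patent requires but does not spell out, is omitted here and noted in a comment.

```python
import numpy as np

def optimize_direction(directions, edge_map, cy, cx, radius=2):
    """Re-estimate the direction at edge pixel (cy, cx) as an
    inverse-distance-weighted sum of neighbouring edge directions in a
    (2*radius+1)^2 window (5x5 for radius=2). The patent additionally
    skips neighbours not continuous with the centre edge; that
    connectivity test is omitted in this sketch."""
    h, w = edge_map.shape
    total = np.zeros(2)
    ref = directions[cy][cx]
    for y in range(max(0, cy - radius), min(h, cy + radius + 1)):
        for x in range(max(0, cx - radius), min(w, cx + radius + 1)):
            d = directions[y][x]
            if d is None or not edge_map[y, x]:
                continue
            dist = np.hypot(y - cy, x - cx)
            weight = 1.0 if dist == 0 else 1.0 / dist   # inverse-distance weighting
            if ref is not None and ref @ d < 0:
                d = -d                                  # a and -a are the same line
            total += weight * d
    norm = np.linalg.norm(total)
    return total / norm if norm > 0 else ref
```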
Next, the adaptive sampling matrix is described. To deform the two-dimensional sampling function described above into an elliptic one, the lengths of the major and minor axes are needed. However, because the function is deformed so that the cross-sectional area at a given function value remains constant, it is in practice sufficient to determine the ratio of the major-axis length to the minor-axis length; this ratio is hereafter called the "shape factor". The shape factor is computed from information about the edges lying in the direction of the target edge: if an edge exists ahead in the edge direction, and especially if its direction agrees, the shape is changed strongly; conversely, if no other edge lies in the edge direction, the deformation is suppressed.
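The patent gives only the qualitative rule above, so the mapping below from "number of aligned edge pixels found ahead along the edge direction" to a shape factor a_r is an invented illustration: the search length, the alignment tolerance, and the 1 + run formula are all assumptions.

```python
import numpy as np

def shape_factor(directions, edge_map, cy, cx, max_steps=4, align_cos=0.9):
    """Heuristic shape factor a_r >= 1: grows with the number of edge
    pixels found along the edge direction whose own direction agrees
    with the target edge; 1.0 (no deformation) if none exist."""
    d = directions[cy][cx]
    if d is None:
        return 1.0
    run = 0
    for step in (1, -1):                      # look both ways along the edge
        for k in range(1, max_steps + 1):
            y = int(round(cy + step * k * d[1]))
            x = int(round(cx + step * k * d[0]))
            if not (0 <= y < edge_map.shape[0] and 0 <= x < edge_map.shape[1]):
                break
            nd = directions[y][x]
            if not edge_map[y, x] or nd is None or abs(nd @ d) < align_cos:
                break                         # edge ends or changes direction
            run += 1
    return 1.0 + run                          # longer aligned edge -> stronger deformation
```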
From the edge direction and shape factor obtained in this way, a sampling matrix adapted to the situation of the target pixel is generated. With the shape factor a_r and the edge direction given by the angle θ, the mapping (x′, y′) of a point (x, y) is expressed by equation (11), which is not fully legible in this source: the coordinates are rotated by the edge angle θ and then scaled anisotropically by factors α and β along the rotated axes, where α and β are determined from a_r so that the cross-sectional area is preserved (α·β = 1, with the axis ratio equal to a_r).
The two-dimensional sampling function on this mapped plane is the desired adaptive sampling function. Convolution is performed using the adaptive sampling matrix obtained for each pixel; for pixels other than edge pixels, the standard two-dimensional sampling function is used. The convolution itself is carried out in the same way as the convolution with the two-dimensional sampling function described above.
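On one reading of equation (11), consistent with the constraints stated above (a rotation by the edge angle θ followed by an area-preserving anisotropic scaling with axis ratio a_r), the adaptive kernel is simply the radial kernel of equation (7) evaluated at the mapped coordinates. The split α = 1/√a_r, β = √a_r is an assumption; the patent's exact assignment of α and β is not legible here. The one-dimensional kernel is passed in as a parameter (for example, sample_kernel_1d from the earlier sketch).

```python
import numpy as np

def elliptic_kernel_2d(kernel_1d, n, theta, a_r, support=2.0):
    """Adaptive (elliptic) sampling kernel: the radial kernel of equation (7)
    evaluated at coordinates rotated by the edge angle theta and then scaled
    area-preservingly according to the shape factor a_r (equation (11))."""
    coords = np.arange(-support, support + 1e-9, 1.0 / n)
    xx, yy = np.meshgrid(coords, coords)
    u = np.cos(theta) * xx + np.sin(theta) * yy       # coordinate along the edge
    v = -np.sin(theta) * xx + np.cos(theta) * yy      # coordinate across the edge
    alpha, beta = 1.0 / np.sqrt(a_r), np.sqrt(a_r)    # alpha * beta = 1, axis ratio = a_r
    xp, yp = alpha * u, beta * v                      # compressing u stretches the kernel along the edge
    return kernel_1d(np.sqrt(xp ** 2 + yp ** 2))
```

For edge pixels this kernel replaces the circular one in the convolution of the enlargement sketch; for non-edge pixels the circular kernel of equation (7) is kept, exactly as stated above.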
In this way, the image processing method of the present invention obtains a sharper enlarged image than conventional methods by using the two-dimensional sampling function, and, by using an enlargement scheme that interpolates with the two-dimensional sampling function deformed to fit the edge shape, it prevents jaggies from becoming apparent.
FIG. 5 shows the schematic configuration of an image processing apparatus to which the image processing method of the present invention using the elliptic two-dimensional sampling function is applied. The image processing apparatus shown in FIG. 5 comprises an image data storage unit 10, an edge determination unit 20, a sampling matrix setting unit 30, an interpolation operation unit 40, and an enlarged data storage unit 50.
The image data storage unit 10 stores two-dimensional image data consisting of the density values of the pixels of the two-dimensional image to be interpolated. The edge determination unit 20 determines whether the pixel of interest (the target pixel) to be interpolated is included in an edge of the two-dimensional image and, if so, determines the situation of that pixel. As described above, the situation of the target pixel includes the angle θ of the edge direction of the edge containing the target pixel and the shape factor a_r used to deform the elliptic two-dimensional sampling function.
The sampling matrix setting unit 30 generates the sampling matrix used for the interpolation of the target pixel. For example, when the target pixel is not included in an edge, the sampling matrix setting unit 30 generates the sampling matrix [M3] of equation (9) for the interpolation. When the target pixel is included in an edge, the sampling matrix setting unit 30 determines the mapping of equation (11) from the edge-direction angle θ and the shape factor a_r obtained by the edge determination unit 20, deforms the sampling matrix of equation (9) accordingly, and generates an adaptive sampling matrix corresponding to the situation of the target pixel. The interpolation operation unit 40 reads the density values of the pixels located within a predetermined range around the target pixel from the image data storage unit 10, generates the enlarged image matrix [Gn] of equation (10), and filters [Gn] with the sampling matrix generated by the sampling matrix setting unit 30, thereby performing interpolation by convolving the original image with the sampling matrix and computing the density value (interpolated value) of the target pixel. The density values computed in this way are stored in order in the enlarged data storage unit 50.
FIG. 6 is a flowchart showing the operation procedure of the image processing apparatus shown in FIG. 5. First, the interpolation operation unit 40 sets the pixel of interest (target pixel) to be interpolated (step 100). When the target pixel has been determined, the sampling matrix generation unit 30 generates a sampling matrix according to its situation (step 101): if the target pixel is not included in an edge, the edge determination unit 20 notifies it of that fact and the sampling matrix generation unit 30 generates the undeformed sampling matrix of equation (9); if the target pixel is included in an edge, the edge determination unit 20 notifies it of that fact together with the edge-direction angle θ and the shape factor a_r, and the sampling matrix generation unit 30 generates an adaptive sampling matrix deformed along the edge direction.
Next, the interpolation operation unit 40 reads out the density values of a predetermined number of pixels within a predetermined range around the target pixel (step 102) and performs an interpolation operation using these density values and the sampling matrix generated by the sampling matrix generation unit 30 to obtain the density value of the target pixel (step 103).
When the density value of a given target pixel has been obtained by the interpolation operation in this way, the interpolation calculation unit 40 determines whether the density values of all pixels have been calculated (step 104). If there are pixels whose density values have not yet been calculated, a negative determination is made and the processing from step 100 onward is repeated. When the density values of all pixels have been calculated, an affirmative determination is made in step 104 and the series of enlargement processes ends. In this way, when the image processing apparatus of the present embodiment enlarges an edge portion of a two-dimensional image, interpolation is performed with an elliptic two-dimensional sampling function adapted to the edge shape, so that smooth interpolation along the edge shape is achieved and a high-quality reconstructed image is obtained when the two-dimensional image is enlarged. Further, by using a continuously differentiable two-dimensional sampling function, a high-quality enlarged image free of unnatural steps can be obtained. In particular, by using a two-dimensional sampling function expressed by a piecewise polynomial with local support, the amount of data processing required for the enlargement can be reduced and truncation errors can be avoided. Moreover, by using an elliptic two-dimensional sampling function with an elliptical cross section, abrupt changes in density value along the edge direction are eliminated, so that jaggies along the edge direction can be reduced. In addition, the longer the edge, the greater the degree to which the two-dimensional sampling function is deformed, so that the jaggies that are particularly noticeable along long edges can be reduced.
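As a rough illustration of the procedure of FIG. 6 (steps 100 to 104), the loop below visits every pixel of the enlarged image, builds either the fixed or the edge-adapted sampling matrix, and interpolates its density value. It reuses `sampling_matrix` and `interpolate` from the sketch above; `detect_edge`, `neighbourhood_of`, and the simple mapping from output pixels to source coordinates are hypothetical helpers introduced only for this sketch, not part of the specification.

```python
def enlarge(image, scale, detect_edge, neighbourhood_of):
    # detect_edge(image, px, py)           -> (theta, a_r) or None  (edge determination unit 20)
    # neighbourhood_of(image, px, py, n)   -> n-by-n local matrix [Gn]  (pixel data storage 10)
    out_h, out_w = int(image.shape[0] * scale), int(image.shape[1] * scale)
    enlarged = np.zeros((out_h, out_w))
    plain = sampling_matrix()                    # undeformed matrix of equation (9)
    for oy in range(out_h):                      # step 100: choose the target pixel
        for ox in range(out_w):
            px, py = ox / scale, oy / scale      # its position in the source image
            edge = detect_edge(image, px, py)
            matrix = plain if edge is None else sampling_matrix(*edge)    # step 101
            gn = neighbourhood_of(image, px, py, matrix.shape[0])         # step 102
            enlarged[oy, ox] = interpolate(gn, matrix)                    # step 103
    return enlarged                              # step 104: every pixel processed
```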
Further, by determining the edge direction for each minute region of the two-dimensional image and then optimizing that edge direction over a region wider than the minute region, accurate edge detection that reflects the global edge shape becomes possible while keeping the amount of data processing small.
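A minimal sketch of this two-stage edge-direction estimation is given below. The specification does not detail the optimization procedure in this excerpt, so the gradient-based per-block estimate, the block size, and the circular averaging over a wider window are stand-ins chosen for illustration only.

```python
def local_edge_directions(image, block=4):
    # Per-block edge direction from local gradients (the "minute region" stage).
    gy, gx = np.gradient(image.astype(float))
    h, w = image.shape
    dirs = np.full((h // block, w // block), np.nan)
    for by in range(h // block):
        for bx in range(w // block):
            sl = np.s_[by * block:(by + 1) * block, bx * block:(bx + 1) * block]
            if np.hypot(gx[sl], gy[sl]).sum() > 0:
                # The edge runs perpendicular to the mean gradient.
                dirs[by, bx] = np.arctan2(gy[sl].sum(), gx[sl].sum()) + np.pi / 2
    return dirs

def refine_directions(dirs, window=3):
    # Stand-in for the wider-area optimization: replace each block's angle by the
    # circular mean over a window of neighbouring blocks so that the global
    # edge shape is reflected in the final direction.
    refined = dirs.copy()
    r = window // 2
    for by in range(dirs.shape[0]):
        for bx in range(dirs.shape[1]):
            patch = dirs[max(0, by - r):by + r + 1, max(0, bx - r):bx + r + 1]
            angles = patch[~np.isnan(patch)]
            if angles.size:
                refined[by, bx] = 0.5 * np.arctan2(
                    np.sin(2 * angles).mean(), np.cos(2 * angles).mean())
    return refined
```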
Industrial Applicability

As described above, according to the present invention, when an edge portion of a two-dimensional image is enlarged, interpolation is performed with an elliptic two-dimensional sampling function adapted to the edge shape, so that smooth interpolation along the edge shape is achieved and a high-quality reconstructed image is obtained when the two-dimensional image is enlarged.

Claims

Claims
1. An image processing method for enlarging a two-dimensional image using a two-dimensional sampling function in which the influence of a sample value is equal at points equidistant from the sample point, the method comprising:
a first step of detecting an edge included in the two-dimensional image;
a second step of generating an elliptic two-dimensional sampling function by deforming the two-dimensional sampling function so as to adapt it to the shape of the edge; and
a third step of performing, for pixels along the edge included in the two-dimensional image, an interpolation operation that increases the number of pixels using the elliptic two-dimensional sampling function, and performing, for the other pixels, an interpolation operation that increases the number of pixels using the two-dimensional sampling function.
2. The image processing method according to claim 1, wherein the two-dimensional sampling function is composed of an (m-1)-th order piecewise polynomial that is (m-2) times continuously differentiable, and has function values of local support.
3. The image processing method according to claim 1, wherein the elliptic two-dimensional sampling function has an elliptical cross section obtained by deforming, along the direction of the edge, a circular cross section passing through points of equal function value while maintaining the cross-sectional area.
4. The image processing method according to claim 3, wherein the ratio of the major axis to the minor axis of the elliptical cross section is set to a large value when another edge exists along the edge, and is set to a small value when no other edge exists along the edge.
5. The image processing method according to claim 1, wherein the first step includes a process of determining the direction of the edge on the basis of local information corresponding to a minute region consisting of a plurality of pixels included in the two-dimensional image, and a process of optimizing the determined edge direction over a region wider than the minute region.
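As an illustrative note on the deformation referred to in claims 3 and 4 (equation (11) is not reproduced in this excerpt, so the particular coordinate change below is an assumed realisation, with ψ standing for the radially symmetric two-dimensional sampling function):

```latex
u = x\cos\theta + y\sin\theta , \qquad
v = -x\sin\theta + y\cos\theta , \qquad
\psi_{\theta, a_r}(x, y) \;=\; \psi\!\left(\sqrt{\left(\tfrac{u}{a_r}\right)^{2} + \left(a_r\, v\right)^{2}}\,\right).
```

The Jacobian of the map (x, y) → (u/a_r, a_r v) equals 1, so every circular level set of ψ is carried to an ellipse of the same area whose major axis, longer than the minor axis by the factor a_r : 1/a_r, lies along the edge; a larger a_r therefore gives a stronger elongation along the edge.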
PCT/JP2000/003640 1999-06-03 2000-06-05 Image processing method WO2000075865A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP11/156852 1999-06-03
JP15685299 1999-06-03

Publications (1)

Publication Number Publication Date
WO2000075865A1 (en) 2000-12-14

Family

ID=15636796

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2000/003640 WO2000075865A1 (en) 1999-06-03 2000-06-05 Image processing method

Country Status (1)

Country Link
WO (1) WO2000075865A1 (en)


Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5294984A (en) * 1988-07-23 1994-03-15 Ryoichi Mori Video signal processing system for producing intermediate pixel data from neighboring pixel data to improve image quality
JPH05207271A (en) * 1992-01-24 1993-08-13 Matsushita Electric Ind Co Ltd Picture magnification equipment

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2005109340A1 (en) * 2004-05-12 2005-11-17 Sanyo Electric Co., Ltd. Image enlarging device and program
JP2009140435A (en) * 2007-12-10 2009-06-25 Sharp Corp Image processing device, image display device, image formation device, image processing method, computer program, and storage medium
JP4510069B2 (en) * 2007-12-10 2010-07-21 シャープ株式会社 Image processing apparatus, image display apparatus, image forming apparatus, image processing method, computer program, and storage medium
US8340472B2 (en) 2008-02-26 2012-12-25 Fujitsu Limited Pixel interpolation apparatus and method
JP2011070595A (en) * 2009-09-28 2011-04-07 Kyocera Corp Image processing apparatus, image processing method and image processing program
JP2011070594A (en) * 2009-09-28 2011-04-07 Kyocera Corp Image processing apparatus, image processing method and image processing program
GB2487242A (en) * 2011-01-17 2012-07-18 Sony Corp Interpolation Using Shear Transform
US8588554B2 (en) 2011-01-17 2013-11-19 Sony Corporation Interpolation
JP2013225308A (en) * 2012-04-20 2013-10-31 Canon Inc Image resampling by frequency unwrapping


Legal Events

Date Code Title Description
ENP Entry into the national phase

Ref country code: JP

Ref document number: 2001 502064

Kind code of ref document: A

Format of ref document f/p: F

AK Designated states

Kind code of ref document: A1

Designated state(s): CN JP US

AL Designated countries for regional patents

Kind code of ref document: A1

Designated state(s): AT BE CH CY DE DK ES FI FR GB GR IE IT LU MC NL PT SE

121 Ep: the epo has been informed by wipo that ep was designated in this application
122 Ep: pct application non-entry in european phase