JP2009206552A - Image processing apparatus - Google Patents

Image processing apparatus

Info

Publication number
JP2009206552A
Authority
JP
Japan
Prior art keywords
pixel
signal
pixels
color
interpolation
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
JP2008044002A
Other languages
Japanese (ja)
Inventor
Kunio Yamada
邦男 山田
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Victor Company of Japan Ltd
Original Assignee
Victor Company of Japan Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Victor Company of Japan Ltd filed Critical Victor Company of Japan Ltd
Priority to JP2008044002A priority Critical patent/JP2009206552A/en
Publication of JP2009206552A publication Critical patent/JP2009206552A/en
Pending legal-status Critical Current

Landscapes

  • Image Processing (AREA)
  • Color Television Image Signal Generators (AREA)
  • Facsimile Image Signal Circuits (AREA)
  • Color Image Communication Systems (AREA)

Abstract

PROBLEM TO BE SOLVED: To provide an image processing apparatus capable of pixel interpolation that is smooth, produces few jaggies, and generates few false colors. SOLUTION: Contour presence/direction evaluation units 12-15 take as evaluation regions a plurality of pixel regions centered on the input R, B, G_a, and G_b pixels, and evaluate, within each region, whether the pixel signals of the R, B, and G pixels contain a contour and, if so, the direction of any contour found. When the direction of the G signal and that of the B or R signal are judged to lie within a preset angular range, interpolation/high-frequency-addition processing units 16-19 calculate interpolated values for the two color elements other than that of the center pixel, based on the values of a plurality of pixels of those two color elements arranged along the determined direction, and insert the calculated values as the pixel values of those two color elements at the center pixel. COPYRIGHT: (C)2009,JPO&INPIT

Description

The present invention relates to an image processing apparatus, and more particularly to an image processing apparatus that performs pixel interpolation on Bayer-array image data.

A solid-state imaging device with a Bayer-array color filter provided on its light-receiving surface is known. As shown schematically in FIG. 21, this Bayer-array color filter is composed of one-pixel red filter portions R that transmit red light, one-pixel green filter portions G that transmit green light, and one-pixel blue filter portions B that transmit blue light. In the Bayer-array color filter of FIG. 21, the green filter portions G, which transmit the green light that contributes most strongly to the luminance signal, are arranged in a checkered pattern at every other pixel pitch, and the red filter portions R and blue filter portions B are arranged in a checkered pattern at every other pixel pitch in the remaining positions.

Because the filter portions of any one color element thus occupy geometrically scattered pixel positions in a Bayer array, an image processing apparatus that processes the imaging signal output from a solid-state imaging device carrying such a filter performs interpolation to fill in the colors missing at those scattered positions, so that every pixel ends up with all three primary-color elements.

It is widely known that if this interpolation is performed by simply averaging neighboring pixels, jaggies and false colors (colors not present in the original image) appear at edge portions. Various image processing apparatuses for reducing these jaggies and false colors have therefore been proposed, most of which control the interpolation conditions based on color continuity and correlation.

For example, techniques that focus on correlation in oblique directions have been proposed (see, for example, Patent Documents 1 and 2). Patent Document 1 discloses an image processing apparatus that computes gradients of the pixel values in a plurality of directions and interpolates with per-direction weights varied according to those gradients. Patent Document 2 discloses an image processing apparatus that interpolates with interpolation coefficients determined from the correlation of the target pixel with the values of the pixels above, below, to its left, and to its right.

Among the false-color problems, a conventional image processing apparatus is also known that counteracts those caused by the narrow bands of the red signal R and blue signal B (see, for example, Patent Document 3). The apparatus of Patent Document 3 exploits the RGB correlation in the high-frequency band, adding the high-frequency component of the green signal G to the red signal R and to the blue signal B to widen their bands and thereby reduce false colors.

JP-A-10-164602 (Patent Document 1)
JP-A-11-177994 (Patent Document 2)
JP 2000-50290 A (Patent Document 3)

However, because these prior arts do not exploit empirical rules or general properties such as the generally high correlation among the RGB primaries, correlation estimation and interpolation coefficients are determined independently for each of R, G, and B, and the suppression of edge jaggies and false colors can be insufficient.

Moreover, the high-frequency RGB correlation relied on in Patent Document 3 does not always hold: the signals may be uncorrelated or inversely correlated, and in those cases simple addition adversely affects image quality.

The present invention was made in view of the above points, and its object is to provide an image processing apparatus capable of pixel interpolation that is smooth, produces few jaggies, and generates few false colors.

To achieve the above object, a first aspect of the invention is an image processing apparatus that, for each known pixel whose color element is one of the three primary colors in an imaging signal output from a solid-state imaging device having a Bayer-array color filter on its light-receiving surface, interpolates the primary-color signals of the two color elements other than the known one from the values of surrounding pixels, the apparatus comprising:
pixel-type classification means for classifying each pixel of the input imaging signal into one of a plurality of pixel types according to the relationship between its own color element and the color elements of the surrounding pixels when it is taken as the center pixel; detection means for detecting, for each pixel type classified by the pixel-type classification means, the dominant contour direction of each of the three primary-color signals by applying a predetermined arithmetic expression to the pixel values within a preset pixel region consisting of the input pixel and a plurality of surrounding pixels centered on it; determination means for determining, for each classified pixel type, whether the angle between the direction detected for one predetermined reference primary-color signal and the directions detected for the remaining two primary-color signals is within a preset angular range; and interpolation means for, when the determination means determines that the angle is within the angular range, calculating interpolated values of the two color elements other than that of the center pixel from the values of a plurality of pixels of those two color elements arranged along the determined direction, and inserting the calculated interpolated values as the pixel values of those two color elements at the center pixel.

To achieve the above object, a second aspect of the invention adds to the configuration of the first aspect addition/subtraction means that calculates, from the values of a plurality of pixels within the pixel region having the same color element as the center pixel and arranged along the direction whose angle was determined to be within the angular range, a high-frequency component of the primary-color signal of that same color element, and adds it to or subtracts it from the pixel values of the two color elements according to the positive or negative sign attached to the result of the predetermined arithmetic expression used to obtain those pixel values.

Further, to achieve the above object, a third aspect of the invention characterizes the interpolation means of the above aspects as means that takes a weighted average, for each of the two color elements other than that of the center pixel, of the values of a plurality of pixels of those color elements arranged along the direction whose angle the determination means found to be within the angular range, and then limits that weighted average to an interpolated value bounded by values determined from the maximum and minimum of the pixels of that color element surrounding the center pixel. The present invention exploits continuity and correlation in a broad sense, using the edge directionality of pixel values with emphasis on oblique interpolation.

According to the present invention, pixel interpolation that exploits the correlation among the pixels of the three primary colors can be performed smoothly, with few jaggies and few false colors. In addition, by taking forward correlation, no correlation, and inverse correlation into account during pixel interpolation, the present invention can widen the band of the interpolated video signal.

Next, an embodiment of the present invention will be described in detail with reference to the drawings.
FIG. 1 shows a block diagram of an embodiment of an image processing apparatus according to the present invention. The image processing apparatus 10 of this embodiment processes the imaging signal output from a solid-state imaging device having a Bayer-array color filter on its light-receiving surface. It comprises: a center-pixel-type classification unit 11 that receives the Bayer-array imaging signal output from the solid-state imaging device; contour presence/direction evaluation units 12, 13, 14, and 15 that each independently evaluate whether the image represented by the signal distributed by the classification unit 11 contains a contour and, if so, the direction of that contour in the displayed image; interpolation/high-frequency-addition processing units 16, 17, 18, and 19 that separately perform interpolation and high-frequency addition on the output signals of the evaluation units 12 to 15; and achromatization processing units 20, 21, 22, and 23 that perform achromatization on the output signals of the processing units 16 to 19.

The center-pixel-type classification unit 11 determines which of the four pixel types described later a known center pixel at a color-filter position of the Bayer-array imaging signal input from the solid-state imaging device belongs to, and distributes the center pixel to an output according to the classified pixel type.

The contour presence/direction evaluation units 12, 13, 14, and 15 evaluate whether the input image signal contains a contour, and then evaluate the direction of the contour based on that result.
The interpolation/high-frequency-addition processing units 16, 17, 18, and 19 are provided in one-to-one correspondence with the evaluation units 12, 13, 14, and 15. Based on the contour-direction evaluation results output from the evaluation units, they interpolate the pixels of the two relevant color elements and perform high-frequency addition.

The achromatization processing units 20, 21, 22, and 23 are provided in one-to-one correspondence with the interpolation/high-frequency-addition processing units 16, 17, 18, and 19. When a condition described later is satisfied, they apply achromatization to the processing results output from units 16 to 19, and each outputs the interpolated signals of the two relevant color elements.

Next, the operation of this embodiment will be described in detail with reference to the flowcharts of FIGS. 2A and 2B and to FIGS. 3 to 20. The center-pixel-type classification unit 11 receives the imaging signal output from the solid-state imaging device with the Bayer-array color filter on its light-receiving surface (step S1 in FIG. 2A). Based on the Bayer array shown in FIG. 21, each pixel signal of this input is one of a red signal R, a green signal G, and a blue signal B. The input can also be classified into the four types shown in FIGS. 3(A) to 3(D) according to whether the color element of the center pixel of a block of 3 pixels vertically by 3 pixels horizontally is R, B, or G. Here the center pixel is a known pixel corresponding to a color-filter portion of the Bayer-array color filter.

FIG. 3(A) shows the case where the center pixel at the position of a color filter portion R in FIG. 21 is a pixel of color element R (hereinafter also an R pixel); the pixels above, below, left, and right are pixels of color element G (hereinafter also G pixels), and the diagonal pixels are pixels of color element B (hereinafter also B pixels). FIG. 3(B) shows the case where the center pixel at the position of a color filter portion B is a B pixel; the pixels above, below, left, and right are G pixels, and the diagonal pixels are R pixels.

When the known center pixel at the position of a color filter portion G in FIG. 21 is a G pixel, there are the two types shown in FIGS. 3(C) and 3(D), depending on the surrounding pixels. Both arrangements have a G pixel at the center and G pixels in the diagonal positions, but in FIG. 3(C) the pixels above and below are B pixels and those to the left and right are R pixels, whereas in FIG. 3(D) the pixels above and below are R pixels and those to the left and right are B pixels. Here the center pixel of the arrangement in FIG. 3(C) is called a G_a pixel and that of FIG. 3(D) a G_b pixel to distinguish them.

The center-pixel-type classification unit 11 determines which of the four types shown in FIGS. 3(A) to 3(D) each pixel of the input Bayer-array imaging signal belongs to. If the input pixel is an R pixel, it supplies the input pixel signal to the contour presence/direction evaluation unit 12. Similarly, it supplies the signal to evaluation unit 13 if the input pixel is a B pixel, to evaluation unit 14 if it is a G_a pixel, and to evaluation unit 15 if it is a G_b pixel (step S2 in FIG. 2A).

The contour presence/direction evaluation unit 12 takes as its evaluation region the range of 7 pixels vertically by 7 pixels horizontally shown in FIGS. 4(A) to 4(C), centered on the input R pixel. It evaluates the presence or absence of a contour in the pixel signals of the R pixels marked with bold circles in FIG. 4(A) (step S3 in FIG. 2A), of the B pixels marked in FIG. 4(B) (step S4), and of the G pixels marked in FIG. 4(C) (step S5). When steps S3 to S5 find that a contour exists, the unit then evaluates the direction of that contour (steps S15 to S17 in FIG. 2A).

Similarly, the contour presence/direction evaluation unit 13 takes as its evaluation region the 7-by-7 pixel range shown in FIGS. 5(A) to 5(C), centered on the input B pixel, and evaluates the presence or absence of a contour in the pixel signals of the R, B, and G pixels marked with bold circles in FIGS. 5(A), (B), and (C) (steps S6 to S8 in FIG. 2A). When steps S6 to S8 find that a contour exists, the unit evaluates its direction (steps S18 to S20 in FIG. 2A).

The contour presence/direction evaluation unit 14 takes as its evaluation region the range of roughly 7 by 7 pixels shown in FIGS. 6(A) to 6(C), centered on the input G_a pixel, and evaluates the presence or absence of a contour in the pixel signals of the R, B, and G pixels marked with bold circles in FIGS. 6(A), (B), and (C) (steps S9 to S11 in FIG. 2A). When steps S9 to S11 find that a contour exists, the unit evaluates its direction (steps S21 to S23 in FIG. 2A).

Similarly, the contour presence/direction evaluation unit 15 takes as its evaluation region the range of roughly 7 by 7 pixels shown in FIGS. 7(A) to 7(C), centered on the input G_b pixel, and evaluates the presence or absence of a contour in the pixel signals of the R, B, and G pixels marked with bold circles in FIGS. 7(A), (B), and (C) (steps S12 to S14 in FIG. 2A). When steps S12 to S14 find that a contour exists, the unit evaluates its direction (steps S24 to S26 in FIG. 2A).

In practice, the evaluation of contour presence and the evaluation of direction by units 12 to 15 are performed simultaneously. Both evaluations are performed on the pixels marked with bold circles in FIGS. 4 to 7, but because the pixel arrangement differs with the pixel type and with the color element being evaluated, several operator templates must be prepared for the evaluation.

Specifically, the evaluation of the R signal at an R pixel uses the 3-row, 3-column operator template OP0 shown in FIG. 8; the evaluation of the B signal at an R pixel uses the 4-row, 4-column operator template OP1 shown in FIG. 9; and the evaluation of the G signal at an R pixel uses the operator template OP2 shown in FIG. 10. The evaluation of the R signal at a B pixel uses OP1 (FIG. 9), the evaluation of the B signal at a B pixel uses OP0 (FIG. 8), and the evaluation of the G signal at a B pixel uses OP2 (FIG. 10).

The evaluation of the R signal at a G_a pixel uses the 3-row, 4-column operator template OP3 shown in FIG. 11; the evaluation of the B signal at a G_a pixel uses the 4-row, 3-column operator template OP4 shown in FIG. 12; and the evaluation of the G signal at a G_a pixel uses the operator template OP5 shown in FIG. 13. Further, the evaluation of the R signal at a G_b pixel uses OP4 (FIG. 12), the evaluation of the B signal at a G_b pixel uses OP3 (FIG. 11), and the evaluation of the G signal at a G_b pixel uses OP5 (FIG. 13).

For each of the operator templates OP0 to OP5, differential operators are constructed that evaluate edge directionality in the eight directions 0°, 22.5°, 45°, 67.5°, 90°, 112.5°, 135°, and 157.5°. FIGS. 14 to 19 show the eight-direction differential-operator groups for OP0 to OP5. In FIGS. 14 to 19, a circle denotes a pixel of a differential operator, and the number at the upper right of each pixel is its pixel number. The straight lines in parts (A) through (H) of each of FIGS. 14 to 19 indicate the directions 0°, 22.5°, 45°, 67.5°, 90°, 112.5°, 135°, and 157.5°, respectively.

Here, let X[i] be the value of the pixel with pixel number i in FIGS. 14 to 19, and let the operator coefficient c[i] be c[i] = 1 for white-circle pixels, c[i] = -1 for black-circle pixels, and c[i] = 0 for hatched pixels. With No the total number of pixels in the operator, Nw the number of white-circle pixels, and Nb the number of black-circle pixels, the correlation for that operator is evaluated in the normalized form of equation (1).

(1) [Equation (1) is reproduced as an image in the original publication: a normalized correlation value computed from the coefficients c[i], pixel values X[i], and counts No, Nw, and Nb defined above.]

In the direction evaluation, the calculation of equation (1) is performed for each of the eight angles above, and the angle giving the maximum value is taken as the direction of the contour. Under the condition that the same operator is used, the value of equation (1) represents the relative gradient of the pixel values along a given direction.

If the result of equation (1) does not exceed a predetermined threshold at any angle, the unit judges that there is no contour. This completes the processing of steps S15 to S17, S18 to S20, S21 to S23, and S24 to S26.

Next, the contour presence/direction evaluation units 12 to 15 compare the two directions obtained for different colors within the same pixel type and judge whether they are close (steps S27 to S28, S29 to S30, S31 to S32, and S34 to S35 in FIG. 2A). Here, two directions are "close" when they are identical or within ±22.5° of each other.

That is, the contour presence/direction evaluation unit 12 for pixel type R judges whether the direction of the R signal is close to that of the G signal (step S27) and whether the direction of the B signal is close to that of the G signal (step S28). The evaluation unit 13 for pixel type B makes the corresponding judgments in steps S29 and S30, the evaluation unit 14 for pixel type G_a in steps S31 and S32, and the evaluation unit 15 for pixel type G_b in steps S34 and S35.

The determinations in the subsequent processes are made on the basis of the contour presence information and the directionality comparison results described above. In the following, "interpolating X with the directionality of Y" means that the directionality computed for Y with the operator type suited to Y is applied to X, and the result of the corresponding C-style operation listed below for the operator type suited to X is taken as the interpolated value.

For each of the operator types OP1 to OP4, the interpolated value of the unknown color element at the center pixel for each of the eight directionalities is computed as follows: the values of the pixels lying along the direction are averaged (with weights), and the average is then clipped to a range determined by the maximum and minimum values of several pixels surrounding the center pixel.

In the following, p[i] denotes the pixel with pixel number i, consistent with FIGS. 14 to 19. max(,,,) and min(,,,) denote the maximum and minimum of the values in parentheses, and limit_min_max(X,min1,max1) means that X is clipped to the range with lower limit min1 and upper limit max1. The interpolated value is denoted intp.

(1) OP1
max1=max(p[5],p[6],p[9],p[10]);min1=min(p[5],p[6],p[9],p[10]);thr=max1-min1;
//OP1-0°: simple average of the nearest 4 pixels
intp=(p[5]+p[6]+p[9]+p[10]+2)/4;
//OP1-22.5°:
if(abs(p[6]-p[7])<thr && abs(p[8]-p[9])<thr)
intp=limit_min_max((p[6]+p[7]+p[8]+p[9]+2)/4,min1,max1);
else
intp=(p[6]+p[9]+1)/2;
//OP1-45°: simple average of the 2 diagonal pixels
intp=(p[6]+p[9]+1)/2;
//OP1-67.5°:
if(abs(p[2]-p[6])<thr && abs(p[9]-p[13])<thr)
intp=limit_min_max((p[2]+p[6]+p[9]+p[13]+2)/4,min1,max1);
else
intp=(p[6]+p[9]+1)/2;
//OP1-90°: simple average of the nearest 4 pixels
intp=(p[5]+p[6]+p[9]+p[10]+2)/4;
//OP1-112.5°:
if(abs(p[1]-p[5])<thr && abs(p[10]-p[14])<thr)
intp=limit_min_max((p[1]+p[5]+p[10]+p[14]+2)/4,min1,max1);
else
intp=(p[5]+p[10]+1)/2;
//OP1-135°: simple average of the 2 diagonal pixels
intp=(p[5]+p[10]+1)/2;
//OP1-157.5°:
if(abs(p[4]-p[5])<thr && abs(p[10]-p[11])<thr)
intp=limit_min_max((p[4]+p[5]+p[10]+p[11]+2)/4,min1,max1);
else
intp=(p[5]+p[10]+1)/2;
(2) OP2
max1=max(p[8],p[11],p[12],p[15]);min1=min(p[8],p[11],p[12],p[15]);
thr=max1-min1;
//OP2-0°: simple average of the left and right 2 pixels
intp=(p[11]+p[12]+1)/2;
//OP2-22.5°:
if(abs(p[9]-p[12])<thr && abs(p[11]-p[14])<thr)
intp=limit_min_max((p[9]+p[11]+p[12]+p[14]+2)/4,min1,max1);
else
intp=(p[11]+p[12]+1)/2;
//OP2-45°: simple average of the nearest 4 pixels
intp=(p[8]+p[11]+p[12]+p[15]+2)/4;
//OP2-67.5°:
if(abs(p[5]-p[8])<thr && abs(p[15]-p[18])<thr)
intp=limit_min_max((p[5]+p[8]+p[15]+p[18]+2)/4,min1,max1);
else
intp=(p[8]+p[15]+1)/2;
//OP2-90°: simple average of the upper and lower 2 pixels
intp=(p[8]+p[15]+1)/2;
//OP2-112.5°:
if(abs(p[4]-p[8])<thr && abs(p[15]-p[19])<thr)
intp=limit_min_max((p[4]+p[8]+p[15]+p[19]+2)/4,min1,max1);
else
intp=(p[8]+p[15]+1)/2;
//OP2-135°: simple average of the nearest 4 pixels
intp=(p[8]+p[11]+p[12]+p[15]+2)/4;
//OP2-157.5°:
if(abs(p[7]-p[11])<thr && abs(p[12]-p[16])<thr)
intp=limit_min_max((p[7]+p[11]+p[12]+p[16]+2)/4,min1,max1);
else
intp=(p[11]+p[12]+1)/2;
(3) OP3
max1=max(p[5],p[6]);min1=min(p[5],p[6]);thr=max1-min1;
//OP3-0°: simple average of the left and right 2 pixels
intp=(p[5]+p[6]+1)/2;
//OP3-22.5°:
if(abs(p[3]-p[6])<thr && abs(p[5]-p[8])<thr)
intp=limit_min_max((p[3]+p[6]+p[5]+p[8]+2)/4,min1,max1);
else
intp=(p[5]+p[6]+1)/2;
//OP3-45°:
if(abs(p[2]-p[6])<thr && abs(p[5]-p[9])<thr)
intp=limit_min_max((p[2]+p[6]+p[5]+p[9]+2)/4,min1,max1);
else
intp=(p[5]+p[6]+1)/2;
//OP3-67.5°:
intp=limit_min_max((p[2]+p[9]+1)/2,min1,max1);
//OP3-90°: simple average of the left and right 2 pixels
intp=(p[5]+p[6]+1)/2;
//OP3-112.5°:
intp=limit_min_max((p[1]+p[10]+1)/2,min1,max1);
//OP3-135°:
if(abs(p[1]-p[5])<thr && abs(p[6]-p[10])<thr)
intp=limit_min_max((p[1]+p[5]+p[6]+p[10]+2)/4,min1,max1);
else
intp=(p[5]+p[6]+1)/2;
//OP3-157.5°:
if(abs(p[0]-p[5])<thr && abs(p[6]-p[11])<thr)
intp=limit_min_max((p[0]+p[5]+p[6]+p[11]+2)/4,min1,max1);
else
intp=(p[5]+p[6]+1)/2;
(4) OP4
max1=max(p[4],p[7]);min1=min(p[4],p[7]);thr=max1-min1;
//OP4-0°: simple average of the upper and lower 2 pixels
intp=(p[4]+p[7]+1)/2;
//OP4-22.5°:
intp=limit_min_max((p[5]+p[6]+1)/2,min1,max1);
//OP4-45°:
if(abs(p[4]-p[5])<thr && abs(p[6]-p[7])<thr)
intp=limit_min_max((p[4]+p[5]+p[6]+p[7]+2)/4,min1,max1);
else
intp=(p[4]+p[7]+1)/2;
//OP4-67.5°:
if(abs(p[2]-p[4])<thr && abs(p[7]-p[9])<thr)
intp=limit_min_max((p[2]+p[4]+p[7]+p[9]+2)/4,min1,max1);
else
intp=(p[4]+p[7]+1)/2;
//OP4-90°: simple average of the upper and lower 2 pixels
intp=(p[4]+p[7]+1)/2;
//OP4-112.5°:
if(abs(p[0]-p[4])<thr && abs(p[7]-p[11])<thr)
intp=limit_min_max((p[0]+p[4]+p[7]+p[11]+2)/4,min1,max1);
else
intp=(p[4]+p[7]+1)/2;
//OP4-135°:
if(abs(p[3]-p[4])<thr && abs(p[7]-p[8])<thr)
intp=limit_min_max((p[3]+p[4]+p[7]+p[8]+2)/4,min1,max1);
else
intp=(p[4]+p[7]+1)/2;
//OP4-157.5°:
intp=limit_min_max((p[3]+p[8]+1)/2,min1,max1);
Here, the maximum and minimum values used for clipping are taken from the four nearest neighbors of the interpolated (center) pixel for operator types OP1 and OP2, and from the two nearest neighbors for operator types OP3 and OP4; this prevents the interpolated value from overshooting. The method is not limited to the maximum and minimum of the nearest-neighbor pixel values, however; other limit values may also be used.

Also, for the four-value weighted averages used at OP1-22.5°, OP1-67.5°, OP1-112.5°, OP1-157.5°, OP2-22.5°, OP2-67.5°, OP2-112.5°, OP2-157.5°, OP3-22.5°, OP3-45°, OP3-135°, OP3-157.5°, OP4-45°, OP4-67.5°, OP4-112.5°, and OP4-135°, if the difference within either pair of pixel values on the two sides of the contour direction exceeds the difference between the above maximum and minimum, there is a high risk that an outlying value will disturb the interpolated value, so the computation switches to interpolation by the nearest-neighbor pixel average. Here too, the method is not limited to the maximum and minimum of the nearest-neighbor pixel values; other limit values may be used. Note that for operator types OP0 and OP5 the pixel value of the center pixel is already known, so the weighted-average process above for obtaining an interpolated value of an unknown color element at the center pixel is unnecessary.

We now return to the determination process based on the contour presence information and the directionality comparison results. The interpolation/high-frequency addition processing units 16 to 19 in FIG. 1 determine whether the conditions under which the directionality of the R or B signal does not significantly contradict that of the G signal are satisfied (steps S37, S48, S59, and S66 in FIG. 2B), and perform interpolation and high-frequency addition according to the result (steps S38 to S45, S49 to S56, S60 to S63, and S67 to S70 in FIG. 2B).

For example, when the interpolation/high-frequency addition processing unit 16 for pixel type R determines that the directionality of the R or B signal does not significantly contradict that of the G signal (Y in step S37), it preferentially adopts the directionality of the G signal and first interpolates the G signal with it (step S38). The condition tested in step S37 is that one of the following holds: (i) the R signal has no contour, and the G and B signals have contours with close directionalities; (ii) the B signal has no contour, and the G and R signals have contours with close directionalities; (iii) the R, G, and B signals all have contours, and both the R-G and the B-G directionality pairs are close; or (iv) neither the R nor the B signal has a contour, and the G signal has one.

Next, the unit 16 interpolates the B-G signal, again with the directionality of the G signal (step S39). The G signal is interpolated with operator type OP2; the B-G signal, formed by subtracting the corresponding G results from the B values of neighboring pixels, is interpolated with operator type OP1. The unit 16 then adds the high-frequency component of the R signal to the interpolation result of the G signal (step S40; described later), and likewise adds the high-frequency component of the G signal to the B signal obtained as the sum of the B-G and G interpolation results (step S41).

On the other hand, when none of the four conditions of step S37 is satisfied (N in step S37), the unit 16 judges that the directionality of the R or B signal significantly contradicts that of the G signal, and interpolates the G and B-G signals with the average of the pixel values of the nearest neighbors of the R pixel (steps S42 and S43). It then adds the high-frequency component of the R signal to the G interpolation result (step S44), and the high-frequency component of the G signal to the B signal obtained as the sum of the B-G and G interpolation results (step S45). Finally, the unit 16 outputs the result of step S41 or S45 to the achromatizing unit 20, which achromatizes false colors as described later (step S46 in FIG. 2B).

Similarly, when the interpolation/high-frequency addition processing unit 17 for pixel type B determines that the directionality of the R or B signal does not significantly contradict that of the G signal (Y in step S48), it preferentially adopts the directionality of the G signal and first interpolates the G signal with it (step S49). The conditions tested in step S48 are the same four conditions as in step S37.

Next, the unit 17 interpolates the R-G signal, again with the directionality of the G signal (step S50). The G signal is interpolated with operator type OP2; the R-G signal, formed by subtracting the corresponding G results from the R values of neighboring pixels, is interpolated with operator type OP1. The unit 17 then adds the high-frequency component of the B signal to the interpolation result of the G signal (step S51; described later), and the high-frequency component of the G signal to the R signal obtained as the sum of the R-G and G interpolation results (step S52).

On the other hand, when none of the four conditions of step S48 is satisfied (N in step S48), the unit 17 judges that the directionality of the R or B signal significantly contradicts that of the G signal, and interpolates the G and R-G signals with the average pixel value of the nearest neighbors of the B pixel (steps S53 and S54). It then adds the high-frequency component of the B signal to the G interpolation result (step S55), and the high-frequency component of the G signal to the R signal obtained as the sum of the R-G and G interpolation results (step S56). Finally, the unit 17 outputs the result of step S52 or S56 to the achromatizing unit 21, which achromatizes false colors as described later (step S57 in FIG. 2B).

When the interpolation/high-frequency addition processing unit 18 for pixel type G_a determines that the directionality of the R or B signal does not significantly contradict that of the G signal (Y in step S59), it preferentially adopts the directionality of the G signal and interpolates both the R-G and B-G signals with it (step S60). The conditions tested in step S59 are the same four conditions as in step S37. For the interpolation, the R-G signal, formed by subtracting the corresponding G results from the R values of neighboring pixels, is computed with operator type OP3, and the B-G signal, formed likewise from the B values, with operator type OP4.

Next, the unit 18 adds the high-frequency component of the G signal to the R signal obtained as the sum of the R-G and G interpolation results, and likewise to the B signal obtained as the sum of the B-G and G interpolation results (step S61; described later).

On the other hand, when none of the four conditions of step S59 is satisfied (N in step S59), the unit 18 judges that the directionality of the R or B signal significantly contradicts that of the G signal; it interpolates the R-G signal with the average of the R pixels to the left and right of the center G pixel, and the B-G signal with the average of the B pixels above and below it (step S62). It then adds the high-frequency component of the G signal to the resulting R and B signals (step S63), and outputs the result of step S61 or S63 to the achromatizing unit 22, which achromatizes false colors as described later (step S64 in FIG. 2B).

Symmetrically, when the interpolation/high-frequency addition processing unit 19 for pixel type G_b determines that the directionality of the R or B signal does not significantly contradict that of the G signal (Y in step S66), it preferentially adopts the directionality of the G signal and interpolates both the R-G and B-G signals with it (step S67). The conditions tested in step S66 are the same four conditions as in step S37. For the interpolation, the R-G signal, formed by subtracting the corresponding G results from the R values of neighboring pixels, is computed with operator type OP4, and the B-G signal, formed likewise from the B values, with operator type OP3.

Next, the unit 19 adds the high-frequency component of the G signal to the R signal obtained as the sum of the R-G and G interpolation results, and likewise to the B signal obtained as the sum of the B-G and G interpolation results (step S68; described later).

On the other hand, when none of the four conditions of step S66 is satisfied (N in step S66), the unit 19 judges that the directionality of the R or B signal significantly contradicts that of the G signal; it interpolates the R-G signal with the average of the R pixels above and below the center G pixel, and the B-G signal with the average of the B pixels to its left and right (step S69). It then adds the high-frequency component of the G signal to the resulting R and B signals (step S70), and outputs the result of step S68 or S70 to the achromatizing unit 23, which achromatizes false colors as described later (step S71 in FIG. 2B).

Next, the high-frequency addition processing of steps S40, S41, S44, S45, S51, S52, S55, S56, S61, S63, S68, and S70 is described.

When the contour directionalities of the unknown color and the known color match (i.e., when they are correlated), their high-frequency components are judged to be strongly correlated, and a high-frequency component is added to the interpolated value already obtained (denoted P[j][i]) as follows. Here, equation (2) below is defined by removing the absolute-value signs from the numerator of equation (1); the sign of equation (2) indicates whether the correlation is forward or inverse.

[Equations (1) and (2) appear as images in the original publication and are not reproduced here.]

(a) R pixels and B pixels
When the signs of the equation (2) results are the same for operator type OP0 (used to evaluate the R signal at an R pixel and the B signal at a B pixel) and for operator type OP2 (used to evaluate the G signal at an R or B pixel), the contours match and the pixel values increase and decrease consistently in the direction perpendicular to the contour; this is treated as a forward correlation. When the signs differ, it is treated as an inverse correlation. For a forward correlation, the high-frequency component of the known R/B signal is added to the unknown G signal; for an inverse correlation it is subtracted (i.e., the R/B high-frequency component is added in opposite phase) (steps S40, S44, S51, S55).

The high-frequency component hp is derived according to the directionality of the interpolation; with p[i] denoting the pixel value of pixel number i in the OP0 differential operator type group of FIG. 14, hp is computed as follows.

0°:hp=(2*p[4] - p[3] - p[5]+1)/2;
22.5°:hp=(2*p[4] - p[2] - p[6]+1)/2;
45°:hp=(2*p[4] - p[2] - p[6]+1)/2;
67.5°:hp=(2*p[4] - p[2] - p[6]+1)/2;
90°:hp=(2*p[4] - p[1] - p[7]+1)/2;
112.5°:hp=(2*p[4] - p[0] - p[8]+1)/2;
135°:hp=(2*p[4] - p[0] - p[8]+1)/2;
157.5°:hp=(2*p[4] - p[0] - p[8]+1)/2;
The same high-frequency extraction is applied to the G signal obtained as above, and the result is added to the B/R signal (steps S41, S45, S52, S56).

(b) G_a pixels and G_b pixels
When the signs of the equation (2) results are the same for operator type OP3 (used to evaluate the R signal at a G_a pixel and the B signal at a G_b pixel), operator type OP4 (used to evaluate the B signal at a G_a pixel and the R signal at a G_b pixel), and operator type OP5 (used to evaluate the G signal at a G_a or G_b pixel), the contours match and the pixel values increase and decrease consistently in the direction perpendicular to the contour; this is treated as a forward correlation. When the signs differ, it is treated as an inverse correlation. For a forward correlation, the high-frequency component of the known G signal is added to the unknown R/B signal; for an inverse correlation it is subtracted (i.e., the G high-frequency component is added in opposite phase) (steps S61, S63, S68, S70).
(B) In the case of G_a pixel and G_b pixel Operator type OP3 used for evaluation of R signal in G_a pixel and evaluation of B signal in G_b pixel, evaluation of B signal in G_a pixel, and evaluation of R signal in G_b pixel When the sign of the calculation result of the expression (2) is the same for the operator type OP4 used for the calculation of the G signal in the G_a pixel and the operator type OP5 used for the G signal evaluation in the G_b pixel, the contour is A forward correlation in which the pixel values increase and decrease in the direction perpendicular to the contour match, and a negative correlation is used if the signs do not match. In the case of forward correlation, the high frequency component of the G signal of the G pixel of the known color element is added to the R / B signal of the unknown color. In the case of inverse correlation, the high-frequency component of the G signal of the G pixel of the known color element is subtracted from the R / B signal of the unknown color (the high-frequency component of the opposite phase of the G signal is added) (steps S61, S63, S68, S70).

In deriving the high-frequency component hp, with p[i] denoting the pixel value of pixel number i in the OP5 differential-operator group of FIG. 19, hp is computed according to the interpolation direction as follows.

0°:hp=(2*p[10] - p[9] - p[11]+1)/2;
22.5°:hp=(2*p[10] - p[7] - p[13]+1)/2;
45°:hp=(2*p[10] - p[7] - p[13]+1)/2;
67.5°:hp=(2*p[10] - p[7] - p[13]+1)/2;
90°:hp=(2*p[10] - p[3] - p[17]+1)/2;
112.5°:hp=(2*p[10] - p[6] - p[14]+1)/2;
135°:hp=(2*p[10] - p[6] - p[14]+1)/2;
157.5°:hp=(2*p[10] - p[6] - p[14]+1)/2;
When interpolation based on directionality is not performed, the interpolation falls back to nearest-neighbor linear interpolation, and the high-frequency component is therefore obtained from the values of neighboring pixels using a linear filter.

Next, the false-color achromatization processing performed by achromatization processing units 20 to 23 in steps S46, S57, S64, and S71 will be described.

The G signal contains horizontal and vertical stripe-like high-frequency variation patterns introduced by the Bayer-array color filter, and these patterns cause false colors; achromatization processing units 20 to 23 therefore perform processing that renders these false colors achromatic.

Specifically, as shown in FIGS. 20(A) and 20(B), achromatization processing units 20 and 21 compute, for the G pixels in operator type OP2 when the center pixel is an R pixel or a B pixel, the average pixel value over three or four pixels in each of the horizontal and vertical directions. Likewise, as shown in FIGS. 20(C) and 20(D), achromatization processing units 22 and 23 compute, for the G pixels in operator type OP5 when the center pixel is a G_a pixel or a G_b pixel, the average pixel value over three or four pixels in each of the horizontal and vertical directions. When the variation of these averages exceeds a fixed threshold, units 20 to 23 judge the region to be a stripe-like high-frequency variation pattern and, from the already determined pixel values R, G, and B, generate the signal Y by
Y = 0.7152G + 0.0722B + 0.2126R   (3)
and use Y as the new pixel value of each of the R, G, and B elements, thereby achromatizing the pixel.

In this way, achromatization processing unit 20 outputs the G and B signals obtained by the achromatization processing (step S47 in FIG. 2B). Achromatization processing unit 21 outputs the G and R signals obtained by the achromatization processing (step S58 in FIG. 2B). Achromatization processing units 22 and 23 output the R and B signals obtained by their respective achromatization processing (steps S65 and S72 in FIG. 2B).

As described above, according to this embodiment, the imaging signal output from a solid-state imaging device having a Bayer-array color filter on its light-receiving surface is classified into four pixel types; for each type, the directionality of the contour of each signal is detected from the values of the R, G, and B pixels and their surrounding neighbors; and when the detected directionality of the R or B signal does not significantly contradict that of the G signal, the directionality of the G signal is adopted preferentially and is also used for pixel-value interpolation of the R and B signals. Pixel interpolation that exploits the correlation among the R, G, and B signals can thereby be performed, with little jaggedness, smooth results, and few false colors.

Further, according to this embodiment, when the weighted average of pixel values along the directionality detected by contour presence/directionality evaluation units 12 to 15 is used as the interpolation value, the weighted average is limited by values determined from the maximum and minimum values of a predetermined set of surrounding pixels, which prevents the interpolation value from overshooting.

Furthermore, according to this embodiment, when the signs of the calculation results of expression (2) agree, a forward correlation is assumed, in which the contours match and the pixel-value changes in the direction perpendicular to the contour also match; when the signs disagree, an inverse correlation is assumed. The high-frequency component of the primary-color signal of the same color element as the center pixel, calculated from the values of a plurality of pixels of that color element arranged in the direction indicated by the directionality, is then added to the interpolated B/R pixel values for a forward correlation and subtracted for an inverse correlation. By taking forward, zero, and inverse correlation into account in this way during pixel interpolation, the interpolated video signal retains a wider bandwidth.

The present invention is not limited to an image processing apparatus that performs the Bayer-image interpolation in hardware; it also encompasses a computer program that causes a computer to execute the configuration of the block diagram of FIG. 1. The image processing program, which is such a computer program, may be loaded into a computer from a recording medium or via a network.

FIG. 1 is a block diagram of an embodiment of the image processing apparatus of the present invention.
FIG. 2A is a flowchart (part 1) explaining the operation of FIG. 1.
FIG. 2B is a flowchart (part 2) explaining the operation of FIG. 1.
FIG. 3 illustrates the classification of pixel types in the present invention.
FIG. 4 shows the RGB pixel arrangement in the vicinity of an R pixel.
FIG. 5 shows the RGB pixel arrangement in the vicinity of a B pixel.
FIG. 6 shows the RGB pixel arrangement in the vicinity of a G_a pixel.
FIG. 7 shows the RGB pixel arrangement in the vicinity of a G_b pixel.
FIG. 8 shows the pixel arrangement of operator type OP0.
FIG. 9 shows the pixel arrangement of operator type OP1.
FIG. 10 shows the pixel arrangement of operator type OP2.
FIG. 11 shows the pixel arrangement of operator type OP3.
FIG. 12 shows the pixel arrangement of operator type OP4.
FIG. 13 shows the pixel arrangement of operator type OP5.
FIG. 14 shows the differential-operator group using the pixel arrangement of operator type OP0.
FIG. 15 shows the differential-operator group using the pixel arrangement of operator type OP1.
FIG. 16 shows the differential-operator group using the pixel arrangement of operator type OP2.
FIG. 17 shows the differential-operator group using the pixel arrangement of operator type OP3.
FIG. 18 shows the differential-operator group using the pixel arrangement of operator type OP4.
FIG. 19 shows the differential-operator group using the pixel arrangement of operator type OP5.
FIG. 20 illustrates the method of detecting regions to be achromatized for false-color removal.
FIG. 21 shows an example of a Bayer array.

Explanation of Symbols

10 image processing apparatus
11 center-pixel type classification unit
12-15 contour presence/directionality evaluation units
16-19 interpolation and high-frequency addition processing units
20-23 achromatization processing units

Claims (3)

1. An image processing apparatus that performs image processing for interpolating, at each known pixel of an imaging signal output from a solid-state imaging device having a Bayer-array color filter on its light-receiving surface, whose color element is one of the three primary colors, the primary-color signals of the two color elements other than the known color element based on the values of surrounding pixels, the apparatus comprising:
pixel-type classification means for classifying each pixel of the input imaging signal into one of a plurality of pixel types based on the relationship between the color element of that pixel and the color elements of the surrounding pixels when that pixel is taken as the center pixel;
detection means for detecting, for each pixel type classified by the pixel-type classification means, the directionality in which the contour component is largest for all three primary-color signals, by applying a predetermined arithmetic expression to the values of the pixels within a preset pixel region consisting of a pixel of the input imaging signal and a plurality of surrounding pixels centered on that pixel;
determination means for determining, for each pixel type classified by the pixel-type classification means, whether the angle between the direction indicated by the directionality of a predetermined reference primary-color signal and each of the directions indicated by the directionality of the remaining two primary-color signals, among the directions detected by the detection means, is within a preset angle range; and
interpolation means for calculating, when the determination means determines that the angle is within the angle range, interpolation values of the two color elements different from the center pixel, based on the values of a plurality of pixels of those two color elements arranged in the direction indicated by the determined directionality, and interpolating the calculated interpolation values as the pixel values of the two color elements at the center pixel.
2. The image processing apparatus according to claim 1, further comprising addition/subtraction means for adding or subtracting, to or from the pixel values of the two color elements, the high-frequency component of the primary-color signal of the same color element as the center pixel, calculated based on the values of a plurality of pixels of that same color element within the pixel region arranged in the direction indicated by the directionality for which the angle was determined to be within the angle range, according to the positive or negative sign attached to the calculation result of the predetermined arithmetic expression used when obtaining those pixel values.
3. The image processing apparatus according to claim 1 or 2, wherein the interpolation means takes, for each of the two color elements, a weighted average of the values of the plurality of pixels of the two color elements different from the center pixel arranged in the direction determined by the determination means to have an angle within the angle range, and uses as the interpolation value the weighted average limited by values determined by the maximum and minimum values of the plurality of pixels of that color element around the center pixel.
JP2008044002A 2008-02-26 2008-02-26 Image processing apparatus Pending JP2009206552A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP2008044002A JP2009206552A (en) 2008-02-26 2008-02-26 Image processing apparatus


Publications (1)

Publication Number Publication Date
JP2009206552A true JP2009206552A (en) 2009-09-10

Family

ID=41148439

Family Applications (1)

Application Number Title Priority Date Filing Date
JP2008044002A Pending JP2009206552A (en) 2008-02-26 2008-02-26 Image processing apparatus

Country Status (1)

Country Link
JP (1) JP2009206552A (en)


Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2012044617A (en) * 2010-08-23 2012-03-01 Toshiba Corp Image processing device
US8559747B2 (en) 2010-08-23 2013-10-15 Kabushiki Kaisha Toshiba Image processing apparatus, image processing method, and camera module
WO2014002811A1 (en) * 2012-06-25 2014-01-03 コニカミノルタ株式会社 Image-processing device, image-processing method, and image-processing program
US9336575B2 (en) 2012-06-25 2016-05-10 Konica Minolta, Inc. Image processing apparatus, image processing method, and image processing program
JPWO2014002811A1 (en) * 2012-06-25 2016-05-30 コニカミノルタ株式会社 Image processing apparatus, image processing method, and image processing program
JP2015122577A (en) * 2013-12-20 2015-07-02 株式会社メガチップス Pixel interpolation processing apparatus, imaging apparatus, program and integrated circuit
CN109104595A (en) * 2018-06-07 2018-12-28 中国科学院西安光学精密机械研究所 The FPGA implementation method of Hamilton adaptive-interpolation in scan picture
CN109104595B (en) * 2018-06-07 2019-09-20 中国科学院西安光学精密机械研究所 The FPGA implementation method of Hamilton adaptive-interpolation in scan picture
