WO2006033084A2 - Image interpolation - Google Patents

Image interpolation

Info

Publication number
WO2006033084A2
WO2006033084A2 PCT/IB2005/053150
Authority
WO
WIPO (PCT)
Prior art keywords
pixels
image
computing
sets
pixel
Prior art date
Application number
PCT/IB2005/053150
Other languages
English (en)
Other versions
WO2006033084A3 (fr)
Inventor
Henricus Wilhelm Peter Van Der Heijden
Erwin Ben Bellers
Robert Jan Schutten
Haiyan He
Original Assignee
Koninklijke Philips Electronics N.V.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Koninklijke Philips Electronics N.V. filed Critical Koninklijke Philips Electronics N.V.
Priority to JP2007533051A priority Critical patent/JP2008515266A/ja
Priority to EP05784295A priority patent/EP1794715A2/fr
Publication of WO2006033084A2 publication Critical patent/WO2006033084A2/fr
Publication of WO2006033084A3 publication Critical patent/WO2006033084A3/fr

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00 Geometric image transformation in the plane of the image
    • G06T3/40 Scaling the whole image or part thereof
    • G06T3/403 Edge-driven scaling
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00 Geometric image transformation in the plane of the image
    • G06T3/40 Scaling the whole image or part thereof
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T1/00 General purpose image data processing
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00 Geometric image transformation in the plane of the image
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00 Geometric image transformation in the plane of the image
    • G06T3/40 Scaling the whole image or part thereof
    • G06T3/4007 Interpolation-based scaling, e.g. bilinear interpolation
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00 Details of television systems
    • H04N5/14 Picture signal circuitry for video frequency region
    • H04N5/20 Circuitry for controlling amplitude response
    • H04N5/205 Circuitry for controlling amplitude response for correcting amplitude versus frequency characteristic
    • H04N5/208 Circuitry for controlling amplitude response for correcting amplitude versus frequency characteristic for compensating for attenuation of high frequency components, e.g. crispening, aperture distortion correction

Definitions

  • the invention relates to an image interpolation unit for creating an output image on the basis of an input image by means of interpolation of pixel values of the input image, the image interpolation unit comprising: selecting means for selecting sets of evaluation pixels for respective orientations (50, 60, 70) in the input image, related to a particular pixel (10) of the output image to be interpolated; computing means for computing match errors for the respective orientations by comparing values of subsets (400, 410, 420) of the sets of evaluation pixels selected; selecting means for selecting a particular orientation based on the match errors computed; creating means for creating a set of interpolation pixels on the basis of the particular orientation; and computing means for computing a pixel value of the particular pixel on the basis of the set of interpolation pixels created.
  • the invention further relates to an image processing apparatus comprising such an image interpolation unit.
  • the invention further relates to a method of creating an output image on the basis of an input image by means of interpolation of pixel values of the input image, the method comprising: selecting sets of evaluation pixels for respective orientations in the input image, related to a particular pixel of the output image to be interpolated; computing match errors for the respective orientations by comparing values of subsets of the sets of evaluation pixels selected; selecting a particular orientation based on the match errors computed; creating a set of interpolation pixels on the basis of the particular orientation; and computing a pixel value of the particular pixel on the basis of the set of interpolation pixels created.
  • a particular interpolated pixel value is obtained from a group of pixel values in the neighbourhood of the particular pixel.
  • a group of pixels belonging to a particular neighbourhood from a set of neighbourhoods of various orientations is selected. The selection is made on the basis of the presence of an edge structure within the particular neighbourhood.
  • This object of the invention is achieved in that the computing means of the image interpolation unit as mentioned in the opening paragraphs for computing the match errors are arranged to compute a first difference between first values of a first subset of a first one of the sets of evaluation pixels and a second difference between second values of a second subset of the first one of the sets of evaluation pixels and for computing a further difference between the first difference and the second difference.
  • the unit according to the invention thus computes the difference of differences of pixel values. A set containing a minimum of three pixels is considered for evaluation. Constant values of adjacent pixels and constant gradients are duly taken into account to avoid false indications of orientation.
  • edges of single-pixel width, when compared along the orientation of the edge structure, will give minimum variance even when there is a constant gradient in the pixel values.
  • the novel measure prescribed by the invention advantageously gives maximum variance in the direction perpendicular to the edge in the case of thin edges of one or two pixels in width.
  • the true orientation of the edge structure can therefore be unambiguously identified by the invention. Since more than two subsets of pixels are used in the evaluation of orientations, the proposed method is also more robust in the presence of noise.
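  • The following is a minimal numeric illustration of this property (the pixel values and the gradient are assumed purely for illustration and are not taken from the patent): along the true edge orientation, a plain row-to-row difference signals a mismatch as soon as a constant gradient is present, whereas the difference of differences cancels the gradient and remains zero.

```python
# Illustration only: assumed sample values taken along the true edge orientation
# from three consecutive original rows with a constant gradient d per row pair.
v, d = 100.0, 6.0
row_above2, row_above1, row_below1 = v, v + d, v + 2 * d

plain_difference = abs(row_above1 - row_above2)
difference_of_differences = abs((row_above1 - row_above2) - (row_below1 - row_above1))

print(plain_difference)           # 6.0 -> a first-order difference falsely flags the true orientation
print(difference_of_differences)  # 0.0 -> the difference-of-differences measure still yields the minimum
```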
  • a group of pixels belonging to a spatial neighbourhood forms a set of evaluation pixels for detecting the presence of an edge structure.
  • the spatial neighbourhood may be fixed in a number of orientations, and for each orientation of the neighbourhood a group of pixels forms the candidates for evaluation. A match error is calculated for each orientation from the corresponding set of evaluation pixels.
  • the size of the neighbourhood may vary depending upon the size of the edges expected in the image.
  • a further embodiment of the image interpolation unit according to the invention is characterised in that a first subset of a first one of the sets of evaluation pixels comprises pixels from a first pair of rows, a second subset of the first one of the sets of evaluation pixels comprises pixels from a second pair of rows, the first and the second pairs of rows having a common row, adjacent to the particular pixel of the image to be interpolated.
  • This embodiment is advantageous in that the comparison of pixel values is made in the immediate neighbourhood of the particular pixel to be interpolated in order to ascertain the orientation of the edge structure. The comparison is carried out between respective pixels belonging to pairs of rows in the neighbourhood of the particular pixel to be interpolated. Comparing subsets of rows that share a common row ensures the continuity of the edge structure. In this embodiment, two rows of pixels above the pixel to be interpolated and one row of pixels below it are considered for evaluation.
  • a further embodiment of the image interpolation unit according to the invention is characterised in that a first one of the match errors is computed on the basis of the second subset of the first one of the sets of evaluation pixels and a third subset of the first one of the sets of evaluation pixels that comprises pixels from a third pair of rows, the second and third pairs of rows having a further common row, adjacent to the particular pixel of the image to be interpolated.
  • each subset may contain a pair of rows.
  • two such subsets may be required.
  • Each match error may be the difference-of-differences measure obtained from three rows of pixels that have a common row adjacent to the particular pixel of the image to be interpolated.
  • two rows of pixels below the pixel to be interpolated and one row of pixels above it are considered for evaluation.
  • a further embodiment of the image interpolation unit according to the invention is characterised in that a first one of the match errors is a sum of a first intermediate match error computed between the first and second subsets and a second intermediate match error computed between the second and the third subsets.
  • a further embodiment of the image interpolation unit according to the invention is characterised in that the first one of the match errors is a difference of a first intermediate match error computed between the first and second subsets and a second intermediate match error computed between the second and third subsets.
  • a match error may be a further difference of differences when a four-pixel comparison is made.
  • a further embodiment of the image interpolation unit according to the invention is characterised in that the selecting means for selecting a particular orientation are arranged to select the particular orientation based on the minimum of the match errors computed for the respective orientations. In order to ascertain the orientation of the edge structure, it is advantageous to select the orientation that corresponds to the minimum match error.
  • a further embodiment of the image interpolation unit according to the invention is characterised in that the computing means for computing the pixel value of the particular pixel are arranged to compute the pixel value of the particular pixel by averaging the pixel values of a set of interpolation pixels.
  • interpolation is carried out from the set.
  • The value of the particular pixel is computed by well-known methods from the set of interpolation pixels.
  • the computing means for computing the match errors are arranged to compute a first difference between first values of a first subset of a first one of the sets of evaluation pixels and a second difference between second values of a second subset of the first one of the sets of evaluation pixels and for computing a further difference between the first difference and the second difference.
  • the image processing apparatus might support one or more of the following types of image processing:
  • Video compression (i.e. encoding, decoding and transcoding), resolution conversion and format conversion, interlaced-to-progressive scan conversion, and image zoom in/out.
  • the image processing apparatus may comprise additional units, e.g. a receiving unit, a processing unit and a display unit.
  • the image processing apparatus might, for example, be a television, PC, set top box, VCR/VCP (video cassette recorder/player), satellite tuner, or DVD (Digital Versatile Disk) player or recorder.
  • computing a first one of the match errors comprises computing a first difference between first values of a first subset of a first one of the sets of evaluation pixels and a second difference between second values of a second subset of the first one of the sets of evaluation pixels, and computing a further difference between the first difference and the second difference.
  • Fig. 1 is a functional block diagram illustrating components of one embodiment of an image interpolation device according to the present invention;
  • Fig. 2 shows a flow chart illustrating steps of the method according to the present invention;
  • Fig. 3A shows a schematic representation of a portion of an image comprising pixels arranged in rows, a particular pixel to be interpolated and a first set of pixels in a neighbourhood of a first orientation;
  • Fig. 3B shows a schematic representation of a portion of an image comprising pixels arranged in rows, a particular pixel to be interpolated and a second set of pixels in a neighbourhood of a second orientation;
  • Fig. 3C shows a schematic representation of a portion of an image comprising pixels arranged in rows, a particular pixel to be interpolated and a third set of pixels in a neighbourhood of a third orientation;
  • Fig. 4 shows a schematic representation of a portion of an image comprising pixels arranged in rows, a particular pixel to be interpolated and subsets of pixels used in the evaluation of respective match errors;
  • Fig. 5 shows a representation of an embodiment of an image processing apparatus according to the invention.
  • Fig. 1 is a functional block diagram illustrating components of one embodiment of an image interpolation device according to the present invention.
  • Selector 100 receives the image 1 from an external source. The image may also be internally generated, e.g. in the case of video cassette players or DVD players.
  • Selector 100 selects sets of evaluation pixels for respective orientations in the input image relative to a particular pixel to be interpolated. In order to obtain an edge-dependent interpolation, sets of evaluation pixels are selected in different neighbourhoods in respective orientations with reference to the particular pixel to be interpolated.
  • the selected sets 101 are used by computer 110 to compute match errors for the respective orientations by comparing pixel values of subsets of the sets of evaluation pixels. A function of the pixel values is used for the calculation of the match errors.
  • a plurality of such match errors is computed with respect to groups of pixels belonging to different neighbourhoods corresponding to different orientations relative to the particular pixel to be interpolated.
  • the computed match errors 111 are sent to selector 120.
  • the function for match error could be chosen such that the minimum match error corresponds to the orientation of edge structure within the chosen neighbourhood.
  • Selector 120 selects a particular orientation based on the match errors.
  • Creator 130 creates a set of interpolation pixels 131 using the selected orientation 121. The best output image quality may be obtained by adapting the interpolation, selecting a group of pixels in the neighbourhood along a predicted orientation of an edge structure.
  • Interpolator 140 uses the set of interpolation pixels 131 to compute the particular pixel 141 to be interpolated.
  • One of the several standard techniques may be used for interpolation.
  • Fig. 2 depicts the steps involved in the method of image interpolation according to the invention.
  • An input image 1 is received in first step 200 and reassigned to a grid with blank positions where the pixels have to be interpolated.
  • a particular pixel to be interpolated is selected in step 210, and a neighbourhood and sets of evaluation pixels are defined.
  • a set of match errors corresponding to each set of evaluation pixels is computed in step 220. From the set of match errors, a particular match error and the particular orientation corresponding to that match error are selected in step 230.
  • a set of interpolation pixels of the particular orientation is chosen in step 240 and the new pixel is calculated based on the selected set of interpolation pixels in step 250.
  • The process is repeated from the first step 200 for each pixel to be interpolated.
  • When no pixels remain to be interpolated in step 260, the process terminates in step 270.
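  • A compact sketch of this flow is given below. It is an illustration under assumptions rather than the patent's reference implementation: the range of candidate orientations L, the strip half-width M, the horizontal offsets along each orientation and the exact match-error formula are all illustrative choices, and the input is assumed to be a single-channel image whose odd rows are to be interpolated.

```python
import numpy as np

def match_error(img, x, y, L, M):
    # Difference-of-differences match error for candidate orientation L
    # (illustrative form; the offsets 3L, L and -L follow a line through (x, y)).
    err = 0.0
    for j in range(-M, M + 1):                       # strip of 2M + 1 pixels
        a = img[y - 3, x + j + 3 * L]                # first pair of rows (350, 360)
        b = img[y - 1, x + j + L]
        c = img[y + 1, x + j - L]                    # second pair of rows (360, 370)
        err += abs((a - b) - (b - c))
    return err

def interpolate_image(img, max_L=2, M=2):
    out = img.astype(float).copy()
    h, w = out.shape
    margin = M + 3 * max_L
    for y in range(3, h - 3, 2):                     # pixels to be interpolated (steps 210, 260)
        for x in range(margin, w - margin):
            errors = {L: match_error(out, x, y, L, M)        # step 220
                      for L in range(-max_L, max_L + 1)}
            best_L = min(errors, key=errors.get)     # step 230: minimum match error
            out[y, x] = 0.5 * (out[y - 1, x + best_L]        # steps 240 and 250
                               + out[y + 1, x - best_L])
    return out
```

  • In this sketch the new pixel value in step 250 is simply the average of the two interpolation pixels along the selected orientation; the confidence-weighted mix of equation (7) discussed further below could be used instead.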
  • In Figs. 3A, 3B and 3C, schematic representations of a portion of an image comprising pixels arranged in rows, a particular pixel to be interpolated and sets of pixels in three different neighbourhoods in three different orientations are depicted.
  • Fig. 3A shows a representation of a particular pixel 10 to be interpolated in a first neighbourhood 300 in a first orientation 50. Pixels belonging to the first neighbourhood may be known as a first set of evaluation pixels.
  • Fig. 3B shows a representation of a particular pixel 10 to be interpolated in a second neighbourhood 310 in a second orientation 60.
  • the pixels belonging to the second neighbourhood 310 may be known as a second set of evaluation pixels.
  • Fig. 3C shows a representation of a particular pixel 10 to be interpolated in a third neighbourhood 320 in a third orientation 70.
  • the pixels belonging to the third neighbourhood 320 may be known as a third set of evaluation pixels.
  • Figs. 3A, 3B and 3C show a portion of an example image wherein the coordinates (x,y) of a Cartesian coordinate system may be assumed to coincide with the pixel 10 to be interpolated.
  • Squares with solid lines denote the original pixels and squares with dotted lines denote the pixels that are to be interpolated.
  • the pixels belonging to a first pair of adjacent rows 350 and 360 are denoted by ordinates (y - 3) and (y - 1) respectively.
  • the pixels belonging to second pair of rows 370 and 380 are denoted by ordinates (y + 1) and (y + 3) respectively.
  • the pixels to the right of (x,y) have increasing abscissa values and pixels to the left of (x,y) have decreasing abscissa values.
  • the value of a single pixel at position (x, y) is represented by F(x, y).
  • F(x, y) may be a single luminance value in the case of a monochrome image, or any one of the various representations of a colour image, for example RGB (red, green, blue) or HSI (hue, saturation, intensity).
  • the orientation of pixels is represented by a variable L that may have one of the values ..., -2, -1, 0, +1, +2, ...
  • the neighbourhood may comprise a strip of pixels at positions (x + j) in the x direction, where j may be an integer value within the range of -M to +M, thereby covering a strip of (2M + 1) pixels.
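  • For example, for a candidate orientation L, the evaluation pixels on the rows with ordinates (y - 3), (y - 1), (y + 1) and (y + 3) could be taken at the abscissae x + 3L, x + L, x - L and x - 3L respectively, so that they lie on a straight line through the particular pixel at (x, y). This particular parameterisation is an illustrative assumption, consistent with the interpolation along an orientation D described below, and not a definition quoted from the patent.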
  • Fig. 4 shows a schematic representation of a portion of an image, showing a particular pixel to be interpolated, subsets of pixels and the arrangement of rows of pixels used for the evaluation of the respective match errors.
  • Fig. 4 shows additional details of Fig. 3A.
  • the set of evaluation pixels falling within the neighbourhood 300 is further divided into a first subset 400, a second subset 410 and a third subset 420.
  • the subsets comprise a first pair of adjacent rows 350 and 360, a second pair of adjacent rows 360 and 370, and a third pair of adjacent rows 370 and 380, respectively.
  • Respective pairs of pixel values within subsets are compared for computing match errors.
  • Comparison values within subsets 400, 410 and 420 may be used in various ways for computation of match errors.
  • Match errors may be computed in a number of ways by comparing respective pairs of pixels of the first pair of adjacent rows 350, 360, the second pair of adjacent rows 360, 370, and the third pair of adjacent rows 370, 380.
  • a first way of computing a match error is by computing a first difference between respective pixel values of pairs of pixels within the first subset 400, computing a second difference between respective pixel values of pairs of pixels within the second subset 410, and computing a further difference between the first and the second differences.
  • an absolute value of the difference may be taken.
  • the final difference values thus obtained may be summed up to obtain a single value for ease of comparison.
  • An equation for computing the match error ME(L), computing the first difference of the values of the pixels of the first subset 400, the second difference of the values of the pixels of the second subset 410, and a further difference of the first and the second differences, may be given by
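  • As an illustration only (the exact equation is not reproduced above), a difference-of-differences measure of this kind, summed over a strip of 2M + 1 pixels along a candidate orientation L, could take a form such as ME(L) = Σ_{j = -M..+M} | (F(x + j + 3L, y - 3) - F(x + j + L, y - 1)) - (F(x + j + L, y - 1) - F(x + j - L, y + 1)) |, where the horizontal offsets 3L, L and -L along the orientation are assumptions rather than values taken from the patent.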
  • a second way of computing a match error is by computing a first difference between respective pixel values of pairs of pixels within second subset 410, computing a second difference between respective pixel values of pairs of pixels within third subset 420 and computing a further difference between the first and the second difference.
  • the match error obtained by such a computation is given by
  • a third way of computing a match error is by summing the first match error obtained from the first and second subsets 400, 410 and the second match error obtained from the second and third subsets 410, 420.
  • the final match error is then the sum of two intermediate match errors obtained by considering differences of three pixels at a time. The final match error in such computations is given by
  • Both equations (4) and (5) represent the summation of two intermediate match errors obtained from the first and second subsets (400, 410) and the second and third subsets (410, 420).
  • the line of orientation, which is the axis of the chosen neighbourhood, coincides with the line of orientation in which the evaluation is carried out and passes through the particular pixel to be interpolated.
  • the axis of the chosen neighbourhood does not coincide with, but is parallel to, the line of orientation in which the evaluation is carried out.
  • the line of orientation is offset by a width of one pixel with respect to the axis of the chosen neighbourhood in equation (5). Any of the possible variations of this kind may be used.
  • a fourth way of computing a match error is by computing a further difference of the match errors obtained from the first and second subsets (400, 410) and the second and third subsets (410, 420). Sets of four pixels along an orientation are thus taken into account in the computation. In essence, the measure for a neighbourhood of four pixels in length gives rise to a further difference of the differences of differences.
  • The match error ME(L) in such a case is given by
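  • As an illustration only, such a four-row measure reduces to a third-order difference along the candidate orientation, for example ME(L) = Σ_{j = -M..+M} | F(x + j + 3L, y - 3) - 3·F(x + j + L, y - 1) + 3·F(x + j - L, y + 1) - F(x + j - 3L, y + 3) |; the offsets along the orientation are again assumptions and not a formula quoted from the patent.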
  • the axis of the chosen neighbourhood coincides with the line of orientation in which the evaluation is carried out. It is possible to have the variations mentioned in equations (4) and (5) in the four pixels case as well.
  • a comparison is made among the match errors to select the local minimum.
  • Interpolation of the pixel value may be calculated exclusively from the set of pixels in the identified orientation or it may be calculated from a mix of pixels from the identified orientation and pixels from the vertical direction.
  • a typical equation of interpolation in such a case is given by
  • F_new = α·(F(x + D, y - 1) + F(x - D, y + 1))/2 + (1 - α)·(F(x, y - 1) + F(x, y + 1))/2   (7), where F(x + D, ·) refers to the pixel values in orientation D and F(x, ·) refers to pixel values in the vertical direction.
  • a direction confidence α, as shown in equation (7), may be used for the ratio of the mix.
  • the direction confidence α may be determined from a confidence function of the variables D, ME(D) and ME(0), where D is the identified orientation and ME(D) and ME(0) are the match errors in the identified and vertical directions respectively.
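  • A minimal sketch of this mixing step is shown below. It is an illustration under assumptions: the confidence function itself is not specified above, so the confidence alpha is passed in directly, and the image is assumed to be a NumPy-style two-dimensional array indexed as img[row, column].

```python
def blend_pixel(img, x, y, D, alpha):
    # Equation (7)-style mix: interpolate along the identified orientation D and
    # along the vertical direction, then blend using the direction confidence alpha.
    directional = 0.5 * (img[y - 1, x + D] + img[y + 1, x - D])
    vertical = 0.5 * (img[y - 1, x] + img[y + 1, x])
    return alpha * directional + (1.0 - alpha) * vertical
```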
  • the proposed method can, unlike the prior-art methods, detect and correctly interpolate along a constant gradient.
  • the interpolation unit is capable of handling both constant pixel value and constant pixel value gradient cases satisfactorily.
  • the averaged Mean Square Error (MSE) values were found to be superior to those of prior-art methods.
  • Fig. 5 schematically shows an image processing apparatus 500 according to the invention, comprising: receiving means 510 for receiving a signal 501 representing input images; the image interpolation unit 520 described in connection with Fig. 1; and optional display means 530 for displaying the output images.
  • the signal 501 may be a broadcast signal received via an antenna or cable, but may also be a signal from a storage device like a VCR (video cassette recorder) or digital versatile disk (DVD).
  • the signal is provided at the input connector of 510.
  • the image processing apparatus 500 might be, e.g., a set top box, satellite tuner, VCR player, or DVD player or recorder.
  • the image processing apparatus comprises storage means, like hard-disk means or means for storage on removable media, e.g. optical disks.
  • the output image may be displayed or transmitted to another apparatus, e.g. a cable, wireless or internet broadcast system.
  • the image processing apparatus 500 might also be a system being applied by a film-studio or broadcaster.

Abstract

The invention relates to a device, a method and an image processing apparatus for creating an output image on the basis of an input image by interpolation of the pixel values of the input image. An edge-dependent interpolation scheme is proposed in which: match errors are first computed between sets of neighbouring pixels with different orientations relative to a pixel to be interpolated; the comparison between subsets (400, 410, 420) of pixel values is used in a function for computing the match errors; a match error and the corresponding orientation are selected; and the pixels of the selected orientation are used for the interpolation of the pixel in question.
PCT/IB2005/053150 2004-09-24 2005-09-23 Interpolation d'images WO2006033084A2 (fr)

Priority Applications (2)

Application Number Priority Date Filing Date Title
JP2007533051A JP2008515266A (ja) 2004-09-24 2005-09-23 画像補間
EP05784295A EP1794715A2 (fr) 2004-09-24 2005-09-23 Interpolation d'image

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
EP04104667 2004-09-24
EP04104667.3 2004-09-24

Publications (2)

Publication Number Publication Date
WO2006033084A2 true WO2006033084A2 (fr) 2006-03-30
WO2006033084A3 WO2006033084A3 (fr) 2006-06-01

Family

ID=35985873

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/IB2005/053150 WO2006033084A2 (fr) 2004-09-24 2005-09-23 Interpolation d'images

Country Status (5)

Country Link
EP (1) EP1794715A2 (fr)
JP (1) JP2008515266A (fr)
KR (1) KR20070068409A (fr)
CN (1) CN101027691A (fr)
WO (1) WO2006033084A2 (fr)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2008067734A1 (fr) * 2006-12-05 2008-06-12 Huawei Technologies Co., Ltd. Procédé et dispositif de codage/décodage, et procédé et dispositif de traitement d'interpolation de pixels fractionnaires
JP2009543444A (ja) * 2006-06-29 2009-12-03 トムソン ライセンシング 適応ピクセルベースのフィルタリング
US20150146994A1 (en) * 2013-11-28 2015-05-28 Canon Kabushiki Kaisha Method, system and apparatus for determining a depth value of a pixel

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR101229376B1 (ko) * 2012-04-25 2013-02-05 대한민국 렌즈 왜곡 보정에 사용된 보간 패턴 검출을 통한 촬영 카메라 식별 방법 및 장치

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5351305A (en) * 1991-11-14 1994-09-27 Picker International, Inc. Concurrent smoothing and edge enhancement of medical diagnostic images
US5742348A (en) * 1994-03-15 1998-04-21 Matsushita Electric Industrial Co., Ltd. Spatial frequency adaptive field--interpolation method and apparatus therefor
EP0717557A2 (fr) * 1994-12-15 1996-06-19 NOKIA TECHNOLOGY GmbH Procédé et dispositif d'accentuation de flancs dans une image vidéo
EP0746157A2 (fr) * 1995-05-31 1996-12-04 Sony Corporation Dispositif et méthode pour la conversion d'un signal
US20040120605A1 (en) * 2002-12-24 2004-06-24 Wen-Kuo Lin Edge-oriented interpolation method for deinterlacing with sub-pixel accuracy

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
THURNHOFER S ET AL: "EDGE-ENHANCED IMAGE ZOOMING" OPTICAL ENGINEERING, SOC. OF PHOTO-OPTICAL INSTRUMENTATION ENGINEERS. BELLINGHAM, US, vol. 35, no. 7, 1 July 1996 (1996-07-01), pages 1862-1870, XP000614599 ISSN: 0091-3286 *

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2009543444A (ja) * 2006-06-29 2009-12-03 トムソン ライセンシング 適応ピクセルベースのフィルタリング
US8204334B2 (en) 2006-06-29 2012-06-19 Thomson Licensing Adaptive pixel-based filtering
WO2008067734A1 (fr) * 2006-12-05 2008-06-12 Huawei Technologies Co., Ltd. Procédé et dispositif de codage/décodage, et procédé et dispositif de traitement d'interpolation de pixels fractionnaires
EP2091258A1 (fr) * 2006-12-05 2009-08-19 Huawei Technologies Co Ltd Procédé et dispositif de codage/décodage, et procédé et dispositif de traitement d'interpolation de pixels fractionnaires
EP2091258A4 (fr) * 2006-12-05 2010-03-17 Huawei Tech Co Ltd Procédé et dispositif de codage/décodage, et procédé et dispositif de traitement d'interpolation de pixels fractionnaires
US20150146994A1 (en) * 2013-11-28 2015-05-28 Canon Kabushiki Kaisha Method, system and apparatus for determining a depth value of a pixel
US10019810B2 (en) * 2013-11-28 2018-07-10 Canon Kabushiki Kaisha Method, system and apparatus for determining a depth value of a pixel

Also Published As

Publication number Publication date
EP1794715A2 (fr) 2007-06-13
KR20070068409A (ko) 2007-06-29
WO2006033084A3 (fr) 2006-06-01
CN101027691A (zh) 2007-08-29
JP2008515266A (ja) 2008-05-08

Similar Documents

Publication Publication Date Title
US7259794B2 (en) De-interlacing device and method therefor
KR101135454B1 (ko) 특정 이미지의 특정 픽셀 값 결정 방법, 픽셀 값 결정 유닛, 이미지 처리 장치 및 컴퓨터 판독 가능한 저장 매체
US7414671B1 (en) Systems and methods for display object edge detection and pixel data interpolation in video processing systems
US8331689B2 (en) Detecting a border region in an image
KR101725167B1 (ko) 이미지 레지스트레이션을 위한 방법 및 시스템
US6295083B1 (en) High precision image alignment detection
US8325196B2 (en) Up-scaling
WO2006033084A2 (fr) Interpolation d'images
US9147257B2 (en) Consecutive thin edge detection system and method for enhancing a color filter array image
JP2007501561A (ja) ブロックアーティファクトの検出
CN101202829B (zh) 自运动适应图像处理的装置和方法
JP4650683B2 (ja) 画像処理装置および方法、プログラム並びに記録媒体
US7233363B2 (en) De-interlacing method, apparatus, video decoder and reproducing apparatus thereof
US20070036466A1 (en) Estimating an edge orientation
KR20130001626A (ko) 위너필터 기반의 가중치를 부여하여 화소를 보간하는 디인터레이싱 장치 및 방법
GB2249906A (en) Motion compensated interpolation of images
US20060257029A1 (en) Estimating an edge orientation
JP4650684B2 (ja) 画像処理装置および方法、プログラム並びに記録媒体
US20050036072A1 (en) Method and device for determining the spacing between a first and a second signal sequence
JP4597282B2 (ja) 画像情報変換装置、変換方法および表示装置
WO2006082542A1 (fr) Decoupage

Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A2

Designated state(s): AE AG AL AM AT AU AZ BA BB BG BR BW BY BZ CA CH CN CO CR CU CZ DE DK DM DZ EC EE EG ES FI GB GD GE GH GM HR HU ID IL IN IS JP KE KG KM KP KR KZ LC LK LR LS LT LU LV LY MA MD MG MK MN MW MX MZ NA NG NI NO NZ OM PG PH PL PT RO RU SC SD SE SG SK SL SM SY TJ TM TN TR TT TZ UA UG US UZ VC VN YU ZA ZM ZW

AL Designated countries for regional patents

Kind code of ref document: A2

Designated state(s): GM KE LS MW MZ NA SD SL SZ TZ UG ZM ZW AM AZ BY KG KZ MD RU TJ TM AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HU IE IS IT LT LU LV MC NL PL PT RO SE SI SK TR BF BJ CF CG CI CM GA GN GQ GW ML MR NE SN TD TG

121 Ep: the epo has been informed by wipo that ep was designated in this application
WWE Wipo information: entry into national phase

Ref document number: 2005784295

Country of ref document: EP

WWE Wipo information: entry into national phase

Ref document number: 2007533051

Country of ref document: JP

WWE Wipo information: entry into national phase

Ref document number: 200580032513.X

Country of ref document: CN

NENP Non-entry into the national phase

Ref country code: DE

WWE Wipo information: entry into national phase

Ref document number: 1020077009288

Country of ref document: KR

WWW Wipo information: withdrawn in national office

Ref document number: 2005784295

Country of ref document: EP

WWP Wipo information: published in national office

Ref document number: 2005784295

Country of ref document: EP