EP1762092A1 - Image processor and image processing method using scan rate conversion - Google Patents

Image processor and image processing method using scan rate conversion

Info

Publication number
EP1762092A1
Authority
EP
European Patent Office
Prior art keywords
motion
scan rate
rate conversion
image processor
compensated
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
EP05750971A
Other languages
German (de)
English (en)
Inventor
Shaori Guo
Abraham K. Riemens
Chris Lee
Robert J. Schutten
Selliah Rathnam
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Koninklijke Philips NV
Original Assignee
Koninklijke Philips Electronics NV
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Koninklijke Philips Electronics NV filed Critical Koninklijke Philips Electronics NV
Publication of EP1762092A1

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00Television systems
    • H04N7/01Conversion of standards, e.g. involving analogue television standards or digital television standards processed at pixel level
    • H04N7/0125Conversion of standards, e.g. involving analogue television standards or digital television standards processed at pixel level one of the standards being a high definition standard
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00Details of television systems
    • H04N5/14Picture signal circuitry for video frequency region
    • H04N5/144Movement detection
    • H04N5/145Movement estimation
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00Television systems
    • H04N7/01Conversion of standards, e.g. involving analogue television standards or digital television standards processed at pixel level
    • H04N7/0117Conversion of standards, e.g. involving analogue television standards or digital television standards processed at pixel level involving conversion of the spatial resolution of the incoming video signal
    • H04N7/012Conversion between an interlaced and a progressive signal
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00Television systems
    • H04N7/01Conversion of standards, e.g. involving analogue television standards or digital television standards processed at pixel level
    • H04N7/0135Conversion of standards, e.g. involving analogue television standards or digital television standards processed at pixel level involving interpolation processes
    • H04N7/014Conversion of standards, e.g. involving analogue television standards or digital television standards processed at pixel level involving interpolation processes involving the use of motion vectors

Definitions

  • the present invention relates to an image processor for performing scan rate conversion on a video data signal.
  • the present invention relates to an image processing method and an image receiving apparatus.
  • Such an image processing method and image processor or co-processor are known from PCT publication WO-A-02102058.
  • the publication discloses a motion-compensated up-conversion of the frame rate (scan rate conversion) of video sequences.
  • the disclosed method selects one of a plurality of scan rate conversion algorithms to obtain the best possible visual quality or to use the available resources as well as possible.
  • the known image processing method and image processor are arranged to provide a scan rate conversion that is optimized for a specific type of video source data.
  • the present invention seeks to provide an image processing method and image processor which make it possible to perform high-quality scan rate conversion for different types of digital video data.
  • the invention is defined by the independent claims.
  • the dependent claims define advantageous embodiments.
  • the present invention provides an image processor connectable to a system bus for exchanging data with an external device.
  • the image processor comprises a memory unit, a motion estimator, and a motion compensation unit for outputting a scan rate converted output image frame.
  • the image processor is arranged to perform motion-compensated scan rate conversion when the video data signal is a standard-definition TV signal and to perform non-motion-compensated scan rate conversion when the video data signal is a high-definition TV signal.
  • the standard-definition TV signal has a lower resolution than the high-definition TV signal, e.g. 1024 x 576 pixels against 1920 x 1020 pixels.
  • the non-motion-compensated scan rate conversion requires fewer resources (memory access, processing), thus allowing functionality for HDTV signals to be added to a high quality image processor for SDTV signals.
  • the mentioned input images related to the video data signal may cover direct scan rate conversion operation, i.e. storing only actual input video images in the memory unit, or recursive operation, in which images calculated from previous images are also stored in the memory unit.
  • the motion compensation unit comprises a de-interlacing sub-unit. This is a particularly suitable implementation for scan rate conversion.
  • the image processor is arranged to perform the motion-compensated scan rate conversion in a multi-pass processing mode, and to perform the non-motion-compensated scan rate conversion in a single pass processing mode while disabling the motion estimator and using a zero motion vector field as input to the motion compensation unit.
  • the multi-pass processing mode comprises a motion estimation pass and a motion compensation pass in a further embodiment.
  • the motion compensation pass includes de-interlacing and up-conversion.
  • the motion compensation unit may in a further embodiment comprise an up-conversion sub-unit for performing temporal interpolation between successive video images in order to increase the video field or frame rate.
  • This up-conversion sub-unit may be used for both the SDTV and HDTV signal processing.
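  • The mode selection and pass structure described above can be summarized in a short control-flow sketch. The Python below is an illustration only; the function names (estimate_motion, motion_compensate), the resolution threshold and the motion-vector-field layout are assumptions, not details taken from the patent.

```python
def is_hdtv(width, height):
    # The patent contrasts SDTV (e.g. 1024 x 576) with HDTV (e.g. 1920 x 1020);
    # the exact threshold used for mode selection is an assumption here.
    return width * height > 1024 * 576

def scan_rate_convert(prev_frame, curr_field, width, height,
                      estimate_motion, motion_compensate):
    """Dispatch between the two operating modes described above."""
    if is_hdtv(width, height):
        # Single-pass, non-motion-compensated mode: the motion estimator is
        # disabled and a zero motion vector field is fed to the motion
        # compensation unit (de-interlacing sub-unit).
        zero_mv_field = [[(0, 0)] * (width // 8) for _ in range(height // 8)]
        return motion_compensate(prev_frame, curr_field, zero_mv_field)
    # Multi-pass, motion-compensated mode for SDTV signals.
    # Pass 1: motion estimation only; all other sub-units are disabled.
    mv_field = estimate_motion(prev_frame, curr_field)
    # (A host CPU could refine mv_field here, before the second pass.)
    # Pass 2: motion compensation (de-interlacing and up-conversion).
    return motion_compensate(prev_frame, curr_field, mv_field)
```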
  • the memory unit comprises a first local memory unit and a second local memory unit each being arranged to store a 256 x 48 pixel image area.
  • an image processing method for performing scan rate conversion on a video data signal is provided.
  • the present method is very effective in providing additional HDTV signal processing functionality to an existing SDTV signal processing function, at only marginal (software and/or hardware) cost.
  • the motion-compensated scan rate conversion mechanism may be executed in a multi-pass processing mode, and the non-motion-compensated scan rate conversion mechanism may be executed in a single-pass processing mode using a zero motion vector field.
  • the multi-pass processing mode comprises a motion estimation pass and a motion compensation pass.
  • the motion compensation pass includes de-interlacing and up-conversion. Furthermore, it is possible to improve the motion vectors output by the motion estimation pass using an external (host) CPU before they are used in the motion compensation pass.
  • the image processor and/or image processing method may be advantageously used in all kinds of image receiving apparatus, such as a television set or a video recorder (using tape, optical disc or hard disk media), comprising a receiver for receiving a video data signal and an image processor according to the present invention. This allows a smoother transition from SDTV to HDTV broadcast reception, as the image receiving apparatus is able to process both types of signals without any substantial extra cost.
  • Fig. 1 shows a block diagram of an image processor architecture for scan rate conversion
  • Fig. 2 shows a graphic representation of the local memories used in the image processor architecture of Fig. 1
  • Fig. 3 shows a conceptual data flow of the image processor architecture of Fig. 1 in motion-compensated scan rate conversion
  • Fig. 4 shows a conceptual data flow of the image processor architecture of Fig. 1 in non-motion-compensated scan rate conversion.
  • Scan-rate-conversion refers to one or more of the following processes: de-interlacing, up-conversion, and horizontal and vertical scaling. De-interlacing is the process of converting an image field into an image frame by increasing the vertical sampling density; up-conversion increases the number of pictures in a video sequence, normally by means of interpolation; and horizontal and vertical scaling increases or decreases the number of pixels in the horizontal or vertical direction. Depending on whether motion information is used, scan-rate-conversion techniques are largely divided into two categories: motion-compensated and non-motion-compensated.
  • Motion-compensated scan-rate-conversion is a technique in which motion information is first extracted, often in the form of motion vectors, from the input video sequence, and the motion information is then applied to the creation of output video pictures with the desired scan rate.
  • Non-motion-compensated scan-rate-conversion generates output video pictures with the desired scan rate without making use of the motion information embedded in the input video sequence.
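  • As a concrete illustration of the non-motion-compensated case, the sketch below creates an intermediate picture by simply blending two neighbouring pictures, ignoring any motion in the scene; the function name and the linear blend are assumptions made for illustration, not the patent's method. Moving objects are blurred or doubled by such a blend, which is one reason the motion-compensated path is preferred where quality matters.

```python
import numpy as np

def upconvert_non_mc(prev_pic: np.ndarray, next_pic: np.ndarray,
                     alpha: float = 0.5) -> np.ndarray:
    """Non-motion-compensated temporal interpolation: blend the neighbouring
    pictures at temporal position alpha without using motion information."""
    blend = ((1.0 - alpha) * prev_pic.astype(np.float32)
             + alpha * next_pic.astype(np.float32))
    return blend.astype(prev_pic.dtype)
```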
  • Fig. 1 shows a block diagram and architecture of an image processor 100 (or co-processor) for scan-rate-conversion according to an embodiment of the present invention.
  • the image processor 100 is connectable to a data bus 112 that is designed to exchange e.g. data of input and output images and motion vectors.
  • An external memory device 110 is connected to the data bus 112 and is arranged to store e.g. data of input and output images and motion vectors.
  • the data bus 112 (or system bus) is used to connect the image processor 100 according to this embodiment with the external memory device 110. It is, however, conceivable that other communication means are used to connect the image processor 100 and its components to further external devices.
  • the image processor 100 comprises a number of sub-units: Two local memories 102, 103.
  • the co-processor 100 contains two local memories 102, 103 for temporarily storing pixels that are loaded from system memory 110 and used for motion estimation, de-interlacing, and up-conversion.
  • Local memory 102 is typically used to buffer image data of the previous image, whereas local memory 103 typically stores data from the current field (and optionally also data from the next field). This way, the memories contain all required data for de-interlacing the current field and for temporal interpolation between the time instances of the previous and the current images.
  • the image data stored in local memory 102 is typically calculated by image processor 100 based on a previous input image from the video signal. Hence, this architecture supports temporal recursive video algorithms.
  • a motion estimation unit 104. Motion estimation is the first stage of the two-stage process of motion-compensated scan-rate-conversion. It computes a motion vector for each 8 x 8 block of a video picture. The motion vectors are subsequently used by the motion-compensated de-interlacing and up-conversion sub-units (106 and 108 respectively, see below).
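  • A minimal full-search block-matching sketch is given below. The 8 x 8 block size and the SAD criterion come from the description; the full-search strategy, the +/-8 search range and all names are assumptions for illustration, not the estimator actually used in the co-processor.

```python
import numpy as np

def estimate_motion(prev_pic: np.ndarray, curr_pic: np.ndarray,
                    block: int = 8, search: int = 8):
    """Return one (dy, dx) vector and one SAD value per 8 x 8 block."""
    h, w = curr_pic.shape
    vectors, sads = {}, {}
    for by in range(0, h - block + 1, block):
        for bx in range(0, w - block + 1, block):
            cur = curr_pic[by:by + block, bx:bx + block].astype(np.int32)
            best_vec, best_sad = (0, 0), None
            for dy in range(-search, search + 1):
                for dx in range(-search, search + 1):
                    y, x = by + dy, bx + dx
                    if y < 0 or x < 0 or y + block > h or x + block > w:
                        continue  # candidate block falls outside the picture
                    ref = prev_pic[y:y + block, x:x + block].astype(np.int32)
                    sad = int(np.abs(cur - ref).sum())
                    if best_sad is None or sad < best_sad:
                        best_sad, best_vec = sad, (dy, dx)
            vectors[(by, bx)] = best_vec
            sads[(by, bx)] = best_sad
    return vectors, sads
```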
  • a motion compensation unit 107 comprising two sub-units 106, 108: A de-interlacing sub-unit 106.
  • De-interlacing sub-unit 106 converts a video field (consisting of either the even or the odd lines of an image) into a video frame by increasing the vertical sampling density. In motion-compensated scan-rate conversion, de-interlacing is achieved by means of interpolation of the missing field lines based on motion-compensated image data.
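  • The basic idea can be sketched as follows (this is not the algorithm of the cited de Haan references; the per-block vector-field layout, the zero-vector default and the vertical-average fallback are assumptions): the missing lines of the current field are filled from the motion-compensated previous frame.

```python
import numpy as np

def deinterlace_mc(curr_field: np.ndarray, prev_frame: np.ndarray,
                   mv_field: dict, top_field: bool = True,
                   block: int = 8) -> np.ndarray:
    """Fill the missing lines of curr_field (h/2 x w) from the
    motion-compensated previous frame (h x w). mv_field maps each 8 x 8
    block origin (y, x) in frame coordinates to a (dy, dx) vector."""
    h2, w = curr_field.shape
    h = h2 * 2
    out = np.zeros((h, w), dtype=curr_field.dtype)
    present = 0 if top_field else 1           # parity of transmitted lines
    out[present::2] = curr_field              # copy the existing field lines
    for y in range(1 - present, h, 2):        # interpolate the missing lines
        for x in range(w):
            dy, dx = mv_field.get(((y // block) * block,
                                   (x // block) * block), (0, 0))
            sy, sx = y + dy, x + dx
            if 0 <= sy < h and 0 <= sx < w:
                out[y, x] = prev_frame[sy, sx]   # motion-compensated sample
            else:                                # fallback: vertical average
                nbrs = [out[yy, x] for yy in (y - 1, y + 1) if 0 <= yy < h]
                out[y, x] = sum(int(v) for v in nbrs) // len(nbrs)
    return out
```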
  • An up-conversion sub-unit 108. The up-conversion sub-unit 108 of the co-processor 100 performs temporal interpolation between successive video pictures of a video sequence so as to increase the video field or frame rate.
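  • A minimal motion-compensated temporal interpolation sketch for this sub-unit is shown below; real up-converters protect against unreliable vectors and occlusions, and all names and conventions here (vector measured from the previous to the current picture, nearest-sample fetch) are illustrative assumptions.

```python
import numpy as np

def upconvert_mc(prev_frame: np.ndarray, curr_frame: np.ndarray,
                 mv_field: dict, alpha: float = 0.5,
                 block: int = 8) -> np.ndarray:
    """Build a picture at temporal position alpha (0..1) between prev_frame
    and curr_frame by averaging samples fetched along each block's vector."""
    h, w = curr_frame.shape
    out = np.empty((h, w), dtype=np.float32)
    for y in range(h):
        for x in range(w):
            dy, dx = mv_field.get(((y // block) * block,
                                   (x // block) * block), (0, 0))
            # Follow the motion trajectory backwards into the previous picture
            # and forwards into the current one.
            py = min(max(int(round(y - alpha * dy)), 0), h - 1)
            px = min(max(int(round(x - alpha * dx)), 0), w - 1)
            cy = min(max(int(round(y + (1 - alpha) * dy)), 0), h - 1)
            cx = min(max(int(round(x + (1 - alpha) * dx)), 0), w - 1)
            out[y, x] = ((1 - alpha) * float(prev_frame[py, px])
                         + alpha * float(curr_frame[cy, cx]))
    return out.astype(curr_frame.dtype)
```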
  • a temporal noise reduction sub-unit 116. The temporal noise reduction sub-unit 116 performs noise reduction on de-interlaced image data. This is achieved iteratively, using motion-detection information derived from temporal image data and making use of the recursive nature of the architecture.
  • a spatial noise reduction sub-unit 114. The spatial noise reduction sub-unit performs noise reduction on field-based sequences. Because of the recursive nature of the architecture, spatial noise reduction is only used to filter the input images of the current and next fields, i.e. the video data that is stored in the second local memory 103.
  • a vertical processing sub-unit 118.
  • the vertical processing sub-unit 118 performs two operations on video image data in the vertical direction: vertical peaking and vertical scaling.
  • vertical peaking compensates for the information loss incurred by up-conversion processing by increasing the gain for high signal frequencies; programmable peaking coefficients control the filter characteristics, resulting in a peaked or an averaged signal.
  • the vertical scaling operation performs the expansion or compression of a video image in the vertical direction.
  • the vertical scaling function can also optionally generate interlaced output, taking care of proper low-pass filtering to avoid excessive aliasing. According to the required operating mode, one or more of the above-mentioned units may be disabled or enabled. For example, in the embodiment of Fig. 4 only a few of the above-mentioned units are used.
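  • A minimal sketch of the vertical peaking operation described above is given below; the 3-tap vertical high-pass, the single programmable coefficient k and the 8-bit clipping range are illustrative assumptions. With k = 0 the signal passes unchanged, and larger k gives stronger peaking.

```python
import numpy as np

def vertical_peaking(frame: np.ndarray, k: float = 0.5) -> np.ndarray:
    """Boost high vertical frequencies with a programmable coefficient k,
    using an illustrative 3-tap vertical high-pass [-0.5, 1, -0.5]."""
    f = frame.astype(np.float32)
    up = np.roll(f, 1, axis=0)       # line above (wrap-around at the border
    down = np.roll(f, -1, axis=0)    # is acceptable for this sketch)
    highpass = f - 0.5 * (up + down)
    peaked = f + k * highpass
    return np.clip(peaked, 0, 255).astype(frame.dtype)  # assumes 8-bit video
```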
  • the motion-compensation unit 107 does not need to comprise both the de-interlacing unit 106 and the up-conversion unit 108.
  • the motion estimator 104 and the motion compensation unit 107 advantageously operate according to algorithms as described in the article "IC for motion-compensated de-interlacing, noise reduction, and picture rate conversion", by G. de Haan, in IEEE Transactions on Consumer Electronics, Vol. 45, No. 3, August 1999, which is incorporated herein by reference.
  • the de-interlacing is performed in accordance with another method as described in "De-interlacing - An Overview" by G. de Haan and E.B. Bellers, Proceedings of the IEEE, Vol. 86, No. 9, September 1998.
  • Fig. 3 shows a logical diagram that represents the data-flow of the motion-compensated scan-rate-conversion process, which according to the present invention is executed for standard-definition TV (SDTV) signals.
  • the system bus 112 provides pixel inputs to local memory 102 and spatial noise reduction unit 114.
  • the system bus provides motion vector inputs MVI to the motion estimation unit 104 and to three de-interlacing units 106.
  • the system bus 112 receives recursive outputs RO from one of three temporal noise reduction units 116.
  • the two other temporal noise reduction units 116 are coupled to respective vertical scaling units 118 that provide progressive outputs PO to the system bus 112.
  • the whole process consists of two passes: a motion estimation pass and a motion compensation pass. In the first pass, only the motion estimation unit 104 is enabled; all other sub-units are disabled.
  • the motion estimation runs on every luminance input field, always using the previous frame and the current field of video image data. No motion estimation should be done on chrominance data, although the motion vectors obtained from the motion estimation of luminance data are used for the de-interlacing and up-conversion of the chrominance data.
  • the output of the motion estimation pass is a field of motion vectors and SAD (sum of absolute differences) values. These may e.g. be temporarily stored in the external memory 110 via the system bus 112.
  • in the motion compensation pass, the motion estimation unit 104 is disabled, while other units are enabled or disabled as required; the de-interlacing sub-unit 106 is normally enabled for this pass of processing.
  • Per execution one or two output pictures are generated. Since the de-interlacing process is recursive, three pictures are generated per execution in the worst case: the de-interlaced and noise-reduced current picture is required for the recursive de-interlacing and noise reduction sub-units; two up-converted images are required as output pictures, which are vertically scaled and peaked at the proper temporal position.
  • the co-processor 100 processes one vertical "stripe" of the picture.
  • the width of this stripe is 16 blocks, i.e., 128 pixels.
  • the height is equal to the picture height.
  • the bandwidth overhead of reading motion-compensated data is a factor of two: in order to process 128 bytes horizontally, 256 bytes are read into the local memory.
  • the input video sequences targeted by the co-processor 100 are dual-input-stream with a total size of 1024 x 576 pixels per picture and a frequency of 50 Hz; this results in a maximum local memory bandwidth requirement of about 118 Mbytes per second. This is for motion compensation only; however, motion estimation is also required.
  • the total input bandwidth during motion compensation is 236 Mbyte/s. With a processing speed of two input samples per clock cycle, this would require the co-processor 100 to operate at a clock speed of 118 MHz. In a realistic implementation, some time will be required for pipeline latency and host-CPU interaction, so a design target of 140 MHz is appropriate.
  • the choice of two input samples per clock cycle is a compromise between clock speed and silicon area, and the indicated speed of 140 MHz is a good choice for current IC technology using a standard-cell design methodology.
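  • The stated figures can be reproduced with a short back-of-the-envelope calculation. The 2 bytes per pixel (luminance plus 4:2:2 chrominance) and the one-byte input samples are assumptions made here to match the numbers; the patent itself only states the results.

```python
# Back-of-the-envelope check of the stated SDTV bandwidth and clock figures.
width, height, rate = 1024, 576, 50      # dual-input-stream total, 50 Hz
bytes_per_pixel = 2                      # assumed: 4:2:2 luma + chroma
overhead = 2                             # 256 bytes read per 128 bytes processed

local_mem_bw = width * height * rate * bytes_per_pixel * overhead
print(local_mem_bw / 1e6)                # ~118 Mbyte/s, as stated

total_input_bw = 2 * local_mem_bw        # patent states 236 Mbyte/s in total
samples_per_cycle = 2                    # two one-byte input samples per clock
print(total_input_bw / samples_per_cycle / 1e6)   # ~118 MHz minimum clock
```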
  • the purpose of incorporating non-motion-compensated scan-rate conversion in the co-processor 100 is to process high-definition TV (HDTV) signals. According to the present invention, HDTV signals are subjected to scan-rate-conversion while adding only marginal additional hardware cost and little design complexity.
  • Fig. 4 shows a dataflow of the co-processor 100 when it operates in non- motion-compensated scan-rate-conversion mode for HDTV signal processing.
  • The system bus 112 provides pixel inputs PI to the local memories 102 and 103, and a zero motion vector MVI to the de-interlacing unit 106.
  • the system bus 112 receives a recursive output RO from the de-interlacing unit 106.
  • the HDTV scan-rate-conversion uses exactly the same hardware (local memories 102, 103, de-interlacing unit 106, system bus 112) as the motion-compensated SDTV scan-rate conversion. With the present architecture arrangement, no additional hardware is needed for HDTV non-motion-compensated scan-rate-conversion.
  • the motion estimation unit 104 is disabled and the whole processing has only one pass, i.e. de-interlacing (in de-interlacing sub-unit 106).
  • the scan-rate-conversion has only one output, i.e. the recursive output.
  • the recursive output is reused for the display outputs, by properly timing the output signals to the external display via the system bus 112. So there is no added local memory access for generating display outputs.
  • the co-processor 100 processes one vertical "stripe" of the picture.
  • the width of this stripe is 16 blocks, i.e., 128 pixels.
  • the height is equal to the picture height.
  • the bandwidth overhead of reading motion-compensated data is one: in order to process 128 bytes horizontally, exactly 128 bytes are read into the local memory. Assuming that HDTV has a size of 1920 x 1020 pixels per picture, with a frequency of 60 Hz, the maximum local memory bandwidth is 117.5 Mbytes per second. Again a factor of two is required because both the previous frame and the current & next fields are needed; however, no memory overhead factor of two is present in this case.
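  • The HDTV figure can be checked the same way; one byte per pixel is assumed here to reproduce the stated 117.5 Mbyte/s, and no factor-of-two read overhead applies in this mode.

```python
# Check of the stated HDTV local memory bandwidth figure.
width, height, rate = 1920, 1020, 60
local_mem_bw = width * height * rate     # 1 byte/pixel assumed, no read overhead
print(local_mem_bw / 1e6)                # ~117.5 Mbyte/s, as stated
```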
  • the image processor 100, or the image processing method as described in relation to the image processor 100, can be used in high-end media processors, multimedia processors, digital display processors, etc. Examples are television sets, set-top boxes, and video recorders (tape, disc or hard disk recorders).
  • the individual components of the co-processor 100 may be enabled or disabled, depending on operating mode. This flexibility makes it possible to perform scan-rate-conversion for high-definition TV video signals with exactly the same hardware as used for standard-definition TV video signals. In other words, the HDTV scan-rate-conversion function is incorporated with only marginal additional hardware cost.
  • the video processing is separated into the motion estimation pass and the motion compensation pass. Therefore, the host CPU can further process the generated motion vectors and possibly make them more accurate.
  • any reference signs placed between parentheses shall not be construed as limiting the claim.
  • the word “comprising” does not exclude the presence of elements or steps other than those listed in a claim.
  • the word “a” or “an” preceding an element does not exclude the presence of a plurality of such elements.
  • the invention may be implemented by means of hardware comprising several distinct elements, and by means of a suitably programmed computer. In the device claim enumerating several means, several of these means may be embodied by one and the same item of hardware. The mere fact that certain measures are recited in mutually different dependent claims does not indicate that a combination of these measures cannot be used to advantage.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Computer Graphics (AREA)
  • Television Systems (AREA)

Abstract

An image processor (100) connectable to a system bus (112) for exchanging data with an external device (110). The image processor (100) comprises a memory unit (102, 103), a motion estimator (104), and a motion compensation unit (107) for outputting a scan rate converted output image frame. The image processor (100) is arranged to perform motion-compensated scan rate conversion when the video data signal is a standard-definition TV signal, and non-motion-compensated scan rate conversion when the video data signal is a high-definition TV signal.
EP05750971A 2004-06-21 2005-06-20 Image processor and image processing method using scan rate conversion Withdrawn EP1762092A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US58158204P 2004-06-21 2004-06-21
PCT/IB2005/052014 WO2006000977A1 (fr) 2005-06-20 Image processor and image processing method using scan rate conversion

Publications (1)

Publication Number Publication Date
EP1762092A1 (fr) 2007-03-14

Family

ID=34970687

Family Applications (1)

Application Number Title Priority Date Filing Date
EP05750971A EP1762092A1 (fr) 2004-06-21 2005-06-20 Image processor and image processing method using scan rate conversion

Country Status (4)

Country Link
EP (1) EP1762092A1 (fr)
JP (1) JP2008509576A (fr)
CN (1) CN1973541A (fr)
WO (1) WO2006000977A1 (fr)

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8284322B2 (en) * 2006-04-18 2012-10-09 Marvell World Trade Ltd. Shared memory multi video channel display apparatus and methods
US8218091B2 (en) 2006-04-18 2012-07-10 Marvell World Trade Ltd. Shared memory multi video channel display apparatus and methods
US8264610B2 (en) 2006-04-18 2012-09-11 Marvell World Trade Ltd. Shared memory multi video channel display apparatus and methods
US20100135395A1 (en) * 2008-12-03 2010-06-03 Marc Paul Servais Efficient spatio-temporal video up-scaling
US9258517B2 (en) * 2012-12-31 2016-02-09 Magnum Semiconductor, Inc. Methods and apparatuses for adaptively filtering video signals
KR102278978B1 (ko) * 2013-08-23 2021-07-19 Sumitomo Chemical Co., Ltd. Method for producing retinal tissue and retina-related cells

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6549577B2 (en) * 1997-09-26 2003-04-15 Sarnoff Corporation Computational resource allocation in an information stream decoder
US6847406B2 (en) * 2000-12-06 2005-01-25 Koninklijke Philips Electronics N.V. High quality, cost-effective film-to-video converter for high definition television
US6810081B2 (en) * 2000-12-15 2004-10-26 Koninklijke Philips Electronics N.V. Method for improving accuracy of block based motion compensation
KR20030024839A (ko) * 2001-06-08 2003-03-26 Koninklijke Philips Electronics N.V. Method and system for displaying video frames

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See references of WO2006000977A1 *

Also Published As

Publication number Publication date
CN1973541A (zh) 2007-05-30
JP2008509576A (ja) 2008-03-27
WO2006000977A1 (fr) 2006-01-05

Similar Documents

Publication Publication Date Title
US5793435A (en) Deinterlacing of video using a variable coefficient spatio-temporal filter
US6118488A (en) Method and apparatus for adaptive edge-based scan line interpolation using 1-D pixel array motion detection
JP3291247B2 (ja) Converter for converting a progressive video signal into an interlaced video signal, and method therefor
US5784115A (en) System and method for motion compensated de-interlacing of video frames
De Haan et al. IC for motion-compensated 100 Hz TV with natural-motion movie-mode
JP5008826B2 (ja) High-definition de-interlacing/frame-doubling circuit and method therefor
EP1164792A2 (fr) Format converter using bidirectional motion vectors and corresponding method
JP3845456B2 (ja) Motion-compensated video signal processing system
EP1143712A2 (fr) Method and apparatus for calculating motion vectors
US8031267B2 (en) Motion adaptive upsampling of chroma video signals
KR960005943B1 (ko) Television picture freezing method and apparatus
JP2000500318A (ja) Adaptive picture delay device
US20060077292A1 (en) Image processing apparatus using judder-map and method thereof
WO2006000977A1 (fr) Image processor and image processing method using scan rate conversion
EP1964395B1 (fr) Methods and apparatus for progressive scanning of interlaced video
JP2007525703A (ja) Apparatus and method for compensating for image motion
US7010042B2 (en) Image processor and image display apparatus provided with such image processor
EP1460847B1 (fr) Image signal processing apparatus and processing method
JPH11298861A (ja) Method and device for converting the number of frames of an image signal
KR20040078690A (ko) Estimating a motion vector of a group of pixels taking occlusion into account
WO1993015586A1 (fr) Partial interpolation method and apparatus for frame rate conversion
JP4179089B2 (ja) Motion estimation method and motion estimation device for moving image interpolation
EP1399883B1 (fr) Conversion unit and method, and image processing apparatus
JP2000092455A (ja) Image information conversion apparatus and image information conversion method
JP2005026885A (ja) Television receiver and control method therefor

Legal Events

Date Code Title Description
PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

17P Request for examination filed

Effective date: 20070122

AK Designated contracting states

Kind code of ref document: A1

Designated state(s): AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HU IE IS IT LI LT LU MC NL PL PT RO SE SI SK TR

DAX Request for extension of the european patent (deleted)
STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: THE APPLICATION HAS BEEN WITHDRAWN

18W Application withdrawn

Effective date: 20070521