EP1051839A2 - Method and apparatus for encoding and decoding advanced television signals - Google Patents

Method and apparatus for encoding and decoding advanced television signals

Info

Publication number
EP1051839A2
Authority
EP
European Patent Office
Prior art keywords
stream
video
region
encoding
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Ceased
Application number
EP99903316A
Other languages
German (de)
English (en)
Inventor
Yendo Hu
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tiernan Communications Inc
Original Assignee
Tiernan Communications Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tiernan Communications Inc filed Critical Tiernan Communications Inc
Publication of EP1051839A2


Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/42 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals characterised by implementation details or hardware specially adapted for video compression or decompression, e.g. dedicated software implementation
    • H04N19/436 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals characterised by implementation details or hardware specially adapted for video compression or decompression, e.g. dedicated software implementation using parallelised computational arrangements
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/169 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
    • H04N19/17 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object
    • H04N19/176 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object the region being a block, e.g. a macroblock
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/42 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals characterised by implementation details or hardware specially adapted for video compression or decompression, e.g. dedicated software implementation
    • H04N19/43 Hardware specially adapted for motion estimation or compensation
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/60 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using transform coding
    • H04N19/61 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using transform coding in combination with predictive coding

Definitions

  • the MPEG-2 standard applies five compression techniques to achieve a high compression ratio: discrete cosine transform (DCT), difference encoding, quantization, entropy encoding and motion compensation.
  • a DCT is applied to blocks of 8 x 8 pixels to provide 64 coefficients that represent spatial frequencies. For blocks without much detail, the high frequency coefficients have small values that can be set to zero.
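
As an illustrative sketch of this step (the helper name dct_8x8 and the use of NumPy are assumptions for illustration, not part of the specification), the 8 x 8 DCT can be written as a pair of matrix products with the orthonormal DCT-II basis; for a block without much detail, only the DC coefficient is significant:

```python
import numpy as np

def dct_8x8(block):
    """2-D DCT-II of an 8x8 pixel block, returning 64 spatial-frequency coefficients."""
    n = 8
    k = np.arange(n)
    # Orthonormal DCT-II basis: C[u, x] = a(u) * cos((2x + 1) * u * pi / (2n))
    basis = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    basis[0, :] = np.sqrt(1.0 / n)
    return basis @ block @ basis.T

# A flat block: every high-frequency coefficient comes out near zero,
# leaving essentially only the DC term to be coded.
flat_block = np.full((8, 8), 128.0)
print(np.round(dct_8x8(flat_block), 2))
```
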
  • Video frames are encoded into intra frames (I frames) which do not rely on information from other frames to reconstruct the current frame, and inter frames, P and B, which rely on information from other frames to reconstruct the current frame.
  • P frames rely on the previous P or I frame while B frames rely on the previous I or P and the future I or P to construct the current frame.
  • These previous or future I and P frames are referred to as reference frames.
  • the P and B frames include only the differences between the current frame and the adjacent frames. For low motion video sequences, the P and B frames will have very little information content.
  • the DCT coefficients of each block are weighted and quantized based on a quantization matrix that matches the response of the human eye.
  • the results are combined with the motion vectors and then encoded using variable length encoding to provide a stream for transmission.
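
A minimal sketch of the weighting and quantization step follows; the weight values and the scaling formula are illustrative, not the normative MPEG-2 default intra matrix or reconstruction rule:

```python
import numpy as np

def quantize_block(coeffs, weight_matrix, quantiser_scale):
    """Weight and quantize an 8x8 block of DCT coefficients.

    Coarser steps (larger weights) are used toward the high-frequency corner,
    reflecting the eye's lower sensitivity to fine detail.
    """
    return np.round(16.0 * coeffs / (weight_matrix * quantiser_scale)).astype(int)

# Illustrative weight matrix that grows with spatial frequency.
idx = np.arange(8)
weights = 8 + 4 * (idx[:, None] + idx[None, :])

# A low-detail block: large DC term, small high-frequency terms.
coeffs = np.zeros((8, 8))
coeffs[0, 0], coeffs[0, 1], coeffs[1, 0] = 1024.0, 24.0, -18.0

levels = quantize_block(coeffs, weights, quantiser_scale=8)
# Most entries are zero, so the zig-zag scan produces long zero runs that the
# variable length (entropy) coder represents very compactly.
print(levels)
```
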
  • the MPEG-2 standard defines algorithmic tools known as profiles and sets of constraints on parameter values (e.g., picture size, bit rate) known as levels.
  • the known MPEG-2 compression engines noted above have been designed to meet the main profile @ main level portion of the standard for conventional broadcast television signals such as NTSC and PAL.
  • the main level is specified as 720 pixels by 480 active lines at 30 frames per second.
  • the DTV signal is specified as 1920 pixels by 1080 active lines at 30 frames per second. This is known as the MPEG-2 high level.
  • the computational demand needed for the DTV signal specified as main profile @ high level is approximately six times that needed for existing standard television signals specified as main profile @ main level.
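
As a rough check of this figure: a main level frame carries 720 x 480 = 345,600 pixels, while a high level DTV frame carries 1920 x 1080 = 2,073,600 pixels at the same 30 frames per second, i.e. exactly six times as many pixels per second, which matches the stated increase in computational demand.
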
  • the method and apparatus of the present invention provides an architecture capable of addressing the computational demand required for high-definition video signals, such as a DTV signal compliant with MPEG-2 main profile @ high level, using standard MPEG-2 compression engines operating in the main profile @ main level mode.
  • the invention provides parallel processing using such standard MPEG-2 compression engines in an overlapping arrangement that does not sacrifice compression performance.
  • the regional processors each include an image selection unit for selecting a particular image region from each of the video images.
  • a compression engine compresses the selected image region to provide a compressed image region stream of macroblocks.
  • a macroblock remover removes certain macroblocks from the compressed image region stream that correspond to the overlapping portions.
  • a stream concatenation unit concatenates the compressed image region stream with such streams from each regional processor to provide an output video stream.
  • while the preferred embodiment includes multiple regional processors for processing the overlapping regions, the present invention encompasses single processor embodiments in which each region is processed successively.
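
The flow through one regional processor described above can be sketched as follows. The Region fields, the function names and the stub engine are assumptions made for illustration; a real implementation drives a hardware mp/ml engine rather than a Python callable, and the full frame is covered by more regions than the two shown:

```python
import numpy as np
from dataclasses import dataclass

@dataclass
class Region:
    x: int       # left edge of the raw (overlapping) region, in pixels
    y: int       # top edge, in lines
    w: int       # raw-region width
    h: int       # raw-region height

def select_region(frame: np.ndarray, r: Region) -> np.ndarray:
    """Image selection unit: cut one overlapping region out of the global image."""
    return frame[r.y:r.y + r.h, r.x:r.x + r.w]

def encode_frame(frame, regions, compress, remove_overlap) -> bytes:
    """Compress each region, drop the overlap macroblocks, concatenate the streams."""
    output = b""
    for r in regions:
        stream = compress(select_region(frame, r))    # standard mp/ml compression
        output += remove_overlap(stream, r)           # macroblock remover + concatenation
    return output

# Usage with trivial stand-ins for the compression engine and the remover;
# the two regions shown overlap by 64 pixels along the top band of the frame.
frame = np.zeros((1080, 1920), dtype=np.uint8)
regions = [Region(0, 0, 720, 480), Region(656, 0, 720, 480)]
stream = encode_frame(frame, regions,
                      compress=lambda img: img.tobytes(),
                      remove_overlap=lambda s, r: s)
print(len(stream))
```
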
  • FIG. 4 is a diagram illustrating a first processor arrangement in accordance with the present invention.
  • FIG. 5 is a diagram illustrating a preferred processor arrangement in accordance with the present invention.
  • FIG. 7 is a schematic block diagram of a video compression engine of the video subsystem of FIG. 6.
  • FIG. 8 is a block diagram illustrating a synchronization configuration for the compression engine of FIG. 7.
  • FIG. 10 is a diagram illustrating raw and active regions of the global image of FIG. 9.
  • FIG. 11 is a diagram illustrating an active region within a raw region of the image of FIG. 10.
  • FIG. 12 is a block diagram of a token passing arrangement for a 1080i video processing configuration.
  • FIG. 13 is a block diagram of a token passing arrangement for a 720p video processing configuration.
  • FIG. 14 is a block diagram illustrating allocation of reference images in reference buffers of a local memory of the system of FIG. 7.
  • FIG. 15 is a diagram illustrating the reference image updating arrangement of the present invention.
  • FIG. 18 is a block diagram of a local manager of the reference image manager of FIG. 17.
  • FIG. 19 is a diagram illustrating the decoding arrangement of the present invention.
  • FIG. 21 is a diagram illustrating motion compensation in the decoder system of FIG. 20.
  • FIG. 22 is a block diagram of the reference frame store of the decoder system of FIG. 20.
  • the present invention employs a parallel processing arrangement that takes advantage of known MPEG-2 main profile at main level (mp/ml) compression engines to provide a highly efficient compression engine for encoding high definition television signals such as the DTV signal that is compliant with MPEG-2 main profile at high level.
  • a first approach to using MPEG-2 compression engines in a parallel arrangement is shown in FIG. 4.
  • a total of nine MPEG-2 mp/ml compression engines are configured to process contiguous regions encompassing an ATSC DTV video image 142 (1920 pixels by 1080 lines).
  • Each MPEG-2 mp/ml engine is capable of processing a region 144 equivalent to an NTSC video image (720 pixels by 480 lines).
  • engines 3, 6, 7, 8 and 9 encode regions smaller than NTSC images.
  • the compression provided by this first approach is less than optimal.
  • the motion compensation performed within each engine is naturally constrained to not search beyond its NTSC image format boundaries. As a result, macroblocks along the boundaries between assigned engine areas may not necessarily benefit from motion compensation.
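
The wasted capacity can be seen directly from the tile geometry; the short calculation below assumes the tiling simply proceeds in full NTSC-sized steps from the top-left corner, with engine numbering as in FIG. 4:

```python
def tile(length, max_size):
    """Split a dimension into contiguous pieces no larger than max_size."""
    cuts = list(range(0, length, max_size)) + [length]
    return [b - a for a, b in zip(cuts, cuts[1:])]

cols = tile(1920, 720)   # [720, 720, 480] -> the right-hand column is narrower
rows = tile(1080, 480)   # [480, 480, 120] -> the bottom row is much shorter
print(cols, rows)
# Five of the nine tiles (right column plus bottom row) are smaller than a full
# 720 x 480 NTSC image, so the capacity of those engines is partly idle.
```
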
  • the preferred approach of the present invention shown in FIG. 5 provides a parallel arrangement of MPEG-2 compression engines in which the engines are configured to process overlapping regions 146, 148, 150, 152 of an ATSC DTV video image 142.
  • motion compensation performed by a particular engine for its particular region is extended into adjacent regions.
  • motion compensation uses a reference image (I or P frame) for predicting the current frame in the frame encoding process.
  • the preferred approach extends motion compensation into adjacent regions by updating the reference images at the end of the frame encoding process with information from reference frames of adjacent engines.
  • each engine stores at most two reference frames in memory. If, at the end of a frame encoding process, either of the two reference frames has been updated, then that reference frame is further updated to reflect the frame encoding results from adjacent engines.
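
A sketch of the idea follows. The 64-pixel strip width, the buffer sizes and the index arithmetic are illustrative assumptions; the actual overlap geometry is defined by the alignment parameters described later:

```python
import numpy as np

def update_reference(local_ref, neighbour_ref, local_cols, neighbour_cols):
    """Overwrite the overlap strip of a local reference frame with the
    reconstruction held by the adjacent engine, so that motion search near
    the region boundary sees the same pixels the neighbour produced."""
    local_ref[:, local_cols] = neighbour_ref[:, neighbour_cols]

left_ref = np.zeros((480, 720), dtype=np.uint8)       # left engine's reference frame
right_ref = np.full((480, 720), 200, dtype=np.uint8)  # right engine's reference frame

# With a 64-pixel overlap (left region x = 0..719, right region x = 656..1375),
# the last 64 columns of the left reference coincide with columns 0..63 of the
# right reference.
update_reference(left_ref, right_ref, np.s_[-64:], np.s_[:64])
print(left_ref[0, -4:])
```
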
  • the preferred embodiment of the video compression engine subsystem 162 shown in FIG. 7 includes a video input connector 200, a system manager 202, a bit allocation processor 204, several regional processors 206 and a PES header generator 208.
  • Each regional processor 206 includes a local image selection unit 210, an MPEG-2 compression unit 212, a macroblock remover and stream concatenation unit 214/216, and a local memory 218.
  • the compression subsystem 162 also includes one or more reference image managers (RIMs) 220. In the arrangement of FIG. 7, there are four RIMs 220. The RIM 220 is described further herein.
  • the MPEG-2 compression unit 212 is preferably an IBM model MM30 single package, three chip unit, though any standard MPEG-2 compression engine capable of main profile @ main level operation can be used.
  • the system manager 202 synchronizes the frame encoding process over the nine regional processors 206.
  • the tasks required by the system manager to synchronize the parallel processors are described.
  • Each MPEG-2 compression unit must update the internal reference image using information from the reference image in the adjacent processors before it can properly encode the next image.
  • each MPEG-2 compression unit generates a current image compression complete (CICC) signal 250 after each encoding process.
  • the system manager 202 triggers the reference image manager 220 to update the internal reference images of each MPEG-2 compression unit using a common reference image update (RIU) signal 252.
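
This frame-level handshake can be sketched as follows; the use of threading.Event objects to model the hardware CICC and RIU signals is purely an illustrative assumption:

```python
import threading

NUM_PROCESSORS = 9
cicc = [threading.Event() for _ in range(NUM_PROCESSORS)]   # one CICC per regional processor
riu = threading.Event()                                     # common RIU trigger

def system_manager():
    for done in cicc:
        done.wait()          # wait until every engine reports compression complete
    riu.set()                # then trigger all reference image managers at once

def regional_processor(index):
    # ... encode the assigned region of the current image here ...
    cicc[index].set()        # signal current image compression complete
    riu.wait()               # reference image updated; safe to start the next image

threads = [threading.Thread(target=regional_processor, args=(i,)) for i in range(NUM_PROCESSORS)]
threads.append(threading.Thread(target=system_manager))
for t in threads:
    t.start()
for t in threads:
    t.join()
```
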
  • local image location registers: These registers specify the location of a local field image within a global field image 300 (FIG. 9).
  • the registers specify points within the field image, not the reconstructed progressive image.
  • the 720p video has only one field image per frame, whereas the 1080i video has two field images per frame.
  • Hstart register: Pixel index of the first active pixel in local image 302. The first pixel in global image 300 has an index value of 1.
  • Vstop register: Line index of the first non-active line after the local image.
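
For illustration, the register values for one local image might be derived from its region geometry as below. Hstop and Vstart are named by analogy and are assumptions, since only Hstart and Vstop are defined above; global indices start at 1:

```python
def location_registers(x0, y0, width, height):
    """Register values for a local field image whose top-left corner is at
    0-based global coordinates (x0, y0)."""
    return {
        "Hstart": x0 + 1,           # first active pixel of the local image
        "Hstop":  x0 + width + 1,   # assumed: first non-active pixel after it
        "Vstart": y0 + 1,           # assumed: first active line of the local image
        "Vstop":  y0 + height + 1,  # first non-active line after the local image
    }

print(location_registers(x0=656, y0=0, width=720, height=240))
```
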
  • Each MPEG-2 compression unit is responsible for compressing a specific region of the target image 300 called an active-region 310.
  • the target picture 300 is covered by the active regions 310 without overlapping.
  • Figure 10 shows raw-regions 310B and active-regions 310 for 1080i.
  • raw_height: the height of the raw-region 310B.
  • raw_width: the width of the raw-region 310B.
  • left_alignment: the mark where the active-region 310 macroblocks 320 start horizontally; macroblocks to the left of this mark in the raw-region need to be removed.
  • right_alignment: the mark where the active-region macroblocks end horizontally; macroblocks to the right of this mark in the raw-region need to be removed.
  • top_alignment: the mark where the active-region macroblocks start vertically; macroblocks above this mark in the raw-region need to be removed.
  • bottom_alignment: the mark where the active-region macroblocks end vertically; macroblocks below this mark in the raw-region need to be removed.
  • the MRBC unit 214/216 scans and edits the coded bit streams for slices on a row basis. Macroblock rows between the top of the raw-region and top_alignment, and between bottom_alignment and the bottom of the raw-region, should be removed entirely. For each remaining row in the raw-region, macroblocks between the left edge of the raw-region and left_alignment, and between right_alignment and the right edge of the raw-region, should be removed. The resulting bit stream is called an mr-processed row. Since each MPEG-2 unit uses a single slice for each row, an mr-processed row is also called an mr-processed slice in this context.
  • Updates quant_trace until left_alignment. A check is made that the macroblock_quant flag is set in the first non-skipped macroblock in the active-region. If not, the macroblock_quant flag is set, the value of quantiser_scale_code is set to the value of quant_trace, and the macroblock header is rebuilt accordingly (specific to MRBC units in the 2nd and 3rd columns for 1080i encoding). The mr-processed slice is formed by preserving only the macroblocks in the active-region during scanning.
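
A simplified sketch of the removal step, operating on already-parsed macroblock records rather than on the entropy-coded bit stream itself (the header rebuilding and quantiser_scale_code handling described above are omitted, and treating the alignment marks as macroblock indices is an assumption):

```python
def mr_process(raw_rows, top, bottom, left, right):
    """Keep only the macroblocks of the active-region within a raw-region.

    raw_rows: one list of macroblocks per row (one slice per row).
    top/bottom and left/right: alignment marks bounding the active-region.
    """
    kept = []
    for row_index, row in enumerate(raw_rows):
        if row_index < top or row_index >= bottom:
            continue                    # whole row lies in the overlap: remove it
        kept.append(row[left:right])    # trim overlap macroblocks on either side
    return kept

raw = [[f"MB({r},{c})" for c in range(8)] for r in range(6)]   # a 6 x 8 raw-region
active = mr_process(raw, top=1, bottom=5, left=2, right=6)     # 4 x 4 active-region
print(active[0])
```
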
  • MRBC unit #9 sends a token to MRBC unit #1 after the last mr-processed slice.
  • Each RIM 220 transfers information from the local memory 218 within one regional processor 206 to the local memory of adjacent processors.
  • the reference images within each MPEG-2 unit are updated by the compression engine during the frame encoding process.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Computing Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)

Abstract

The invention relates to a method and apparatus for encoding and decoding advanced television signals using standard MPEG-2 compression engines while preserving the compression efficiency of those engines. The architecture provides parallel processing with standard MPEG-2 compression engines in an overlapping arrangement that does not degrade compression performance. A video encoder comprises a plurality of regional processors for encoding an input stream of video images. Each video image is divided into regions having overlapping portions, with each processor encoding a particular region of the current video image in the stream. Each regional processor stores a reference frame, based on a previous video image in the stream, in a local memory for use in motion compensation during the encoding process. A reference frame processor coupled to the local memories updates each reference frame with information from reference frames stored in adjacent local memories. The encoded video images are formed of macroblocks, and each regional processor includes means for removing certain macroblocks of the encoded video images corresponding to the overlapping portions and for concatenating the resulting encoded video images with those of the other regional processors to form an output video stream.
EP99903316A 1998-01-26 1999-01-21 Method and apparatus for encoding and decoding advanced television signals Ceased EP1051839A2 (fr)

Applications Claiming Priority (5)

Application Number Priority Date Filing Date Title
US7243698P 1998-01-26 1998-01-26
US72436P 1998-01-26
US5442798A 1998-04-03 1998-04-03
US54427 1998-04-03
PCT/US1999/001410 WO1999038316A2 (fr) 1998-01-26 1999-01-21 Method and apparatus for encoding and decoding advanced television signals

Publications (1)

Publication Number Publication Date
EP1051839A2 (fr) 2000-11-15

Family

ID=26733026

Family Applications (1)

Application Number Title Priority Date Filing Date
EP99903316A Ceased EP1051839A2 (fr) 1998-01-26 1999-01-21 Method and apparatus for encoding and decoding advanced television signals

Country Status (5)

Country Link
EP (1) EP1051839A2 (fr)
JP (1) JP2002502159A (fr)
AU (1) AU2337099A (fr)
CA (1) CA2318272A1 (fr)
WO (1) WO1999038316A2 (fr)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7451797B2 (en) 2003-08-01 2008-11-18 Llaza, S.A. Articulated arm for awnings

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP3393595B2 (ja) * 1998-09-25 2003-04-07 Nippon Telegraph and Telephone Corp Moving picture encoding apparatus and moving picture encoding method
JP2001285876A (ja) * 2000-03-30 2001-10-12 Sony Corp Image encoding apparatus and method, video camera, image recording apparatus, and image transmission apparatus
EP1368786B1 (fr) * 2001-02-09 2006-01-25 Koninklijke Philips Electronics N.V. Software system for implementing image processing functions on a programmable platform of distributed processor environments
KR100605746B1 (ko) 2003-06-16 2006-07-31 Samsung Electronics Co Ltd Block-based motion compensation apparatus and method
US7881546B2 (en) 2004-09-08 2011-02-01 Inlet Technologies, Inc. Slab-based processing engine for motion video
JP2010183305A (ja) 2009-02-05 2010-08-19 Sony Corp Signal processing apparatus and signal processing method

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5212742A (en) * 1991-05-24 1993-05-18 Apple Computer, Inc. Method and apparatus for encoding/decoding image data
EP0577310B1 (fr) * 1992-06-29 2001-11-21 Canon Kabushiki Kaisha Image processing apparatus
JPH0837662A (ja) * 1994-07-22 1996-02-06 Hitachi Ltd Image encoding and decoding apparatus

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See references of WO9938316A3 *


Also Published As

Publication number Publication date
WO1999038316A2 (fr) 1999-07-29
JP2002502159A (ja) 2002-01-22
CA2318272A1 (fr) 1999-07-29
AU2337099A (en) 1999-08-09
WO1999038316A3 (fr) 2000-01-20

Similar Documents

Publication Publication Date Title
EP0862835B1 Method for modifying encoded digital video signals for improved channel utilization by reducing coded B-picture data
US6141059A (en) Method and apparatus for processing previously encoded video data involving data re-encoding.
EP0895694B1 System and method for creating fast-playback video information streams from a video bitstream compressed for normal playback
KR100341055B1 Syntax analyzer for a video decompression processor
US8817885B2 (en) Method and apparatus for skipping pictures
US5862140A (en) Method and apparatus for multiplexing video programs for improved channel utilization
US5623308A (en) Multiple resolution, multi-stream video system using a single standard coder
US7023924B1 (en) Method of pausing an MPEG coded video stream
US5825419A (en) Coding device and decoding device of digital image signal
US20100271463A1 (en) System and method for encoding 3d stereoscopic digital video
EP1161097B1 MPEG decoder
US20100118982A1 (en) Method and apparatus for transrating compressed digital video
JP2001169292A Information processing apparatus and method, and recording medium
WO1997019561A9 Method and device for multiplexing video programs
WO1997019559A9 Method and apparatus for modifying encoded digital video signals for improved channel utilization
CN102438139A Local macroblock information buffer
KR100710290B1 Video decoding apparatus and method
EP1596603B1 Video encoder and method for detecting and encoding noise
JP3649729B2 Apparatus for concealing errors in a digital video processing system
US20100104015A1 (en) Method and apparatus for transrating compressed digital video
EP1051839A2 (fr) Method and apparatus for encoding and decoding advanced television signals
EP2352296A1 Moving picture encoding apparatus and moving picture decoding apparatus
JP2001169278A Stream generation apparatus and method, stream transmission apparatus and method, encoding apparatus and method, and recording medium
JP4906197B2 Decoding apparatus and method, and recording medium
KR100988622B1 Video encoding method, decoding method, video display apparatus, and recording medium therefor

Legal Events

Date Code Title Description
PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

17P Request for examination filed

Effective date: 20000724

AK Designated contracting states

Kind code of ref document: A2

Designated state(s): AT BE CH CY DE DK ES FI FR GB GR IE IT LI LU MC NL PT SE

RIN1 Information on inventor provided before grant (corrected)

Inventor name: HU, YENDO

GRAG Despatch of communication of intention to grant

Free format text: ORIGINAL CODE: EPIDOS AGRA

RIC1 Information provided on ipc code assigned before grant

Free format text: 7H 04N 1/00 A, 7H 04N 7/36 B, 7H 04N 7/50 B

RTI1 Title (correction)

Free format text: METHOD AND APPARATUS FOR MOTION COMPENSATED COMPRESSION OF A DIGITAL TELEVISION SIGNAL

17Q First examination report despatched

Effective date: 20010709

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: THE APPLICATION HAS BEEN REFUSED

18R Application refused

Effective date: 20020125