EP1483918A2 - Method and system for layered video encoding - Google Patents

Method and system for layered video encoding

Info

Publication number
EP1483918A2
EP1483918A2 (application EP03706790A)
Authority
EP
European Patent Office
Prior art keywords
significance
block
level
layer
recited
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
EP03706790A
Other languages
German (de)
English (en)
French (fr)
Inventor
Mihaela Van Der Schaar
Rama Kalluri
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Koninklijke Philips NV
Original Assignee
Koninklijke Philips Electronics NV
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Koninklijke Philips Electronics NV filed Critical Koninklijke Philips Electronics NV
Publication of EP1483918A2 publication Critical patent/EP1483918A2/en
Withdrawn legal-status Critical Current

Links

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/30Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using hierarchical techniques, e.g. scalability
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/30Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using hierarchical techniques, e.g. scalability
    • H04N19/34Scalability techniques involving progressive bit-plane based encoding of the enhancement layer, e.g. fine granular scalability [FGS]
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/102Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
    • H04N19/119Adaptive subdivision aspects, e.g. subdivision of a picture into rectangular or non-rectangular coding blocks
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/169Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
    • H04N19/17Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/46Embedding additional information in the video signal during the compression process

Definitions

  • the present invention relates to video image encoding and more specifically to fractionally encoding enhancement layers of layer encoded video images.
  • layered encoding includes techniques such as Fine Granular Scalability (FGS) encoding and wavelet encoding.
  • FGS encoding encodes video images into a base-layer and an enhancement layer.
  • the base layer represents the minimum image that may be transmitted over a network with an acceptable quality.
  • the enhancement layer represents additional image details that may be transmitted over the network when sufficient residual bandwidth is available.
  • Enhancement layers are encoded in a bit-plane format wherein the most significant bits of each enhancement layer value are stored in a first bit plane and each succeeding bit of each enhancement layer value is stored in a corresponding bit plane. During transmission of the enhancement layer, the values in each bit plane are successively transmitted until the available bandwidth is occupied.
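The bit-plane layout described above can be sketched as follows. This is an illustrative sketch, not code from the patent; the three-plane depth, the sample values, and the function name are assumptions:

```python
def to_bit_planes(residuals, num_planes=3):
    """Return a list of bit planes; planes[0] holds the most
    significant bit of each enhancement-layer magnitude."""
    planes = []
    for p in range(num_planes - 1, -1, -1):   # MSB plane first
        planes.append([(abs(v) >> p) & 1 for v in residuals])
    return planes

residuals = [5, 2, 7, 0]          # magnitudes that fit in 3 bits
planes = to_bit_planes(residuals)
# planes[0] == [1, 0, 1, 0], planes[1] == [0, 1, 1, 0], planes[2] == [1, 0, 1, 0]
```

Transmitting `planes` in order until bandwidth runs out is exactly the successive-refinement behavior the paragraph describes: truncating after plane 0 still leaves a coarse version of every value.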
  • Figure 1 illustrates an FGS fractional bit plane encoder in accordance with the principles of the present invention
  • Figure 2 illustrates a significance mapped enhancement layer bit plane
  • Figure 3a illustrates a flow chart of an exemplary process for identifying significant image areas within an image in accordance with the principles of the invention
  • Figure 3b illustrates a flow chart of an exemplary process for generating a significance map in accordance with the principles of the invention.
  • Figure 4 illustrates a system for determining significance mapped enhancement layer bit planes in accordance with the principles of the invention.
  • a method is disclosed for encoding a video image composed of a plurality of pixel blocks and containing at least one area determined to be significant within a corresponding sub-layer.
  • the method comprises the steps of associating a level of significance with each block of a known size within the at least one significant area, associating a level of significance with each successively larger block depending upon the level of significance of at least one of the blocks of known size contained within the successively larger block, and mapping each of the associated levels of significance.
  • the significance map is transmitted and corresponding image layers may be reconstructed using the significance map.
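The bottom-up association of significance described above can be sketched in Python. This is an illustrative sketch, not code from the patent; the grid representation, the two-level depth, and the function name are assumptions. A larger block is marked significant whenever any of the four smaller blocks it contains is significant:

```python
def propagate_significance(base_sig, levels=2):
    """base_sig: 2-D list of 0/1 flags for the smallest blocks.
    Each pass halves the grid; a larger block is significant
    if any of its four children is."""
    maps = [base_sig]
    cur = base_sig
    for _ in range(levels):
        h, w = len(cur) // 2, len(cur[0]) // 2
        cur = [[int(any((cur[2*r][2*c], cur[2*r][2*c+1],
                         cur[2*r+1][2*c], cur[2*r+1][2*c+1])))
                for c in range(w)] for r in range(h)]
        maps.append(cur)
    return maps

base = [[0, 0, 1, 0],
        [0, 0, 0, 0],
        [0, 0, 0, 0],
        [0, 0, 0, 0]]
maps = propagate_significance(base, levels=2)
# maps[1] == [[0, 1], [0, 0]]  and  maps[2] == [[1]]
```

The returned list of maps is what a significance-map coder could then serialize, coarsest level first, so a decoder can skip whole insignificant regions.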
  • FIG. 1 illustrates a block diagram of an exemplary fractional bit plane encoder 100 in accordance with the principles of the present invention.
  • input signal 110 is applied to summer 115, where it is mixed with motion-compensated images, as will be further discussed.
  • the combined signal is then applied to Discrete Cosine Transform (DCT) 120 to convert pixel values into coefficients.
  • DCT coefficients are next applied to quantizer 125 for quantization.
  • quantized DCT coefficients are then applied to a Variable Length Coder 130 and combiner 175.
  • DCT Discrete Cosine Transform
  • the quantized DCT coefficients are also applied to inverse quantizer 135 to restore the DCT coefficients.
  • the restored DCT coefficients are not exactly the same as the original DCT values because some information is lost in the quantization process.
  • the inverse quantized coefficients are next applied to inverse DCT 140 to recover the original pixel elements after DCT and quantization processing. Similarly, a known difference between the original pixel elements and the restored pixel elements exists because some information is lost in the quantization process.
  • the recovered pixel elements are applied to motion estimator/motion compensator 145.
  • the motion estimated/compensated signal is then applied to summing device 115 to be combined with the original image 110.
  • the summed image 150 is also applied to summing device 155 along with the recovered pixel elements output from inverse DCT 140.
  • the output of summing device 155 is a residual element between the original signal 110 and the recovered base layer image.
  • the residual image is concurrently applied to enhancement layer encoder 160 and significance map encoder 165.
  • the results of significance map encoder 165 are further applied to enhancement encoder 170 for mapping the bit planes as will be more fully described.
  • the outputs of enhancement encoder 170 and significance map encoder 165 are applied to combiner 180, and the combined output is applied to combiner 175.
  • the output 190 of combiner 175 may then be transmitted over a network or stored for subsequent transmission.
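The quantize/dequantize round trip that produces the residual in Figure 1 can be sketched numerically. This is a simplified stand-in, not the patent's implementation: a uniform scalar quantizer replaces the full DCT + quantization chain, and the step size and helper names are assumptions:

```python
def quantize(coeffs, step):
    """Base-layer quantization (stand-in for block 125)."""
    return [round(c / step) for c in coeffs]

def dequantize(levels, step):
    """Inverse quantization (stand-in for block 135)."""
    return [l * step for l in levels]

coeffs = [17, -5, 3, 0]                # stand-ins for DCT coefficients
levels = quantize(coeffs, step=4)      # base layer
restored = dequantize(levels, step=4)  # lossy reconstruction
residual = [c - r for c, r in zip(coeffs, restored)]  # fed to 160/165
# levels == [4, -1, 1, 0], restored == [16, -4, 4, 0], residual == [1, -1, -1, 0]
```

The nonzero `residual` entries are exactly the information the enhancement layer carries; they exist because quantization discarded the fractional part of each coefficient.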
  • Figure 2a illustrates an image frame 200 containing significant information, such as changes in boundaries, color or texture.
  • Significant image areas 210, 215, 220 may be identified using known methods.
  • areas that exhibit little or no change in texture may be identified as non-significant. Consequently, little or no information regarding these areas need be transmitted.
  • the determination of significant areas may be done by reviewing each pixel element.
  • the determination of significant areas may be done by reviewing corresponding DCT coefficients.
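One simple way to make the per-block significance decision is an energy threshold over residual magnitudes, shown below as a hedged sketch. The block size, threshold, and function name are illustrative assumptions; as the text notes, the same decision could instead be made from pixel elements or DCT coefficients:

```python
def significant_blocks(frame, block=2, thresh=10):
    """frame: 2-D list of residual magnitudes.
    Returns one 0/1 significance flag per block."""
    rows, cols = len(frame) // block, len(frame[0]) // block
    sig = [[0] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            energy = sum(frame[r*block + i][c*block + j] ** 2
                         for i in range(block) for j in range(block))
            sig[r][c] = int(energy > thresh)
    return sig

frame = [[0, 0, 3, 4],
         [0, 1, 2, 0],
         [0, 0, 0, 0],
         [0, 0, 1, 0]]
sig = significant_blocks(frame)
# sig == [[0, 1], [0, 0]] — only the top-right block carries real detail
```

Blocks flagged 0 need contribute nothing to the transmitted enhancement layer, which is the bandwidth saving the surrounding paragraphs describe.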
  • Figure 2b illustrates another aspect of the present invention, wherein a significant image area, for example 210, is associated with a plurality of blocks, corresponding macroblocks, and corresponding super-macroblocks.
  • image area 210 is composed of super-macroblocks 222, 224, 226, 228, 230 and 232.
  • Each super-macroblock may be partitioned into macroblocks.
  • super-macroblock 222 is shown partitioned into macroblocks 240, 242, 244 and 246.
  • Each macroblock 240, 242, 244 and 246 may be further partitioned into mini-macroblocks.
  • macroblock 240 is shown partitioned into mini-macroblocks 250, 252, 254, and 256.
  • Each mini-macroblock may be further partitioned into blocks.
  • mini-macroblock 250 is shown partitioned into blocks 260, 262, 264 and 266.
  • each super-macroblock may be similarly partitioned, identified and associated with macroblocks, mini-macroblocks, and blocks.
  • block 260 contains information associated with an 8x8 configuration of pixel elements. Furthermore, mini-macroblock 250 is associated with a 16x16 configuration of pixel elements, macroblock 240 is associated with a 32x32 configuration of pixel elements and super-macroblock 222 is associated with a 64x64 configuration of pixel elements.
  • block 260 is analogous to the DCT encoding of a corresponding block of pixel elements.
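The four-level nesting of Figure 2b (8x8 block inside 16x16 mini-macroblock inside 32x32 macroblock inside 64x64 super-macroblock) can be expressed as a simple lookup. This is an illustrative sketch; the function and dictionary names are assumptions, but the sizes come from the paragraph above:

```python
SIZES = {"block": 8, "mini_macroblock": 16,
         "macroblock": 32, "super_macroblock": 64}

def enclosing_units(x, y):
    """Return, per hierarchy level, the (col, row) index of the
    unit containing pixel (x, y)."""
    return {name: (x // s, y // s) for name, s in SIZES.items()}

units = enclosing_units(70, 10)
# units == {"block": (8, 1), "mini_macroblock": (4, 0),
#           "macroblock": (2, 0), "super_macroblock": (1, 0)}
```

Because each size is exactly double the previous one, every unit index is obtained from the next-finer index by halving, which is what makes the significance hierarchy cheap to build and signal.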
  • Figure 2c illustrates the bit-plane mapping 270 of the identified significant area 210 in bit planes 272, 274, and 276 in accordance with the preferred embodiment of the invention. In this case the enhancement layer is encoded using three bit planes.
  • bit-planes may be any number and there is no intention to limit the bit-plane depth to that shown herein.
  • area 210 and associated super-macroblocks, macroblocks, mini-macroblocks, and blocks may be readily identified.
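Coding only the significant blocks of each of the three bit planes of Figure 2c might look as follows. This is a hypothetical sketch, not the patent's bitstream: the (plane, block, bit) tuple format and the function name are assumptions:

```python
def code_planes(values, sig, num_planes=3):
    """values/sig: per-block residual magnitude and significance flag.
    Emit (plane, block, bit) for significant blocks only, MSB plane first."""
    stream = []
    for p in range(num_planes - 1, -1, -1):
        for b, (v, s) in enumerate(zip(values, sig)):
            if s:  # insignificant blocks are skipped entirely
                stream.append((num_planes - 1 - p, b, (v >> p) & 1))
    return stream

stream = code_planes([6, 3, 0], [1, 0, 1])
# block 1 never appears in the stream:
# stream == [(0, 0, 1), (0, 2, 0), (1, 0, 1), (1, 2, 0), (2, 0, 0), (2, 2, 0)]
```

Truncating `stream` at any point drops only the least significant refinements of the significant area, matching the graceful-degradation goal of FGS.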
  • Figure 3a illustrates a flow chart of an exemplary process 300 for significance mapping in accordance with the principles of the invention.
  • significance mapping is initiated at an arbitrarily selected bit plane associated with the image or picture.
  • the bit plane associated with the most significant bits, i.e., bit-plane 0, is selected at block 305.
  • a significance map associated with the selected bit plane is determined.
  • the significance map associated with the bit- plane is coded.
  • the texture of the blocks identified as being significant is coded and a bit-wise representation of the significance map is generated. This bit-wise representation of the significance map can be decoded at the receiving device to recover the significance map.
  • At block 325, a determination is made whether all the bit planes associated with the image have been processed. If the answer is negative, then a next/subsequent bit plane is selected at block 332 and the significance mapping process continues for the selected next/subsequent bit plane. If, however, the answer is in the affirmative, then a determination is made at block 330 whether all the images have been processed. If the answer is negative, then a next/subsequent image or picture is selected at block 334. The significance mapping process then continues for each bit plane in the selected next/subsequent image or picture.
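The control flow of Figure 3a (blocks 305 through 334) is two nested loops: over images, and within each image over bit planes starting from the MSB plane. The sketch below is illustrative only; `map_significance` and `code_plane` are hypothetical stand-ins for the mapping and coding steps (blocks 310-320):

```python
def encode_sequence(images, num_planes, map_significance, code_plane):
    coded = []
    for image in images:                   # blocks 330/334: next image
        for plane in range(num_planes):    # blocks 305/325/332: planes 0..N-1
            sig = map_significance(image, plane)          # block 310
            coded.append(code_plane(image, plane, sig))   # blocks 315/320
    return coded

# With trivial stand-ins, the loop visits (image, plane) pairs in order:
order = encode_sequence(["im0", "im1"], 2,
                        lambda im, p: None,
                        lambda im, p, s: (im, p))
# order == [("im0", 0), ("im0", 1), ("im1", 0), ("im1", 1)]
```

All planes of one image are finished before the next image starts, matching the flow-chart's inner bit-plane test at block 325 and outer image test at block 330.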
  • Figure 3b illustrates a flow chart of an exemplary significance mapping process 310. In this exemplary process an initial block size and associated minimum and maximum block sizes are determined at block 340.
  • an initial block size associated with the preferred block size is depicted.
  • the block is marked or identified as being insignificant at block 370.
  • a determination is made at block 360 whether the last block has been reached. If the answer is negative, then a next/subsequent block in the bit plane is selected at block 365. Processing continues on the selected next/subsequent block at block 345.
  • Processing then continues on each of the successively larger blocks until the block size exceeds a maximum block size at block 375.
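The inner significance-mapping process of Figure 3b (blocks 340-375) can be sketched as a scan repeated at doubling block sizes until the maximum size is exceeded. This is a hedged illustration, not the claimed method: the any-nonzero-bit significance test and the function name are assumptions:

```python
def map_plane(plane_bits, min_size, max_size):
    """plane_bits: square 2-D list of bits for one bit plane.
    Returns {size: 2-D significance map} for each block size tried."""
    maps, size = {}, min_size
    while size <= max_size:               # block 375: stop past max size
        n = len(plane_bits) // size
        maps[size] = [[int(any(plane_bits[r*size + i][c*size + j]
                               for i in range(size) for j in range(size)))
                       for c in range(n)] for r in range(n)]
        size *= 2                         # successively larger blocks
    return maps

plane = [[1, 0, 0, 0],
         [0, 0, 0, 0],
         [0, 0, 0, 0],
         [0, 0, 0, 1]]
size_maps = map_plane(plane, 2, 4)
# size_maps[2] == [[1, 0], [0, 1]]  and  size_maps[4] == [[1]]
```

Blocks marked 0 at a small size stay insignificant at that level (block 370 in the flow chart), while the larger sizes summarize them for cheap signaling.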
  • FIG. 4 illustrates an exemplary embodiment of a system 400 that may be used for implementing the principles of the present invention.
  • System 400 may represent a TV transmitter or receiving system, a desktop, laptop or palmtop computer, a personal digital assistant (PDA), a video/image storage apparatus such as a video cassette recorder (VCR), a digital video recorder (DVR), a TiVo apparatus, etc., as well as portions or combinations of these and other devices.
  • System 400 may contain one or more input/output devices 402, processors 403, and memories 404, which may access one or more sources 401 that contain video images.
  • Sources 401 may be stored in permanent or semi-permanent media such as a television receiver (SDTV or HDTV), a VCR, RAM, ROM, hard disk drive, optical disk drive or other video image storage devices. Sources 401 may alternatively be accessed over one or more network connections 410 for receiving video from a server or servers over, for example a global computer communications network such as the Internet, a wide area network, a metropolitan area network, a local area network, a terrestrial broadcast system, a cable network, a satellite network, a wireless network, or a telephone network, as well as portions or combinations of these and other types of networks.
  • Input/output devices 402, processors 403, and memories 404 may communicate over a communication medium 406.
  • Communication medium 406 may represent for example, a bus, a communication network, one or more internal connections of a circuit, circuit card or other apparatus, as well as portions and combinations of these and other communication media.
  • Input data from the sources 401 is processed in accordance with one or more software programs that may be stored in memories 404 and executed by processors 403 in order to supply fractionally encoded video images to network 420.
  • the fractionally encoded video images may be transmitted to a storage device, or may be transmitted to a display system for real-time viewing of the encoded video image.
  • Processors 403 may be any means, such as a general-purpose or special-purpose computing system, or may be a hardware configuration, such as a laptop computer, desktop computer, handheld computer, dedicated logic circuit, integrated circuit, Programmable Array Logic (PAL), Application Specific Integrated Circuit (ASIC), etc., that provides a known output in response to known inputs.
  • PAL Programmable Array Logic
  • ASIC Application Specific Integrated Circuit
  • the coding and decoding employing the principles of the present invention may be implemented by computer readable code executed by processor 403.
  • the code may be stored in the memory 404 or read/downloaded from a memory medium such as a CD-ROM or floppy disk (not shown).
  • hardware circuitry may be used in place of, or in combination with, software instructions to implement the invention.
  • the elements illustrated herein may also be implemented as discrete hardware elements.
  • the term processor may represent one or more processing units or computing units in communication with one or more memory units and other devices, e.g., peripherals, connected electronically to and communicating with the at least one processing unit.
  • the devices may be electronically connected to the one or more processing units via internal busses, e.g., ISA bus, microchannel bus, PCI bus, PCMCIA bus, etc., or one or more internal connections of a circuit, circuit card or other device, as well as portions and combinations of these and other communication media or an external network, e.g., the Internet and Intranet.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)
  • Image Processing (AREA)
  • Compression, Expansion, Code Conversion, And Decoders (AREA)
EP03706790A 2002-03-05 2003-03-04 Method and system for layered video encoding Withdrawn EP1483918A2 (en)

Applications Claiming Priority (5)

Application Number Priority Date Filing Date Title
US36259202P 2002-03-05 2002-03-05
US362592P 2002-03-05
US43405502P 2002-12-17 2002-12-17
US434055P 2002-12-17
PCT/IB2003/000789 WO2003075579A2 (en) 2002-03-05 2003-03-04 Method and system for layered video encoding

Publications (1)

Publication Number Publication Date
EP1483918A2 true EP1483918A2 (en) 2004-12-08

Family

ID=27791716

Family Applications (1)

Application Number Title Priority Date Filing Date
EP03706790A Withdrawn EP1483918A2 (en) 2002-03-05 2003-03-04 Method and system for layered video encoding

Country Status (6)

Country Link
EP (1) EP1483918A2 (zh)
JP (1) JP2005519543A (zh)
KR (1) KR20040091682A (zh)
CN (1) CN1640146A (zh)
AU (1) AU2003208500A1 (zh)
WO (1) WO2003075579A2 (zh)

Families Citing this family (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR100931871B1 (ko) * 2005-04-13 2009-12-15 Nokia Corporation Method, apparatus and system for effective FGS encoding and decoding of video data
KR100834757B1 (ko) * 2006-03-28 2008-06-05 Samsung Electronics Co., Ltd. Method of improving entropy coding efficiency, and video encoder and video decoder using the method
US7536395B2 (en) 2006-06-06 2009-05-19 International Business Machines Corporation Efficient dynamic register file design for multiple simultaneous bit encodings
KR100856064B1 (ko) * 2006-06-12 2008-09-02 Kyung Hee University Industry-Academic Cooperation Foundation Method and apparatus for preferential encoding/decoding in FGS coding
EP1873055A1 (en) * 2006-06-30 2008-01-02 Technische Universiteit Delft Ship with bow control surface
DE602007010835D1 (de) * 2007-01-18 2011-01-05 Fraunhofer Ges Forschung Quality-scalable video data stream
US8503527B2 (en) 2008-10-03 2013-08-06 Qualcomm Incorporated Video coding with large macroblocks
US8483285B2 (en) 2008-10-03 2013-07-09 Qualcomm Incorporated Video coding using transforms bigger than 4×4 and 8×8
US8619856B2 (en) 2008-10-03 2013-12-31 Qualcomm Incorporated Video coding with large macroblocks
US8634456B2 (en) * 2008-10-03 2014-01-21 Qualcomm Incorporated Video coding with large macroblocks
CN101527786B (zh) * 2009-03-31 2011-06-01 Xi'an Jiaotong University Method for enhancing the definition of visually important regions in network video
PT2449782T (pt) 2009-07-01 2018-02-06 Thomson Licensing Métodos e aparelhos para sinalizar a intra predição para grandes blocos para codificadores e descodificadores de vídeo
KR101624649B1 (ko) 2009-08-14 2016-05-26 Samsung Electronics Co., Ltd. Method and apparatus for video encoding using hierarchical coded block pattern information, and method and apparatus for video decoding
EP2993904B1 (en) * 2009-10-01 2017-03-22 SK Telecom Co., Ltd. Apparatus for decoding image using split layer
NO331356B1 (no) * 2009-10-16 2011-12-12 Cisco Systems Int Sarl Fremgangsmater, dataprogrammer og anordninger for koding og dekoding av video
ES2496365T3 (es) * 2011-10-24 2014-09-18 Blackberry Limited Encoding and decoding of significance maps using partition selection

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP1062625A4 (en) * 1998-03-20 2005-08-31 Mitsubishi Electric Corp IMAGE CODING WITH LOSS / WITHOUT LOSS OF INTEREST REGIONS
US6754266B2 (en) * 1998-10-09 2004-06-22 Microsoft Corporation Method and apparatus for use in transmitting video information over a communication network
US7245663B2 (en) * 1999-07-06 2007-07-17 Koninklijke Philips Electronis N.V. Method and apparatus for improved efficiency in transmission of fine granular scalable selective enhanced images
US6501397B1 (en) * 2000-05-25 2002-12-31 Koninklijke Philips Electronics N.V. Bit-plane dependent signal compression
US20020080878A1 (en) * 2000-10-12 2002-06-27 Webcast Technologies, Inc. Video apparatus and method for digital video enhancement

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See references of WO03075579A2 *

Also Published As

Publication number Publication date
KR20040091682A (ko) 2004-10-28
WO2003075579A3 (en) 2003-12-31
WO2003075579A2 (en) 2003-09-12
AU2003208500A1 (en) 2003-09-16
AU2003208500A8 (en) 2003-09-16
JP2005519543A (ja) 2005-06-30
CN1640146A (zh) 2005-07-13

Similar Documents

Publication Publication Date Title
KR101196975B1 (ko) Method and apparatus for encoding video color enhancement data, and method and apparatus for decoding video color enhancement data
EP1290868B1 (en) Bit-plane dependent signal compression
EP1483918A2 (en) Method and system for layered video encoding
US20070065005A1 (en) Color space scalable video coding and decoding method and apparatus for the same
US7245663B2 (en) Method and apparatus for improved efficiency in transmission of fine granular scalable selective enhanced images
JPH11513205 (ja) Video encoding apparatus
Li Image compression: The mathematics of JPEG 2000
EP1401208A1 (en) Fine granularity scalability encoding/decoding apparatus and method
EP1961235A1 (en) Method and apparatus for encoding and decoding video signals on group basis
US7406203B2 (en) Image processing method, system, and apparatus for facilitating data transmission
US6760479B1 (en) Super predictive-transform coding
US20060133483A1 (en) Method for encoding and decoding video signal
KR100603592B1 (ko) Intelligent ripple scan apparatus and method using an image quality enhancement factor, and image coding/decoding apparatus and method using the same
US20050213831A1 (en) Method and system for encoding fractional bitplanes
EP1479246A1 (en) Memory-bandwidth efficient fine granular scalability (fgs) encoder
JP2004048607 (ja) Digital image encoding apparatus and digital image encoding method
US7016541B2 (en) Image processing method for facilitating data transmission
US20090074059A1 (en) Encoding method and device for image data
US20040066849A1 (en) Method and system for significance-based embedded motion-compensation wavelet video coding and transmission
Li Image Compression-the Mechanics of the JPEG 2000
Lu et al. Polynomial approximation coding for progressive image transmission
JP2003244443 (ja) Image encoding apparatus and image decoding apparatus
US7519520B2 (en) Compact signal coding method and apparatus
JPH11136521 (ja) Image data processing apparatus
JPH0937250 (ja) Image data decoding apparatus and image data decoding method

Legal Events

Date Code Title Description
PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

17P Request for examination filed

Effective date: 20041005

AK Designated contracting states

Kind code of ref document: A2

Designated state(s): AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HU IE IT LI LU MC NL PT RO SE SI SK TR

AX Request for extension of the european patent

Extension state: AL LT LV MK RO

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: THE APPLICATION HAS BEEN WITHDRAWN

18W Application withdrawn

Effective date: 20070827