WO2009136681A1 - Method for encoding and decoding image, and apparatus for displaying image - Google Patents


Info

Publication number
WO2009136681A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
slice groups
slice
images
decoding
Prior art date
Application number
PCT/KR2008/005453
Other languages
French (fr)
Inventor
Seung Kyun Oh
Jin Seok Im
Seung Jong Choi
Original Assignee
Lg Electronics Inc.
Priority claimed from KR1020080042837A (external priority: KR100988622B1)
Priority claimed from KR1020080042836A (external priority: KR100951465B1)
Application filed by LG Electronics Inc.
Publication of WO2009136681A1


Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20 Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/25 Management operations performed by the server for facilitating the content distribution or administrating data related to end-users or client devices, e.g. end-user or client device authentication, learning user preferences for recommending movies
    • H04N21/266 Channel or content management, e.g. generation and management of keys and entitlement messages in a conditional access system, merging a VOD unicast channel into a multicast channel
    • H04N21/2662 Controlling the complexity of the video stream, e.g. by scaling the resolution or bitrate of the video stream based on the client capabilities
    • H04N21/23 Processing of content or additional data; Elementary server operations; Server middleware
    • H04N21/234 Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs
    • H04N21/2343 Processing of video elementary streams involving reformatting operations of video signals for distribution or compliance with end-user requests or end-user device requirements
    • H04N21/234327 Reformatting operations by decomposing into layers, e.g. base layer and one or more enhancement layers
    • H04N21/238 Interfacing the downstream path of the transmission network, e.g. adapting the transmission rate of a video stream to network bandwidth; Processing of multiplex streams
    • H04N21/2383 Channel coding or modulation of digital bit-stream, e.g. QPSK modulation
    • H04N21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/41 Structure of client; Structure of client peripherals
    • H04N21/426 Internal components of the client; Characteristics thereof
    • H04N21/42607 Internal components of the client for processing the incoming bitstream
    • H04N21/4263 Internal components of the client involving specific tuning arrangements, e.g. two tuners
    • H04N21/43 Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/431 Generation of visual interfaces for content selection or interaction; Content or additional data rendering
    • H04N21/4312 Content rendering involving specific graphical features, e.g. screen layout, special fonts or colors, blinking icons, highlights or animations
    • H04N21/4316 Rendering supplemental content in a region of the screen, e.g. an advertisement in a separate window
    • H04N21/438 Interfacing the downstream path of the transmission network originating from a server, e.g. retrieving encoded video stream packets from an IP network
    • H04N21/4382 Demodulation or channel decoding, e.g. QPSK demodulation

Definitions

  • the present invention relates to a method of encoding an image, a method of decoding an image, and an apparatus for displaying an image, and more particularly, to a method of encoding an image, a method of decoding an image, and an apparatus for displaying an image, in which a plurality of images are respectively mapped to a plurality of slice groups and the slice groups are encoded and decoded.
  • UHD images contain more information than ordinary images.
  • UHDTV is nowadays deemed one of the most important elements for realizing realistic broadcasting, the ultimate goal of digital broadcasting, by providing viewers with improved realism and vividness.
  • Uncompressed UHD video data is generally processed at a speed of 3-8 Gbps, and compressed UHD video data is generally processed at a speed of 100-600 Mbps. Thus, in order to properly process, transmit and store UHD video data, it is necessary to compress UHD video data.
  • moving image data is generally very large in size.
  • various moving image compression standards, such as H.261, VC-1, which is the Society of Motion Picture and Television Engineers (SMPTE) video codec standard, and H.264/AVC, which is the ITU-T and ISO/IEC video codec standard, have been suggested.
  • FIG. 1 illustrates block diagrams of two conventional systems for receiving and decoding a plurality of images.
  • two tuners 101 and 102 may respectively receive two bitstreams transmitted thereto through different frequency bands. Thereafter, two decoders 105 and 106 may respectively decode the two received bitstreams, and an image signal processor 110 may appropriately process the decoded bitstreams provided by the decoders 105 and 106.
  • a tuner 103 may receive a bitstream having a uniform modulation frequency. Then, a demultiplexer 104 may demultiplex the bitstream and may transmit the demultiplexed bitstream to each of decoders 105 and 106. The decoders 105 and 106 may decode the demultiplexed bitstream and may thus restore a plurality of images. Thereafter, an image signal processor 110 may process the restored images.
  • if multiple video contents are transmitted through more than one channel, as shown in FIG. 1(a), more than one tuner may be required to properly process the multiple video contents.
  • more than one decoder may be required to extract and restore the individual bitstreams from the multiplexed bitstream.
  • the present invention provides a method of encoding an image, a method of decoding an image, and an apparatus for displaying an image, which can improve encoding/decoding efficiency by generating a single bitstream using a high-definition (HD) image.
  • a method of encoding an image including respectively mapping a plurality of first images to a plurality of slice groups; and encoding the slice groups.
  • a method of decoding an image including extracting a plurality of encoded slice groups from an input bitstream; decoding the extracted slice groups; and extracting a plurality of first images respectively mapped to the decoded slice groups.
  • an apparatus for displaying an image including one or more display units displaying an image; a decoder extracting a plurality of slice groups from an input bitstream, decoding the extracted slice groups and extracting a plurality of first images respectively mapped to the decoded slice groups; and an image signal processing unit synthesizing the extracted first images into a single image.
  • a single bitstream is generated by respectively mapping a plurality of images to a plurality of slice groups and encoding the slice groups, it is possible to increase encoding/decoding efficiency. In addition, it is possible to effectively process a single bitstream without a requirement of a plurality of tuners and/or a plurality of decoders and thus to simplify the structure of a system.
  • a single bitstream is generated by dividing a high-definition (HD) image into a plurality of partition images, respectively mapping the partition images to a plurality of slice groups and encoding the slice groups, it is possible to increase encoding/decoding efficiency.
  • an HD image is divided into a plurality of partition images and an expanded image including at least parts of some of the partition images is added to each of the partition images, it is possible to prevent the deterioration of the picture quality along the boundaries between the partition images.
  • a plurality of images may be appropriately numbered and may thus be selectively chosen.
  • whichever of the images are not chosen may not necessarily have to be decoded, it is possible to reduce power consumption.
  • FIG. 1 illustrates block diagrams of conventional systems for receiving and decoding a plurality of images
  • FIG. 2 illustrates a flowchart of a method of encoding an image according to an exemplary embodiment of the present invention
  • FIG. 3 illustrates diagrams for explaining the method shown in FIG. 2;
  • FIG. 4 illustrates diagrams of various slice groups
  • FIG. 5 illustrates a diagram showing syntax for representing type-2 slice group position information
  • FIGS. 6 through 8 illustrate diagrams showing the correspondence between a plurality of parameter sets and a plurality of slice groups
  • FIG. 9 illustrates a diagram showing the arrangement of a plurality of partition images of an ultra high-definition (UHD) image
  • FIG. 10 illustrates a diagram showing syntax for identifying the type of parameter set
  • FIG. 11 illustrates a flowchart of a method of encoding an image according to another exemplary embodiment of the present invention
  • FIG. 12 illustrates diagrams for explaining the method shown in FIG. 11;
  • FIG. 13 illustrates a diagram showing syntax for representing slice group position information
  • FIG. 14 illustrates a flowchart of a method of encoding an image according to another exemplary embodiment of the present invention
  • FIG. 15 illustrates diagrams for explaining the method shown in FIG. 14;
  • FIG. 16 illustrates a diagram showing syntax that can be applied to the method shown in FIG. 14;
  • FIG. 17 illustrates a diagram showing the syntax of cropping information that can be used in the method shown in FIG. 14;
  • FIG. 18 illustrates a flowchart of a method of encoding an image according to another exemplary embodiment of the present invention.
  • FIG. 19 illustrates diagrams for explaining the method shown in FIG. 18;
  • FIG. 20 illustrates a flowchart of a method of encoding an image according to another exemplary embodiment of the present invention
  • FIG. 21 illustrates a diagram showing the syntax of cropping information that can be used in the method shown in FIG. 20
  • FIG. 22 illustrates a flowchart of a method of decoding an image according to an exemplary embodiment of the present invention
  • FIG. 23 illustrates a block diagram of an apparatus for displaying an image according to an exemplary embodiment of the present invention
  • FIG. 24 illustrates a diagram for explaining the operation of the apparatus shown in FIG. 23;
  • FIG. 25 illustrates a block diagram of an apparatus for displaying an image according to another exemplary embodiment of the present invention.
  • FIG. 26 illustrates a flowchart of a method of decoding an image according to another exemplary embodiment of the present invention.
  • FIG. 27 illustrates a flowchart of a method of decoding an image according to another exemplary embodiment of the present invention
  • FIG. 28 illustrates a flowchart of a method of decoding an image according to another exemplary embodiment of the present invention.
  • FIG. 29 illustrates a flowchart of a method of decoding an image according to another exemplary embodiment of the present invention.
  • FIG. 2 illustrates a flowchart of a method of encoding an image according to an exemplary embodiment of the present invention
  • FIG. 3 illustrates diagrams for explaining the method shown in FIG. 2
  • FIG. 4 illustrates diagrams of various slice groups
  • FIG. 5 illustrates a diagram showing syntax for representing type-2 slice group position information.
  • a plurality of images may be respectively mapped to a plurality of slice groups (S210).
  • the images may be independent video contents regarding movies, dramas, sports, and shopping, as shown in FIG. 3(a).
  • the images may represent different viewing angles, as shown in FIG. 3(b).
  • an ultra high-definition (UHD) image 320 or 325 may be divided into four partition images, i.e., high-definition (HD) images 0 through 3 (310 or 315), but the present invention is not restricted to this.
  • a sequence may be divided into a number of pictures, each of the pictures may include a plurality of slice groups, and each of the slice groups may include a plurality of slices.
  • FIG. 4 illustrates a diagram showing various types of slice group maps provided by the H.264/AVC standard.
  • the slice group map types include type 0 (interleaved), type 1 (dispersed), type 2 (foreground with leftover), type 3 (box-out), type 4 (raster scan), type 5 (wipe), and type 6 (explicit).
  • slice groups were originally designed for restoring an erroneous image by correcting errors in the image. However, in the exemplary embodiment of FIG. 3, slice groups may be used for processing each of a plurality of partition images of an image. Referring to FIG. 3, the HD images 0 through 3 may be respectively mapped to a plurality of type-2 slice groups (i.e., a plurality of slice groups 0 through 3).
  • top_left indicates a top left portion of each slice group
  • bottom_right indicates a bottom right portion of each slice group. More specifically, “top_left” corresponds to MBA0T, MBA1T, MBA2T, and MBA3T of the slice groups 0 through 3 of FIG. 3, and “bottom_right” corresponds to MBA0B, MBA1B, MBA2B, and MBA3B of the slice groups 0 through 3 of FIG. 3.
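The rectangular mapping above can be sketched in code. The following Python snippet is illustrative only (the function and variable names are ours, not the patent's or the standard's): it expands a type-2 slice group, given as "top_left"/"bottom_right" macroblock addresses, into the macroblock addresses it covers, for a toy 4x4-macroblock picture split into four quadrants as in FIG. 3.

```python
def slice_group_rect(top_left, bottom_right, pic_width_in_mbs):
    """Expand a type-2 (foreground) slice group, given by its top-left and
    bottom-right macroblock addresses, into the set of macroblock addresses
    it covers (raster-scan addressing, as in H.264/AVC)."""
    yt, xt = divmod(top_left, pic_width_in_mbs)
    yb, xb = divmod(bottom_right, pic_width_in_mbs)
    return {y * pic_width_in_mbs + x
            for y in range(yt, yb + 1)
            for x in range(xt, xb + 1)}

# Toy 4x4-macroblock picture split into four 2x2 quadrants
# (slice groups 0 through 3, analogous to FIG. 3):
W = 4
groups = [slice_group_rect(0, 5, W),    # slice group 0 (top left)
          slice_group_rect(2, 7, W),    # slice group 1 (top right)
          slice_group_rect(8, 13, W),   # slice group 2 (bottom left)
          slice_group_rect(10, 15, W)]  # slice group 3 (bottom right)
```

Together the four groups cover all 16 macroblocks without overlap, which is what allows each slice group to carry one complete partition image.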
  • the slice groups may be encoded (S220). More specifically, the encoding of the slice groups may involve performing frequency conversion, quantization, entropy encoding, deblocking filtering and motion estimation and compensation. Frequency transform and quantization may be performed as a single process.
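As a small illustration of why the last point holds, the sketch below applies the 4x4 H.264/AVC integer core transform to one row of samples and then quantizes the result; in the real codec the transform's normalisation factors are folded into the quantization division, so the two steps run as a single process. The step size and helper names here are illustrative, not taken from the patent.

```python
# 4x4 integer core transform matrix used by H.264/AVC (1-D form)
C = [[1, 1, 1, 1],
     [2, 1, -1, -2],
     [1, -1, -1, 1],
     [1, -2, 2, -1]]

def transform_1d(x):
    """Apply the 1-D integer transform to four samples."""
    return [sum(C[i][j] * x[j] for j in range(4)) for i in range(4)]

def quantize(coeffs, qstep):
    # In H.264/AVC the transform's scaling factors are folded into this
    # division, which is why transform and quantization can be performed
    # as a single process.
    return [round(c / qstep) for c in coeffs]

coeffs = transform_1d([1, 1, 1, 1])  # a flat row yields only a DC term
levels = quantize(coeffs, 2)
```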
  • a plurality of images may be respectively mapped to a plurality of slice groups, and the slice groups may be encoded, thereby generating a single bitstream using existing syntax elements without a requirement of additional syntax elements.
  • a bitstream obtained by encoding the slice groups 0 through 3 respectively corresponding to the HD images 0 through 3 (310 or 315) may constitute the UHD image 320 or 325.
  • the UHD image 320 or 325 may have a definition of 4K (e.g., 4096x2096) or 8K (e.g., 7680x4320).
  • the UHD image 320 or 325 may be displayed by extracting the HD images 0 through 3 (310 or 315).
  • one or more parameter sets may also be encoded.
  • the HD images 0 through 3 (310) of the UHD image 320 may have different contents.
  • the HD images 0 through 3 (310) may be converted to a raw format, and may then be encoded so as to have the same parameter sets.
  • the parameter sets that can be encoded in operation S220 may include a sequence parameter set (SPS) and a picture parameter set (PPS).
  • this type of encoding may not be able to properly reflect the properties of each of the HD images 0 through 3 (310).
  • the HD images 0 through 3 (310) may need to be decoded first and may then need to be encoded so as to have the same parameter set.
  • the slice groups 0 through 3 respectively corresponding to the HD images 0 through 3 (310) may share the same parameter set with each other or may reference each other's parameter set. That is, the number of parameter sets that can be used to decode a picture of a UHD image may range from a minimum of 1 to a maximum of the number of slice groups of the UHD image.
  • FIG. 6 illustrates a diagram showing the correspondence between a plurality of parameter sets and a plurality of slice groups, according to an exemplary embodiment of the present invention.
  • a plurality of slice groups may respectively correspond to a plurality of parameter sets. More specifically, a plurality of images having different contents, as shown in FIG. 3 (a), may correspond to different parameter sets according to their properties and their providers. Therefore, a plurality of slice groups respectively corresponding to the images may reference different parameter sets, as shown in FIG. 6.
  • FIG. 6 illustrates the case where a plurality of images have different contents and thus have different parameter sets.
  • the number of parameter sets that constitute a UHD image may vary according to the properties of the UHD image.
  • a plurality of parameter sets referenced by a plurality of slice groups may need to be transmitted before they are needed by the slice groups.
  • referring to FIG. 6, the order in which slice group#1 through slice group#4 are transmitted is not restricted to the order in which slice group#1 through slice group#4 are arranged. That is, slice group#1 through slice group#4 may be transmitted in any order as long as they can be decoded before they are needed.
  • the identifiers (ID) of a plurality of PPSs respectively referenced by a plurality of slice groups of a UHD image may be specified in a header, as prescribed in the H.264/AVC standard. In this manner, the slice groups of the UHD image may reference a plurality of SPSs respectively corresponding to the PPSs.
  • slice group#1 designates PPS#1, and PPS#1 designates SPS#1.
  • slice group#1 references both SPS#1 and PPS#1.
  • slice group#1 may designate both PPS#1 and SPS#1 at the same time.
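That referencing chain can be sketched as two table lookups: a slice header carries only a PPS id, and the SPS is reached through the PPS. The table contents below are illustrative stand-ins, not values from the patent.

```python
# Illustrative parameter-set tables (id -> contents)
sps_table = {1: {"pic_width_in_mbs_minus1": 59,
                 "pic_height_in_map_units_minus1": 33}}
pps_table = {1: {"seq_parameter_set_id": 1}}

def resolve_parameter_sets(pic_parameter_set_id):
    """Follow the slice-header PPS id to the PPS, then to the SPS,
    mirroring how slice group#1 reaches both PPS#1 and SPS#1."""
    pps = pps_table[pic_parameter_set_id]
    sps = sps_table[pps["seq_parameter_set_id"]]
    return pps, sps
```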
  • if a plurality of slice groups reference different parameter sets (such as SPS#1 through SPS#4 and PPS#1 through PPS#4), as shown in FIG. 6, additional information regarding an image constituted by the slice groups may be required.
  • the additional information may include a plurality of synthesized parameter sets.
  • the synthesized parameter sets will hereinafter be described in detail with reference to FIG. 7.
  • FIG. 7 illustrates a diagram showing the correspondence between a plurality of parameter sets and a plurality of slice groups, according to another exemplary embodiment of the present invention.
  • the exemplary embodiment of FIG. 7 is almost the same as the exemplary embodiment of FIG. 6 except that a pair of synthesized parameter sets (i.e., SPS#0 and PPS#0) for controlling, for example, a UHD image, are additionally provided.
  • a plurality of HD images, i.e., slice group#1 through slice group#4, may reference PPS#1 through PPS#4, respectively, and SPS#1 through SPS#4, respectively. Since SPS#1 through SPS#4 are individual parameter sets exclusively for slice group#1 through slice group#4, respectively, SPS#1 through SPS#4 may be insufficient to properly constitute a UHD image based on a plurality of HD images. Therefore, the synthesized parameter sets, i.e., SPS#0 and PPS#0, may be additionally provided. SPS#0 and PPS#0 may include information for controlling a UHD image and may not be referenced by any of slice group#1 through slice group#4.
  • SPS#0 may include various information such as the size of a UHD image.
  • SPS#0 may include first macroblock quantity information indicating the number of macroblocks that lie along the latitudinal direction of a UHD image and second macroblock quantity information indicating the number of macroblocks that lie along the longitudinal direction of the UHD image.
  • the first macroblock quantity information may be labeled as "pic_width_in_mbs_minus1", and the second macroblock quantity information may be labeled as "pic_height_in_map_units_minus1".
  • if a UHD image is a progressively-encoded image having a definition of 3840x2176 and each macroblock of the UHD image has a size of 16x16, "pic_width_in_mbs_minus1" of SPS#0 may have a value of 239 and "pic_height_in_map_units_minus1" of SPS#0 may have a value of 135.
  • if a UHD image is divided into a plurality of HD images having the same size, as shown in FIG. 3(a), and the HD images are respectively mapped to slice group#1 through slice group#4, "pic_width_in_mbs_minus1" of each of SPS#1 through SPS#4 may have a value of 59, and "pic_height_in_map_units_minus1" of each of SPS#1 through SPS#4 may have a value of 33.
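The syntax values above follow arithmetically from the picture dimensions in 16x16 macroblocks; a minimal sketch (the helper name is ours, not the standard's):

```python
MB = 16  # macroblock size in H.264/AVC

def mbs_minus1(pixels):
    """Number of macroblocks along one dimension, minus one, as carried
    by SPS syntax elements such as pic_width_in_mbs_minus1."""
    return pixels // MB - 1

# Synthesized SPS#0 for a progressive 3840x2176 UHD image:
uhd_width_minus1 = mbs_minus1(3840)   # 239
uhd_height_minus1 = mbs_minus1(2176)  # 135

# The per-partition values of 59 and 33 correspond to 960x544-pixel
# partition images:
hd_width_minus1 = mbs_minus1(960)     # 59
hd_height_minus1 = mbs_minus1(544)    # 33
```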
  • SPS#0 may include various information necessary for controlling a UHD image, for example, the length and width of the UHD image. Therefore, it is possible to easily determine how many slice groups constitute a UHD image based on SPS#0.
  • PPS#1 through PPS#4 may be referenced by a plurality of partition images.
  • parameter information of PPS#0 may not be used in connection with any one of PPS#1 through PPS#4.
  • if a plurality of images, i.e., a plurality of HD images, constituting a UHD image are separate images, as shown in FIG. 3(a), macroblock coordinate information or motion vector information may be classified as HD image information, rather than UHD image information.
  • FIGS. 6 and 7 can be applied not only to a plurality of video contents that are independent from one another but also to a plurality of multiple viewpoint images that are synchronized and can thus be encoded using the same parameters. That is, a plurality of individual parameter sets may be respectively used for a plurality of slice groups.
  • FIG. 8 illustrates a diagram showing the correspondence between a plurality of parameter sets and a plurality of slice groups, according to another exemplary embodiment of the present invention.
  • the exemplary embodiment of FIG. 8 is different from the exemplary embodiment of FIG. 7 in that a plurality of slice groups correspond to the same parameter set. That is, referring to FIG. 8, a plurality of slice groups, i.e., slice group#1 through slice group#4, all correspond to PPS#1, and PPS#1 corresponds to SPS#1.
  • FIG. 9 illustrates a diagram showing the arrangement of a plurality of partition images of a UHD image.
  • MBH0 indicates the number of macroblocks that lie along the latitudinal direction of a UHD image
  • MBV0 indicates the number of macroblocks that lie along the longitudinal direction of the UHD image.
  • a UHD image may include a plurality of partition images 0 through 7.
  • the partition images 0 through 7 may be respectively mapped to a plurality of slice groups, as described above.
  • the partition images 0 through 7 may be sequentially arranged in a direction from the left to the right and then in a direction from the top to the bottom.
  • the sum of the lengths (in macroblocks) of a number of partition images arranged side by side in a row direction may be the same as the length (in macroblocks) of a UHD image, i.e., MBH0.
  • the sum of the widths (in macroblocks) of a number of partition images arranged side by side in a column direction may be the same as the width (in macroblocks) of a UHD image, i.e., MBV0.
  • the partition image 0 may be decoded and may then be disposed at the upper left corner of a UHD image.
  • the partition image 1 may be disposed next to the partition image 0.
  • the partition image 2, having the same length as the partition image 1, may be disposed below the partition image 1 because the width (i.e., MBV2) of the partition image 1 is less than the width (i.e., MBV1) of the partition image 0.
  • the partition image 3 may be disposed below the partition image 2. Since the sum of the widths of the partition images 1 through 3, i.e., the sum of MBV2, MBV3 and MBV4, is the same as the width of the partition image 0, i.e., MBV1, the partition images 4 through 7 may all be disposed below the partition image 0 or 3.
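The constraint that the partition lengths and widths sum to MBH0 and MBV0 amounts to requiring that the partitions tile the UHD canvas exactly. Below is a hedged sketch (the layout and function name are ours, not the patent's) that checks this for a simple 2x2 split of a 240x136-macroblock image:

```python
def tiles_exactly(partitions, mbh0, mbv0):
    """Return True if the (x, y, w, h) rectangles, in macroblock units,
    cover the mbh0 x mbv0 canvas exactly once with no overlap."""
    covered = set()
    for x, y, w, h in partitions:
        for yy in range(y, y + h):
            for xx in range(x, x + w):
                if (xx, yy) in covered:
                    return False  # partitions overlap
                covered.add((xx, yy))
    return len(covered) == mbh0 * mbv0

# 2x2 split of a 240x136-macroblock UHD image into four equal partitions:
quads = [(0, 0, 120, 68), (120, 0, 120, 68),
         (0, 68, 120, 68), (120, 68, 120, 68)]
```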
  • FIG. 10 illustrates a diagram showing syntax for identifying the type of parameter set.
  • it is possible to determine whether a given parameter set is a parameter set for a UHD image or a parameter set for a partition image of a UHD image based on information labeled as "reserved_zero_4bits". More specifically, it is possible to conclude that a given SPS is not a parameter set for a partition image by decoding "reserved_zero_4bits", which is yet to be specified in the H.264/AVC standard.
  • in this case, a PPS referencing such an SPS may be determined to be a parameter set for configuring a UHD image.
  • alternatively, it is possible to determine whether a given parameter set is a synthesized parameter set for a UHD image or an individual parameter set for a partition image of a UHD image based on whether the given parameter set is referenced by a slice group and based on parameter values indicating the size of an image, as described above with reference to FIG. 9.
  • FIG. 11 illustrates a flowchart of a method of encoding an image according to another exemplary embodiment of the present invention
  • FIG. 12 illustrates diagrams for explaining the method shown in FIG. 11
  • FIG. 13 illustrates a diagram showing syntax for representing slice group position information.
  • an image may be divided into a plurality of partition images
  • the image may be a UHD image having a definition of 4K (e.g., 4096x2096) or 8K (e.g., 7680x4320).
  • a UHD image 1210 may be divided into a plurality of partition images, i.e., a plurality of HD images 0 through 3 (1220).
  • the UHD image 1210 is illustrated in FIG. 12 as being divided into four HD images, but the present invention is not restricted to this.
  • the partition images may be respectively mapped to a plurality of slice groups (S1120).
  • the HD images 0 through 3 may be mapped to a plurality of slice groups 0 through 3, respectively.
  • the partition images 0 through 3 may be respectively mapped to a plurality of type-2 slice groups.
  • top_left indicates a top left portion of each slice group
  • bottom_right indicates a bottom right portion of each slice group.
  • top_left corresponds to MBA0T, MBA1T, MBA2T, and MBA3T of the slice groups 0 through 3 of FIG. 12
  • bottom_right corresponds to MBA0B, MBA1B, MBA2B, and MBA3B of the slice groups 0 through 3 of FIG. 12.
  • a UHD image may be divided into a plurality of partition images, and the partition images may be respectively mapped to a plurality of slice groups. In this manner, it is possible to encode the UHD image using existing syntax elements without a requirement of additional syntax elements.
  • if the partition images are instead encoded separately, bitstreams corresponding to the number of partition images are generated. Thus, it is necessary to multiplex the bitstreams. In addition, more than one processor is required to process the bitstreams.
  • a plurality of partition images are respectively mapped to a plurality of slice groups, and the slice groups are encoded, thereby generating a single bitstream.
  • the slice groups may be encoded (S1130).
  • the encoding of the slice groups may involve performing frequency conversion, quantization, entropy encoding, deblocking filtering and motion estimation and compensation. Frequency transform and quantization may be performed as a single process.
  • the encoded slice groups may be synthesized, thereby obtaining the UHD image 1240 of FIG. 12.
  • a plurality of partition images of an image are respectively mapped to a plurality of slice groups, and the slice groups are encoded, thereby generating a single bitstream.
  • FIG. 14 illustrates a flowchart of a method of encoding an image according to another exemplary embodiment of the present invention
  • FIG. 15 illustrates diagrams for explaining the method shown in FIG. 14
  • FIG. 16 illustrates a diagram showing syntax that can be applied to the method shown in FIG. 14,
  • FIG. 17 illustrates a diagram showing the syntax of cropping information that can be used in the method shown in FIG. 14.
  • the method shown in FIG. 14 is almost the same as the method shown in FIG. 11 except that an expanded image is added to each of a plurality of partition images obtained by dividing an image.
  • a UHD image 1510 may be divided into a plurality of partition images 1520 (i.e., a plurality of HD images 0 through 3), and an expanded image 1530 may be added to each of the partition images 1520 (S1410).
  • the expanded image 1530 may include at least parts of some of the partition images 1520. More specifically, the expanded image 1530 may be added to each of the partition images 1520 and may be disposed along the boundaries between the partition images 1520. Referring to FIG. 15, the expanded image 1530 may be added to, for example, the upper left partition image 1520, i.e., the HD image 0, so as to surround the right and lower sides of the HD image 0.
  • the expanded image 1530 may include one or more macroblocks.
  • the number of macroblocks included in the expanded image 1530 may vary according to the color and the format of the UHD image 1510.
  • a UHD image is divided into a plurality of partition images, and the partition images are encoded. Thereafter, the encoded images are synthesized, thereby restoring the UHD image.
  • the boundaries between a pair of adjacent partition images in the restored UHD image may appear prominently compared to other portions of the restored UHD image due to the differences between the values of pixels on one side of the partition image boundary and the values of pixels on the other side of the partition image boundary.
  • the expanded image 1530 may be added to each of the partition images 1520 (S1410).
  • the addition of the expanded image 1530 to each of the partition images 1520 may be performed in consideration of whether it is necessary to perform deblocking filtering.
  • the partition images may be respectively mapped to a plurality of slice groups (S1420). More specifically, the partition images 1520 may be respectively mapped to a plurality of slice groups. If the same expanded image 1530 is added to each of the partition images 1520, the slice groups respectively corresponding to the partition images 1520 may have the same expanded image 1530 in common. That is, the slice groups respectively corresponding to the partition images 1520 may be allowed to have an overlap therebetween and may still be processed independently from one another.
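A minimal sketch of the expansion step is shown below, under the assumption that the expanded image is one macroblock wide and is attached to the right and lower sides of the upper-left partition (as for HD image 0 in FIG. 15). The function name and the rectangle representation are hypothetical illustrations, not the patent's normative procedure.

```python
# Hypothetical sketch: extend a partition image's rectangle (in macroblock
# coordinates, inclusive) by an expanded image along its right and lower
# sides, so that adjacent slice groups overlap along partition boundaries.
def expand_partition(x0, y0, x1, y1, mbs_right, mbs_bottom):
    """Return the expanded rectangle covering partition + expanded image."""
    return (x0, y0, x1 + mbs_right, y1 + mbs_bottom)

# HD image 0 assumed to occupy macroblocks (0,0)..(119,67); expand by 1 MB:
assert expand_partition(0, 0, 119, 67, 1, 1) == (0, 0, 120, 68)
```

The expanded rectangle then becomes the slice group that is actually encoded; the extra macroblocks overlap the neighboring partitions.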
  • the slice groups may be encoded (S1430).
  • the encoding of the slice groups may involve performing frequency conversion, quantization, entropy encoding, deblocking filtering and motion estimation and compensation.
  • the encoding of the slice groups may also involve encoding information indicating whether the slice groups have the same expanded image in common.
  • the information indicating whether the slice groups have the same expanded image in common may be labeled as "slice_group_overlap_flag".
  • "slice_group_overlap_flag” may be included in an H.264/AVC picture layer as a new syntax element. If “slice_group_overlap_flag” has a value of 1, it may be determined that there is an overlap between a plurality of slice groups, i.e., it may be determined that the slice groups have the same expanded image in common. On the other hand, if “slice_group_overlap_flag” has a value of 0, it may be determined that the slice groups have no expanded image in common. This is clearly distinguished from the H.264/AVC standard stating that a macroblock belongs to only one slice group.
  • cropping information necessary for cropping the expanded image 1530 may also be encoded.
  • the cropping information may be labeled as “slice_cropping_top_left” or “slice_cropping_bottom_right”, as shown in FIG. 16.
  • “slice_cropping_top_left” and “slice_cropping_bottom_right” may be added to an H.264/AVC picture layer as new syntax elements.
  • “slice_cropping_top_left” may indicate a top left boundary portion of each slice group excluding an expanded image
  • “slice_cropping_bottom_right” may indicate a right bottom boundary portion of each slice group excluding an expanded image.
  • “slice_cropping_top_left” may correspond to MBA0TC, MBA1TC, MBA2TC, and MBA3TC of a plurality of slice groups 0 through 3
  • “slice_cropping_bottom_right” may correspond to MBA0BC, MBA1BC, MBA2BC, and MBA3BC of the slice groups 0 through 3.
  • the cropping information may indicate the boundaries of a partition image excluding an expanded image.
  • a plurality of offset values indicating how far the boundaries of a partition image are from the boundaries of a slice group, i.e., “frame_crop_left_offset”, “frame_crop_right_offset”, “frame_crop_top_offset”, and “frame_crop_bottom_offset”, may be used as the cropping information, as described above with reference to FIG. 17. That is, “left_offset”, “right_offset”, “top_offset”, and “bottom_offset” may be used as the cropping information.
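Applying such offsets can be sketched as follows. The offsets are assumed here to be in pixels, the images are represented as 2-D lists, and the helper name is hypothetical; only the element names are taken from the text.

```python
# Hypothetical sketch: recover the partition image from a decoded slice
# group that carries an expanded border, using frame-crop style offsets
# (left/right/top/bottom, assumed to be in pixel units).
def crop_partition(slice_group, left_offset, right_offset,
                   top_offset, bottom_offset):
    """slice_group: 2-D list of pixel rows; returns the cropped partition."""
    h = len(slice_group)
    w = len(slice_group[0])
    return [row[left_offset:w - right_offset]
            for row in slice_group[top_offset:h - bottom_offset]]

# A 6x6 group with a 1-pixel expanded border on the right and bottom only
# (as for the upper-left HD image 0 in FIG. 15):
group = [[(x, y) for x in range(6)] for y in range(6)]
part = crop_partition(group, 0, 1, 0, 1)
assert len(part) == 5 and len(part[0]) == 5
```

For the upper-left partition only the right and bottom offsets are non-zero, matching the expanded image surrounding its right and lower sides.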
  • if a motion vector in one slice group points to another slice group and motion vector estimation is confined within each slice group, pixel filling may be performed.
  • if a motion vector in one slice group points to another slice group and motion vector estimation can be performed on a slice group with reference to other slice groups, the slice group indicated by the motion vector, except the expanded image (i.e., the image to be cropped), may be referenced.
  • a UHD image may be divided into a plurality of partition images, an expanded image may be added to each of the partition images, the partition images may be respectively mapped to a plurality of slice groups, and each of the slice groups may be encoded.
  • FIG. 18 illustrates a flowchart of a method of encoding an image according to another exemplary embodiment of the present invention
  • FIG. 19 illustrates a diagram for explaining the method shown in FIG. 18.
  • an image may be divided into a number of layers, and each of the layers may be divided into a plurality of partition images (S1810).
  • Scalable encoding is a method of scalably encoding a plurality of layers in consideration of time, space or signal-to-noise ratio (SNR). Scalable encoding, and particularly spatial scalable encoding, will hereinafter be described in detail.
  • Spatial scalable encoding is characterized by dividing an image into a number of layers, compressing the layers, extracting some of the layers that can be restored during a decoding operation, and restoring an image having an appropriate definition based on the extracted layers.
  • FIG. 19 illustrates a diagram showing how to perform spatial scalable coding on a UHD image.
  • a UHD image may be divided into two layers: an enhancement layer and a base layer.
  • the enhancement layer may be divided into four slice groups.
  • the base layer may not include any slice group or may include only one slice group.
  • the definition of the enhancement layer and the base layer may be about 1/4 of the definition of the UHD image.
  • each of the layers of the UHD image may be divided into one or more partition images.
  • an enhancement layer may be divided into four partition images, and a base layer may be divided into one partition image.
  • the partition images of each of the layers of the UHD image may be respectively mapped to a plurality of slice groups (S1820). Thereafter, each of the slice groups may be encoded (S1830).
  • the mapping of the partition images of each of the layers of the UHD image to slice groups and the encoding of the slice groups have already been described above with reference to FIG. 11, and thus, detailed descriptions thereof will be omitted.
  • FIG. 20 illustrates a flowchart of a method of encoding an image according to another exemplary embodiment of the present invention
  • FIG. 21 illustrates a diagram showing syntax that can be applied to the method shown in FIG. 20.
  • the method shown in FIG. 20 is almost the same as the method shown in FIG. 18 except that each of a plurality of partition images obtained by dividing a UHD image has an expanded image added thereto.
  • an image may be divided into a number of layers, each of the layers may be divided into one or more partition images, and an expanded image may be added to each of the partition images of each of the layers (S2010).
  • the expanded image may include at least parts of some of the partition images.
  • the expanded image may be added to each of the partition images of each of the layers and may be disposed along the boundaries between the partition images of each of the layers.
  • each of the partition images of each of the layers may have an expanded image added thereto, and the expanded image may include at least parts of some of the partition images.
  • the expanded image may include one or more macroblocks. The number of macroblocks included in the expanded image may vary according to the color and the format of the original image.
  • the partition images of each of the layers may be respectively mapped to a plurality of slice groups (S2020). If the same expanded image is added to each of the partition images of each of the layers, the slice groups may have the same expanded image in common. That is, the slice groups may be allowed to have an overlap therebetween and may thus be able to be processed independently from one another.
  • the slice groups may be encoded (S2030).
  • the encoding of the slice groups may involve performing frequency conversion, quantization, entropy encoding, deblocking filtering and motion estimation and compensation.
  • the encoding of each of the slice groups may also involve encoding information indicating whether the slice groups have the same expanded image in common.
  • the information indicating whether the slice groups have the same expanded image in common may be labeled as "slice_group_overlap_flag", as shown in FIG. 21.
  • cropping information necessary for cropping the expanded image may also be encoded.
  • the cropping information may be labeled as "slice_cropping_top_left” or “slice_cropping_bottom_right”.
  • “slice_cropping_top_left” and “slice_cropping_bottom_right” may be added to “picture_layer_svc_extension” as new syntax elements or may be added to an H.264/AVC picture layer as new syntax elements.
  • the cropping information may indicate the boundaries of a partition image excluding an expanded image.
  • “left_offset”, “right_offset”, “top_offset”, and “bottom_offset” may be used as the cropping information, as described above.
  • a plurality of slice groups of an upper layer may correspond to and reference a plurality of slice groups of a lower layer.
  • information indicating the correspondence between the slice groups of the upper layer and the slice groups of the lower layer may be additionally provided in order for the slice groups of the upper layer to efficiently use the slice groups of the lower layer. That is, information regarding the hierarchy among a plurality of slice groups of each layer may be additionally provided, and may thus be used to restore an image.
  • “no_inter_layer_pred_flag” may be encoded as information indicating whether a predetermined layer references at least one slice group of its underlying layer. If “no_inter_layer_pred_flag” has a value of 1, it may be determined that the predetermined layer does not reference any slice group of its underlying layer. On the other hand, if “no_inter_layer_pred_flag” has a value of 0, it may be determined that the predetermined layer references at least one slice group of its underlying layer. In this case, information (i.e., "lower_layer_slice_group_id") indicating the slice group of the underlying layer of the predetermined layer referenced by the predetermined layer may be encoded.
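The inter-layer reference decision described above might look like the sketch below. Only the quoted syntax element names come from the text; the function and its return convention are illustrative assumptions.

```python
# Hypothetical sketch of the inter-layer reference decision: given the flag
# and (optionally) the referenced lower-layer slice group id, return the id
# of the lower-layer slice group to reference, or None if there is none.
def lower_layer_reference(no_inter_layer_pred_flag,
                          lower_layer_slice_group_id=None):
    if no_inter_layer_pred_flag == 1:
        return None                      # no reference to the underlying layer
    return lower_layer_slice_group_id    # flag == 0: reference this group

assert lower_layer_reference(1) is None
assert lower_layer_reference(0, 2) == 2
```

This lets each enhancement-layer slice group name exactly which base-layer slice group it predicts from.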
  • FIG. 22 illustrates a flowchart of a method of decoding an image according to an exemplary embodiment of the present invention.
  • a plurality of encoded slice groups may be extracted from an input bitstream (S2210).
  • the input bitstream may be a bitstream obtained by the method shown in FIG. 2 and may include the encoded slice groups.
  • the extraction of the encoded slice groups may be performed with reference to a plurality of parameter sets (such as SPS#0 through SPS#4 and PPS#0 through PPS#4) respectively corresponding to the encoded slice groups and information (i.e., "reserved_zero_4bits" of FIG. 10) indicating whether each of the parameter sets is a synthesized parameter set for all the encoded slice groups or an individual parameter set exclusively for a corresponding encoded slice group.
  • the parameter sets respectively corresponding to the encoded slice groups and the information "reserved_zero_4bits" may be decoded first. Thereafter, the encoded slice groups may be extracted from the input bitstream. Thereafter, the extracted slice groups may be decoded (S2220).
  • the decoding of the extracted slice groups may involve performing entropy decoding, inverse quantization, inverse transform, deblocking filtering, and motion compensation and decoding slice group position information such as "top_left” and "bottom_right” of FIG. 5.
  • Inverse quantization and inverse transform may be performed as a single process.
  • a plurality of images respectively mapped to the decoded slice groups may be extracted (S2230).
  • the extraction of the images respectively mapped to the decoded slice groups may be performed by using MBA0T, MBA1T, MBA2T, MBA3T, MBA0B, MBA1B, MBA2B, and MBA3B of FIG. 3, and referencing decoded slice group position information (e.g., "top_left" and "bottom_right").
  • the extracted images may be synthesized into a single image.
  • the extracted images may be separate images having different contents, as shown in FIG. 3(a).
  • the extracted images may be partition images of the same image and may represent different viewpoints, as shown in FIG. 3(b).
  • a plurality of partition images of an image may be numbered, and this will be described later in detail with reference to FIG. 24.
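The synthesis of the extracted images into a single image can be sketched as follows, assuming a 2x2 grid of equally sized partitions represented as 2-D lists; the helper function is an illustration, not the patent's procedure.

```python
# Hypothetical sketch: stitch four decoded partition images, laid out
# row-major in a 2x2 grid, back into a single frame.
def synthesize(parts):
    """parts: [p0, p1, p2, p3], each a 2-D list of pixel rows."""
    top = [l + r for l, r in zip(parts[0], parts[1])]
    bottom = [l + r for l, r in zip(parts[2], parts[3])]
    return top + bottom

# Four 1x1 "images" reassemble into a 2x2 frame:
p = [[[i]] for i in range(4)]
assert synthesize(p) == [[0, 1], [2, 3]]
```

The same stitching works whether the partitions carry different contents (FIG. 3(a)) or quadrants of one UHD image (FIG. 3(b)).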
  • FIG. 23 illustrates a block diagram of an apparatus 2300 for displaying an image according to an exemplary embodiment of the present invention
  • FIG. 24 illustrates a diagram for explaining the operation of the apparatus 2300.
  • the apparatus 2300 may include a tuner 2310, a decoder 2320, an image signal processing unit 2330 and a display unit 2340.
  • the tuner 2310 may receive an input bitstream.
  • the input bitstream may be a bitstream obtained by the method shown in FIG. 2. That is, the input bitstream may be a bitstream obtained by respectively mapping a plurality of images to a plurality of slice groups and encoding the slice groups.
  • the decoder 2320 may extract a plurality of slice groups from the input bitstream received by the tuner 2310, may decode the extracted slice groups and may extract a plurality of images respectively mapped to the decoded slice groups.
  • the extraction of a plurality of slice groups from the input bitstream may be performed with reference to a plurality of parameter sets (such as SPS#0 through SPS#4 and PPS#0 through PPS#4) respectively corresponding to the slice groups and information (i.e., "reserved_zero_4bits" of FIG. 10) indicating whether each of the parameter sets is a synthesized parameter set for all the slice groups or an individual parameter set exclusively for a corresponding slice group.
  • the decoder 2320 may decode the parameter sets respectively corresponding to the slice groups and the information "reserved_zero_4bits". Thereafter, the slice groups may be extracted from the input bitstream, and the extracted slice groups may be decoded.
  • the decoding of the extracted slice groups may involve performing entropy decoding, inverse quantization, inverse transform, deblocking filtering, and motion compensation and decoding slice group position information such as "top_left” and "bottom_right” of FIG. 5.
  • Inverse quantization and inverse transform may be performed as a single process.
  • the extraction of the images respectively mapped to the decoded slice groups may be performed by using MBA0T, MBA1T, MBA2T, MBA3T, MBA0B, MBA1B, MBA2B, and MBA3B of FIG. 3 and referencing decoded slice group position information such as "top_left" and "bottom_right".
  • the decoder 2320 may receive a plurality of SPSs and a plurality of PPSs of the input bitstream. If the received SPSs do not use "reserved_zero_4bits", the values of “pic_width_in_mbs_minus1” and “pic_height_in_map_units_minus1” of each of the received SPSs may be compared, and it may thus be determined whether each of the received SPSs is a synthesized parameter set for a UHD image or an individual parameter set for a partition image of a UHD image based on the results of the comparison.
  • if the received SPSs use "reserved_zero_4bits", it may be determined whether each of the received SPSs is a synthesized parameter set for a UHD image or an individual parameter set for a partition image of a UHD image based on the results of interpreting "reserved_zero_4bits".
  • the decoder 2320 may decode the partition images using the individual parameter sets of the partition images. Thereafter, a UHD image may be restored by synthesizing the decoded partition images using a number of synthesized parameter sets for a UHD image. Thereafter, the restored UHD image may be displayed on a screen.
  • a UHD image may be restored by synthesizing the partition images with reference to slice group position information included in one of the synthesized parameter sets, and the restored UHD image may be displayed on a screen.
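The parameter-set classification step might look like the sketch below: an SPS whose coded dimensions match the full UHD frame is treated as a synthesized set, and one matching an HD partition as an individual set. The concrete macroblock counts (240x135 for UHD, 120x68 for HD, stored as minus-1 values) and the function itself are assumptions for illustration; only the syntax element names come from the text.

```python
# Hypothetical sketch: classify an SPS by its coded frame size expressed in
# macroblock units (as the minus-1 syntax elements), assuming a 3840x2160
# UHD frame split into 1920x1080 partitions (1080 rounded up to 68 MB rows).
def classify_sps(pic_width_in_mbs_minus1, pic_height_in_map_units_minus1,
                 uhd_mbs=(239, 134), hd_mbs=(119, 67)):
    dims = (pic_width_in_mbs_minus1, pic_height_in_map_units_minus1)
    if dims == uhd_mbs:
        return "synthesized"   # parameter set for the whole UHD image
    if dims == hd_mbs:
        return "individual"    # parameter set for one HD partition image
    return "unknown"

assert classify_sps(239, 134) == "synthesized"
assert classify_sps(119, 67) == "individual"
```

A decoder could then decode each partition with its individual set and restore the UHD frame using the synthesized set.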
  • the decoder 2320 may decode an image using various decoding methods other than an H.264/AVC decoding method, such as a Moving Picture Experts Group (MPEG) decoding method and VC-1, the Society of Motion Picture and Television Engineers (SMPTE) video codec standard.
  • the image signal processing unit 2330 may perform various signal processing operations on an image so that the image can be properly displayed by the display unit 2340. More specifically, the image signal processing unit 2330 may synthesize a plurality of images provided by the decoder 2320 into a single image.
  • the image signal processing unit 2330 may include a scaler (not shown), an on-screen display (OSD, not shown), a memory (not shown), an image transmitter (not shown) and a controller (not shown).
  • the scaler may increase or decrease the size of an image so as to be compatible with the size of the screen of the display unit 2340.
  • the OSD may control graphic data, text data and various menus to be displayed on the screen of the display unit 2340.
  • the memory may store various reference pictures for use in a decoding operation performed by the decoder 2320 or may store graphic data or text data to be displayed on the screen of the display unit 2340.
  • the image transmitter may process the graphic data or the text data present in the memory and may transmit an image having the processed graphic data or the processed text data attached thereto to the display unit 2340.
  • the controller may control the operation of the image signal processing unit 2330.
  • the controller may also control the operation of the decoder 2320.
  • the display unit 2340 may display an image provided by the image signal processing unit 2330.
  • the image provided by the image signal processing unit 2330 may be a UHD image including a plurality of independent images such as the UHD image 320 of FIG. 3(a).
  • a plurality of partition images of a UHD image may all be displayed together, as shown in FIG. 23.
  • the partition images may be numbered by the OSD of the image signal processing unit 2330 of FIG. 23, as indicated by reference numeral 2410 of FIG. 24, so as to be able to be easily differentiated from one another.
  • if a user chooses one of the partition images, the image signal processing unit 2330 may control the chosen partition image to be displayed by the display unit 2340.
  • FIG. 24 illustrates the case in which one of four partition images is chosen and is then displayed at a resolution compatible with the display unit 2340, but the present invention is not restricted to this. That is, the display unit 2340 may display more than one partition image at the same time at various resolutions.
  • the decoder 2320 may not decode the slice groups respectively corresponding to the non-chosen images.
  • the image signal processing unit 2330 may transmit a control signal for enabling the decoder 2320 to perform selective decoding. In this manner, it is possible to reduce power consumption by skipping unnecessary decoding procedures.
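Selective decoding of only the chosen slice group, as described above, can be sketched as follows. The dictionary representation of the bitstream's slice groups and the function names are hypothetical.

```python
# Hypothetical sketch: decode only the slice group corresponding to the
# chosen partition image, skipping the rest to save power.
def decode_selected(slice_groups, chosen_id, decode):
    """slice_groups: {group_id: encoded_data}; returns decoded results."""
    return {gid: decode(data) for gid, data in slice_groups.items()
            if gid == chosen_id}

# Toy example: "decoding" is modeled as upper-casing the payload.
decoded = decode_selected({0: "sg0", 1: "sg1"}, 1, str.upper)
assert decoded == {1: "SG1"}      # slice group 0 was never decoded
```

Because each slice group is independently decodable, skipping the non-chosen groups does not affect the chosen one.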
  • FIG. 25 illustrates a block diagram of an apparatus 2500 for displaying an image according to another exemplary embodiment of the present invention.
  • the exemplary embodiment of FIG. 25 is almost the same as the exemplary embodiment of FIG. 23 except that a plurality of images provided by an image signal processing unit 2530 are respectively displayed by a plurality of display units 2540- 1 through 2540-4.
  • the image signal processing unit 2330 may be connected to the display unit 2340 through a wired or wireless interface.
  • the image signal processing unit 2530 may be connected to each of the display units 2540- 1 through 2540-4 through a wired or wireless interface.
  • FIG. 26 illustrates a flowchart of a method of decoding an image according to another exemplary embodiment of the present invention.
  • the method shown in FIG. 26 corresponds to the method shown in FIG. 11.
  • the method shown in FIG. 26 will hereinafter be described, focusing mainly on differences with the method shown in FIG. 11.
  • a plurality of encoded slice groups may be extracted from an input bitstream (S2610).
  • the input bitstream may be a bitstream obtained by the method shown in FIG. 11 and may include the encoded slice groups.
  • the extracted slice groups may be decoded (S2620).
  • the decoding of the extracted slice groups may involve performing entropy decoding, inverse quantization, inverse transform, deblocking filtering, and motion compensation and decoding slice group position information such as "top_left” and "bottom_right” of FIG. 13. Inverse quantization and inverse transform may be performed as a single process.
  • a plurality of partition images respectively mapped to the decoded slice groups may be extracted (S2640).
  • MBA0T, MBA1T, MBA2T, MBA3T, MBA0B, MBA1B, MBA2B, and MBA3B may be extracted from the slice groups 0 through 3 with reference to slice group position information (e.g., "top_left" and "bottom_right") of the slice groups 0 through 3.
  • FIG. 27 illustrates a flowchart of a method of decoding an image according to another exemplary embodiment of the present invention.
  • the method shown in FIG. 27 corresponds to the method shown in FIG. 14, and is almost the same as the method shown in FIG. 26 except that an expanded image added to each of a plurality of partition images is cropped.
  • the method shown in FIG. 27 will hereinafter be described, focusing mainly on differences with the method shown in FIG. 14 and the method shown in FIG. 26.
  • a plurality of slice groups may be extracted from an input bitstream (S2710). Thereafter, the extracted slice groups may be decoded (S2720).
  • the decoding of the extracted slice groups may involve performing entropy decoding, inverse quantization, inverse transform, deblocking filtering, and motion compensation; decoding “top_left” and “bottom_right” of FIG. 13; and decoding "slice_group_overlap_flag", “slice_cropping_top_left” and “slice_cropping_bottom_right”.
  • an expanded image, if any, added to each of a plurality of partition images respectively mapped to the decoded slice groups may be cropped, and the partition images may be extracted (S2730).
  • each of the partition images may be determined whether each of the partition images has an expanded image added thereto based on "slice_group_overlap_flag". If each of the partition images has an expanded image added thereto, the expanded image added may be cropped with reference to cropping information such as “slice_cropping_top_left” corresponding to MBAOTC, MBAlTC, MBA2TC, and MBA3TC of FIG. 15 and “slice_cropping_bottom_right” corresponding to MBAOBC, MBAlBC, MBA2BC, and MBA3BC of FIG. 15. Alternatively, the expanded image may be cropped with reference to "left_offset, "right_offset", “top_offset", and “bottom_offset”.
  • partition images may be extracted with reference to, for example, "topjeft” and “bottom_right” and using MBAOT, MBAlT, MBA2T, MBA3T, MBAOB, MBAlB, MBA2B, and MBA3B of FIG. 15.
  • the extracted partition images may be synthesized into a single image (S2740).
  • FIG. 28 illustrates a flowchart of a method of decoding an image according to another exemplary embodiment of the present invention.
  • the method shown in FIG. 28 corresponds to the method shown in FIG. 18 and is almost the same as the method shown in FIG. 26 except that the method shown in FIG. 28 is a scalable decoding method.
  • the method shown in FIG. 28 will hereinafter be described, focusing mainly on differences with the method shown in FIG. 18 and the method shown in FIG. 26.
  • one or more encoded slice groups may be extracted from each of a number of layers of an input bitstream (S2810). More specifically, referring to FIG. 19, four encoded slice groups may be extracted from an enhancement layer, and one encoded slice group may be extracted from a base layer.
  • the extracted slice groups may be decoded (S2820). Thereafter, a plurality of partition images respectively mapped to the decoded slice groups may be synthesized into a single image (S2840).
  • FIG. 29 illustrates a flowchart of a method of decoding an image according to another exemplary embodiment of the present invention.
  • the method shown in FIG. 29 corresponds to the method shown in FIG. 20 and is almost the same as the method shown in FIG. 28 except that an expanded image added to each of a plurality of partition images is cropped.
  • the method shown in FIG. 29 will hereinafter be described, focusing mainly on differences with the method shown in FIG. 20 and the method shown in FIG. 28.
  • a number of slice groups may be extracted from each of a number of layers of an input bitstream (S2910). Thereafter, the extracted slice groups may be decoded (S2920).
  • the decoding of the extracted slice groups may involve performing entropy decoding, inverse quantization, inverse transform, deblocking filtering and motion compensation; decoding “top_left” and “bottom_right” of FIG. 13; and decoding "slice_group_overlap_flag", “slice_cropping_top_left”, and “slice_cropping_bottom_right”.
  • an expanded image added to each of a plurality of partition images respectively mapped to the decoded slice groups may be cropped, and then the partition images may be extracted (S2930).
  • it may be determined whether each of the partition images has an expanded image added thereto based on "slice_group_overlap_flag". If each of the partition images has an expanded image added thereto, the expanded image may be cropped with reference to cropping information such as "slice_cropping_top_left" corresponding to MBA0TC, MBA1TC, MBA2TC, and MBA3TC of FIG. 15 and "slice_cropping_bottom_right" corresponding to MBA0BC, MBA1BC, MBA2BC, and MBA3BC of FIG. 15.
  • partition images may be extracted with reference to, for example, "topjeft” and “bottom_right” and using MBAOT, MBAlT, MBA2T, MBA3T, MBAOB, MBAlB, MBA2B, and MBA3B of FIG. 15.
  • the partition images may be extracted with reference to a number of slice groups indicated by "no_inter_layer_pred_flag” and "lower_layer_slice_group_id".
  • the extracted partition images may be synthesized into a single image (S2940).
  • the present invention can be realized as computer-readable code written on a computer-readable recording medium.
  • the computer-readable recording medium may be any type of recording device in which data is stored in a computer-readable manner. Examples of the computer-readable recording medium include a ROM, a RAM, a CD-ROM, a magnetic tape, a floppy disc, an optical data storage, and a carrier wave (e.g., data transmission through the Internet).
  • the computer-readable recording medium can be distributed over a plurality of computer systems connected to a network so that computer-readable code is written thereto and executed therefrom in a decentralized manner. Functional programs, code, and code segments needed for realizing the present invention can be easily construed by one of ordinary skill in the art.


Abstract

A method of encoding an image, a method of decoding an image and an apparatus for displaying an image are provided. The method of encoding an image includes respectively mapping a plurality of first images to a plurality of slice groups; and encoding the slice groups. Therefore, it is possible to generate a single bitstream and improve encoding/decoding efficiency.

Description

METHOD FOR ENCODING AND DECODING IMAGE, AND APPARATUS FOR DISPLAYING IMAGE
Technical Field
[1] The present invention relates to a method of encoding an image, a method of decoding an image, and an apparatus for displaying an image, and more particularly, to a method of encoding an image, a method of decoding an image, and an apparatus for displaying an image, in which a plurality of images are respectively mapped to a plurality of slice groups and the slice groups are encoded and decoded.
[2]
Background Art
[3] Research has recently been conducted on digital cinema and ultra-high definition television (UHDTV), which are digital video techniques capable of providing a higher definition than high definition (HD) broadcasting, which provides a definition of 2K (e.g., 1920x1080). Digital cinema can provide a definition of 4K (a maximum of 4096x2096), and UHDTV can provide a definition of 4K-8K (a maximum of 7680x4320).
[4] Since UHD images contain more information than ordinary images, UHDTV is nowadays deemed one of the most important elements for realizing realistic broadcasting, the ultimate goal of digital broadcasting, by providing viewers with improved realism and vividness.
[5] Uncompressed UHD video data is generally processed at a speed of 3-8 Gbps, and compressed UHD video data is generally processed at a speed of 100-600 Mbps. Thus, in order to properly process, transmit and store UHD video data, it is necessary to compress UHD video data.
[6] Due to recent developments in communication technology, various electronic products such as mobile phones, personal digital assistants (PDAs), digital TVs and digital multimedia broadcast (DMB) devices have been developed. Such electronic devices use various types of data including audio data, still image data and moving image data.
[7] In particular, moving image data is generally very large in size. Thus, various moving image compression standards such as H.261, VC-1, which is the Society of Motion Picture and Television Engineers (SMPTE) video codec standard, and H.264/AVC, which is the ITU-T and ISO/IEC video codec standard, have been suggested.
[8] FIG. 1 illustrates block diagrams of two conventional systems for receiving and decoding a plurality of images. [9] Referring to FIG. 1(a), two tuners 101 and 102 may respectively receive two bitstreams transmitted thereto through different frequency bands. Thereafter, two decoders 105 and 106 may respectively decode the two received bitstreams, and an image signal processor 110 may appropriately process the decoded bitstreams provided by the decoders 105 and 106.
[10] Referring to FIG. 1(b), a tuner 103 may receive a bitstream having a uniform modulation frequency. Then, a demultiplexer 104 may demultiplex the bitstream and may transmit the demultiplexed bitstream to each of decoders 105 and 106. The decoders 105 and 106 may decode the demultiplexed bitstream and may thus restore a plurality of images. Thereafter, an image signal processor 110 may process the restored images.
[11] If multiple video contents are transmitted through more than one channel, as shown in FIG. 1(a), more than one tuner may be required to properly process the multiple video contents. On the other hand, if a multiplexed bitstream obtained by multiplexing a plurality of individual bitstreams is transmitted through a single frequency band, as shown in FIG. 1(b), more than one decoder may be required to extract and restore the individual bitstreams from the multiplexed bitstream. In addition, it is necessary to acquire or estimate information specifying the relationship between the individual bitstreams.
[12]
Disclosure of Invention
Technical Problem
[13] The present invention provides a method of encoding an image, a method of decoding an image, and an apparatus for displaying an image, which can improve encoding/decoding efficiency by generating a single bitstream using a high-definition (HD) image.
[14]
Technical Solution
[15] According to an aspect of the present invention, there is provided a method of encoding an image, the method including respectively mapping a plurality of first images to a plurality of slice groups; and encoding the slice groups.
[16] According to another aspect of the present invention, there is provided a method of decoding an image, the method including extracting a plurality of encoded slice groups from an input bitstream; decoding the extracted slice groups; and extracting a plurality of first images respectively mapped to the decoded slice groups.
[17] According to another aspect of the present invention, there is provided an apparatus for displaying an image, the apparatus including one or more display units displaying an image; a decoder extracting a plurality of slice groups from an input bitstream, decoding the extracted slice groups and extracting a plurality of first images respectively mapped to the decoded slice groups; and an image signal processing unit synthesizing the extracted first images into a single image. [18]
Advantageous Effects
[19] According to the present invention, since a single bitstream is generated by respectively mapping a plurality of images to a plurality of slice groups and encoding the slice groups, it is possible to increase encoding/decoding efficiency. In addition, it is possible to effectively process a single bitstream without a requirement of a plurality of tuners and/or a plurality of decoders and thus to simplify the structure of a system.
[20] According to the present invention, it is possible to minimize modifications to an existing system simply by adding new syntax elements to existing syntax. In addition, it is possible to support the use of information regarding neighboring partition images within the scope of existing video compression standards.
[21] According to the present invention, since a single bitstream is generated by dividing a high-definition (HD) image into a plurality of partition images, respectively mapping the partition images to a plurality of slice groups and encoding the slice groups, it is possible to increase encoding/decoding efficiency.
[22] According to the present invention, since an HD image is divided into a plurality of partition images and an expanded image including at least parts of some of the partition images is added to each of the partition images, it is possible to prevent the deterioration of the picture quality along the boundaries between the partition images.
[23] According to the present invention, since a scalable encoding/decoding method is used, it is possible to adaptively decode an HD image according to the capability of a decoder. In addition, it is possible to effectively extract independent contents from a bitstream with reference to information indicating the correlation between a plurality of slice groups.
[24] According to the present invention, a plurality of images may be appropriately numbered and may thus be selected individually. Thus, it is possible to improve user convenience. In addition, since images that are not chosen need not be decoded, it is possible to reduce power consumption.
[25]
Brief Description of Drawings
[26] FIG. 1 illustrates block diagrams of conventional systems for receiving and decoding a plurality of images;
[27] FIG. 2 illustrates a flowchart of a method of encoding an image according to an exemplary embodiment of the present invention;
[28] FIG. 3 illustrates diagrams for explaining the method shown in FIG. 2;
[29] FIG. 4 illustrates diagrams of various slice groups;
[30] FIG. 5 illustrates a diagram showing syntax for representing type-2 slice group position information;
[31] FIGS. 6 through 8 illustrate diagrams showing the correspondence between a plurality of parameter sets and a plurality of slice groups;
[32] FIG. 9 illustrates a diagram showing the arrangement of a plurality of partition images of an ultra high-definition (UHD) image;
[33] FIG. 10 illustrates a diagram showing syntax for identifying the type of parameter set;
[34] FIG. 11 illustrates a flowchart of a method of encoding an image according to another exemplary embodiment of the present invention;
[35] FIG. 12 illustrates diagrams for explaining the method shown in FIG. 11;
[36] FIG. 13 illustrates a diagram showing syntax for representing slice group position information;
[37] FIG. 14 illustrates a flowchart of a method of encoding an image according to another exemplary embodiment of the present invention;
[38] FIG. 15 illustrates diagrams for explaining the method shown in FIG. 14;
[39] FIG. 16 illustrates a diagram showing syntax that can be applied to the method shown in FIG. 14;
[40] FIG. 17 illustrates a diagram showing the syntax of cropping information that can be used in the method shown in FIG. 14;
[41] FIG. 18 illustrates a flowchart of a method of encoding an image according to another exemplary embodiment of the present invention;
[42] FIG. 19 illustrates diagrams for explaining the method shown in FIG. 18;
[43] FIG. 20 illustrates a flowchart of a method of encoding an image according to another exemplary embodiment of the present invention;
[44] FIG. 21 illustrates a diagram showing the syntax of cropping information that can be used in the method shown in FIG. 20;
[45] FIG. 22 illustrates a flowchart of a method of decoding an image according to an exemplary embodiment of the present invention;
[46] FIG. 23 illustrates a block diagram of an apparatus for displaying an image according to an exemplary embodiment of the present invention;
[47] FIG. 24 illustrates a diagram for explaining the operation of the apparatus shown in FIG. 23;
[48] FIG. 25 illustrates a block diagram of an apparatus for displaying an image according to another exemplary embodiment of the present invention; and
[49] FIG. 26 illustrates a flowchart of a method of decoding an image according to another exemplary embodiment of the present invention;
[50] FIG. 27 illustrates a flowchart of a method of decoding an image according to another exemplary embodiment of the present invention;
[51] FIG. 28 illustrates a flowchart of a method of decoding an image according to another exemplary embodiment of the present invention; and
[52] FIG. 29 illustrates a flowchart of a method of decoding an image according to another exemplary embodiment of the present invention.
[53]
Best Mode for Carrying out the Invention
[54] The present invention will hereinafter be described in detail with reference to the accompanying drawings in which exemplary embodiments of the invention are shown.
[55] FIG. 2 illustrates a flowchart of a method of encoding an image according to an exemplary embodiment of the present invention, FIG. 3 illustrates diagrams for explaining the method shown in FIG. 2, FIG. 4 illustrates diagrams of various slice groups, and FIG. 5 illustrates a diagram showing syntax for representing type-2 slice group position information.
[56] Referring to FIG. 2, a plurality of images may be respectively mapped to a plurality of slice groups (S210).
[57] The images may be independent video contents regarding movies, dramas, sports, and shopping, as shown in FIG. 3(a). Alternatively, the images may represent different viewing angles, as shown in FIG. 3(b).
[58] Referring to FIG. 3, an ultra high-definition (UHD) image 320 or 325 may be divided into four partition images, i.e., high-definition (HD) images 0 through 3 (310 or 315), but the present invention is not restricted to this.
[59] Slice groups will hereinafter be described in detail with reference to FIG. 4.
[60] According to H.264/AVC, which is a video compression standard, a sequence may be divided into a number of pictures, each of the pictures may include a plurality of slice groups, and each of the slice groups may include a plurality of slices.
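The containment hierarchy just described (sequence → pictures → slice groups → slices) can be sketched with minimal, hypothetical Python classes; the class names are illustrative and not H.264/AVC data structures:

```python
# Sketch of the H.264/AVC containment hierarchy: a sequence holds pictures,
# a picture holds slice groups, and a slice group holds slices.
from dataclasses import dataclass, field

@dataclass
class Slice:
    first_mb_addr: int          # address of the first macroblock in the slice

@dataclass
class SliceGroup:
    slices: list = field(default_factory=list)

@dataclass
class Picture:
    slice_groups: list = field(default_factory=list)

@dataclass
class Sequence:
    pictures: list = field(default_factory=list)

# one picture made of four slice groups, each holding one slice
seq = Sequence([Picture([SliceGroup([Slice(0)]) for _ in range(4)])])
print(len(seq.pictures[0].slice_groups))  # -> 4
```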
[61] FIG. 4 illustrates a diagram showing various types of slice group maps provided by the H.264/AVC standard. Referring to FIG. 4, there are seven slice group map types: type 0 (interleaved), type 1 (dispersed), type 2 (foreground with leftover), type 3 (box-out), type 4 (raster scan), type 5 (wipe), and type 6 (explicit).
[62] Slice groups were originally designed for restoring an erroneous image by correcting errors in the image. However, in the exemplary embodiment of FIG. 2, slice groups may be used for processing each of a plurality of partition images of an image.
[63] Referring to FIG. 3, the HD images 0 through 3 may be mapped to a plurality of type-2 slice groups (i.e., a plurality of slice groups 0 through 3), respectively.
[64] Referring to FIG. 5, "top_left" indicates a top left portion of each slice group, and "bottom_right" indicates a bottom right portion of each slice group. More specifically, "top_left" corresponds to MBA0T, MBA1T, MBA2T, and MBA3T of the slice groups 0 through 3 of FIG. 3, and "bottom_right" corresponds to MBA0B, MBA1B, MBA2B, and MBA3B of the slice groups 0 through 3 of FIG. 3.
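As an illustration of these position parameters, the "top_left" and "bottom_right" macroblock addresses of four type-2 slice groups tiling a UHD picture in an assumed 2x2 grid could be computed as follows (the picture size and grid are illustrative, not taken from the patent):

```python
# Sketch: computing the "top_left" and "bottom_right" macroblock addresses
# of four type-2 (foreground) slice groups that tile a UHD picture in a
# 2x2 grid of HD partitions, as in FIG. 3.

MB = 16                       # macroblock size in pixels
uhd_w, uhd_h = 3840, 2176     # illustrative UHD picture (multiples of 16)
pic_w_mbs = uhd_w // MB       # 240 macroblocks per picture row
hd_w_mbs = pic_w_mbs // 2     # each HD partition is half the picture wide
hd_h_mbs = (uhd_h // MB) // 2 # and half the picture tall

def mb_addr(x, y):
    """Raster-scan macroblock address inside the UHD picture."""
    return y * pic_w_mbs + x

slice_groups = []
for gy in range(2):
    for gx in range(2):
        top_left = mb_addr(gx * hd_w_mbs, gy * hd_h_mbs)
        bottom_right = mb_addr(gx * hd_w_mbs + hd_w_mbs - 1,
                               gy * hd_h_mbs + hd_h_mbs - 1)
        slice_groups.append((top_left, bottom_right))

for g, (tl, br) in enumerate(slice_groups):
    print(f"slice group {g}: top_left={tl}, bottom_right={br}")
```

Each pair plays the role of MBA0T/MBA0B through MBA3T/MBA3B in FIG. 3.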
[65] Thereafter, referring to FIG. 2, the slice groups may be encoded (S220). More specifically, the encoding of the slice groups may involve performing frequency conversion, quantization, entropy encoding, deblocking filtering and motion estimation and compensation. Frequency conversion and quantization may be performed as a single process.
[66] In short, a plurality of images may be respectively mapped to a plurality of slice groups, and the slice groups may be encoded, thereby generating a single bitstream using existing syntax elements without a requirement of additional syntax elements.
[67] Conventionally, a plurality of tuners and/or a plurality of demultiplexers are required to properly process a plurality of images, as described above with reference to FIG. l(a). On the other hand, in the exemplary embodiment of FIG. 2, since a bitstream is generated by encoding a plurality of images, there is no need to use a plurality of processors.
[68] Referring to FIG. 3, a bitstream obtained by encoding the slice groups 0 through 3 respectively corresponding to the HD images 0 through 3 (310 or 315) may constitute the UHD image 320 or 325. The UHD image 320 or 325 may have a definition of 4K (e.g., 4096x2096) or 8K (e.g., 7680x4320). The UHD image 320 or 325 may be displayed by extracting the HD images 0 through 3 (310 or 315).
[69] Referring to FIG. 2, in operation S220, one or more parameter sets may also be encoded. Referring to FIG. 3 (a), the HD images 0 through 3 (310) of the UHD image 320 may have different contents. In order to generate a bitstream based on different types of contents, the HD images 0 through 3 (310) may be converted to a raw format, and may then be encoded so as to have the same parameter sets. The parameter sets that can be encoded in operation S220 may include a sequence parameter set (SPS) and a picture parameter set (PPS).
[70] However, this type of encoding may not be able to properly reflect the properties of each of the HD images 0 through 3 (310). In addition, if the HD images 0 through 3 (310) are already compressed, the HD images 0 through 3 (310) may need to be decoded first and may then need to be encoded so as to have the same parameter set.
[71] Therefore, if the HD images 0 through 3 (310) form separate bitstreams, the slice groups 0 through 3 respectively corresponding to the HD images 0 through 3 (310) may share the same parameter set with each other or may reference each other's parameter set. That is, the number of parameter sets that can be used to decode a picture of a UHD image may range from a minimum of 1 to a maximum of a number of slice groups of the UHD image.
[72] The correspondence between a plurality of parameter sets and a plurality of slice groups will hereinafter be described with reference to FIGS. 6 through 8.
[73] FIG. 6 illustrates a diagram showing the correspondence between a plurality of parameter sets and a plurality of slice groups, according to an exemplary embodiment of the present invention. Referring to FIG. 6, a plurality of slice groups may respectively correspond to a plurality of parameter sets. More specifically, a plurality of images having different contents, as shown in FIG. 3 (a), may correspond to different parameter sets according to their properties and their providers. Therefore, a plurality of slice groups respectively corresponding to the images may reference different parameter sets, as shown in FIG. 6.
[74] FIG. 6 illustrates the case where a plurality of images have different contents and thus have different parameter sets. However, the number of parameter sets that constitute a UHD image may vary according to the properties of the UHD image.
[75] A plurality of parameter sets referenced by a plurality of slice groups may need to be transmitted before they are needed by the slice groups. Even though a plurality of slice groups, i.e., slice group#1 through slice group#4, are illustrated in FIG. 6 as being sequentially arranged in a row, the order in which slice group#1 through slice group#4 are transmitted is not restricted to the order in which slice group#1 through slice group#4 are arranged. That is, slice group#1 through slice group#4 may be transmitted in any order as long as they can be decoded before they are needed.
[76] The identifiers (ID) of a plurality of PPSs respectively referenced by a plurality of slice groups of a UHD image may be specified in a header, as prescribed in the H.264/AVC standard. In this manner, the slice groups of the UHD image may reference a plurality of SPSs respectively corresponding to the PPSs.
[77] More specifically, referring to FIG. 6, slice group#1 designates PPS#1, and PPS#1 designates SPS#1. Thus, slice group#1 references both SPS#1 and PPS#1. Alternatively, slice group#1 may designate both PPS#1 and SPS#1 at the same time.
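The two-level reference just described — a slice group designating a PPS, which in turn designates an SPS — amounts to two table lookups. A minimal sketch, using hypothetical dictionary-based tables rather than the actual bitstream syntax:

```python
# Sketch: resolving the SPS referenced by a slice group through its PPS,
# mirroring FIG. 6 where slice group#1 designates PPS#1 and PPS#1
# designates SPS#1. The table contents are illustrative.

sps_table = {1: {"pic_width_in_mbs_minus1": 119}}   # SPS id -> SPS contents
pps_table = {1: {"seq_parameter_set_id": 1}}        # PPS id -> PPS contents

def resolve_sps(slice_header, pps_table, sps_table):
    """Follow the two-level reference: slice header -> PPS -> SPS."""
    pps = pps_table[slice_header["pic_parameter_set_id"]]
    return sps_table[pps["seq_parameter_set_id"]]

sps = resolve_sps({"pic_parameter_set_id": 1}, pps_table, sps_table)
print(sps["pic_width_in_mbs_minus1"])  # -> 119
```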
[78] If a plurality of slice groups reference different parameter sets (such as SPS#1 through SPS#4 and PPS#1 through PPS#4), as shown in FIG. 6, additional information regarding an image constituted by the slice groups may be required. The additional information may include a plurality of synthesized parameter sets. The synthesized parameter sets will hereinafter be described in detail with reference to FIG. 7.
[79] FIG. 7 illustrates a diagram showing the correspondence between a plurality of parameter sets and a plurality of slice groups, according to another exemplary embodiment of the present invention. The exemplary embodiment of FIG. 7 is almost the same as the exemplary embodiment of FIG. 6 except that a pair of synthesized parameter sets (i.e., SPS#0 and PPS#0) for controlling, for example, a UHD image, are additionally provided.
[80] Referring to FIG. 7, a plurality of slice groups respectively mapped to a plurality of
HD images, i.e., slice group#1 through slice group#4, may reference PPS#1 through PPS#4, respectively, and SPS#1 through SPS#4, respectively. Since SPS#1 through SPS#4 are individual parameter sets exclusively for slice group#1 through slice group#4, respectively, SPS#1 through SPS#4 may be insufficient to properly constitute a UHD image based on a plurality of HD images. Therefore, the synthesized parameter sets, i.e., SPS#0 and PPS#0, may be additionally provided. SPS#0 and PPS#0 may include information for controlling a UHD image and may not be referenced by any of slice group#1 through slice group#4.
[81] More specifically, SPS#0 may include various information such as the size of a UHD image. For example, SPS#0 may include first macroblock quantity information indicating the number of macroblocks that lie along the latitudinal direction of a UHD image and second macroblock quantity information indicating the number of macroblocks that lie along the longitudinal direction of the UHD image.
[82] The first macroblock quantity information may be labeled as "pic_width_in_mbs_minus1", and the second macroblock quantity information may be labeled as "pic_height_in_map_units_minus1".
[83] For example, if a UHD image is a progressively-encoded image having a definition of 3840x2176 and each macroblock of the UHD image has a size of 16x16, "pic_width_in_mbs_minus1" of SPS#0 may have a value of 239 and "pic_height_in_map_units_minus1" of SPS#0 may have a value of 135.
[84] If a UHD image is divided into a plurality of HD images having the same size, as shown in FIG. 3(a), and the HD images are respectively mapped to slice group#1 through slice group#4, "pic_width_in_mbs_minus1" of each of SPS#1 through SPS#4 may have a value of 59, and "pic_height_in_map_units_minus1" of each of SPS#1 through SPS#4 may have a value of 33.
[85] That is, the sum of the values of "pic_width_in_mbs_minus1" of SPS#1 through SPS#4, each incremented by one, may be the same as the value of "pic_width_in_mbs_minus1" of SPS#0 incremented by one. Likewise, the sum of the values of "pic_height_in_map_units_minus1" of SPS#1 through SPS#4, each incremented by one, may be the same as the value of "pic_height_in_map_units_minus1" of SPS#0 incremented by one.
[86] In short, SPS#0 may include various information necessary for controlling a UHD image, for example, the length and width of the UHD image. Therefore, it is possible to easily determine how many slice groups constitute a UHD image based on SPS#0.
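The size relation between SPS#0 and the individual SPSs can be checked mechanically. The sketch below assumes a 2x2 grid of equally sized 1920x1088 partitions inside a 3840x2176 picture; these values are chosen here so that the relation holds and are not the patent's example values:

```python
# Sketch: verifying that the per-slice-group SPS sizes (stored as
# "minus1" values) add up to the UHD size stored in the synthesized SPS#0.

sps0 = {"pic_width_in_mbs_minus1": 239,          # 240 macroblocks wide
        "pic_height_in_map_units_minus1": 135}   # 136 macroblocks tall
row_sps = [{"pic_width_in_mbs_minus1": 119}] * 2          # two per row
col_sps = [{"pic_height_in_map_units_minus1": 67}] * 2    # two per column

def consistent(sps0, row_sps, col_sps):
    """True if the partition widths/heights tile the UHD picture exactly."""
    width_ok = (sum(s["pic_width_in_mbs_minus1"] + 1 for s in row_sps)
                == sps0["pic_width_in_mbs_minus1"] + 1)
    height_ok = (sum(s["pic_height_in_map_units_minus1"] + 1 for s in col_sps)
                 == sps0["pic_height_in_map_units_minus1"] + 1)
    return width_ok and height_ok

print(consistent(sps0, row_sps, col_sps))  # -> True
```

This is one way a decoder could determine, from SPS#0 alone, how many slice groups constitute the UHD image.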
[87] Referring to FIG. 7, PPS#1 through PPS#4 may be referenced by a plurality of partition images. On the other hand, parameter information of PPS#0 may not be used in connection with any one of PPS#1 through PPS#4.
[88] If a plurality of images (i.e., a plurality of HD images) of, for example, a UHD image are separate images, as shown in FIG. 3 (a), macroblock coordinate information or motion vector information may be classified into HD image information, rather than UHD image information.
[89] Therefore, in order to restore a UHD image including a plurality of independent partition images, various parameter data for performing a decoding operation and a plurality of reference images may be managed together or separately.
[90] The exemplary embodiments of FIGS. 6 and 7 can be applied not only to a plurality of video contents that are independent from one another but also to a plurality of multiple viewpoint images that are synchronized and can thus be encoded using the same parameters. That is, a plurality of individual parameter sets may be respectively used for a plurality of slice groups.
[91] FIG. 8 illustrates a diagram showing the correspondence between a plurality of parameter sets and a plurality of slice groups, according to another exemplary embodiment of the present invention. The exemplary embodiment of FIG. 8 is different from the exemplary embodiment of FIG. 7 in that a plurality of slice groups correspond to the same parameter set. That is, referring to FIG. 8, a plurality of slice groups, i.e., slice group#1 through slice group#4, all correspond to PPS#1, and PPS#1 corresponds to SPS#1.
[92] A UHD image obtained by synthesizing a plurality of raw images, as shown in FIG. 3, may have only one pair of reference parameters such as SPS#1 and PPS#1. In this case, since the configuration of the UHD image can be easily identified by the addresses of macroblocks at the beginning and the end of each slice group, a pair of synthesized parameter sets for configuring a UHD image such as SPS#0 and PPS#0 of FIG. 7 may be unnecessary.
[93] FIG. 9 illustrates a diagram showing the arrangement of a plurality of partition images of a UHD image. Referring to FIG. 9, MBH0 indicates the number of macroblocks that lie along the latitudinal direction of a UHD image, and MBV0 indicates the number of macroblocks that lie along the longitudinal direction of the UHD image.
[94] A UHD image may include a plurality of partition images 0 through 7. The partition images 0 through 7 may be respectively mapped to a plurality of slice groups, as described above.
[95] In order to properly display a UHD image, the partition images 0 through 7 may be sequentially arranged in a direction from the left to the right and then in a direction from the top to the bottom. In this case, the sum of the lengths (in macroblocks) of a number of partition images arranged side by side in a row direction may be the same as the length (in macroblocks) of a UHD image, i.e., MBH0. Likewise, the sum of the widths (in macroblocks) of a number of partition images arranged side by side in a column direction may be the same as the width (in macroblocks) of a UHD image, i.e., MBV0.
[96] Since the slice groups respectively corresponding to the partition images 0 through 7 reference their own individual parameter sets and a pair of parameter sets are additionally provided for configuring a UHD image, the partition image 0 may be decoded and may then be disposed at the upper left corner of a UHD image.
[97] Thereafter, the sum of the length (i.e., MBH1) of the partition image 0 and the length (i.e., MBH2) of the partition image 1 may be compared with MBH0. Since the sum of MBH1 and MBH2 is the same as MBH0, the partition image 1 may be disposed next to the partition image 0.
[98] Thereafter, the partition image 2 having the same length as the partition image 1 may be disposed below the partition image 1 because the width (i.e., MBV2) of the partition image 1 is less than the width (i.e., MBV1) of the partition image 0.
[99] Thereafter, the partition image 3 may be disposed below the partition image 2. Since the sum of the widths of the partition images 1 through 3, i.e., the sum of MBV2, MBV3 and MBV4, is the same as the width of the partition image 0, i.e., MBV1, the partition images 4 through 7 may all be disposed below the partition image 0 or 3.
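A simplified, row-major version of this placement procedure can be sketched as follows. Note that this sketch only handles partitions filled strictly left-to-right then top-to-bottom; the arrangement of FIG. 9, where a tall partition sits beside a stack of shorter ones, needs the fuller width comparisons described above:

```python
# Sketch (assumed greedy placement): partitions arrive in decoding order
# with their sizes in macroblocks and are placed left-to-right, starting a
# new row once the accumulated lengths reach the UHD length MBH0.

MBH0 = 12  # UHD length in macroblocks (illustrative)

def arrange(partitions, mbh0):
    """partitions: list of (length_mbs, width_mbs). Returns (x, y) origins."""
    origins, x, y, row_height = [], 0, 0, 0
    for length, width in partitions:
        if x + length > mbh0:          # row full: start the next row
            x, y = 0, y + row_height
            row_height = 0
        origins.append((x, y))
        x += length
        row_height = max(row_height, width)
    return origins

# a plain 2x2 grid of 6x4 partitions keeps the sketch simple
print(arrange([(6, 4)] * 4, MBH0))  # -> [(0, 0), (6, 0), (0, 4), (6, 4)]
```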
[100] FIG. 10 illustrates a diagram showing syntax for identifying the type of parameter set. Referring to FIG. 10, it is possible to determine whether a given parameter set is a parameter set for a UHD image or a parameter set for a partition image of a UHD image based on information labeled as "reserved_zero_4bits". More specifically, it is possible to conclude that a given SPS is not a parameter set for a partition image by decoding "reserved_zero_4bits", which is yet to be specified in the H.264/AVC standard. In addition, it is possible to conclude that a PPS referencing an SPS is a parameter set for configuring a UHD image.
[101] Alternatively, it is possible to determine whether a given parameter set is a synthesized parameter set for a UHD image or an individual parameter set for a partition image of a UHD image based on whether the given parameter set is referenced by a slice group and based on parameter values indicating the size of an image, as described above with reference to FIG. 9.
[102] FIG. 11 illustrates a flowchart of a method of encoding an image according to another exemplary embodiment of the present invention, FIG. 12 illustrates diagrams for explaining the method shown in FIG. 11, and FIG. 13 illustrates a diagram showing syntax for representing slice group position information.
[103] Referring to FIG. 11, an image may be divided into a plurality of partition images (S1110). The image may be a UHD image having a definition of 4K (e.g., 4096x2096) or 8K (e.g., 7680x4320).
[104] Referring to FIG. 12, a UHD image 1210 may be divided into a plurality of partition images, i.e., a plurality of HD images 0 through 3 (1220). The UHD image 1210 is illustrated in FIG. 12 as being divided into four HD images, but the present invention is not restricted to this.
[105] Thereafter, referring to FIG. 11, the partition images may be respectively mapped to a plurality of slice groups (S1120). Referring to FIG. 12, the HD images 0 through 3 may be mapped to a plurality of slice groups 0 through 3, respectively.
[106] More specifically, the partition images 0 through 3 may be respectively mapped to a plurality of type-2 slice groups. Referring to FIG. 13, "top_left" indicates a top left portion of each slice group, and "bottom_right" indicates a bottom right portion of each slice group. More specifically, "top_left" corresponds to MBA0T, MBA1T, MBA2T, and MBA3T of the slice groups 0 through 3 of FIG. 12, and "bottom_right" corresponds to MBA0B, MBA1B, MBA2B, and MBA3B of the slice groups 0 through 3 of FIG. 12.
[107] In short, a UHD image may be divided into a plurality of partition images, and the partition images may be respectively mapped to a plurality of slice groups. In this manner, it is possible to encode the UHD image using existing syntax elements without a requirement of additional syntax elements.
[108] Conventionally, a number of bitstreams corresponding to the number of partition images are generated. Thus, it is necessary to multiplex the bitstreams. In addition, more than one processor is required to process the bitstreams.
[109] However, in the exemplary embodiment of FIG. 11, a plurality of partition images are respectively mapped to a plurality of slice groups, and the slice groups are encoded, thereby generating a single bitstream. Thus, there is no need to perform multiplexing, and thus, there is no need to use more than one processor. Therefore, it is possible to improve encoding efficiency.
[110] Thereafter, referring to FIG. 11, the slice groups may be encoded (S1130). The encoding of the slice groups may involve performing frequency conversion, quantization, entropy encoding, deblocking filtering and motion estimation and compensation. Frequency conversion and quantization may be performed as a single process.
[111] The encoded slice groups may be synthesized, thereby obtaining the UHD image 1240 of FIG. 12.
[112] In short, in the exemplary embodiment of FIG. 11, a plurality of partition images of an image are respectively mapped to a plurality of slice groups, and the slice groups are encoded, thereby generating a single bitstream.
[113] FIG. 14 illustrates a flowchart of a method of encoding an image according to another exemplary embodiment of the present invention, FIG. 15 illustrates diagrams for explaining the method shown in FIG. 14, FIG. 16 illustrates a diagram showing syntax that can be applied to the method shown in FIG. 14, and FIG. 17 illustrates a diagram showing the syntax of cropping information that can be used in the method shown in FIG. 14.
[114] The method shown in FIG. 14 is almost the same as the method shown in FIG. 11 except that an expanded image is added to each of a plurality of partition images obtained by dividing an image.
[115] More specifically, referring to FIGS. 14 and 15, a UHD image 1510 may be divided into a plurality of partition images 1520 (i.e., a plurality of HD images 0 through 3), and an expanded image 1530 may be added to each of the partition images 1520 (S1410). The expanded image 1530 may include at least parts of some of the partition images 1520. More specifically, the expanded image 1530 may be added to each of the partition images 1520 and may be disposed along the boundaries between the partition images 1520. Referring to FIG. 15, the expanded image 1530 may be added to, for example, the upper left partition image 1520, i.e., the HD image 0, so as to surround the right and lower sides of the HD image 0.
[116] The expanded image 1530 may include one or more macroblocks. The number of macroblocks included in the expanded image 1530 may vary according to the color and the format of the UHD image 1510.
[117] Conventionally, a UHD image is divided into a plurality of partition images, and the partition images are encoded. Thereafter, the encoded images are synthesized, thereby restoring the UHD image. In this case, the boundaries between a pair of adjacent partition images in the restored UHD image may appear prominently compared to other portions of the restored UHD image due to the differences between the values of pixels on one side of the partition image boundary and the values of pixels on the other side of the partition image boundary. In order to address this, in the exemplary embodiment of FIG. 14, the expanded image 1530 may be added to each of the partition images 1520 (S1410). The addition of the expanded image 1530 to each of the partition images 1520 may be performed in consideration of whether it is necessary to perform deblocking filtering.
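A minimal sketch of extracting one partition together with such an expanded border follows. It is illustrative only: it works on plain pixel arrays, whereas the patent defines the expansion in macroblock units on the encoder side:

```python
# Sketch: extracting a partition from a UHD frame together with an
# "expanded image" border copied from the neighboring partitions, so that
# pixels on both sides of the partition boundary are available when the
# partition is later filtered independently.

def extract_expanded(frame, x0, y0, w, h, border):
    """frame: 2D list of pixels. Returns the w x h partition at (x0, y0)
    grown by `border` pixels on each side, clipped to the frame edges."""
    H, W = len(frame), len(frame[0])
    top, left = max(0, y0 - border), max(0, x0 - border)
    bottom, right = min(H, y0 + h + border), min(W, x0 + w + border)
    return [row[left:right] for row in frame[top:bottom]]

frame = [[y * 8 + x for x in range(8)] for y in range(8)]
tile = extract_expanded(frame, 0, 0, 4, 4, 1)  # upper-left partition
print(len(tile), len(tile[0]))  # -> 5 5
```

For the upper-left partition only the right and lower sides grow, matching the description of the HD image 0 in FIG. 15.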
[118] Thereafter, the partition images may be respectively mapped to a plurality of slice groups (S1420). More specifically, the partition images 1520 may be respectively mapped to a plurality of slice groups. If the same expanded image 1530 is added to each of the partition images 1520, the slice groups respectively corresponding to the partition images 1520 may have the same expanded image 1530 in common. That is, the slice groups respectively corresponding to the partition images 1520 may be allowed to have an overlap therebetween and may be able to be processed independently from one another.
[119] Thereafter, the slice groups may be encoded (S1430). As described above with reference to FIG. 11, the encoding of the slice groups may involve performing frequency conversion, quantization, entropy encoding, deblocking filtering and motion estimation and compensation. The encoding of the slice groups may also involve encoding information indicating whether the slice groups have the same expanded image in common. The information indicating whether the slice groups have the same expanded image in common may be labeled as "slice_group_overlap_flag".
[120] Referring to FIG. 16, "slice_group_overlap_flag" may be included in an H.264/AVC picture layer as a new syntax element. If "slice_group_overlap_flag" has a value of 1, it may be determined that there is an overlap between a plurality of slice groups, i.e., it may be determined that the slice groups have the same expanded image in common. On the other hand, if "slice_group_overlap_flag" has a value of 0, it may be determined that the slice groups have no expanded image in common. This is clearly distinguished from the H.264/AVC standard stating that a macroblock belongs to only one slice group.
[121] If the same expanded image 1530 is added to each of the partition images 1520 in operation S1410, cropping information necessary for cropping the expanded image 1530 may also be encoded. The cropping information may be labeled as "slice_cropping_top_left" or "slice_cropping_bottom_right", as shown in FIG. 16. "slice_cropping_top_left" and "slice_cropping_bottom_right" may be added to an H.264/AVC picture layer as new syntax elements.
[122] More specifically, "slice_cropping_top_left" may indicate a top left boundary portion of each slice group excluding an expanded image, and "slice_cropping_bottom_right" may indicate a right bottom boundary portion of each slice group excluding an expanded image.
[123] Accordingly, referring to FIG. 15, "slice_cropping_top_left" may correspond to MBA0TC, MBA1TC, MBA2TC, and MBA3TC of a plurality of slice groups 0 through 3, and "slice_cropping_bottom_right" may correspond to MBA0BC, MBA1BC, MBA2BC, and MBA3BC of the slice groups 0 through 3.
[124] The cropping information may indicate the boundaries of a partition image excluding an expanded image. In this case, a plurality of offset values indicating how far the boundaries of a partition image are from the boundaries of a slice group, i.e., "frame_crop_left_offset", "frame_crop_right_offset", "frame_crop_top_offset", and "frame_crop_bottom_offset", may be used as the cropping information, as described above with reference to FIG. 17. That is, "left_offset", "right_offset", "top_offset", and "bottom_offset" may be used as the cropping information.
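As a rough illustration of how such offsets could be applied, the following Python sketch shrinks a slice-group rectangle (in macroblock units) by the four offsets to recover the partition-image boundaries. The function name and the (x0, y0, x1, y1) rectangle convention are assumptions for illustration, not part of the described syntax.

```python
def crop_partition(slice_rect, left_offset, right_offset, top_offset, bottom_offset):
    """Shrink a slice-group rectangle (x0, y0, x1, y1), in macroblock units,
    by the four cropping offsets to obtain the partition-image rectangle
    excluding the expanded image."""
    x0, y0, x1, y1 = slice_rect
    cropped = (x0 + left_offset, y0 + top_offset, x1 - right_offset, y1 - bottom_offset)
    if cropped[0] > cropped[2] or cropped[1] > cropped[3]:
        raise ValueError("offsets exceed the slice group size")
    return cropped
```

For example, a 10x10 slice group with a one-macroblock expanded border on all sides yields an 8x8 partition image.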
[125] If a motion vector in one slice group indicates another slice group and motion vector estimation is confined within each slice group, pixel filling may be performed. On the other hand, if a motion vector in one slice group indicates another slice group and motion vector estimation can be performed on a slice group with reference to other slice groups, the slice group indicated by the motion vector except an expanded image (i.e., an image to be cropped) may be referenced.
[126] In short, a UHD image may be divided into a plurality of partition images, an expanded image may be added to each of the partition images, the partition images may be respectively mapped to a plurality of slice groups, and each of the slice groups may be encoded. In this manner, it is possible to easily encode a UHD image using existing syntax elements without a requirement of additional syntax elements. In addition, it is possible to prevent the deterioration of the quality of an image along the boundaries between a plurality of partition images of the image.
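The overall flow summarized in the preceding paragraph can be sketched as follows. The 2x2 layout, the one-macroblock expansion width, and all names are illustrative assumptions; the expansion is applied only along internal boundaries, where deblocking across slice groups would otherwise degrade quality.

```python
def partition_with_overlap(width_mb, height_mb, expand_mb=1):
    """Split a frame (macroblock units) into 2x2 slice groups, each grown by
    expand_mb macroblocks across internal boundaries (hypothetical sketch).
    'crop' is the partition image proper; 'slice' includes the expanded image."""
    half_w, half_h = width_mb // 2, height_mb // 2
    groups = []
    for gy in (0, 1):
        for gx in (0, 1):
            x0, y0 = gx * half_w, gy * half_h
            x1, y1 = x0 + half_w, y0 + half_h
            # Expand only toward internal boundaries; clamp at frame edges.
            ex0 = max(0, x0 - expand_mb) if gx else x0
            ey0 = max(0, y0 - expand_mb) if gy else y0
            ex1 = min(width_mb, x1 + expand_mb) if gx == 0 else x1
            ey1 = min(height_mb, y1 + expand_mb) if gy == 0 else y1
            groups.append({"crop": (x0, y0, x1, y1), "slice": (ex0, ey0, ex1, ey1)})
    return groups
```

Note that adjacent slice groups deliberately overlap by 2 x expand_mb macroblocks around the internal boundaries, which is exactly the overlap "slice_group_overlap_flag" would signal.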
[127] FIG. 18 illustrates a flowchart of a method of encoding an image according to another exemplary embodiment of the present invention, and FIG. 19 illustrates a diagram for explaining the method shown in FIG. 18. Referring to FIG. 18, an image may be divided into a number of layers, and each of the layers may be divided into a plurality of partition images (S1810).
[128] Scalable encoding is a method of scalably encoding a plurality of layers in consideration of time, space, or signal-to-noise ratio (SNR). Scalable encoding, and particularly spatial scalable encoding, will hereinafter be described in detail.
[129] Spatial scalable encoding is characterized by dividing an image into a number of layers, compressing the layers, extracting some of the layers that can be restored during a decoding operation, and restoring an image having an appropriate definition based on the extracted layers.
[130] FIG. 19 illustrates a diagram showing how to perform spatial scalable coding on a UHD image. Referring to FIG. 19, a UHD image may be divided into two layers: an enhancement layer and a base layer. The enhancement layer may be divided into four slice groups. The base layer may not include any slice group or may include only one slice group. The definition of the base layer, like that of each slice group of the enhancement layer, may be about 1/4 of the definition of the UHD image.
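One simple way to obtain such a quarter-definition base layer is 2x2 block averaging, shown here only as an illustrative sketch; the excerpt does not prescribe any particular downsampling filter.

```python
def downsample_2x(image):
    """Produce a quarter-definition image (half width, half height) from a
    2-D list of pixel values by averaging each 2x2 block."""
    h, w = len(image), len(image[0])
    return [[(image[2 * y][2 * x] + image[2 * y][2 * x + 1] +
              image[2 * y + 1][2 * x] + image[2 * y + 1][2 * x + 1]) // 4
             for x in range(w // 2)]
            for y in range(h // 2)]
```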
[131] Referring to FIG. 18, each of the layers of the UHD image may be divided into one or more partition images. For example, an enhancement layer may be divided into four partition images, and a base layer may be divided into one partition image.
[132] Thereafter, the partition images of each of the layers of the UHD image may be respectively mapped to a plurality of slice groups (S1820). Thereafter, each of the slice groups may be encoded (S1830). The mapping of the partition images of each of the layers of the UHD image to slice groups and the encoding of the slice groups have already been described above with reference to FIG. 11, and thus, detailed descriptions thereof will be omitted.
[133] In this manner, it is possible to restore an image having an appropriate definition by performing scalable encoding in units of slice groups and extracting only a number of layers that can be restored from a UHD image.
[134] FIG. 20 illustrates a flowchart of a method of encoding an image according to another exemplary embodiment of the present invention, and FIG. 21 illustrates a diagram showing syntax that can be applied to the method shown in FIG. 20. The method shown in FIG. 20 is almost the same as the method shown in FIG. 18 except that each of a plurality of partition images obtained by dividing a UHD image has an expanded image added thereto.
[135] More specifically, referring to FIG. 20, an image may be divided into a number of layers, each of the layers may be divided into one or more partition images, and an expanded image may be added to each of the partition images of each of the layers (S2010). The expanded image may include at least parts of some of the partition images and may be disposed along the boundaries between the partition images of each of the layers. The expanded image may include one or more macroblocks, and the number of macroblocks included in the expanded image may vary according to the color format of the original image.
[136] The addition of the expanded image to each of the partition images of each of the layers may be performed in consideration of whether it is necessary to perform deblocking filtering.
[137] Thereafter, the partition images of each of the layers may be respectively mapped to a plurality of slice groups (S2020). If the same expanded image is added to each of the partition images of each of the layers, the slice groups may have the same expanded image in common. That is, the slice groups may be allowed to have an overlap therebetween and may thus be able to be processed independently from one another.
[138] Thereafter, the slice groups may be encoded (S2030). The encoding of the slice groups may involve performing frequency conversion, quantization, entropy encoding, deblocking filtering and motion estimation and compensation. The encoding of each of the slice groups may also involve encoding information indicating whether the slice groups have the same expanded image in common. The information indicating whether the slice groups have the same expanded image in common may be labeled as "slice_group_overlap_flag", as shown in FIG. 21.
[139] If the same expanded image is added to each of the partition images of each of the layers (S2030), cropping information necessary for cropping the expanded image may also be encoded. The cropping information may be labeled as "slice_cropping_top_left" or "slice_cropping_bottom_right".
[140] Referring to FIG. 21, "slice_group_overlap_flag", "slice_cropping_top_left" and
"slice_cropping_bottom_right" may be added to "picture_layer_svc_extension" as new syntax elements or may be added to an H.264/AVC picture layer as new syntax elements.
[141] The cropping information may indicate the boundaries of a partition image excluding an expanded image. In this case, "left_offset", "right_offset", "top_offset", and "bottom_offset" may be used as the cropping information, as described above.
[142] In a scalable video coding (SVC) architecture, a plurality of slice groups of an upper layer may correspond to and reference a plurality of slice groups of a lower layer. In this case, information indicating the correspondence between the slice groups of the upper layer and the slice groups of the lower layer may be additionally provided in order for the slice groups of the upper layer to efficiently use the slice groups of the lower layer. That is, information regarding the hierarchy among a plurality of slice groups of each layer may be additionally provided, and may thus be used to restore an image.
[143] Referring to FIG. 21, information (i.e., "no_inter_layer_pred_flag") indicating whether a predetermined layer references at least one slice group of its underlying layer may be encoded. If "no_inter_layer_pred_flag" has a value of 1, it may be determined that the predetermined layer does not reference any slice group of its underlying layer. On the other hand, if "no_inter_layer_pred_flag" has a value of 0, it may be determined that the predetermined layer references at least one slice group of its underlying layer. In this case, information (i.e., "lower_layer_slice_group_id") indicating the slice group of the underlying layer of the predetermined layer referenced by the predetermined layer may be encoded.
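The decode-side interpretation of these two syntax elements can be sketched as below; the function name and the use of None to mean "no inter-layer prediction" are assumptions for illustration.

```python
def referenced_slice_group(no_inter_layer_pred_flag, lower_layer_slice_group_id=None):
    """Return the lower-layer slice group id that the current layer references,
    or None if the layer is coded without inter-layer prediction."""
    if no_inter_layer_pred_flag == 1:
        return None  # layer is self-contained; no lower-layer reference
    if lower_layer_slice_group_id is None:
        raise ValueError("lower_layer_slice_group_id is required when "
                         "inter-layer prediction is used")
    return lower_layer_slice_group_id
```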
[144] FIG. 22 illustrates a flowchart of a method of decoding an image according to an exemplary embodiment of the present invention. Referring to FIG. 22, a plurality of encoded slice groups may be extracted from an input bitstream (S2210). The input bitstream may be a bitstream obtained by the method shown in FIG. 2 and may include the encoded slice groups.
[145] More specifically, as described above with reference to FIGS. 6 through 8, the extraction of the encoded slice groups may be performed with reference to a plurality of parameter sets (such as SPS#0 through SPS#4 and PPS#0 through PPS#4) respectively corresponding to the encoded slice groups and information (i.e., "reserved_zero_4bits" of FIG. 10) indicating whether each of the parameter sets is a synthesized parameter set for all the encoded slice groups or an individual parameter set exclusively for a corresponding encoded slice group. For this, the parameter sets respectively corresponding to the encoded slice groups and the information "reserved_zero_4bits" may be decoded first. Thereafter, the encoded slice groups may be extracted from the input bitstream. Thereafter, the extracted slice groups may be decoded (S2220).
[146] The decoding of the extracted slice groups may involve performing entropy decoding, inverse quantization, inverse transform, deblocking filtering, and motion compensation and decoding slice group position information such as "top_left" and "bottom_right" of FIG. 5. Inverse quantization and inverse transform may be performed as a single process.
[147] Thereafter, a plurality of images respectively mapped to the decoded slice groups may be extracted (S2230). For example, referring to FIG. 3, the extraction of the images respectively mapped to the decoded slice groups may be performed by using MBA0T, MBA1T, MBA2T, MBA3T, MBA0B, MBA1B, MBA2B, and MBA3B of FIG. 3, and referencing decoded slice group position information (e.g., "top_left" and "bottom_right").
[148] The extracted images may be synthesized into a single image. The extracted images may be separate images having different contents, as shown in FIG. 3(a). Alternatively, the extracted images may be partition images of the same image and may represent different viewpoints, as shown in FIG. 3(b). A plurality of partition images of an image may be numbered, and this will be described later in detail with reference to FIG. 24.
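Synthesizing the extracted images into a single picture can be sketched as a row-major stitch of equally sized partitions; 2-D lists stand in for decoded pixel planes, and the grid layout and function name are assumptions for illustration.

```python
def synthesize(partitions, grid_w, grid_h):
    """Stitch grid_h x grid_w equally sized partition images (2-D lists of
    pixel values, ordered by slice-group number, row-major) into one image."""
    ph, pw = len(partitions[0]), len(partitions[0][0])
    out = [[None] * (pw * grid_w) for _ in range(ph * grid_h)]
    for idx, part in enumerate(partitions):
        gy, gx = divmod(idx, grid_w)  # slice-group number -> grid position
        for y in range(ph):
            for x in range(pw):
                out[gy * ph + y][gx * pw + x] = part[y][x]
    return out
```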
[149] FIG. 23 illustrates a block diagram of an apparatus 2300 for displaying an image according to an exemplary embodiment of the present invention, and FIG. 24 illustrates a diagram for explaining the operation of the apparatus 2300.
[150] Referring to FIG. 23, the apparatus 2300 may include a tuner 2310, a decoder 2320, an image signal processing unit 2330 and a display unit 2340.
[151] The tuner 2310 may receive an input bitstream. The input bitstream may be a bitstream obtained by the method shown in FIG. 2. That is, the input bitstream may be a bitstream obtained by respectively mapping a plurality of images to a plurality of slice groups and encoding the slice groups.
[152] The decoder 2320 may extract a plurality of slice groups from the input bitstream received by the tuner 2310, may decode the extracted slice groups and may extract a plurality of images respectively mapped to the decoded slice groups.
[153] More specifically, as described above with reference to FIGS. 6 through 8, the extraction of a plurality of slice groups from the input bitstream may be performed with reference to a plurality of parameter sets (such as SPS#0 through SPS#4 and PPS#0 through PPS#4) respectively corresponding to the slice groups and information (i.e., "reserved_zero_4bits" of FIG. 10) indicating whether each of the parameter sets is a synthesized parameter set for all the slice groups or an individual parameter set exclusively for a corresponding slice group. For this, the decoder 2320 may decode the parameter sets respectively corresponding to the slice groups and the information "reserved_zero_4bits". Thereafter, the slice groups may be extracted from the input bitstream and decoded.
[154] The decoding of the extracted slice groups may involve performing entropy decoding, inverse quantization, inverse transform, deblocking filtering, and motion compensation and decoding slice group position information such as "top_left" and "bottom_right" of FIG. 5. Inverse quantization and inverse transform may be performed as a single process.
[155] For example, the extraction of the images respectively mapped to the decoded slice groups may be performed by using MBA0T, MBA1T, MBA2T, MBA3T, MBA0B, MBA1B, MBA2B, and MBA3B of FIG. 3 and referencing decoded slice group position information such as "top_left" and "bottom_right".
[156] The operation of the decoder 2320 will hereinafter be described in further detail. The decoder 2320 may receive a plurality of SPSs and a plurality of PPSs of the input bitstream. If the received SPSs do not use "reserved_zero_4bits", the values of "pic_width_in_mbs_minus1" and "pic_height_in_map_units_minus1" of each of the received SPSs may be compared with the dimensions of the UHD image, and thus, it may be determined whether each of the received SPSs is a synthesized parameter set for a UHD image or an individual parameter set for a partition image of a UHD image based on the results of the comparison. On the other hand, if the received SPSs use "reserved_zero_4bits", the codeword of "reserved_zero_4bits" may be interpreted, and thus, it may be determined whether each of the received SPSs is a synthesized parameter set for a UHD image or an individual parameter set for a partition image of a UHD image based on the results of the interpretation.
[157] If a plurality of partition images reference their respective individual parameter sets, the decoder 2320 may decode the partition images using the individual parameter sets of the partition images. Thereafter, a UHD image may be restored by synthesizing the decoded partition images using a number of synthesized parameter sets for a UHD image. Thereafter, the restored UHD image may be displayed on a screen. On the other hand, if the partition images reference the synthesized parameter sets, a UHD image may be restored by synthesizing the partition images with reference to slice group position information included in one of the synthesized parameter sets, and the restored UHD image may be displayed on a screen.
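The parameter-set classification described above can be sketched as follows. The field names follow the text, but the codeword interpretation table is a hypothetical stand-in, since this excerpt does not define the "reserved_zero_4bits" codeword values.

```python
def classify_sps(sps, uhd_size_mb, codeword_meaning=None):
    """Decide whether an SPS (a dict of syntax-element values) is a
    synthesized parameter set for the whole UHD image or an individual
    parameter set for one partition image."""
    code = sps.get("reserved_zero_4bits")
    if code is not None and codeword_meaning is not None:
        # Codeword present: interpret it via the (assumed) lookup table.
        return codeword_meaning[code]
    # Codeword absent: compare the coded picture size against the UHD size.
    w = sps["pic_width_in_mbs_minus1"] + 1
    h = sps["pic_height_in_map_units_minus1"] + 1
    return "synthesized" if (w, h) == uhd_size_mb else "individual"
```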
[158] The decoder 2320 may decode an image using various decoding methods, other than an H.264/AVC decoding method, such as a Moving Picture Experts Group (MPEG) decoding method and VC-1, which is the Society of Motion Picture and Television Engineers (SMPTE) video codec standard.
[159] The image signal processing unit 2330 may perform various signal processing operations on an image so that the image can be properly displayed by the display unit 2340. More specifically, the image signal processing unit 2330 may synthesize a plurality of images provided by the decoder 2320 into a single image.
[160] The image signal processing unit 2330 may include a scaler (not shown), an on-screen display (OSD, not shown), a memory (not shown), an image transmitter (not shown) and a controller (not shown).
[161] The scaler may increase or decrease the size of an image so as to be compatible with the size of the screen of the display unit 2340. The OSD may control graphic data, text data and various menus to be displayed on the screen of the display unit 2340. The memory may store various reference pictures for use in a decoding operation performed by the decoder 2320 or may store graphic data or text data to be displayed on the screen of the display unit 2340. The image transmitter may process the graphic data or the text data present in the memory and may transmit an image having the processed graphic data or the processed text data attached thereto to the display unit 2340. The controller may control the operation of the image signal processing unit 2330. The controller may also control the operation of the decoder 2320.
[162] The display unit 2340 may display an image provided by the image signal processing unit 2330. The image provided by the image signal processing unit 2330 may be a UHD image including a plurality of independent images such as the UHD image 320 of FIG. 3(a).
[163] A plurality of partition images of a UHD image may all be displayed together, as shown in FIG. 23. Alternatively, only some of the partition images may be displayed, as shown in FIG. 24. For this, the partition images may be numbered by the OSD of the image signal processing unit 2330 of FIG. 23, as indicated by reference numeral 2410 of FIG. 24, so as to be able to be easily differentiated from one another.
[164] If one of the partition images is chosen based on the serial numbers of the partition images, the image signal processing unit 2330 may control the chosen partition image to be displayed by the display unit 2340.
[165] FIG. 24 illustrates the case in which one of four partition images is chosen and is then displayed at a resolution compatible with the display unit 2340, but the present invention is not restricted to this. That is, the display unit 2340 may display more than one partition image at the same time at various resolutions.
[166] If some of a plurality of images are chosen to be displayed, the decoder 2320 may not decode the slice groups respectively corresponding to the non-chosen images. For this, the image signal processing unit 2330 may transmit a control signal for enabling the decoder 2320 to perform selective decoding. In this manner, it is possible to reduce power consumption by skipping unnecessary decoding procedures.
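The selective-decoding control can be sketched as a simple mapping from the user-chosen images to the slice groups that actually need decoding; the function name and the dict-based mapping are assumptions for illustration.

```python
def slice_groups_to_decode(chosen_images, image_to_group):
    """Return the sorted slice-group ids that must be decoded to display the
    chosen partition images; all other slice groups can be skipped."""
    return sorted({image_to_group[i] for i in chosen_images})
```

For example, if only partition images 1 and 3 are chosen, only slice groups 1 and 3 are decoded, and the remaining groups are skipped to save power.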
[167] FIG. 25 illustrates a block diagram of an apparatus 2500 for displaying an image according to another exemplary embodiment of the present invention. The exemplary embodiment of FIG. 25 is almost the same as the exemplary embodiment of FIG. 23 except that a plurality of images provided by an image signal processing unit 2530 are respectively displayed by a plurality of display units 2540-1 through 2540-4.
[168] In the exemplary embodiment of FIG. 25, like in the exemplary embodiment of FIG. 23, only some of a plurality of partition images of a UHD image may be chosen by a user, and only the chosen images may be displayed by a display unit.
[169] Referring to FIG. 23, the image signal processing unit 2330 may be connected to the display unit 2340 through a wired or wireless interface. Likewise, referring to FIG. 25, the image signal processing unit 2530 may be connected to each of the display units 2540-1 through 2540-4 through a wired or wireless interface. By using a single UHD video stream, it is possible to configure on-demand multiple video systems. That is, it is possible to enable a set-top box to properly process a plurality of UHD images and thus to serve as a home media server without a requirement of a plurality of tuners or a plurality of decoders.
[170] FIG. 26 illustrates a flowchart of a method of decoding an image according to another exemplary embodiment of the present invention. The method shown in FIG. 26 corresponds to the method shown in FIG. 11. Thus, the method shown in FIG. 26 will hereinafter be described, focusing mainly on differences from the method shown in FIG. 11.
[171] Referring to FIG. 26, a plurality of encoded slice groups may be extracted from an input bitstream (S2610). The input bitstream may be a bitstream obtained by the method shown in FIG. 11 and may include the encoded slice groups.
[172] Thereafter, the extracted slice groups may be decoded (S2620). The decoding of the extracted slice groups may involve performing entropy decoding, inverse quantization, inverse transform, deblocking filtering, and motion compensation and decoding slice group position information such as "top_left" and "bottom_right" of FIG. 13. Inverse quantization and inverse transform may be performed as a single process.
[173] Thereafter, a plurality of partition images respectively mapped to the decoded slice groups may be extracted (S2630). For example, referring to FIG. 12, MBA0T, MBA1T, MBA2T, MBA3T, MBA0B, MBA1B, MBA2B, and MBA3B may be extracted from the slice groups 0 through 3 with reference to slice group position information (e.g., "top_left" and "bottom_right") of the slice groups 0 through 3.
[174] Thereafter, the extracted partition images may be synthesized into a single image (S2640).
[175] FIG. 27 illustrates a flowchart of a method of decoding an image according to another exemplary embodiment of the present invention. The method shown in FIG. 27 corresponds to the method shown in FIG. 14, and is almost the same as the method shown in FIG. 26 except that an expanded image added to each of a plurality of partition images is cropped. Thus, the method shown in FIG. 27 will hereinafter be described, focusing mainly on differences from the method shown in FIG. 14 and the method shown in FIG. 26.
[176] Referring to FIG. 27, a plurality of slice groups may be extracted from an input bitstream (S2710). Thereafter, the extracted slice groups may be decoded (S2720).
[177] The decoding of the extracted slice groups may involve performing entropy decoding, inverse quantization, inverse transform, deblocking filtering, and motion compensation, decoding "top_left" and "bottom_right" of FIG. 13, decoding "slice_group_overlap_flag", "slice_cropping_top_left" and "slice_cropping_bottom_right" of FIG. 16, and decoding "left_offset", "right_offset", "top_offset", and "bottom_offset" of FIG. 17.
[178] Thereafter, an expanded image, if any, added to each of a plurality of partition images respectively mapped to the decoded slice groups may be cropped, and the partition images may be extracted (S2730).
[179] More specifically, it may be determined whether each of the partition images has an expanded image added thereto based on "slice_group_overlap_flag". If each of the partition images has an expanded image added thereto, the expanded image may be cropped with reference to cropping information such as "slice_cropping_top_left" corresponding to MBA0TC, MBA1TC, MBA2TC, and MBA3TC of FIG. 15 and "slice_cropping_bottom_right" corresponding to MBA0BC, MBA1BC, MBA2BC, and MBA3BC of FIG. 15. Alternatively, the expanded image may be cropped with reference to "left_offset", "right_offset", "top_offset", and "bottom_offset".
[180] Thereafter, the partition images may be extracted with reference to, for example, "top_left" and "bottom_right" and using MBA0T, MBA1T, MBA2T, MBA3T, MBA0B, MBA1B, MBA2B, and MBA3B of FIG. 15.
[181] Thereafter, the extracted partition images may be synthesized into a single image (S2740).
[182] FIG. 28 illustrates a flowchart of a method of decoding an image according to another exemplary embodiment of the present invention. The method shown in FIG. 28 corresponds to the method shown in FIG. 18 and is almost the same as the method shown in FIG. 26 except that the method shown in FIG. 28 is a scalable decoding method. Thus, the method shown in FIG. 28 will hereinafter be described, focusing mainly on differences from the method shown in FIG. 18 and the method shown in FIG. 26.
[183] Referring to FIG. 28, one or more encoded slice groups may be extracted from each of a number of layers of an input bitstream (S2810). More specifically, referring to FIG. 19, four encoded slice groups may be extracted from an enhancement layer, and one encoded slice group may be extracted from a base layer.
[184] Thereafter, the extracted slice groups may be decoded (S2820). Thereafter, a plurality of partition images respectively mapped to the decoded slice groups may be synthesized into a single image (S2840).
[185] FIG. 29 illustrates a flowchart of a method of decoding an image according to another exemplary embodiment of the present invention. The method shown in FIG. 29 corresponds to the method shown in FIG. 20 and is almost the same as the method shown in FIG. 28 except that an expanded image added to each of a plurality of partition images is cropped. Thus, the method shown in FIG. 29 will hereinafter be described, focusing mainly on differences from the method shown in FIG. 20 and the method shown in FIG. 28.
[186] Referring to FIG. 29, a number of slice groups may be extracted from each of a number of layers of an input bitstream (S2910). Thereafter, the extracted slice groups may be decoded (S2920).
[187] The decoding of the extracted slice groups may involve performing entropy decoding, inverse quantization, inverse transform, deblocking filtering and motion compensation, decoding "top_left" and "bottom_right" of FIG. 13, decoding "slice_group_overlap_flag", "slice_cropping_top_left", and
"slice_cropping_bottom_right" of FIG. 21 and decoding "no_inter_layer_pred_flag" and "lower_layer_slice_group_id" of FIG. 21.
[188] Thereafter, an expanded image added to each of a plurality of partition images respectively mapped to the decoded slice groups may be cropped, and then the partition images may be extracted (S2930).
[189] More specifically, it may be determined whether each of the partition images has an expanded image added thereto based on "slice_group_overlap_flag". If each of the partition images has an expanded image added thereto, the expanded image may be cropped with reference to cropping information such as "slice_cropping_top_left" corresponding to MBA0TC, MBA1TC, MBA2TC, and MBA3TC of FIG. 15 and "slice_cropping_bottom_right" corresponding to MBA0BC, MBA1BC, MBA2BC, and MBA3BC of FIG. 15.
[190] Thereafter, the partition images may be extracted with reference to, for example, "top_left" and "bottom_right" and using MBA0T, MBA1T, MBA2T, MBA3T, MBA0B, MBA1B, MBA2B, and MBA3B of FIG. 15.
[191] Alternatively, if there is a layer referencing at least one slice group of its underlying layer, the partition images may be extracted with reference to a number of slice groups indicated by "no_inter_layer_pred_flag" and "lower_layer_slice_group_id".
[192] Thereafter, the extracted partition images may be synthesized into a single image (S2940).
[193] The present invention can be realized as computer-readable code written on a computer-readable recording medium. The computer-readable recording medium may be any type of recording device in which data is stored in a computer-readable manner. Examples of the computer-readable recording medium include a ROM, a RAM, a CD-ROM, a magnetic tape, a floppy disc, an optical data storage, and a carrier wave (e.g., data transmission through the Internet). The computer-readable recording medium can be distributed over a plurality of computer systems connected to a network so that computer-readable code is written thereto and executed therefrom in a decentralized manner. Functional programs, code, and code segments needed for realizing the present invention can be easily construed by one of ordinary skill in the art.
[194] While the present invention has been particularly shown and described with reference to exemplary embodiments thereof, it will be understood by those of ordinary skill in the art that various changes in form and details may be made therein without departing from the spirit and scope of the present invention as defined by the following claims.
Industrial Applicability
[196] According to the present invention, it is possible to effectively encode or decode a plurality of images respectively mapped to a plurality of slice groups by encoding or decoding the slice groups.

Claims

[1] A method of encoding an image, the method comprising: respectively mapping a plurality of first images to a plurality of slice groups; and encoding the slice groups.
[2] The method of claim 1, wherein the encoding of the slice groups comprises encoding at least one parameter set corresponding to each of the slice groups.
[3] The method of claim 2, wherein the encoding of the slice groups further comprises encoding a plurality of individual parameter sets respectively corresponding to the slice groups.
[4] The method of claim 3, wherein the encoding of the slice groups further comprises encoding a number of synthesized parameter sets including information regarding all the slice groups.
[5] The method of claim 2, wherein the encoding of the slice groups further comprises encoding a number of common parameter sets corresponding to all the slice groups.
[6] The method of claim 2, wherein the encoding of the slice groups further comprises encoding information indicating whether the parameter set corresponding to each of the slice groups is a synthesized parameter set for all the slice groups or an individual parameter set for a corresponding slice group.
[7] The method of claim 2, wherein the encoding of the slice groups further comprises encoding slice group position information indicating the boundaries between the slice groups.
[8] The method of claim 1, further comprising dividing a second image into a plurality of partition images, wherein the first images are the partition images.
[9] The method of claim 8, wherein the dividing of the second image comprises adding an expanded image to each of the partition images, the expanded image including parts of some of the partition images.
[10] The method of claim 9, wherein the adding of the expanded image comprises adding the expanded image to each of the partition images in consideration of whether it is necessary to perform deblocking on the boundaries between the slice groups.
[11] The method of claim 9, wherein the encoding of the slice groups further comprises encoding information indicating whether the slice groups have the same expanded image in common.
[12] The method of claim 9, wherein the encoding of the slice groups further comprises encoding cropping information for cropping the expanded image.
[13] The method of claim 9, wherein the encoding of the slice groups further comprises encoding slice group position information indicating the boundaries between the slice groups.
[14] The method of claim 8, wherein the method is a scalable encoding method characterized by dividing the second image into a plurality of layers and encoding the second image in units of the layers, and the dividing of the second image, the mapping of the first images, and the encoding of the slice groups are performed for each of the layers.
[15] The method of claim 14, wherein the encoding of the slice groups further comprises encoding information indicating whether the layers reference at least one slice group of their respective underlying layers and encoding information regarding the slice groups referenced by the layers if the layers reference at least one slice group of their respective underlying layers.
[16] A method of decoding an image, the method comprising: extracting a plurality of encoded slice groups from an input bitstream; decoding the extracted slice groups; and extracting a plurality of first images respectively mapped to the decoded slice groups.
[17] The method of claim 16, wherein the decoding of the extracted slice groups comprises decoding one or more parameter sets corresponding to each of the extracted slice groups.
[18] The method of claim 17, wherein the decoding of the extracted slice groups further comprises decoding a plurality of individual parameter sets respectively corresponding to the extracted slice groups.
[19] The method of claim 18, wherein the decoding of the extracted slice groups further comprises decoding a number of synthesized parameter sets including information regarding all the extracted slice groups.
[20] The method of claim 17, wherein the decoding of the extracted slice groups further comprises decoding a number of common parameter sets corresponding to all the extracted slice groups.
[21] The method of claim 17, wherein the decoding of the extracted slice groups further comprises decoding information indicating whether the parameter set corresponding to each of the extracted slice groups is a synthesized parameter set for all the extracted slice groups or an individual parameter set for a corresponding extracted slice group.
[22] The method of claim 17, wherein the decoding of the extracted slice groups further comprises decoding slice group position information indicating the boundaries between the extracted slice groups.
[23] The method of claim 16, further comprising, if the extracted first images are partition images of a second image, synthesizing the extracted first images into a single image.
[24] The method of claim 23, wherein the extracting of the first images comprises cropping an expanded image added to each of the extracted first images, the expanded image including parts of some of the extracted first images.
[25] The method of claim 24, wherein the decoding of the extracted slice groups further comprises decoding information indicating whether the extracted slice groups have the same expanded image in common.
[26] The method of claim 24, wherein the decoding of the extracted slice groups further comprises decoding cropping information for cropping the expanded image.
[27] The method of claim 23, wherein the decoding of the extracted slice groups further comprises decoding slice group position information indicating the boundaries between the extracted slice groups.
[28] The method of claim 23, wherein the method is a scalable decoding method characterized by dividing the second image into a plurality of layers and decoding the second image in units of the layers, and the extracting of the encoded slice groups, the decoding of the extracted slice groups and the synthesizing of the extracted first images are performed for each of the layers.
[29] The method of claim 28, wherein the decoding of the extracted slice groups further comprises decoding information indicating whether the layers reference at least one slice group of their respective underlying layers and decoding information regarding the slice groups referenced by the layers if the layers reference at least one slice group of their respective underlying layers.
[30] An apparatus for displaying an image, the apparatus comprising: one or more display units displaying an image; a decoder extracting a plurality of slice groups from an input bitstream, decoding the extracted slice groups and extracting a plurality of first images respectively mapped to the decoded slice groups; and an image signal processing unit synthesizing the extracted first images into a single image.
[31] The apparatus of claim 30, wherein the image signal processing unit numbers the first images synthesized into the single image.
[32] The apparatus of claim 30, wherein, if one of the first images synthesized into the single image is chosen, the image signal processing unit controls the chosen first image to be displayed by the display units.
[33] The apparatus of claim 31, wherein the image signal processing unit controls only the slice group corresponding to the chosen first image to be decoded by the decoder.
[34] The apparatus of claim 30, wherein the first images synthesized into the single image are respectively displayed by the display units.
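The decoding flow recited in claims 16, 23, and 24 — extracting slice groups from a bitstream, decoding them, cropping the expanded image from each partition image, and synthesizing the partition images into a single image — can be illustrated with a minimal sketch. All names, the data layout, and the toy "codec" below are illustrative assumptions for exposition only, not the actual bitstream format claimed in this application.

```python
# Hypothetical sketch of claims 16, 23 and 24: extract slice groups,
# decode each one, crop its expanded (overlap) region, and synthesize
# the partition images back into one image. Illustrative only.

from dataclasses import dataclass
from typing import List

@dataclass
class SliceGroup:
    position: int              # slice group position information (claim 22)
    expanded_cols: int         # width of the expanded image to crop (claim 24)
    payload: List[List[int]]   # "encoded" pixel rows of the partition image

def extract_slice_groups(bitstream: List[SliceGroup]) -> List[SliceGroup]:
    # Claim 16: extract the encoded slice groups, ordered by their
    # slice group position information.
    return sorted(bitstream, key=lambda g: g.position)

def decode_and_crop(group: SliceGroup) -> List[List[int]]:
    # Claims 16 and 24: "decode" the slice group, then crop the expanded
    # columns that duplicate parts of a neighbouring partition image.
    width = len(group.payload[0]) - group.expanded_cols
    return [row[:width] for row in group.payload]

def synthesize(first_images: List[List[List[int]]]) -> List[List[int]]:
    # Claim 23: synthesize the extracted partition images, placed side
    # by side, into a single image.
    return [sum((img[r] for img in first_images), [])
            for r in range(len(first_images[0]))]

# Two 2x3 slice groups; each carries one expanded column (value 9)
# on its right edge that must be cropped before synthesis.
stream = [
    SliceGroup(position=1, expanded_cols=1, payload=[[4, 5, 9], [6, 7, 9]]),
    SliceGroup(position=0, expanded_cols=1, payload=[[0, 1, 9], [2, 3, 9]]),
]
groups = extract_slice_groups(stream)
image = synthesize([decode_and_crop(g) for g in groups])
print(image)  # [[0, 1, 4, 5], [2, 3, 6, 7]]
```

The sketch mirrors the claim structure rather than any standard codec: a real H.264/AVC-style implementation would carry the position and cropping parameters in parameter sets (claims 17–22) instead of alongside the pixel data.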
PCT/KR2008/005453 2008-05-08 2008-09-16 Method for encoding and decoding image, and apparatus for displaying image WO2009136681A1 (en)

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
KR10-2008-0042837 2008-05-08
KR1020080042837A KR100988622B1 (en) 2008-05-08 2008-05-08 Method for encoding and decoding image, apparatus for displaying image and recording medium thereof
KR10-2008-0042836 2008-05-08
KR1020080042836A KR100951465B1 (en) 2008-05-08 2008-05-08 Method for encoding and decoding image, and recording medium thereof

Publications (1)

Publication Number Publication Date
WO2009136681A1 true WO2009136681A1 (en) 2009-11-12

Family

ID=41264722

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/KR2008/005453 WO2009136681A1 (en) 2008-05-08 2008-09-16 Method for encoding and decoding image, and apparatus for displaying image

Country Status (1)

Country Link
WO (1) WO2009136681A1 (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2013113997A1 (en) * 2012-02-01 2013-08-08 Nokia Corporation Method and apparatus for video coding
ITTO20120901A1 (en) * 2012-10-15 2014-04-16 Rai Radiotelevisione Italiana PROCEDURE FOR CODING AND DECODING A DIGITAL VIDEO AND ITS CODIFICATION AND DECODING DEVICES
RU2663336C2 (en) * 2013-02-22 2018-08-03 Томсон Лайсенсинг Methods of coding and decoding picture block, corresponding devices and data stream
US10701373B2 (en) 2013-02-22 2020-06-30 Interdigital Vc Holdings, Inc. Coding and decoding methods of a picture block, corresponding devices and data stream

Citations (4)

Publication number Priority date Publication date Assignee Title
US20020136306A1 (en) * 2001-03-20 2002-09-26 Per Frojdh Run-length coding of non-coded macroblocks
US20070150786A1 (en) * 2005-12-12 2007-06-28 Thomson Licensing Method for coding, method for decoding, device for coding and device for decoding video data
US20070230567A1 (en) * 2006-03-28 2007-10-04 Nokia Corporation Slice groups and data partitioning in scalable video coding
US20080025412A1 (en) * 2006-07-28 2008-01-31 Mediatek Inc. Method and apparatus for processing video stream


Cited By (17)

Publication number Priority date Publication date Assignee Title
US9479775B2 (en) 2012-02-01 2016-10-25 Nokia Technologies Oy Method and apparatus for video coding
US10397610B2 (en) 2012-02-01 2019-08-27 Nokia Technologies Oy Method and apparatus for video coding
WO2013113997A1 (en) * 2012-02-01 2013-08-08 Nokia Corporation Method and apparatus for video coding
CN104205819A (en) * 2012-02-01 2014-12-10 诺基亚公司 Method and apparatus for video coding
CN104205819B (en) * 2012-02-01 2017-06-30 诺基亚技术有限公司 Method for video encoding and device
US9961324B2 (en) 2012-10-15 2018-05-01 Rai Radiotelevisione Italiana S.P.A. Method for coding and decoding a digital video, and related coding and decoding devices
TWI555383B (en) * 2012-10-15 2016-10-21 義大利廣播電視公司 Method for coding and decoding a digital video, and related coding and decoding devices
CN104813657A (en) * 2012-10-15 2015-07-29 Rai意大利无线电视股份有限公司 Method for coding and decoding a digital video, and related coding and decoding devices
WO2014060937A1 (en) * 2012-10-15 2014-04-24 Rai Radiotelevisione Italiana S.P.A. Method for coding and decoding a digital video, and related coding and decoding devices
CN104813657B (en) * 2012-10-15 2018-12-21 Rai意大利无线电视股份有限公司 For by the method for digital video encoding and decoding and correlative coding and decoding device
ITTO20120901A1 (en) * 2012-10-15 2014-04-16 Rai Radiotelevisione Italiana PROCEDURE FOR CODING AND DECODING A DIGITAL VIDEO AND ITS CODIFICATION AND DECODING DEVICES
USRE49786E1 (en) 2012-10-15 2024-01-02 Rai Radiotelevisione Italiana S.P.A. Method for coding and decoding a digital video, and related coding and decoding devices
RU2663336C2 (en) * 2013-02-22 2018-08-03 Томсон Лайсенсинг Methods of coding and decoding picture block, corresponding devices and data stream
US10701373B2 (en) 2013-02-22 2020-06-30 Interdigital Vc Holdings, Inc. Coding and decoding methods of a picture block, corresponding devices and data stream
US11558629B2 (en) 2013-02-22 2023-01-17 Interdigital Vc Holdings, Inc. Coding and decoding methods of a picture block, corresponding devices and data stream
US11750830B2 (en) 2013-02-22 2023-09-05 Interdigital Vc Holdings, Inc. Coding and decoding methods of a picture block, corresponding devices and data stream
US12052432B2 (en) 2013-02-22 2024-07-30 Interdigital Vc Holdings, Inc. Coding and decoding methods of a picture block, corresponding devices and data stream

Similar Documents

Publication Publication Date Title
US12096029B2 (en) Coding and decoding of interleaved image data
US9420310B2 (en) Frame packing for video coding
KR102510010B1 (en) Tiling in video encoding and decoding
EP2907308B1 (en) Providing a common set of parameters for sub-layers of coded video
KR20120026026A (en) Broadcast receiver and 3d video data processing method thereof
WO2009104850A1 (en) Method for encoding and decoding image, and apparatus for encoding and decoding image
WO2009136681A1 (en) Method for encoding and decoding image, and apparatus for displaying image
CN116057931A (en) Image encoding apparatus and method based on sub-bitstream extraction for scalability
CN116134823A (en) Image encoding apparatus and method based on multiple layers
KR102139532B1 (en) An apparatus of transmitting/receiving a video stream and a method of transmitting/receiving the video stream thereof
CN116134821A (en) Method and apparatus for processing high level syntax in an image/video coding system
KR100988622B1 (en) Method for encoding and decoding image, apparatus for displaying image and recording medium thereof
CN116057932A (en) Image coding apparatus and method based on layer information signaling
US20130021440A1 (en) Data codec method and device for three dimensional broadcasting
KR20150086801A (en) Methods and apparatuses for transmitting and receiving additional video data for improvement of image quality

Legal Events

Date Code Title Description
121 EP: the EPO has been informed by WIPO that EP was designated in this application

Ref document number: 08811946

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 EP: PCT application non-entry in European phase

Ref document number: 08811946

Country of ref document: EP

Kind code of ref document: A1