US20140015927A1 - Method and apparatus for transmitting digital broadcasting stream using linking information about multi-view video stream, and method and apparatus for receiving the same - Google Patents

Method and apparatus for transmitting digital broadcasting stream using linking information about multi-view video stream, and method and apparatus for receiving the same Download PDF

Info

Publication number
US20140015927A1
Authority
US
United States
Prior art keywords
video
information
view
stream
view video
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
US14/028,190
Inventor
Yong-Tae Kim
Jae-Jun Lee
Yong-seok JANG
Kil-soo Jung
Hong-seok PARK
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Samsung Electronics Co Ltd
Original Assignee
Samsung Electronics Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Samsung Electronics Co Ltd filed Critical Samsung Electronics Co Ltd
Priority to US14/028,190 priority Critical patent/US20140015927A1/en
Publication of US20140015927A1 publication Critical patent/US20140015927A1/en
Abandoned legal-status Critical Current

Classifications

    • H04N 13/0048
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 13/00: Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N 13/10: Processing, recording or transmission of stereoscopic or multi-view image signals
    • H04N 13/106: Processing image signals
    • H04N 13/139: Format conversion, e.g. of frame-rate or size
    • H04N 13/161: Encoding, multiplexing or demultiplexing different image signal components
    • H04N 13/194: Transmission of image signals
    • H04N 13/30: Image reproducers
    • H04N 13/356: Image reproducers having separate monoscopic and stereoscopic modes
    • H04N 19/00: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N 19/46: Embedding additional information in the video signal during the compression process
    • H04N 19/50: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
    • H04N 19/597: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding specially adapted for multi-view video sequence encoding
    • H04N 19/70: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals characterised by syntax aspects related to video coding, e.g. related to compression standards
    • H04N 2213/00: Details of stereoscopic systems
    • H04N 2213/003: Aspects relating to the "2D+depth" image format

Definitions

  • Apparatuses and methods consistent with exemplary embodiments relate to digital data transmitting and receiving for providing two-dimensional (2D) contents or three-dimensional (3D) contents.
  • a transmission terminal encodes uncompressed video data and uncompressed audio data into respective elementary streams (ESs), multiplexes the ESs to generate a TS, and transmits the TS via a channel.
  • MPEG Moving Picture Experts Group
  • ESs elementary streams
  • the TS includes program specification information (PSI) together with the ESs.
  • PSI program specification information
  • PMT information, which provides single-program information, describes a Packet Identifier (PID) for each ES
  • PAT information describes a PID for each PMT information.
  • a reception terminal receives a TS via a channel and extracts an ES from the TS through a process reverse to the process performed in a transmission terminal. Digital contents contained in the ES are restored and reproduced by a display device.
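  • As a minimal sketch of the PAT/PMT relationship described above, the following Python example models PAT information mapping program numbers to PMT PIDs and PMT information mapping a program's ESs to their PIDs; the class names, field names, and PID values are illustrative assumptions rather than part of the described apparatus.

```python
# Minimal sketch of the MPEG-2 PSI hierarchy described above.
# Class names, field names, and PID values are illustrative assumptions.
from dataclasses import dataclass, field
from typing import Dict


@dataclass
class ProgramMapTable:
    """PMT information: describes the PID of each ES of one program."""
    program_number: int
    es_pids: Dict[str, int] = field(default_factory=dict)  # stream role -> ES PID


@dataclass
class ProgramAssociationTable:
    """PAT information: describes the PID of the PMT of each program."""
    pmt_pids: Dict[int, int] = field(default_factory=dict)  # program_number -> PMT PID


# One program whose video and audio ESs are carried on PIDs 0x0100 and 0x0101.
pmt = ProgramMapTable(program_number=1,
                      es_pids={"video": 0x0100, "audio": 0x0101})
pat = ProgramAssociationTable(pmt_pids={pmt.program_number: 0x0020})
```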
  • a digital broadcasting stream transmitting method for providing 3D video services including: generating a plurality of ESs for a plurality of pieces of video information including at least one of information about a base-view video of a 3D video, information about an additional-view video corresponding to the base-view video, and a 2D video having a different view from views of the 3D video; generating at least one TS by multiplexing the plurality of ESs with link information for identifying at least one piece of video information linked with each of the plurality of pieces of video information; and transmitting the generated at least one TS via at least one channel.
  • the generating the at least one TS may include multiplexing each of the ESs to generate at least one TS, and the transmitting the at least one TS may include transmitting the at least one multi-program transport stream via different channels.
  • the generating the at least one TS may include multiplexing each of the ESs to generate at least one TS, and the transmitting the at least one TS may include transmitting the at least one TS via a single channel.
  • the generating the at least one TS may include multiplexing the ESs to generate a single TS, and the transmitting the at least one TS may include transmitting the single TS via a single channel.
  • a digital broadcasting stream receiving method for providing 3D video services including: receiving at least one TS for a plurality of pieces of video information including at least one of information about a base-view video of a 3D video, information about an additional-view video of the 3D video, and a 2D video having a different view from views of the 3D video, via at least one channel; demultiplexing the received at least one TS to extract, from the at least one TS, linking information for identifying at least one piece of video information linked with the plurality of pieces of video information and at least one ES for the plurality of pieces of video information; and decoding the extracted at least one ES to reproduce at least one of the 3D video and the 2D video restored by the decoding based on the linking information.
  • the receiving may include receiving a plurality of TSs by decoding each of the at least one channel and receiving a single TS from each channel, and the extracting may include extracting at least one ES by demultiplexing each of the at least one multi-program TS.
  • the receiving may include receiving a plurality of TSs via a channel from among the at least one channel by decoding the channel, and the extracting may include extracting the at least one ES by demultiplexing each of the at least one TS.
  • the receiving may include receiving a TS of a channel from among the at least one channel by decoding the channel, and the extracting may include extracting the at least one ES by demultiplexing the transport stream.
  • the linking information may include a link identifier that represents whether mutually linked pieces of video information exist in the plurality of pieces of video information included in the at least one TS, and a link descriptor that includes information about a link between mutually linked pieces of video information from among the plurality of pieces of video information included in the at least one TS.
  • the link identifier may be included in a program association table for the at least one TS.
  • the link descriptor may be included in a program map table for the ESs for the mutually linked pieces of video information.
  • a 3D video stream descriptor including additional information used to reproduce current video information of a current TS may be included in a program map table for the current video information.
  • a digital broadcasting stream transmitting apparatus for providing 3D video services, the apparatus including: an ES generation unit which generates a plurality of ESs for a plurality of pieces of video information including at least one of information about a base-view video of a 3D video, information about an additional-view video corresponding to the base-view video, and a 2D video having a different view from views of the 3D video; a TS generation unit which generates at least one TS by multiplexing the generated plurality of ESs with link information for identifying at least one piece of video information linked with the plurality of pieces of video information; and a transmission unit which transmits the at least one TS via at least one channel.
  • a digital broadcasting stream receiving apparatus for providing 3D video services, the apparatus including: a TS receiving unit which receives at least one TS for a plurality of pieces of video information including at least one of information about a base-view video of a 3D video, information about an additional-view video of the 3D video, and a 2D video having a different view from views of the 3D video, via at least one channel; an ES extraction unit which demultiplexes the received at least one TS to extract, from the at least one TS, linking information for identifying at least one piece of video information linked with the plurality of pieces of video information and at least one ES for the plurality of pieces of video information; and a reproduction unit which reproduces at least one of the 3D video data and 2D video data restored by decoding the extracted at least one ES based on a link represented by the linking information.
  • a computer-readable recording medium having recorded thereon a program for the above-described digital broadcasting stream transmitting method for providing 3D video services.
  • a computer-readable recording medium having recorded thereon a program for the above-described digital broadcasting stream receiving method for providing 3D video services.
  • FIG. 1 is a block diagram of a digital broadcasting stream transmitting apparatus according to an exemplary embodiment
  • FIG. 2 is a block diagram of a digital broadcasting stream receiving apparatus according to an exemplary embodiment
  • FIG. 3 is a block diagram of a digital TV transmitting system for 2D contents services according to an exemplary embodiment
  • FIG. 4 is a block diagram of a digital TV receiving system for 2D contents services according to an exemplary embodiment
  • FIG. 5 illustrates an example of a distribution of a channel frequency band in which a plurality of video streams are transmitted and received via a plurality of channels, according to a first exemplary embodiment
  • FIG. 6 is a block diagram of a digital broadcasting stream transmitting apparatus that transmits a 3D video stream including linking information via a plurality of channels according to the first exemplary embodiment illustrated in FIG. 5 ;
  • FIG. 7 illustrates an example of a distribution of a channel frequency band in which a plurality of TSs for a 3D video stream and a 2D video stream can be transmitted and received via a single channel, according to a second exemplary embodiment
  • FIG. 8 is a block diagram of a digital broadcasting stream transmitting apparatus that transmits a plurality of TSs having linking information via a single channel according to the second exemplary embodiment illustrated in FIG. 7 ;
  • FIG. 9 is a block diagram of a digital broadcasting stream transmitting apparatus that transmits a single TS including a 3D video stream and a 2D video stream via a single channel according to a third exemplary embodiment
  • FIG. 10 is a block diagram of a digital broadcasting stream receiving apparatus that receives a plurality of TSs via a plurality of channels corresponding to a plurality of transmission network systems according to a fourth exemplary embodiment
  • FIG. 11 illustrates an example of a distribution of a channel frequency band in which a plurality of TSs for a base-view video stream of a 3D video can be transmitted and received via a plurality of channels and in which a single TS for an additional-view video stream for the 3D video can be transmitted and received via a single channel, according to a fifth exemplary embodiment
  • FIG. 12 illustrates an example of a distribution of a channel frequency band in which a plurality of TSs for a multi-view video stream can be transmitted and received via a single channel, according to an exemplary embodiment
  • FIG. 13 illustrates an example in which down-scaling method information for 3D composite formats is used, according to an exemplary embodiment
  • FIGS. 14 , 15 , 16 , 17 , and 18 are schematic views of reproduction units of a digital broadcasting stream receiving apparatus according to exemplary embodiments
  • FIG. 19 is a flowchart of a digital broadcasting stream transmitting method capable of providing 3D video services, according to an exemplary embodiment.
  • FIG. 20 is a flowchart of a digital broadcasting stream receiving method capable of providing 3D video services, according to an exemplary embodiment.
  • a “unit” as used herein may be embodied as a hardware component and/or a software component that is executed by a computer or a hardware processor.
  • a digital broadcasting stream transmitting apparatus and method capable of providing 3D video services and a digital broadcasting stream receiving apparatus and method capable of providing 3D video services according to exemplary embodiments will now be described in detail with reference to FIGS. 1 through 20 .
  • FIG. 1 is a block diagram of a digital broadcasting stream transmitting apparatus 100 according to an exemplary embodiment.
  • the digital broadcasting stream transmitting apparatus 100 includes an elementary stream (ES) generation unit 110 , a transport stream (TS) generation unit 120 , and a transmission unit 130 .
  • ES elementary stream
  • TS transport stream
  • the ES generation unit 110 generates a plurality of ESs for a plurality of pieces of video information including at least one of information about a base-view video of a 3D video, information about an additional-view video of the 3D video, and a two-dimensional (2D) video.
  • a single ES may be generated for a single piece of video information.
  • the 3D video may be a combination of a single base-view video and a single additional-view video corresponding to the single base-view video, such as a stereo video, or a combination of a single base-view video and a plurality of additional-view videos corresponding to the single base-view video.
  • the information about the additional-view video represents an additional-view video corresponding to a base-view video, and thus may be additional-view video data itself or at least one of depth information, disparity information, and the like relative to the base-view video data.
  • the 2D video may be not one of the views of the 3D video but rather a 2D video having a view separate from the views of the 3D video.
  • the TS generation unit 120 receives the plurality of ESs generated by the ES generation unit 110 .
  • the TS generation unit 120 multiplexes linking information between the plurality of pieces of video information with the plurality of ESs to generate at least one TS for the plurality of ESs.
  • the transmission unit 130 transmits the at least one TS via at least one channel.
  • the linking information includes identifiers for indicating at least one video information piece associated with each of the plurality of video information pieces.
  • the TS generation unit 120 may generate at least one TS by multiplexing each of the plurality of ESs, and the transmission unit 130 may transmit the at least one TS via different channels.
  • the TS generation unit 120 may generate at least one TS by individually multiplexing the plurality of ESs, and the transmission unit 130 may transmit the at least one TS via a single channel.
  • the TS generation unit 120 may generate a single TS by multiplexing the plurality of ESs, and the transmission unit 130 may transmit the single TS via a single channel.
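  • The three alternatives just listed can be pictured with the rough sketch below, which groups the same ESs into TSs and assigns them to channels under each strategy; the strategy names and data shapes are assumptions for illustration only.

```python
# Sketch of the three TS-generation/transmission alternatives described above.
# Strategy names and the return format are assumptions for illustration only.
from typing import List, Tuple


def plan_transmission(es_list: List[str], strategy: str) -> List[Tuple[str, List[str]]]:
    """Return (channel, ESs carried in one TS) pairs for the chosen strategy."""
    if strategy == "one_ts_per_es_separate_channels":   # TS per ES, different channels
        return [(f"channel_{i}", [es]) for i, es in enumerate(es_list)]
    if strategy == "one_ts_per_es_single_channel":      # TS per ES, one channel
        return [("channel_0", [es]) for es in es_list]
    if strategy == "single_ts_single_channel":          # one TS for all ESs, one channel
        return [("channel_0", list(es_list))]
    raise ValueError(f"unknown strategy: {strategy}")


es_list = ["base_view_es", "additional_view_es", "2d_es"]
for strategy in ("one_ts_per_es_separate_channels",
                 "one_ts_per_es_single_channel",
                 "single_ts_single_channel"):
    print(strategy, plan_transmission(es_list, strategy))
```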
  • the TS generation unit 120 may packetize the ESs individually to generate packetized elementary stream (PES) packets, and multiplex a program map table (hereinafter, referred to as PMT information) including the PES packets and linking information so as to generate a single-program transport stream (SP TS).
  • the TS generation unit 120 may generate an SP TS by multiplexing a video ES, an audio ES, additional data, and the like.
  • the PMT information may be generated for each SP.
  • the TS generation unit 120 may generate a multi-program transport stream (MP TS) by multiplexing at least one SP TS with a program association table (hereinafter, referred to as PAT information).
  • MP TS multi-program transport stream
  • PAT information a program association table
  • a number of MP TSs that is not greater than the total number of SP TSs may be generated by multiplexing the SP TSs in several groups.
  • the TS generation unit 120 may generate at least one SP TS by multiplexing at least one ES individually, and generate at least one MP TS by multiplexing the at least one SP TS individually.
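  • The grouping of SP TSs into MP TSs described in the preceding items (with no more MP TSs generated than there are SP TSs) can be sketched as follows; the container shapes are hypothetical and are not the TS generation unit 120 itself.

```python
# Sketch of grouping SP TSs into MP TSs (hypothetical container shapes).
from typing import Dict, List


def group_into_mp_ts(sp_ts_list: List[Dict],
                     groups: List[List[int]]) -> List[List[Dict]]:
    """Each inner index list selects the SP TSs multiplexed into one MP TS."""
    mp_ts_list = [[sp_ts_list[i] for i in indices] for indices in groups]
    # As noted above, no more MP TSs are generated than there are SP TSs.
    assert len(mp_ts_list) <= len(sp_ts_list)
    return mp_ts_list


sp_ts_list = [{"program": 1}, {"program": 2}, {"program": 3}]
one_sp_ts_per_mp_ts = group_into_mp_ts(sp_ts_list, [[0], [1], [2]])
all_sp_ts_in_one_mp_ts = group_into_mp_ts(sp_ts_list, [[0, 1, 2]])
```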
  • the transmission unit 130 may transmit the at least one MP TS generated by the TS generation unit 120 according to the first exemplary embodiment via channels based on the same transmission network system, respectively.
  • the TS generation unit 120 may generate at least one SP TS by multiplexing at least one ES individually, and generate a single MP TS by multiplexing the at least one SP TS together.
  • the transmission unit 130 may transmit the single MP TS generated by the TS generation unit 120 according to the second exemplary embodiment via a single channel.
  • the TS generation unit 120 may generate a single SP TS by multiplexing at least one ES together, and generate a single MP TS by multiplexing the single SP TS.
  • the transmission unit 130 may transmit the single MP TS generated by the TS generation unit 120 according to the third exemplary embodiment via a single channel.
  • the TS generation unit 120 may generate at least one SP TS and at least one MP TS from at least one ES as in the first exemplary embodiment.
  • the transmission unit 130 according to the fourth exemplary embodiment may transmit the at least one MP TS via channels based on individual kinds of transmission network systems, respectively.
  • the TS generation unit 120 may individually multiplex an ES for at least one base-view video from among at least one ES to generate at least one SP TS for the at least one base-view video. According to the fifth exemplary embodiment, the TS generation unit 120 may individually multiplex the at least one SP TS for the at least one base-view video to generate at least one base-view MP TS.
  • the TS generation unit 120 may individually multiplex an ES for at least one additional-view video corresponding to the at least one base-view video to generate at least one SP TS for the at least one additional-view video. According to the fifth exemplary embodiment, the TS generation unit 120 may multiplex the at least one SP TS for the at least one additional-view video together to generate a single additional-view MP TS.
  • the transmission unit 130 may transmit the at least one base-view MP TS and the single additional-view MP TS generated by the TS generation unit 120 according to the fifth exemplary embodiment via different channels, respectively.
  • the TS generation unit 120 may include linking information between a plurality of pieces of video information in the at least one TS by inserting the linking information into program specification information (PSI).
  • PSI program specification information
  • the linking information may be classified into a link identifier and a link descriptor.
  • the link identifier indicates whether associated pieces of video information exist in the plurality of pieces of video information included in the at least one TS.
  • the TS generation unit 120 may include the link identifier in the PAT information about the at least one TS. In this case, the link identifier may indicate whether pieces of PMT information identified by the PAT information are linked to each other.
  • the link descriptor may include information about a link between the associated pieces of video information existing in the plurality of pieces of video information included in the at least one TS.
  • the TS generation unit 120 may insert the link descriptor into a descriptor region of the PMT information.
  • the TS generation unit 120 may insert not only the linking information but also a 3D video authentication descriptor for representing whether a current ES is for a 3D video, into the PMT information.
  • a 3D video start descriptor including 3D video information start information representing a location where additional information about the 3D video starts to be inserted and a 3D video registration descriptor including format identifier information of the 3D video may be inserted into PMT information about the current ES.
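  • Since MPEG-2 PSI descriptors follow a tag/length/payload layout, a link descriptor of the kind described above could be serialized as in the sketch below; the tag value 0xB0 and the payload fields are purely hypothetical and are not the syntax defined by the tables referenced next.

```python
# Sketch of serializing a hypothetical link descriptor for the descriptor
# region of PMT information. Only the tag/length/payload framing follows the
# standard MPEG-2 descriptor layout; the tag value and payload fields are
# assumptions, not the syntax defined in the tables referenced below.
import struct


def build_link_descriptor(linked_program_number: int, linked_pmt_pid: int) -> bytes:
    payload = struct.pack(">HH", linked_program_number, linked_pmt_pid)
    tag = 0xB0  # hypothetical user-private descriptor tag
    return bytes([tag, len(payload)]) + payload


descriptor = build_link_descriptor(linked_program_number=2, linked_pmt_pid=0x0030)
assert descriptor[0] == 0xB0 and descriptor[1] == 4
```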
  • the link identifier will be described in greater detail below with reference to Table 1, the link descriptor will be described in greater detail below with reference to Tables 4, 5, and 6, and the 3D video authentication descriptor will be described in greater detail below with reference to Tables 2 and 3.
  • the TS generation unit 120 may insert not only the linking information but also a 3D video stream descriptor including additional information used for reproducing current video information of a current TS, into PMT information for the current video information.
  • the 3D video stream descriptor may include information about conversion of a 2D/3D reproduction mode that occurs on a current video stream.
  • the 3D video stream descriptor may include information about the views of a 3D video used to set view information individually, for example, for children and adults.
  • the 3D video stream descriptor will be described in greater detail below with reference to Tables 7 through 16.
  • the digital broadcasting stream transmitting apparatus 100 of FIG. 1 may insert 3D video information into different channels, different TSs, or different ESs to transmit the 3D video information.
  • for associated pieces of video information, linking information for identifying the locations into which their counterpart pieces of video information have been inserted may be set.
  • the linking information may be inserted into PMT information of TSs for the associated video information pieces and transmitted.
  • FIG. 2 is a block diagram of a digital broadcasting stream receiving apparatus 200 according to an exemplary embodiment.
  • the digital broadcasting stream receiving apparatus 200 includes a TS receiving unit 210 , an ES extraction unit 220 , and a reproduction unit 230 .
  • the TS receiving unit 210 receives at least one TS including a plurality of pieces of video information via at least one channel.
  • the received at least one TS may include at least one of information about a base-view video of a 3D video, information about an additional-view video corresponding to the base-view video, and a 2D video.
  • the 2D video may be video information of a view different from the views of the 3D video.
  • the information about the additional-view video corresponding to the base-view video may be additional-view video data itself or may be at least one of disparity information and depth information that allows the additional-view video data to be restored based on the base-view video data.
  • the ES extraction unit 220 receives the at least one TS from the TS receiving unit 210 and demultiplexes the TS to extract at least one ES for the plurality of video information pieces from the TS.
  • the ES extraction unit 220 also extracts linking information between video information pieces from the demultiplexed TS.
  • the TS receiving unit 210 may receive an MP TS via at least one channel. A single MP TS may be received via each channel.
  • the TS receiving unit 210 may receive a plurality of TSs by individually decoding a plurality of channels and receiving a single TS from each of the channels.
  • the ES extraction unit 220 may extract a plurality of ESs by demultiplexing a plurality of MP TSs individually.
  • the TS receiving unit 210 may receive a plurality of TSs by decoding a single channel, and the ES extraction unit 220 may extract a plurality of ESs by demultiplexing the plurality of TSs.
  • the TS receiving unit 210 may receive a single TS by decoding a single channel, and the ES extraction unit 220 may extract a plurality of ESs by demultiplexing the single TS.
  • the ES extraction unit 220 may demultiplex an MP TS to extract at least one SP TS together with PAT information from the demultiplexed MP TS.
  • the ES extraction unit 220 may demultiplex each SP TS to extract PES packets and PMT information.
  • the PMT information may include linking information about a link between a plurality of ESs included in the at least one SP TS.
  • the PES packets may be depacketized into the ESs.
  • Each ES may include base-view video information for a 3D video, additional-view video information for the 3D video, a 3D composite format of the base-view video information and the additional-view video information, or 2D video information.
  • the ES extraction unit 220 may also extract an audio ES from the demultiplexed SP TS.
  • since a single MP TS may be demultiplexed into at least one SP TS, the number of extracted SP TSs may be equal to or greater than the number of MP TSs.
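  • The extraction chain just described (MP TS to SP TSs via the PAT, SP TS to PES packets via the PMT, and PES packets to ESs) is summarized by the following sketch, which uses hypothetical container types rather than an actual demultiplexer.

```python
# Sketch of the receiver-side extraction chain described above
# (hypothetical container types; not an actual MPEG-2 demultiplexer).
from dataclasses import dataclass
from typing import Dict, List


@dataclass
class SingleProgramTS:
    pmt: Dict[str, int]       # PMT information; its descriptors may carry linking information
    pes_packets: List[bytes]  # PES packets of one program


@dataclass
class MultiProgramTS:
    pat: Dict[int, int]       # PAT information: program_number -> PMT PID
    programs: List[SingleProgramTS]


def depacketize(pes_packet: bytes) -> bytes:
    # Placeholder: a real depacketizer would strip the PES header here.
    return pes_packet


def extract_elementary_streams(mp_ts: MultiProgramTS) -> List[bytes]:
    elementary_streams = []
    for sp_ts in mp_ts.programs:           # MP TS -> SP TSs (guided by the PAT)
        for pes in sp_ts.pes_packets:      # SP TS -> PES packets (guided by the PMT)
            elementary_streams.append(depacketize(pes))  # PES -> ES
    return elementary_streams
```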
  • the TS receiving unit 210 and the ES extraction unit 220 respectively receive and demultiplex a TS generated and transmitted respectively by the TS generation unit 120 and the transmission unit 130 . Accordingly, operations of the TS receiving unit 210 and the ES extraction unit 220 to receive the at least one TS and extract an ES from the TS according to the aforementioned first through fifth exemplary embodiments are as follows.
  • the TS receiving unit 210 may receive at least one MP TS by decoding at least one channel individually and receiving a single MP TS from each of the channels.
  • the ES extraction unit 220 may demultiplex the at least one MP TS individually to extract a single SP TS from each of the MP TSs, and demultiplex each of the SP TSs to extract a single ES from each of the SP TSs.
  • the TS receiving unit 210 and the ES extraction unit 220 may finally extract ESs for 3D video information or 2D video information from at least one TS received for at least one channel.
  • the TS receiving unit 210 may decode a single channel to extract a single MP TS.
  • the ES extraction unit 220 may demultiplex the single MP TS to extract at least one SP TS, and demultiplex the at least one SP TS individually to extract at least one ES.
  • the TS receiving unit 210 and the ES extraction unit 220 according to the second exemplary embodiment may extract at least one SP TS via a single channel and thus extract ESs for 3D video information or 2D video information.
  • the TS receiving unit 210 may decode a single channel to extract a single MP TS.
  • the ES extraction unit 220 may demultiplex the single MP TS to extract a single SP TS, and demultiplex the single SP TS to extract at least one ES. Accordingly, the TS receiving unit 210 and the ES extraction unit 220 according to the third exemplary embodiment may finally extract at least one ES including 3D video information or 2D video information via a single channel.
  • the TS receiving unit 210 may individually decode channels based on individual kinds of transmission network systems and receive a single MP TS for each channel to thereby receive at least one MP TS.
  • the ES extraction unit 220 may demultiplex the at least one MP TS individually to extract a single SP TS from each of the at least one MP TS, and demultiplex each SP TS to extract a single ES from each of the SP TSs.
  • the TS receiving unit 210 and the ES extraction unit 220 according to the fourth exemplary embodiment may finally extract ESs for 3D video information or 2D video information from TSs respectively received via a plurality of channels based on individual kinds of transmission network systems.
  • the TS receiving unit 210 may decode a plurality of channels to receive at least one base-view MP TS for a 3D video from at least one of the plurality of channels and receive a single additional-view MP TS for the 3D video from one of the plurality of channels.
  • the ES extraction unit 220 may demultiplex the at least one base-view MP TS individually to extract at least one base-view SP TS, and demultiplex the at least one base-view SP TS individually to extract at least one base-view ES. According to the fifth exemplary embodiment, the ES extraction unit 220 may also demultiplex the single additional-view MP TS to extract at least one additional-view SP TS, and demultiplex the at least one additional-view SP TS individually to extract at least one additional-view ES.
  • the TS receiving unit 210 and the ES extraction unit 220 may finally extract a plurality of ESs for 3D video information or 2D video information via a plurality of channels.
  • the linking information extracted by the ES extraction unit 220 includes an identifier for indicating at least one piece of video information associated with each of a plurality of pieces of video information.
  • the linking information may be used by the reproduction unit 230 to search for other video data corresponding to predetermined video data and reproduce the video data while maintaining a link therebetween.
  • the ES extraction unit 220 may extract linking information between a plurality of pieces of video information from PSI of a TS.
  • the linking information may include a link identifier and a link descriptor.
  • the ES extraction unit 220 may extract the link identifier of the linking information from PAT information of the TS. In this case, the link identifier may represent whether pieces of PMT information identified by the PAT information are linked to each other.
  • the ES extraction unit 220 may extract the link descriptor of the linking information from a descriptor region of the PMT information.
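  • Because a descriptor region in PMT information is a loop of (tag, length, payload) entries, pulling out a link descriptor could look like the sketch below; the default tag 0xB0 matches the hypothetical value used in the transmitter-side sketch and is not taken from the tables referenced below.

```python
# Sketch of scanning a PMT descriptor loop for a hypothetical link descriptor
# (tag 0xB0, matching the transmitter-side sketch above; the real tag value is
# not specified here).
from typing import Iterator, Optional, Tuple


def iterate_descriptors(descriptor_loop: bytes) -> Iterator[Tuple[int, bytes]]:
    offset = 0
    while offset + 2 <= len(descriptor_loop):
        tag, length = descriptor_loop[offset], descriptor_loop[offset + 1]
        yield tag, descriptor_loop[offset + 2:offset + 2 + length]
        offset += 2 + length


def find_link_descriptor(descriptor_loop: bytes, link_tag: int = 0xB0) -> Optional[bytes]:
    for tag, payload in iterate_descriptors(descriptor_loop):
        if tag == link_tag:
            return payload
    return None
```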
  • the ES extraction unit 220 may extract not only the linking information but also a 3D video authentication descriptor from the PMT information. Whether 3D video information is included in a current ES may be checked from at least one of a 3D video start descriptor and a 3D video registration descriptor extracted as the 3D authentication descriptor.
  • the link identifier will be described in greater detail below with reference to Table 1, the link descriptor will be described in greater detail below with reference to Tables 4, 5, and 6, and the 3D video authentication descriptor will be described in greater detail below with reference to Tables 2 and 3.
  • the ES extraction unit 220 may extract not only the linking information but also a 3D video stream descriptor from PMT information for current video information.
  • the ES extraction unit 220 may extract, from the 3D video stream descriptor, information related to conversion of a 2D/3D reproduction mode that occurs on a current video stream, information related to the views of a 3D video used to set view information individually, for example, for children and adults, and other information.
  • the 3D video stream descriptor will be described in greater detail below with reference to Tables 7 through 16.
  • the reproduction unit 230 decodes the at least one ES extracted by the ES extraction unit 220 to restore 3D video data or 2D video data.
  • the reproduction unit 230 may reproduce at least one of the restored 3D video data and the restored 2D video data in consideration of a link represented by the linking information.
  • the reproduction unit 230 may restore base-view video data and additional-view video data of the 3D video from the ESs.
  • the reproduction unit 230 may search for and restore the base-view video data and the additional-view video data based on the linking information, and may convert and reproduce the base-view video data and the additional-view video data in a 3D reproduction format that can be reproduced by a 3D display device.
  • the reproduction unit 230 may restore the base-view video data and additional-view video data of the 3D video from the ESs.
  • the reproduction unit 230 may search for disparity information or depth information corresponding to 2D video data based on the linking information to restore the additional-view video data, and may convert and reproduce the additional-view video data into a 3D reproduction format that can be reproduced by a 3D display device.
  • an ES for a 3D video of a 3D composite format and an ES for a 2D video may be input to the reproduction unit 230 .
  • the reproduction unit 230 may restore video information of a 3D composite format into 3D video data based on the linking information and may convert the 3D video data into base-view video data and additional-view video data that may both be reproduced by a 3D display device. Since the reproduction unit 230 may also restore 3D video data, the reproduction unit 230 may selectively reproduce the base-view video data, the additional-view video data, and the 2D video data.
  • the reproduction unit 230 may restore corresponding items of 2D video data based on the linking information and selectively and independently reproduce the 2D video data items.
  • the reproduction unit 230 may selectively reproduce the corresponding 2D video data items according to the linking information in a Picture-in-Picture (PIP) mode.
  • PIP Picture-in-Picture
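  • The reproduction choices described above (3D reproduction of linked views, 2D reproduction of a single view, or PIP reproduction of linked 2D videos) can be illustrated with the following sketch; the function and mode names are assumptions, not part of the reproduction unit 230.

```python
# Sketch of selecting a reproduction path from linked streams
# (hypothetical function and mode names).
from typing import Dict, Optional


def choose_reproduction(mode: str,
                        base_view: bytes,
                        linked_streams: Dict[str, bytes]) -> Dict[str, Optional[bytes]]:
    if mode == "3d" and "additional_view" in linked_streams:
        # Combine the base view with its linked additional view for a 3D display.
        return {"left": base_view, "right": linked_streams["additional_view"]}
    if mode == "pip" and "linked_2d" in linked_streams:
        # Reproduce two linked 2D videos as main picture and sub-picture.
        return {"main": base_view, "sub": linked_streams["linked_2d"]}
    # Default: 2D reproduction of the base view only.
    return {"main": base_view, "sub": None}
```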
  • the ES extraction unit 220 of the digital broadcasting stream receiving apparatus 200 may detect associated pieces of video information transmitted via different channels, different SP TSs, or different ESs by using the linking information to extract only associated ESs.
  • the reproduction unit 230 of the digital broadcasting stream receiving apparatus 200 may recognize the existence of different video streams associated with an SP video stream selected by a user according to the linking information and may reproduce the associated video streams.
  • the digital broadcasting stream transmitting apparatus 100 and the digital broadcasting stream receiving apparatus 200 may provide digital contents services of a 3D video and digital contents services of a 2D video while maintaining compatibility with digital broadcasting systems that use Moving Picture Experts Group (MPEG) based TSs.
  • MPEG Moving Picture Experts Group
  • PMT information includes information about a single video ES
  • a transmission terminal inserts, into each piece of PMT information, association information for identifying an ES, a TS, or a channel in which the counterpart video information is located, and thus a reception terminal may reproduce associated items of video data according to the association information between ESs, TSs, or channels in a 3D reproduction mode or a 2D reproduction mode.
  • the digital broadcasting stream transmitting apparatus 100 and the digital broadcasting stream receiving apparatus 200 may smoothly provide digital contents services of 3D video while maintaining compatibility with digital broadcasting systems for 2D contents services.
  • linking information between a plurality of pieces of video information and additional information such as a 3D video stream descriptor may be included in PMT information or PAT information.
  • additional information may be included in a reserved field without needing to add a packet identifier (PID) to the PMT information.
  • PID packet identifier
  • FIG. 3 is a block diagram of a digital TV transmitting system 300 for 2D contents services according to an exemplary embodiment.
  • the digital TV transmitting system 300 generates an SP TS including a single video ES and a single audio ES by using a single-program encoder 310 and multiplexes at least one SP TS generated by a plurality of single-program encoders by using a multiplexer (MUX) 380 to generate and transmit an MP TS.
  • MUX multiplexer
  • the single-program encoder 310 includes a video encoder 320 , an audio encoder 330 , packetizers 340 and 350 , and a multiplexer (MUX) 360 .
  • MUX multiplexer
  • the video encoder 320 and the audio encoder 330 encode uncompressed video data and uncompressed audio data, respectively, to generate and output a video ES and an audio ES, respectively.
  • the packetizers 340 and 350 of the single-program encoder 310 packetize the video ES and the audio ES, respectively, and insert PES headers into the packetized video ES and the packetized audio ES, respectively, to generate a video PES packet and an audio PES packet, respectively.
  • the MUX 360 multiplexes the video PES packet, the audio PES packet, and a variety of additional data to generate a first single-program transport stream SP TS 1 .
  • PMT information may be multiplexed with the video PES packet and the audio PES packet and included in the first single-program transport stream SP TS 1 .
  • PMT information is inserted into each single-program transport stream and describes a PID of each ES or additional data.
  • the MUX 380 multiplexes a plurality of single-program transport streams SP TS 1 , SP TS 2 , . . . with PAT information to generate a multi-program transport stream MP TS.
  • the PMT information and the PAT information are generated by a PSI and Program and System Information Protocol (PSIP) generator 370 .
  • PSIP Program and System Information Protocol
  • the multi-program transport stream may include the PAT information and a PSIP.
  • the PAT information describes PIDs of PMT information about single-program transport streams included in a multi-program transport stream.
  • FIG. 4 is a block diagram of a digital TV receiving system 400 for 2D contents services according to an exemplary embodiment.
  • the digital TV receiving system 400 receives a digital broadcasting stream and extracts video data, audio data, and additional data from the digital broadcasting stream.
  • a digital TV (DTV) tuner 410 is tuned to the carrier frequency of a channel selected according to a physical channel select signal input by a viewer, so as to selectively extract the signal received at that frequency.
  • a channel decoder and demodulator 420 extracts a multi-program transport stream MP TS from a channel signal.
  • the multi-program transport stream MP TS is demultiplexed into a plurality of single-program transport streams SP TS 1 , SP TS 2 , . . . and a PSIP by a demultiplexer (DEMUX) 430 .
  • a first single-program transport stream SP TS 1 selected by a program select signal input by a viewer is decoded by a single-program decoder 440 .
  • the single-program decoder 440 operates reversely to the single-program encoder 310 .
  • a video PES packet, an audio PES packet, and additional data are restored from the first single-program transport stream SP TS 1 .
  • the video PES packet and the audio PES packet are restored to ESs by depacketizers 460 and 465 , respectively, and then the ESs are restored to video data and audio data by a video decoder 470 and an audio decoder 475 , respectively.
  • the video data may be converted into a displayable format by a display processor 480 .
  • a clock recovery and audio/video (AV) synchronization device 490 may synchronize a video data reproduction time with an audio data reproduction time by using program clock reference (PCR) information and time stamp information extracted from the first single-program transport stream SP TS 1 .
  • PCR program clock reference
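  • As a rough illustration of clock-based A/V synchronization, the sketch below compares a frame's presentation time stamp with a system clock recovered from the PCR; the 90 kHz resolution is the standard MPEG-2 time-stamp clock, while the helper names are assumptions.

```python
# Sketch of PCR/PTS-based presentation timing (hypothetical helper names).
# The 90 kHz resolution is the standard MPEG-2 time-stamp clock.
PTS_CLOCK_HZ = 90_000


def should_present(frame_pts: int, system_clock: int) -> bool:
    """Present a decoded frame once the recovered system clock reaches its PTS."""
    return system_clock >= frame_pts


def pts_to_seconds(pts: int) -> float:
    return pts / PTS_CLOCK_HZ


assert pts_to_seconds(90_000) == 1.0  # a frame stamped one second into the stream
```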
  • a program guide database 445 may receive the program select signal input by a viewer, search for a channel and a program corresponding to the program select signal input by the viewer by comparing the program select signal with the PSIP extracted from the multi-program transport stream MP TS, and then transmit a channel selection input signal to the digital TV tuner 410 and a program selection input signal to the DEMUX 430 .
  • the program guide database 445 may also transmit on-screen display information to the display processor 480 and support an on-screen display operation.
  • the manner in which the digital broadcasting stream transmitting apparatus 100 and the digital broadcasting stream receiving apparatus 200 transmit and receive 3D video information, linking information, and 3D additional information through a plurality of channels or a plurality of ESs, in order to provide a 3D digital broadcasting service while maintaining compatibility with MPEG TS-based digital broadcasting systems for providing 2D contents services, will now be described with reference to FIGS. 5 through 11.
  • herein, a video stream may be used as a general term for video information such as video data, disparity information, depth information, or the like, as well as for an ES, a PES packet, a single-program transport stream, or a multi-program transport stream that results from conversions performed at different stages.
  • for convenience of explanation, it is assumed below that a stereo video stream including a left-view video and a right-view video is transmitted and received.
  • the digital broadcasting stream transmitting apparatus 100 and the digital broadcasting stream receiving apparatus 200 may also be applied to multi-view video streams each including a reference view and at least one additional view, as in an exemplary embodiment of FIG. 12 .
  • FIG. 5 illustrates an example of a distribution of a channel frequency band 500 in which a plurality of video streams can be transmitted and received via a plurality of channels, according to the above-described first exemplary embodiment.
  • in the channel frequency band 500 , a frequency band is allocated to each of the plurality of channels.
  • the channel frequency band 500 includes a frequency band 510 for channel 6 , a frequency band 520 for channel 7 , a frequency band 530 for channel 8 , a frequency band 540 for channel 9 , and a frequency band 550 for channel 10 .
  • a TS for left-view video information and a TS for right-view video information may be transmitted and received as a first stereo video through the frequency band 510 for channel 6 and the frequency band 530 for channel 8 , respectively.
  • linking information indicating existence of an association between videos of a stereo video, namely, indicating that the videos are stereo linked, may be set for channel 6 and channel 8 .
  • the linking information may be set between channels, TSs, or ESs.
  • a TS for left-view video information and a TS for right-view video information may be transmitted and received as a second stereo video through the frequency band 520 for channel 7 and the frequency band 550 for channel 10 , respectively.
  • linking information indicating existence of an association between videos of a stereo video, namely, indicating that the videos are stereo linked, may be set for channel 7 and channel 10 .
  • a TS for a left-view video and a TS for a right-view video are transmitted via different channels, respectively.
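  • In the FIG. 5 example, the linking information effectively pairs the channels that carry the two views of one stereo video; a trivial lookup such as the one below (illustrative only) captures that pairing for channels 6/8 and 7/10.

```python
# Sketch of channel-level stereo linking for the FIG. 5 example:
# channel 6 (left view) is linked with channel 8 (right view), and
# channel 7 (left view) is linked with channel 10 (right view).
from typing import Optional

STEREO_LINKED_CHANNELS = {6: 8, 8: 6, 7: 10, 10: 7}


def linked_channel(channel: int) -> Optional[int]:
    return STEREO_LINKED_CHANNELS.get(channel)


assert linked_channel(6) == 8
assert linked_channel(9) is None  # channel 9 is not stereo linked in this example
```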
  • FIG. 6 is a block diagram of a digital broadcasting stream transmitting apparatus 600 that transmits a 3D video stream having linking information via a plurality of channels according to the first exemplary embodiment described above with reference to FIG. 5 .
  • the digital broadcasting stream transmitting apparatus 600 corresponds to a block diagram of the digital broadcasting stream transmitting apparatus 100 that is constructed according to the first exemplary embodiment of FIG. 5 .
  • Operations of single-program encoders 610 , 630 , and 650 and MUXes 620 , 640 , and 660 of the digital broadcasting stream transmitting apparatus 600 correspond to operations of the ES generation unit 110 and the TS generation unit 120 of the digital broadcasting stream transmitting apparatus 100 that are performed according to the first exemplary embodiment.
  • Operations of channel encoder and modulators 625 , 645 , and 665 and a DTV transmitter 670 of the digital broadcasting stream transmitting apparatus 600 correspond to an operation of the transmission unit 130 of the digital broadcasting stream transmitting apparatus 100 that is performed according to the first exemplary embodiment.
  • the single-program encoders 610 , 630 , and 650 may receive a left-view video sequence Left Seq., a 2D video sequence 2D Seq., and a right-view video sequence Right Seq. to generate and output a first single-program transport stream SP TS 1 , a second single-program transport stream SP TS 2 , and a third single-program transport stream SP TS 3 , respectively.
  • the first single-program transport stream SP TS 1 for the left-view video sequence may be multiplexed by the MUX 620 to generate a first multi-program transport stream MP TS 1 .
  • the second single-program transport stream SP TS 2 for the 2D video sequence and the third single-program transport stream SP TS 3 for the right-view video sequence may be respectively multiplexed by the MUXes 640 and 660 to respectively generate a second multi-program transport stream MP TS 2 and a third multi-program transport stream MP TS 3 .
  • the first multi-program transport stream MP TS 1 may be encoded and modulated according to channel 6 (or channel 7 ) by the channel encoder and modulator 625 .
  • the third multi-program transport stream MP TS 3 may be encoded and modulated according to channel 8 (or channel 10 ) by the channel encoder and modulator 665 .
  • the second multi-program transport stream MP TS 2 may be encoded and modulated according to channel 9 by the channel encoder and modulator 645 .
  • the DTV transmitter 670 may transmit a broadcasting video stream allocated to the plurality of channels.
  • the digital broadcasting stream transmitting apparatus 600 and the digital broadcasting stream transmitting apparatus 100 may generate a single-program transport stream and a multi-program transport stream from each of a left-view video of a stereo video, a right-view video of the stereo video, and a 2D video, and may transmit the multi-program transport streams via a plurality of channels, respectively.
  • the single program encoder 610 for the left-view video may set linking information 615 for identifying the right-view video as corresponding to the left-view video.
  • the single program encoder 650 for the right-view video may set linking information 655 for identifying the left-view video as corresponding to the right-view video.
  • the single program encoder 630 for the 2D video may set linking information 635 for identifying the associated video.
  • Each linking information may be inserted into PMT information of a single-program transport stream.
  • the digital broadcasting stream receiving apparatus 200 may selectively receive a TS for a left-view video of a stereo video, a TS for a right-view video of the stereo video, and a TS for a 2D video transmitted via different channels, and restore video data of a desired TS.
  • the ES extraction unit 220 of the digital broadcasting stream receiving apparatus 200 may extract linking information.
  • the reproduction unit 230 according to the first exemplary embodiment may identify a right-view video sequence corresponding to a left-view video sequence by using the linking information and thus perform 3D reproduction.
  • the linking information may also be used by the ES extraction unit 220 when searching for a third single-program transport stream corresponding to a first single-program transport stream to extract ESs from the first and third single-program transport streams, respectively.
  • FIG. 7 illustrates an example of a distribution of a channel frequency band 700 in which a plurality of TSs for a 3D video stream and a 2D video stream can be transmitted and received via a single channel, according to the above-described second exemplary embodiment.
  • the channel frequency band 700 includes a frequency band 710 for channel 8 , a frequency band 720 for channel 9 , and a frequency band 730 for channel 10 .
  • a TS 740 for left-view video information and a TS 760 for right-view video information may be transmitted and received as a stereo video through the frequency band 710 for channel 8 .
  • linking information indicating existence of an association between videos of a stereo video, namely, indicating that the videos are stereo linked, may be set between an ES for the left-view video information and an ES for the right-view video information or between an SP TS for the left-view video information and an SP TS for the right-view video information.
  • a TS 750 for a normal 2D video and another TS 770 may be transmitted and received through the frequency band 710 for channel 8 .
  • a TS for a left-view video and a TS for a right-view video are transmitted via a single channel.
  • FIG. 8 is a block diagram of a digital broadcasting stream transmitting apparatus 800 that transmits a plurality of TSs having linking information via a single channel according to the second exemplary embodiment described above with reference to FIG. 7 .
  • the digital broadcasting stream transmitting apparatus 800 corresponds to a block diagram of the digital broadcasting stream transmitting apparatus 100 that is constructed according to the second exemplary embodiment.
  • operations of single-program encoders 810 , 820 , and 830 and a MUX 840 of the digital broadcasting stream transmitting apparatus 800 correspond to operations of the ES generation unit 110 and the TS generation unit 120 of the digital broadcasting stream transmitting apparatus 100 that are performed according to the second exemplary embodiment.
  • Operations of a channel encoder and modulator 850 and a DTV transmitter 860 of the digital broadcasting stream transmitting apparatus 800 correspond to an operation of the transmission unit 130 of the digital broadcasting stream transmitting apparatus 100 that is performed according to the second exemplary embodiment.
  • the single-program encoders 810 , 820 , and 830 may receive a left-view video sequence Left Seq., a 2D video sequence 2D Seq., and a right-view video sequence Right Seq., respectively, to generate and output a first single-program transport stream SP TS 1 , a second single-program transport stream SP TS 2 , and a third single-program transport stream SP TS 3 , respectively.
  • the first single-program transport stream SP TS 1 , the second single-program transport stream SP TS 2 , and the third single-program transport stream SP TS 3 may be multiplexed by the MUX 840 to generate a multi-program transport stream MP TS.
  • the first, second, and third single-program transport streams SP TS 1 , SP TS 2 , and SP TS 3 for the left-view video sequence, the 2D video sequence, and the right-view video sequence may be multiplexed together to generate a single multi-program transport stream MP TS.
  • the multi-program transport stream MP TS may be encoded and modulated according to channel 8 by the channel encoder and modulator 850 .
  • the DTV transmitter 860 may transmit a broadcasting video stream allocated to channel 8 .
  • the digital broadcasting stream transmitting apparatus 800 and the digital broadcasting stream transmitting apparatus 100 according to the second embodiment may multiplex a single-program transport stream for each of a left-view video of a stereo video, a right-view video of the stereo video, and a 2D video into a single multi-program transport stream and transmit the single multi-program transport stream via a single channel.
  • the single program encoder 810 for the left-view video may set linking information 815 for identifying the right-view video as corresponding to the left-view video and may insert the linking information 815 into PMT information of the first single-program transport stream SP TS 1 .
  • the single program encoder 830 for the right-view video may set linking information 835 for identifying the left-view video as corresponding to the right-view video and may insert the linking information 835 into PMT information of the third single-program transport stream SP TS 3 . If a video associated with the 2D video exists, the single program encoder 820 for the 2D video may set linking information 825 for identifying the associated video and may insert the linking information 825 into PMT information of the second single-program transport stream SP TS 2 .
  • the single program encoders 810 , 820 , and 830 according to the second exemplary embodiment may follow individual digital data communication systems.
  • an Advanced Television Systems Committee (ATSC) terrestrial broadcasting communication method supports Enhanced Vestigial Sideband (E-VSB) technology.
  • E-VSB Enhanced Vestigial Sideband
  • a TS may be constructed in a different way from that in the MPEG technology.
  • since linking information according to an exemplary embodiment is inserted into a TS to be transmitted, a base-view video stream may be transmitted in the form of an MPEG transport stream, and an additional-view video stream may be transmitted in the form of an E-VSB transport stream.
  • the single program encoders 810 , 820 , and 830 according to the second exemplary embodiment may follow individual video encoding systems. Since the linking information according to an exemplary embodiment is inserted into PMT information of a TS, a base-view video may be encoded according to an MPEG-2 video encoding method, and an additional-view video may be encoded according to an MPEG Advanced Video Coding (AVC)/H.264 video encoding method.
  • AVC MPEG Advanced Video Coding
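  • Since the base view may be MPEG-2 video and the additional view MPEG AVC/H.264 video as described above, each ES would be announced in its PMT entry with the corresponding stream_type; in the sketch below, the values 0x02 and 0x1B are the standard MPEG-2 systems assignments for those codecs, while the entry layout is an illustrative assumption.

```python
# Sketch of announcing a hybrid-coded stereo pair in PMT information.
# stream_type 0x02 (MPEG-2 video) and 0x1B (AVC/H.264 video) are the standard
# MPEG-2 systems values; the entry layout itself is an illustrative assumption.
from typing import Dict, List

STREAM_TYPE_MPEG2_VIDEO = 0x02
STREAM_TYPE_AVC_VIDEO = 0x1B


def pmt_entries_for_hybrid_stereo(base_view_pid: int,
                                  additional_view_pid: int) -> List[Dict[str, int]]:
    return [
        {"pid": base_view_pid, "stream_type": STREAM_TYPE_MPEG2_VIDEO},      # base view
        {"pid": additional_view_pid, "stream_type": STREAM_TYPE_AVC_VIDEO},  # additional view
    ]


print(pmt_entries_for_hybrid_stereo(0x0100, 0x0101))
```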
  • the digital broadcasting stream receiving apparatus 200 may receive a single multi-program transport stream for a left-view video of a stereo video, a right-view video of the stereo video, and a 2D video transmitted via a single channel, demultiplex the single multi-program transport stream into a single-program transport stream for the left-view video, a single-program transport stream for the right-view video, and a single-program transport stream for the 2D video, and extract at least one ES from the single-program transport streams, thereby restoring desired video sequence data.
  • the ES extraction unit 220 of the digital broadcasting stream receiving apparatus 200 may extract linking information.
  • the reproduction unit 230 according to the second exemplary embodiment may identify a right-view video sequence corresponding to a left-view video sequence by using the linking information and thus perform 3D reproduction.
  • the linking information may also be used by the ES extraction unit 220 when searching for a third single-program transport stream corresponding to a first single-program transport stream and extracting ESs from the first and third single-program transport streams, respectively.
  • FIG. 9 is a block diagram of a digital broadcasting stream transmitting apparatus 900 that transmits a single TS including a 3D video stream and a 2D video stream via a single channel according to the above-described third exemplary embodiment.
  • the digital broadcasting stream transmitting apparatus 900 corresponds to a block diagram of the digital broadcasting stream transmitting apparatus 100 that is constructed according to the third exemplary embodiment.
  • operations of a single-program encoder 910 and a MUX 980 of the digital broadcasting stream transmitting apparatus 900 correspond to operations of the ES generation unit 110 and the TS generation unit 120 of the digital broadcasting stream transmitting apparatus 100 that are performed according to the third exemplary embodiment.
  • Operations of a channel encoder and modulator 990 and a DTV transmitter 995 of the digital broadcasting stream transmitting apparatus 900 correspond to an operation of the transmission unit 130 of the digital broadcasting stream transmitting apparatus 100 that is performed according to the third exemplary embodiment.
  • the single-program encoder 910 may receive a left-view video, a right-view video, and a 2D video and generate a first video elementary stream Video ES 1 , a second video elementary stream Video ES 2 , and a third video elementary stream Video ES 3 by using video encoders 920 , 930 , and 940 , respectively.
  • the first, second, and third video elementary streams Video ES 1 , Video ES 2 , and Video ES 3 are packetized into a first video PES packet Video PES 1 , a second video PES packet Video PES 2 , and a third video PES packet Video PES 3 by packetizers 925 , 935 , and 945 , respectively.
  • the single program encoder 910 may receive audio, convert the audio into an audio elementary stream Audio ES by using an audio encoder 950 , and convert the audio elementary stream Audio ES into an audio PES packet Audio PES by using a packetizer 955 .
  • a MUX 960 of the single program encoder 910 may multiplex the first, second, and third video PES packets and the audio PES packet into a first single-program transport stream SP TS 1 .
  • the single program encoder 910 may also receive PMT information generated by a PSI and PSIP generator 970 and a variety of additional data DATA and multiplex the PMT information and the additional data DATA together with the first, second, and third video PES packets and the audio PES packet by using the MUX 960 , so that the PMT information and the additional data DATA are inserted into the first single-program transport stream SP TS 1 .
  • the first single-program transport stream SP TS 1 may then be output.
  • At least one of the 3D video data and other 2D video data may be multiplexed with PMT information into a second single-program transport stream SP TS 2 .
  • the PSI and PSIP generator 970 may generate PAT information including PIDs of the PMT information included in the first and second single-program transport streams SP TS 1 and SP TS 2 , and a PSIP about various programs and system information.
  • a MUX 980 may multiplex the first and second single-program transport streams SP TS 1 and SP TS 2 , the PAT information, and the PSIP into a single multi-program transport stream MP TS.
  • the multi-program transport stream MP TS may be encoded and modulated according to a channel by the channel encoder and modulator 990 .
  • the DTV transmitter 995 may transmit a broadcasting video stream allocated to the channel.
  • the digital broadcasting stream transmitting apparatus 900 and the digital broadcasting stream transmitting apparatus 100 according to the third exemplary embodiment may multiplex ESs for all of a left-view video of a stereo video, a right-view video of the stereo video, and a 2D video into a single single-program transport stream, and transmit a single multi-program transport stream via a single channel.
  • the single program encoder 910 may set linking information indicating that the left-view video and the right-view video correspond to each other as views of a stereo video, namely, that the videos are stereo-linked, for the first and second video PES packets, respectively, and may insert the linking information into PMT information of the first single-program transport stream SP TS 1 .
  • the single program encoder 910 may generate a TS according to independent digital data communication methods.
  • the first single-program transport stream SP TS 1 may be generated in the form of an MPEG transport stream
  • the second single-program transport stream SP TS 2 may be transmitted in the form of an E-VSB transport stream
  • linking information may be inserted into each of the PMT information of the first and second single-program transport streams SP TS 1 and SP TS 2 .
  • the video encoders 920 and 930 according to the third exemplary embodiment may follow independent video encoding methods.
  • a base-view video may be encoded according to an MPEG-2 video encoding method
  • an additional-view video may be encoded according to an MPEG AVC/H.264 video encoding method
  • linking information may be inserted into each of the PMT information of the first and second single-program transport streams SP TS 1 and SP TS 2 .
  • the digital broadcasting stream receiving apparatus 200 may extract a single multi-program transport stream for a left-view video of a stereo video, a right-view video of the stereo video, a 2D video, and audio transmitted via a single channel, and demultiplex the single multi-program transport stream, thereby selecting and extracting a desired single-program transport stream.
  • the digital broadcasting stream receiving apparatus 200 according to the third exemplary embodiment may select and extract a video ES for the left-view video of the stereo video, a video ES for the right-view video of the stereo video, and a video ES for the 2D video from the extracted single-program transport stream, thereby restoring desired video data.
  • the ES extraction unit 220 of the digital broadcasting stream receiving apparatus 200 may extract linking information.
  • the reproduction unit 230 according to the third exemplary embodiment may identify right-view video data corresponding to left-view video data by using the linking information and thus accurately reproduce a 3D video.
  • the linking information may also be used by the ES extraction unit 220 when searching for and extracting a second video ES corresponding to a first video ES.
  • FIG. 10 is a block diagram of a digital broadcasting stream receiving apparatus 1000 that receives a plurality of TSs via a plurality of channels based on a plurality of transmission network systems according to the above-described fourth exemplary embodiment.
  • the digital broadcasting stream transmitting apparatus 100 transmits a TS for a left-view video and a TS for a right-view video via different channels that are based on individual transmission network systems, for example, a terrestrial system 1010 , a satellite TV system 1012 , a cable TV system 1014 , an Internet Protocol Television (IPTV) system 1016 , and the like.
  • the digital broadcasting stream receiving apparatus 1000 corresponds to a block diagram of the digital broadcasting stream receiving apparatus 200 that is constructed according to the fourth exemplary embodiment.
  • operations of either a terrestrial digital tuner 1020 or a satellite digital tuner 1060 and channel decoder and demodulators 1030 and 1070 of the digital broadcasting stream receiving apparatus 1000 may correspond to an operation of the TS receiving unit 210 of the digital broadcasting stream receiving apparatus 200 that is performed according to the fourth exemplary embodiment
  • operations of TS DEMUXes 1040 and 1080 and single-program decoders 1050 and 1090 of the digital broadcasting stream receiving apparatus 1000 may correspond to an operation of the ES extraction unit 220 of the digital broadcasting stream receiving apparatus 200 that is performed according to the fourth exemplary embodiment.
  • the digital broadcasting stream receiving apparatus 1000 may be a digital TV receiving system.
  • the digital broadcasting stream receiving apparatus 1000 may receive broadcasting streams via channels corresponding to the terrestrial system 1010 , the satellite TV system 1012 , the cable TV system 1014 , and the IPTV system 1016 .
  • the terrestrial digital tuner 1020 and the channel decoder and demodulator 1030 are tuned to a terrestrial channel to extract a multi-program transport stream received via terrestrial waves.
  • a TS for a left-view video of a stereo video may be received via the terrestrial channel.
  • the satellite digital tuner 1060 and the channel decoder & demodulator 1070 are tuned to a satellite channel to extract a multi-program transport stream received via satellite waves.
  • a TS for a right-view video of the stereo video may be received via the satellite channel.
  • the multi-program transport streams may be demultiplexed into single-program transport streams by the TS DEMUXes 1040 and 1080 .
  • the single-program transport streams may be restored to a left-view video and a right-view video by the single-program decoders 1050 and 1090 , respectively.
  • the single-program transport stream for the left-view video received and extracted via the terrestrial channel may include linking information 1055 about the right-view video constituting a remaining view of a stereo image.
  • linking information of the left-view video may include an identifier indicating a channel, a TS, or an ES of the right-view video received via the satellite channel.
  • the single-program transport stream for the right-view video received and extracted via the satellite channel may include linking information 1095 about the left-view video.
  • Linking information of the right-view video may include an identifier indicating a channel, a TS, or an ES of the left-view video received via the terrestrial channel.
  • the digital broadcasting stream receiving apparatus 1000 may detect the left-view video and the right-view video received via the terrestrial channel and the satellite channel, respectively, by using the linking information to reproduce a 3D video, thereby providing 3D video broadcasting services.
  • FIG. 11 illustrates an example of a distribution of a channel frequency band 1100 in which TSs for left-view video streams of 3D videos can be transmitted and received via a plurality of channels and in which TSs for right-view video streams of the 3D videos can be transmitted and received via a single channel, according to the above-described fifth exemplary embodiment.
  • the channel frequency band 1100 includes a frequency band 1110 for channel 8 , a frequency band 1120 for channel 9 , and a frequency band 1130 for channel 10 .
  • a left-view video Left Video 1 of a first stereo video and a left-view video Left Video 2 of a second stereo video are allocated to the frequency band 1120 for channel 9 and the frequency band 1130 for channel 10 , respectively.
  • a TS 1140 for a right-view video Right Video 1 of the first stereo video and a TS 1150 for a right-view video Right Video 2 of the second stereo video may be transmitted and received through the frequency band 1110 for channel 8 .
  • linking information indicating existence of an association between videos of a stereo video may be set between channels (video streams) of the left-view video Left Video 1 and the right-view video Right Video 1 of the first stereo video and between channels (video streams) of the left-view video Left Video 2 and the right-view video Right Video 2 of the second stereo video.
  • a TS 1160 for another right-view video may be transmitted via the frequency band 1110 for channel 8 . If the channel frequency band 1100 is sufficiently large, a TS 1170 for other data may be further transmitted.
  • the digital broadcasting stream transmitting apparatus 100 may multiplex an ES for at least one left-view video to generate at least one single-program transport stream for the at least one left-view video, and generate at least one left-view multi-program transport stream from the at least one single-program transport stream.
  • the digital broadcasting stream transmitting apparatus 100 according to the fifth exemplary embodiment may also multiplex an ES for at least one right-view video corresponding to the at least one left-view video to generate at least one single-program transport stream for the at least one right-view video, and generate a single right-view multi-program transport stream from the at least one single-program transport stream.
  • the digital broadcasting stream transmitting apparatus 100 according to the fifth exemplary embodiment may transmit the at least one left-view multi-program transport stream and the single right-view multi-program transport stream via different channels.
  • the digital broadcasting stream receiving apparatus 200 may decode a plurality of channels to receive a left-view multi-program transport stream of a 3D video from at least one of the channels and receive a right-view multi-program transport stream of the 3D video from one of the channels.
  • the digital broadcasting stream receiving apparatus 200 according to the fifth exemplary embodiment may demultiplex the at least one left-view multi-program transport stream individually to extract at least one left-view single-program transport stream, and demultiplex the at least one left-view single-program transport stream individually to extract at least one left-view elementary stream.
  • the ES extraction unit 220 may demultiplex a single right-view multi-program transport stream to extract at least one right-view single-program transport stream, and demultiplex the at least one right-view single-program transport stream individually to extract at least one right-view elementary stream.
  • the digital broadcasting stream receiving apparatus 200 may extract a plurality of ESs for 3D video information or 2D video information via a plurality of channels.
  • the digital broadcasting stream receiving apparatus 200 according to the fifth exemplary embodiment may reproduce mutually associated stereo videos three-dimensionally by using linking information.
  • a 3D video may be a multi-view video including at least three view videos.
  • a plurality of video streams associated with one another may be transmitted or received via at least one channel and at least one TS.
  • a left-view video stream and a right-view video stream associated with each other may be transmitted or received via a single channel.
  • suppose that the second or third exemplary embodiment is implemented so as to maintain compatibility with related art 2D digital terrestrial broadcasting systems.
  • the 2D digital terrestrial broadcasting system may transmit and receive a base-view video stream at a second bitrate lower than the first bitrate and an additional-view video stream at the remaining bitrate in order to implement the second or third exemplary embodiment.
  • both base-view video information and additional-view video information are transmitted and received via a single channel, and thus compatibility with existing broadcasting systems is possible without using additional channels.
  • if video data is compressed using a high-compressibility encoding and decoding method, data loss due to compression of a base-view video stream is minimized, and the base-view video stream may be transmitted and received via a single channel together with additional-view video information.
  • a base-view video stream and an additional-view video stream may be transmitted and received via a plurality of channels.
  • a base-view video stream may be transmitted via an existing channel, and an additional-view video stream may be transmitted via an additional channel.
  • base-view video information and additional-view video information are transmitted and received via different channels, and thus if the first, fourth, and fifth exemplary embodiments are compatible with related art broadcasting systems on a channel-by-channel basis, multi-view digital contents broadcasting services may be provided without degradation of the quality of a displayed image.
  • FIG. 12 illustrates an example of a distribution of a channel frequency band 1200 in which a plurality of TSs for a multi-view video stream can be transmitted and received via a single channel, according to an exemplary embodiment.
  • the channel frequency band 1200 includes a frequency band 1210 for channel 8 and a frequency band 1220 for channel 9 , and the like.
  • a TS 1230 for first view video information Video 1 of a multi-view video may be transmitted and received through the frequency band 1210 for channel 8 .
  • a TS 1240 for second view video information Video 2 of the multi-view video may be transmitted and received through the frequency band 1210 for channel 8 .
  • a TS 1250 for third view video information Video 3 of the multi-view video may be transmitted and received through the frequency band 1210 for channel 8 .
  • linking information ‘3d linked’ for representing that a link exists between each of the first, second, fourth, and fifth view video information Video 1 , Video 2 , Video 4 , and Video 5 and the third view video information Video 3 , may be set.
  • the digital broadcasting stream transmitting apparatus 100 may convert ESs of videos of a plurality of views that constitute a multi-view video, into single-program transport streams, respectively, multiplex the single-program transport streams for the respective views into a single multi-program transport stream, and transmit the single multi-program transport stream via a single channel.
  • the videos of the plurality of views are referred to as a plurality of view videos.
  • the digital broadcasting stream receiving apparatus 200 may decode a single channel to extract a single multi-program transport stream, and may demultiplex the single multi-program transport stream to extract respective single-program transport streams for a plurality of view videos.
  • the plurality of view videos constitute a multi-view video, and respective ESs for the view videos may be extracted from the single-program transport streams for the view videos.
  • the digital broadcasting stream receiving apparatus 200 according to the present exemplary embodiment may finally extract the single-program transport streams of the view videos and the ESs of the view videos via a single channel and thus reproduce restored view videos.
  • since the digital broadcasting stream transmitting apparatus 100 according to the exemplary embodiment of FIG. 1 transmits mutually associated pieces of video information via different channels, TSs, or ESs, the digital broadcasting stream receiving apparatus 200 according to the exemplary embodiment of FIG. 2 may check through which channel, TS, or ES video information associated with extracted video information has been received. To this end, the digital broadcasting stream transmitting apparatus 100 and the digital broadcasting stream receiving apparatus 200 according to the exemplary embodiments of FIGS. 1 and 2 use linking information representing association between 3D videos such as stereo videos or multi-view videos.
  • the digital broadcasting stream transmitting apparatus 100 converts mutually associated pieces of 3D video information into a transmission format and transmits the mutually associated 3D video information pieces in the transmission format
  • the digital broadcasting stream receiving apparatus 200 may ascertain the features of a 3D video in order to properly restore the mutually associated 3D video information pieces into a 3D reproduction format and reproduce the mutually associated 3D video information pieces in the 3D reproduction format.
  • the digital broadcasting stream transmitting apparatus 100 and the digital broadcasting stream receiving apparatus 200 according to the exemplary embodiments of FIGS. 1 and 2 use a 3D video start descriptor or a 3D video registration descriptor for representing existence or absence of 3D video information, and a 3D video stream descriptor for accurate restoration and reproduction of the 3D video information.
  • the linking information may include a link identifier representing whether associated pieces of video information exist in a plurality of pieces of video information included in a TS, and a link descriptor used to identify an ES, a TS, or a channel of video information associated with each video information.
  • the TS generation unit 120 of the digital broadcasting stream transmitting apparatus 100 of FIG. 1 may insert the link identifier according to an exemplary embodiment into PAT information.
  • the ES extraction unit 220 of the digital broadcasting stream receiving apparatus 200 of FIG. 2 may check a link identifier included in PAT information extracted from a TS and predict a link between pieces of PMT information included in the TS.
  • Mutually associated pieces of PMT information include a link descriptor that defines a link between pieces of video information.
  • parameter ‘linked_indicator’ including a link identifier may be additionally set for each PMT information within the PAT information. If parameter ‘linked_indicator’ of current PMT information is 000, no PMT information from among at least one piece of opponent PMT information indicated by the PAT information is associated with the current PMT information. On the other hand, if parameter ‘linked_indicator’ of the current PMT information is one among 001 to 111, PMT information including parameter ‘linked_indicator’ with the same value as that of the current PMT information, from among the at least one piece of opponent PMT information indicated by the PAT information, is associated with the current PMT information. In other words, PMT information pieces including parameters ‘linked_indicator’ with the same value may each include a link descriptor that defines a link therebetween.
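• As an informal illustration only, a receiver-side use of parameter ‘linked_indicator’ might group the PMT entries listed in a PAT by a shared nonzero indicator value, roughly as in the following Python sketch (the dict field names and example PIDs are hypothetical, not taken from the specification's tables).

# Minimal sketch, assuming PAT entries have already been parsed into dicts.
# A value of 0b000 means "not linked"; entries sharing any other value are linked.

def group_linked_pmts(pat_entries):
    """Group PMT PIDs by nonzero linked_indicator value."""
    groups = {}
    for entry in pat_entries:
        indicator = entry["linked_indicator"]
        if indicator != 0b000:
            groups.setdefault(indicator, []).append(entry["pmt_pid"])
    return groups

pat_entries = [
    {"program_number": 1, "pmt_pid": 0x0100, "linked_indicator": 0b001},  # left view
    {"program_number": 2, "pmt_pid": 0x0200, "linked_indicator": 0b000},  # standalone 2D
    {"program_number": 3, "pmt_pid": 0x0300, "linked_indicator": 0b001},  # right view
]

print(group_linked_pmts(pat_entries))  # {1: [256, 768]} -> these two PMTs carry a link descriptor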
  • the TS generation unit 120 of the digital broadcasting stream transmitting apparatus 100 of FIG. 1 may insert a link descriptor according to an exemplary embodiment into PMT information.
  • the link descriptor may be set for each video information.
  • the ES extraction unit 220 of the digital broadcasting stream receiving apparatus 200 of FIG. 2 may check a link descriptor included in PMT information extracted from a single-program transport stream and determine a location of opponent video information corresponding to the video information included in the single-program transport stream.
  • the digital broadcasting stream receiving apparatus 200 of FIG. 2 may detect mutually associated pieces of video information and restore video data from the mutually associated video information pieces.
  • Table 1 shows an example of a syntax of parameter ‘program_stream_map( )’ of PMT information.
  • the link describer according to an exemplary embodiment is included in parameter ‘linking_descriptor’.
  • the parameter ‘linking_descriptor’ may be included in parameter ‘descriptor( )’ as a descriptor region following parameter ‘elementary_stream_info_length’ included in parameter ‘program_stream_map( )’.
  • the TS generation unit 120 of the digital broadcasting stream transmitting apparatus 100 of FIG. 1 may insert a 3D video start descriptor or a 3D video registration descriptor according to an exemplary embodiment, as 3D video authentication information, into a PMT.
  • the ES extraction unit 220 of the digital broadcasting stream receiving apparatus 200 of FIG. 2 may check a 3D video start descriptor or a 3D video registration descriptor included in PMT information extracted from a single-program transport stream, and predict whether video information of an ES for the single-program transport stream is information about a 3D video.
  • the ES extraction unit 220 may extract the 3D video stream descriptor from the PMT information and extract 3D video information from the ES.
  • Parameter ‘3d_start_descriptor’ used to set a 3D video start descriptor or parameter ‘3d_registration_descriptor’ used to set a 3D video registration descriptor may be included in parameter ‘descriptor( )’, which is a descriptor region following parameter ‘program_stream_info_length’ included in the PMT of Table 1.
  • Table 2 shows an example of a syntax of parameter ‘3d_start_descriptor’ that represents the 3D video start descriptor.
  • Descriptors defined by a user may be inserted into a parameter ‘descriptor_tag’ having a value between 64 and 255, in a descriptor region of an MPEG TS.
  • the TS generation unit 120 of the digital broadcasting stream transmitting apparatus 100 of FIG. 1 may insert parameter ‘3d_start_descriptor’, representing the 3D video start descriptor, into a descriptor region having parameter ‘descriptor_tag’ with a value of 0xF0.
  • Parameter ‘descriptor_length’ represents the number of bytes that follow parameter ‘descriptor_length’.
  • ASCII code ‘3DAV’ may be set as the value of parameter ‘threed_info_start_code’ to indicate that 3D video information is included in the single-program transport stream and that a 3D video stream descriptor is included in PMT information.
  • Table 3 shows an example of a syntax of parameter ‘3d_registration_descriptor’ that represents a 3D video registration descriptor.
  • ASCII code ‘3DAV’ may be set as the value of parameter ‘format_identifier’ to indicate that 3D video format data is included in the single-program transport stream and that a 3D video stream descriptor is included in the PMT.
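• A rough receiver-side check for the ‘3DAV’ identifier could walk a raw descriptor loop as sketched below. The 0xF0 tag follows the text above; the byte layout of the payload and the helper name are assumptions for illustration.

# Minimal sketch of scanning a (tag, length, payload) descriptor loop for '3DAV'.

THREED_START_TAG = 0xF0

def has_3d_start(descriptor_bytes: bytes) -> bool:
    i = 0
    while i + 2 <= len(descriptor_bytes):
        tag = descriptor_bytes[i]
        length = descriptor_bytes[i + 1]
        payload = descriptor_bytes[i + 2:i + 2 + length]
        if tag == THREED_START_TAG and payload[:4] == b"3DAV":
            return True
        i += 2 + length
    return False

loop = bytes([0xF0, 0x04]) + b"3DAV"   # one descriptor: tag 0xF0, length 4, '3DAV'
print(has_3d_start(loop))              # True -> expect a 3D video stream descriptor in this PMT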
  • Table 4 shows an example of a syntax of parameter ‘linking_descriptor’ that represents a link descriptor.
  • Parameter ‘linking_priority’ represents linking priority information that indicates existence or absence of a link between a plurality of video streams and priority between associated video streams.
  • An exemplary embodiment of the values of parameter ‘linking_priority’ that define a link between a current video stream and an opponent video stream associated with the current video stream is based on Table 5.
  • Table 5 defines parameter ‘linking_priority’ as follows:
  00 (no linking): The two video streams are not linked with each other.
  01 (linking_no_priorty): The two video streams are linked with each other, but they are equal to each other without priority.
  10 (linking_high_priorty): The current video stream is linked with the opponent video stream and has higher priority than the opponent video stream.
  11 (linking_low_priorty): The current video stream is linked with the opponent video stream and has lower priority than the opponent video stream.
  • When the value of parameter ‘linking_priority’ is 01, the two video streams are equally linked to each other without priorities.
  • the value of parameter ‘linking_priority’ may be set to be 01 for each channel.
  • the value of parameter ‘linking_priority’ may be set to be 10 or 11.
  • When the value of parameter ‘linking_priority’ is 10, the current video stream has higher priority than the opponent video stream.
  • When the value of parameter ‘linking_priority’ is 11, the current video stream has lower priority than the opponent video stream.
  • the value of parameter ‘linking_priority’ may be set to be 10.
  • the value of parameter ‘linking_priority’ may be set to be 11.
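• The two-bit priority codes of Table 5 could be decoded as in the sketch below; the two-bit values follow the table above, while the function name and return strings are invented for illustration.

# Minimal sketch mapping the two-bit 'linking_priority' codes of Table 5.

def describe_linking_priority(code: int) -> str:
    return {
        0b00: "not linked",
        0b01: "linked, equal priority",
        0b10: "linked, current stream has higher priority (e.g. base view)",
        0b11: "linked, current stream has lower priority (e.g. additional view)",
    }[code]

for code in (0b00, 0b01, 0b10, 0b11):
    print(bin(code), "->", describe_linking_priority(code))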
  • Parameter ‘distribution_indicator_flag’ represents same transmission network method information that indicates whether mutually associated video streams are transmitted and received using the same transmission network method.
  • When the value of parameter ‘distribution_indicator_flag’ is 0, it may indicate that the mutually associated video streams are transmitted and received using the same transmission network method. When the value of parameter ‘distribution_indicator_flag’ is 1, it may indicate that the mutually associated video streams are transmitted and received using different transmission network methods.
  • Parameter ‘channel_indicator_flag’ represents same channel information that indicates whether mutually associated pieces of video information are transmitted via the same channel.
  • When the value of parameter ‘channel_indicator_flag’ is 0, it may indicate that the mutually associated video streams use the same channel. When the value of parameter ‘channel_indicator_flag’ is 1, it may indicate that the mutually associated video streams use different channels.
  • the value of parameter ‘channel_indicator_flag’ may be set to be 0. In the other exemplary embodiments, the value of parameter ‘channel_indicator_flag’ may be set to be 1.
  • Parameter ‘pmt_indicator_flag’ represents same single-program transport stream information that represents whether the mutually associated video streams exist within the same single-program transport stream.
  • When the value of parameter ‘pmt_indicator_flag’ is 0, it may indicate that the mutually associated video streams such as ESs, PESs, or the like exist within the same single-program transport stream. When the value of parameter ‘pmt_indicator_flag’ is 1, it may indicate that the mutually associated video streams such as ESs, PESs, or the like exist within different single-program transport streams, respectively.
  • the value of parameter ‘pmt_indicator_flag’ may be set to be 0. In other exemplary embodiments, the value of parameter ‘pmt_indicator_flag’ may be set to be 1.
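• Read together, the three one-bit flags above narrow down where the opponent stream lives. A loose Python sketch of that interpretation (the function name and the returned strings are illustrative, not part of the specification):

# Minimal sketch interpreting the three one-bit flags described above.
# 0 means "same" and 1 means "different" in each case, per the text.

def locate_opponent(distribution_flag: int, channel_flag: int, pmt_flag: int) -> str:
    if pmt_flag == 0:
        return "opponent ES is in the same single-program transport stream"
    if channel_flag == 0:
        return "opponent ES is in another SPTS of the same channel"
    if distribution_flag == 0:
        return "opponent ES is on another channel of the same transmission network"
    return "opponent ES is on another channel of a different transmission network"

print(locate_opponent(0, 0, 0))  # e.g. third exemplary embodiment: everything in one SPTS
print(locate_opponent(1, 1, 1))  # e.g. fourth exemplary embodiment: terrestrial plus satellite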
  • Parameter ‘simulcast_flag’ represents same view information that represents whether mutually associated video streams of the same view exist.
  • When the value of parameter ‘simulcast_flag’ is 0, it may indicate that mutually associated video streams exist as video information of the same view.
  • When the value of parameter ‘simulcast_flag’ is 1, it may indicate that mutually associated video streams do not exist at the same point of time but data may be provided later.
  • Parameter ‘linked_distribution_method’ represents linked video transmission network method information that represents the transmission network method of a channel through which the opponent video stream is transmitted.
  • the linked video transmission network method information may be set to define a transmission network method for the opponent video stream.
  • Table 6 shows an example of the linked video transmission network method information.
  • Parameter ‘linked_ts_id’ represents linked TS PID information that indicates a PID of a multi-program TS including mutually associated video streams.
  • Parameter ‘linked_channel’ represents linked video channel information that represents a channel through which an opponent video stream from among the mutually associated video streams is transmitted and received.
  • the linked video channel information may be set.
  • frequency information may be provided as the linked video channel information.
  • IPTV broadcasting Uniform Resource Locator (URL) information may be provided as the linked video channel information.
  • URL Uniform Resource Locator
  • Parameter ‘linked_pmt_pid’ represents linked video PID information that indicates a PID of a PMT of a single-program transport stream through which the opponent video stream is transmitted.
  • parameter ‘pmt_indicator_flag’ representing the same single-program transport stream information
  • linked video packet identifier information may be defined.
  • Parameter ‘linked_stream_PID’ represents linked video stream PID information that indicates a PID of an opponent video stream from among the mutually associated video streams.
  • Parameter ‘linked_service_identification_time’ represents linked video service identifying time information that represents when a program for the opponent video stream from among the mutually associated video streams is provided.
  • the linked video service identifying time information may indicate, in units of months, days, hours, and minutes, when a linked program is provided.
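• Taken together, the linked_* fields above amount to an address of the opponent stream. The sketch below collects them in one container for illustration; the field names mirror the parameters in the text, while the types, widths, and defaults are assumptions (the real syntax is in Table 4).

# Minimal sketch: a container for the link-descriptor fields discussed above.
from dataclasses import dataclass
from typing import Optional

@dataclass
class LinkDescriptor:
    linking_priority: int                 # 2 bits, see Table 5
    distribution_indicator_flag: int      # same transmission network?
    channel_indicator_flag: int           # same channel?
    pmt_indicator_flag: int               # same single-program transport stream?
    simulcast_flag: int                   # opponent view available now?
    linked_distribution_method: Optional[int] = None          # Table 6
    linked_ts_id: Optional[int] = None                         # PID of the linked multi-program TS
    linked_channel: Optional[str] = None                       # frequency or IPTV URL
    linked_pmt_pid: Optional[int] = None                       # PMT PID of the opponent SPTS
    linked_stream_pid: Optional[int] = None                    # PID of the opponent video stream
    linked_service_identification_time: Optional[str] = None   # month/day/hour/minute

# A left-view stream on a terrestrial channel pointing at a right-view stream on satellite:
link = LinkDescriptor(0b10, 1, 1, 1, 0,
                      linked_channel="satellite:ch-12", linked_stream_pid=0x1FF)
print(link)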
  • the digital broadcasting stream transmitting apparatus 100 of FIG. 1 may transmit linking information such as a link descriptor, a link identifier, and the like for identifying existence and locations of the mutually associated video streams, together with the mutually associated video streams.
  • the digital broadcasting stream receiving apparatus 200 of FIG. 2 may extract linking information and video streams from a received TS and also extract mutually associated video streams according to the linking information, thereby properly reproducing 3D video.
  • When a link descriptor is used between non-linked video streams, the value of parameter ‘linking_priority’ representing link priority information is 00, and the values of parameters ‘distribution_indicator_flag’, ‘channel_indicator_flag’, and ‘pmt_indicator_flag’ are set to be 0.
  • the value of parameter ‘linked_stream_PID’ may be set to be a PID value newly defined for video information for an additional view video of a 3D video.
  • the PMT information may include parameter ‘linked_network_id’ representing service provider information or broadcasting station information that provides a TS.
  • a TS is identified using parameter ‘transport_stream_id’, and the service provider information or the broadcasting station information is identified using parameter ‘original_network_id’.
  • programs inserted into the TS may be reliably distinguished from each other.
  • stereo link information may be accurately determined using parameter ‘linked_network_id’ instead of using parameter ‘linked_channel’.
  • the value of parameter ‘linked_channel’ denotes a channel of the opponent video information associated with the current TS, and may vary according to a broadcasting method or the type of a broadcasting system.
  • the link descriptor according to an exemplary embodiment is not limited to that shown in Table 4, and may be appropriately changed according to a case where the link descriptor is expanded to include a multi-view image or used for a predetermined purpose.
  • linking information described up to now may be set for each video information.
  • unidirectional linking information in which linking information is set for only base-view video information may be used.
  • the TS generation unit 120 of the digital broadcasting stream transmitting apparatus 100 of FIG. 1 may insert a 3D video stream descriptor for a current video stream into PMT information for the current video stream.
  • the ES extraction unit 220 of the digital broadcasting stream receiving apparatus 200 of FIG. 2 may extract the 3D video stream descriptor from the PMT information for the current video stream.
  • the 3D video stream descriptor includes additional information that is used when the reproduction unit 230 of the digital broadcasting stream receiving apparatus 200 of FIG. 2 performs 3D rendering to accurately restore and reproduce video data of the current video stream.
  • Table 7 shows parameter ‘3d_video_stream_descriptor’ including a 3D video stream descriptor according to an exemplary embodiment.
  • Parameter ‘3d_video_property’ represents 3D video property information that represents video properties of the current video stream when a 3D video is constructed.
  • 3D video property information may be defined with reference to Table 8 below.
  • the current video stream is a left-view video of a stereo video.
  • the current video stream is a right-view video of the stereo video.
  • the current video stream is a 2D video.
  • the current video stream is depth information or disparity information, respectively, of an additional-view video for the left-view video of the stereo video.
  • the value of parameter ‘3d_video_property’ of a video stream including the 2D video information may be set to be 0x02, and the value of parameter ‘3d_video_property’ of a video stream including the depth information may be set to be 0x03.
  • the 3D video is an image based on a 3D dot method or a random dot stereogram method.
  • the current video stream is a plurality of 2D video streams for a multi-view video.
  • the current video stream is a video stream in a 3D composite format that is obtained by composing a left-view image and a right-view image in a single frame, such as in a side-by-side format or a top-and-bottom format.
  • the other fields not defined in Table 8 may be reserved as reserved fields.
  • Video formats of a new video property may be set in the reserved fields as a user demands. For example, a 3D composite format and a video format of a combination of depth information and disparity information may be allocated to reserved parameters as needed.
  • Parameter ‘linked_stream_coding_mode’ represents linked video encoding method information that represents a compressive encoding method between mutually associated pieces of video information.
  • Linked video encoding method information according to an exemplary embodiment is set with reference to Table 9.
  • the digital broadcasting stream transmitting apparatus 100 of FIG. 1 may compress the two video data items by using two independent video encoders, respectively, and transmit the two video data items in the form of two video streams.
  • the digital broadcasting stream receiving apparatus 200 of FIG. 2 may receive the two video streams and restore video data from each of the two video streams by using two independent video decoders.
  • When the value of parameter ‘linked_stream_coding_mode’ is 001, the mutually associated video data items are encoded using a scalable coding method. When the value of parameter ‘linked_stream_coding_mode’ is 010, a differential image between a left-view image and a right-view image is encoded as additional-view video information.
  • Parameter ‘full_image_size_indicator’ represents size indicator information that indicates whether a current video stream transmits current video information at the original size of the original video information.
  • the size indicator information may represent a rate at which video data of the current video stream is scaled with respect to the original size of the original video data.
  • When the value of parameter ‘full_image_size_indicator’ is 0, the size of current video data is not the same as the full size of the original video data.
  • When the value of parameter ‘full_image_size_indicator’ is 1, the size of current video data is the same as the full size of the original video data.
  • Some polarization-type display devices halve the vertical resolution of an image and display an image with halved vertical resolution. Even when receiving 3D video data at a full resolution, the polarization type display devices halve the vertical resolution of the 3D video data and display an image with halved vertical resolution. Since full-resolution base-view video data and full-resolution additional-view video data do not need to be provided to these display devices, providing half-size video data obtained by halving a vertical resolution is efficient in terms of the amount of transmission data and a data processing rate.
  • a base-view video is transmitted at a full resolution in consideration of compatibility with broadcasting systems for 2D contents services, and an additional-view video is transmitted and received at a half size because it is used during 3D video reproduction, thereby efficiently transmitting and receiving a video stream.
  • depth information and disparity information may be transmitted and received at a 1/2 or 1/4 size, and not at a full size.
  • the size of a composite frame is an original size of a frame, and the sizes of a left-view frame and a right-view frame that constitute the composite frame may be reduced to a half size.
  • In this case, the value of parameter ‘full_image_size_indicator’ is not set to be 1. If each of the left-view frame and the right-view frame is constructed to have the full size of the original frame, and thus the size of a 3D composite format frame is twice the size of the original frame, the value of parameter ‘full_image_size_indicator’ may be set to be 1.
  • When parameter ‘3d_video_property’ represents ‘3d_composite_format’, parameter ‘3d_composite_format’ and parameter ‘is_left_first’ may be set as below.
  • Parameter ‘3d_composite_format’ represents 3D composite format information that represents a method of constructing 3D composite format images by composing images corresponding to a left-view video and a right-view video.
  • the 3D composite format information corresponds to the types of 3D composite formats as in Table 10.
  • the 3D composite format of current video data is a side-by-side format.
  • the 3D composite format of the current video data is a top-and-bottom format, a vertical line interleaved format, a horizontal line interleaved format, a frame sequential format, a field sequential format, or a checker board format, respectively.
  • the side-by-side format is an image format in which a left-view image and a right-view image respectively corresponding to a left region and a right region of a 3D composite format image are arranged side by side.
  • the top-and-bottom format is an image format in which a left-view image and a right-view image respectively corresponding to an upper region and a lower region of the 3D composite format image are arranged.
  • the vertical line interleaved format is an image format in which a left-view image and a right-view image respectively corresponding to odd-numbered vertical lines and even-numbered vertical lines of the 3D composite format image are arranged.
  • the horizontal line interleaved format is an image format in which a left-view image and a right-view image respectively corresponding to odd-numbered horizontal lines and even-numbered horizontal lines of the 3D composite format image are arranged.
  • the frame sequential format is an image format in which a left-view image and a right-view image respectively corresponding to odd-numbered frames and even-numbered frames of the 3D composite format image are arranged.
  • the field sequential format is an image format in which a left-view image and a right-view image respectively corresponding to odd-numbered fields and even-numbered fields of the 3D composite format image are arranged.
  • the checker board format is an image format in which a left-view image and a right-view image respectively corresponding to pixels in a horizontal direction and pixels in a vertical direction of the 3D composite format image are arranged alternately in units of pixels.
  • Parameter ‘is_left_first’ represents format arrangement sequence information that represents a sequence in which a left-view image and a right-view image of the 3D composite format image are arranged.
  • Parameter ‘is_left_first’ may represent which region is a left-view image of the 3D composite format image of the current video stream and which region is a right-view image thereof.
  • Format arrangement sequence information according to an exemplary embodiment is linked with the 3D composite format information with reference to Table 11, so that positions of a left-view image and a right-view image of the 3D composite format image may be set as follows.
  • left-view video data is arranged in a left region of a side by side format image, an upper region of a top and bottom format image, odd-numbered lines of a vertical line interleaved format image, odd-numbered lines of a horizontal line interleaved format image, odd-numbered frames of a frame sequential format image, odd-numbered fields of a field sequential format image, and odd-numbered pixels of a checker board format image.
  • right-view video data is arranged in a region opposite to the region of each of the 3D composite format images where the left-view video data is arranged.
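• For the side-by-side case, the arrangement rules above can be pictured with a short sketch. NumPy arrays stand in for decoded frames, and the function is illustrative rather than part of the specification.

# Minimal sketch: splitting a side-by-side composite frame into left/right views,
# honoring is_left_first. Frames are NumPy arrays of shape (height, width).
import numpy as np

def split_side_by_side(frame: np.ndarray, is_left_first: bool):
    half = frame.shape[1] // 2
    first, second = frame[:, :half], frame[:, half:]
    return (first, second) if is_left_first else (second, first)

composite = np.hstack([np.zeros((4, 4)), np.ones((4, 4))])  # left half 0s, right half 1s
left, right = split_side_by_side(composite, is_left_first=True)
print(left.mean(), right.mean())  # 0.0 1.0 -> the left view came from the left region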
  • When the value of parameter ‘full_image_size_indicator’ is 0, parameter ‘additional_view_image_size’, parameter ‘scaling_method’, and parameter ‘scaling_order’ may be set as follows.
  • Parameter ‘additional_view_image_size’ includes additional-view image size information that represents a rate at which the image size of additional-view video information of the current video stream is enlarged or reduced from the original image size. Additional-view image size information according to an exemplary embodiment may be set as follows with reference to Table 12.
  • When the value of parameter ‘additional_view_image_size’ is 0x00, it represents a method of halving the image size of the additional-view video information in a horizontal direction.
  • This method is an additional-view image reducing method that can be efficiently used in 3D display devices for halving input video data in a horizontal direction and reproducing a result of the halving, including parallax barrier display devices.
  • When the value of parameter ‘additional_view_image_size’ is 0x01, it represents a method of halving the image size of the additional-view video information in a vertical direction.
  • This additional-view image reducing method may be efficiently used in display devices for halving the resolution of an image in a vertical direction and reproducing a result of the halving, including display devices that perform reproduction while changing a polarization angle in units of horizontal lines of a 3D video.
  • When the value of parameter ‘additional_view_image_size’ is 0x02, it represents a method of reducing the image size by half in a vertical direction and in a horizontal direction, respectively, and thus reducing the image size to 1/4 overall.
  • This additional-view image reducing method may be used in depth information or disparity information rather than video data in order to reduce loss and increase compression efficiency.
  • Parameter ‘scaling_method’ represents down-scaling method information that represents a method of down-scaling the left-view image and the right-view image of a 3D composite format image.
  • a 3D composite format scales down the video data of a frame for each of a plurality of views so that a single frame includes data pieces for the plurality of views.
  • the down-scaling may be performed using any of various down-scaling methods. However, since the down-scaling method is performed in a pre-processing process occurring before compression, if the image size is restored according to an up-scaling method not corresponding to the down-scaling method during video decoding, artifacts may be generated on a restored image. An example where artifacts may be generated during image down-scaling and image restoration will now be described with reference to FIG. 13 .
  • FIG. 13 illustrates an example in which down-scaling method information for 3D composite formats is used, according to an exemplary embodiment.
  • lines 1314 and 1318 from among lines 1312 , 1314 , 1316 , and 1318 of an original image 1310 are sub-sampled and arranged in an upper region 1320 of a top-and-bottom format image.
  • lines 1322 and 1324 of the upper region 1320 of the top-and-bottom format image are the same as the even-numbered lines 1314 and 1318 of the original image 1310 .
  • If the lines 1322 and 1324 of the top-and-bottom format image are arranged at locations of odd-numbered lines 1332 and 1336 of a restored image 1330 , and not at locations of even-numbered lines 1334 and 1338 of the restored image 1330 , during an up-scaling operation performed during image decoding, a one-pixel mismatch in a vertical direction may occur in all pixels of the restored image 1330 . If a left-view image and a right-view image are each mismatched in units of one pixel, a mismatch in units of a maximum of two pixels may occur in a 3D video.
  • the digital broadcasting stream transmitting apparatus 100 of FIG. 1 and the digital broadcasting stream receiving apparatus 200 of FIG. 2 may transmit and receive the down-scaling method information for 3D composite formats.
  • Down-scaling method information may be set as parameter ‘scaling_method’ as in Table 13.
  • a current 3D composite format may include left-view image information and right-view image information that have been down-scaled according to a sampling method in which one of every two consecutive lines is selected and extracted.
  • the current 3D composite format may include left-view image information and right-view image information that have been down-scaled with respect to two consecutive lines of each of an original left-view image and an original right-view image with a single line by replacing the two consecutive lines with a result of an arithmetic operation performed thereon.
  • a representative method of forming a 3D composite format with a down-scaled left-view image and a down-scaled right-view image may be an averaging method in which an average value between pixels of two consecutive lines is determined as a pixel value of a single line.
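• The two down-scaling choices of Table 13 differ in how two source lines collapse into one, roughly as in this sketch (the function names and the toy input are illustrative; odd-numbered lines are taken here to be rows 0, 2, 4, ... of the array).

# Minimal sketch contrasting the two vertical down-scaling methods of Table 13:
# line sampling (keep one of every two lines) versus line averaging.
import numpy as np

def downscale_by_sampling(image: np.ndarray, keep_odd: bool = True) -> np.ndarray:
    start = 0 if keep_odd else 1          # odd-numbered lines map to rows 0, 2, 4, ... here
    return image[start::2, :]

def downscale_by_averaging(image: np.ndarray) -> np.ndarray:
    return (image[0::2, :] + image[1::2, :]) / 2.0

src = np.arange(16, dtype=float).reshape(4, 4)
print(downscale_by_sampling(src))   # rows 0 and 2 survive unchanged
print(downscale_by_averaging(src))  # each output row averages two source rows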
  • Parameter ‘scaling_order’ represents down-scaling sampling order information that represents a sampling order of a 3D composite format image down-scaled according to a sampling method.
  • When down-scaling is performed according to the sampling method indicated by parameter ‘scaling_method’, parameter ‘scaling_order’ representing the down-scaling sampling order information may be set. An example of using down-scaling sampling order information according to an exemplary embodiment is shown in Table 14.
  • When the value of parameter ‘scaling_order’ is 0x00, an odd-numbered line of a left-view image is sampled and an even-numbered line of a right-view image is sampled to construct a 3D composite format image. Similarly, when the value of parameter ‘scaling_order’ is 0x01, an even-numbered line of a left-view image is sampled and an odd-numbered line of a right-view image is sampled to construct a 3D composite format image. When the value of parameter ‘scaling_order’ is 0x02, an odd-numbered line of a left-view image and an odd-numbered line of a right-view image are sampled to construct a 3D composite format image. When the value of parameter ‘scaling_order’ is 0x03, an even-numbered line of a left-view image and an even-numbered line of a right-view image are sampled to construct a 3D composite format image.
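• Building on the sampling sketch above, the four ‘scaling_order’ combinations could then pick which line parity each view contributes to a composite frame. The parities below follow the text; the top-and-bottom assembly and array handling are illustrative assumptions.

# Minimal sketch: applying the four 'scaling_order' combinations when building a
# top-and-bottom composite from full-size left/right views.
import numpy as np

SCALING_ORDER = {
    0x00: ("odd", "even"), 0x01: ("even", "odd"),
    0x02: ("odd", "odd"),  0x03: ("even", "even"),
}

def sample_lines(view: np.ndarray, parity: str) -> np.ndarray:
    return view[0::2, :] if parity == "odd" else view[1::2, :]   # odd lines = rows 0, 2, ...

def make_top_and_bottom(left: np.ndarray, right: np.ndarray, scaling_order: int) -> np.ndarray:
    left_parity, right_parity = SCALING_ORDER[scaling_order]
    return np.vstack([sample_lines(left, left_parity), sample_lines(right, right_parity)])

left = np.zeros((4, 4)); right = np.ones((4, 4))
print(make_top_and_bottom(left, right, 0x00).shape)  # (4, 4): two halved views stacked vertically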
  • parameter ‘Is_Main’, parameter ‘picture_display_order’, and parameter ‘view_info’ are set when the video property information of the current video stream represents 3D video data, that is, when parameter ‘3d_video_property’ is ‘Left Video’ or ‘Right Video’.
  • Parameter ‘Is_Main’ represents base-view indicator information that represents whether the current video stream is a base-view video stream. For example, when parameter ‘3d_video_property’ representing the video property information of the current video stream is ‘Left video’, parameter ‘Is_Main’ may be set as in Table 15.
  • left-view video data is set to be a sub video.
  • the left-view video data is set to be a main video.
  • Parameter ‘picture_display_order’ represents display order information that represents an order in which ESs of left-view and right-view are displayed.
  • Parameter ‘view_info’ represents 3D-video view-related information in which, if a current video stream is a 3D video, view information is set differently for children and adults.
  • view-related information such as depth and disparity may be set differently for children and adults.
  • an image for adults is an image having a relatively large binocular parallax
  • an image for children is an image having a relatively small binocular parallax. Due to the use of parameter ‘view_info’, images for adults may be distinguished from images for children, and also selective 3D reproduction may be performed according to the screen size of a 3D display device.
  • Parameter ‘view_info’ may be used so that if the screen size of a 3D display device is relatively large, a 3D image having a relatively small binocular parallax is reproduced and if the screen size of a 3D display device is relatively small, a 3D image having a relatively large binocular parallax is reproduced.
  • a stereoscopic effect may be uniformly provided to viewers regardless of the size of a 3D display.
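• One way to read the ‘view_info’ behaviour described above is a simple selection rule, as in the sketch below; the 40-inch threshold and the stream labels are purely invented for illustration.

# Minimal sketch: choosing between 3D streams with different binocular parallax
# according to display size, as suggested above.

def pick_3d_stream(screen_inches: float, streams: dict) -> str:
    # Large screens exaggerate parallax, so prefer the small-parallax (children) version.
    return streams["small_parallax"] if screen_inches >= 40 else streams["large_parallax"]

streams = {"small_parallax": "view_info=children", "large_parallax": "view_info=adults"}
print(pick_3d_stream(55, streams))  # view_info=children
print(pick_3d_stream(24, streams))  # view_info=adults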
  • Parameter ‘view_index’ represents video index information that indicates a current view from among multiple views of a current video stream when the current video stream relates to one of a plurality of 2D videos each including multiple views.
  • parameter ‘view_index’ may be set.
  • Parameter ‘2D_Multi’ may be used when a service including a plurality of 2D videos is received. In this case, parameter ‘view_index’ may be used to distinguish 2D moving pictures from one another.
  • Parameter ‘es_icon_indicator’ represents a 3D video service notification indicator that indicates provision of 3D service-related icons from a contents provider. Indication of notification of a 3D video service may be inserted into contents through the 3D video service notification indicator without overlapping with a set-top box or a TV.
  • the digital broadcasting stream transmitting apparatus 100 of FIG. 1 and the digital broadcasting stream receiving apparatus 200 of FIG. 2 may transmit and receive information associated with conversion of a 2D or 3D reproduction mode in the current video stream.
  • a reproduction mode transition indicator, reproduction mode conversion time information, or the like may be set as the information associated with reproduction mode conversion.
  • Parameter ‘transition_indicator’ includes a reproduction mode transition indicator that indicates whether a 2D/3D video reproduction mode different from that set in current PMT information is set in PMT information following the current PMT information.
  • the reproduction mode transition indicator indicates that, when different pieces of reproduction mode information are set in the following PMT information and the current PMT information, a reproduction mode transition has occurred.
  • Parameter ‘transition_time_stamp’ represents reproduction mode transition time information that represents, in units of Presentation Time Stamps (PTSs), the time at which a reproduction mode transition occurs.
  • the reproduction mode transition time information may be set as an absolute period of time or as a relative period of time starting from a predetermined reference point of time.
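• Since parameter ‘transition_time_stamp’ is expressed in PTS units, a receiver might convert it as sketched below. MPEG PTS values run on a 90 kHz clock; treating the value as relative to a reference PTS is only one of the two options described above, and the function name is invented.

# Minimal sketch: interpreting a reproduction mode transition time given in PTS units.

PTS_CLOCK_HZ = 90_000

def transition_in_seconds(transition_time_stamp: int, reference_pts: int = 0) -> float:
    return (transition_time_stamp - reference_pts) / PTS_CLOCK_HZ

print(transition_in_seconds(450_000))           # 5.0 seconds from the reference point
print(transition_in_seconds(900_000, 450_000))  # 5.0 seconds after reference_pts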
  • Parameter ‘transition_message’ includes a variety of message information such as a text, an image, an audio, and the like for notifying viewers of a reproduction mode transition.
  • parameter ‘transition_message’ includes size information and message information of a reproduction mode transition message, and the reproduction mode transition message may be represented as an 8-bit character string by using a for statement that is repeated a number of times corresponding to the size information.
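• The for-statement construction mentioned above suggests a length-prefixed character string. A loose sketch of reading such a message back, assuming a one-byte length followed by that many 8-bit characters (the exact byte layout is an assumption):

# Minimal sketch: reading a reproduction-mode transition message laid out as a
# one-byte length followed by that many 8-bit characters.

def read_transition_message(buf: bytes) -> str:
    length = buf[0]
    chars = []
    for i in range(length):            # the "for statement" repeated length times
        chars.append(chr(buf[1 + i]))
    return "".join(chars)

payload = bytes([10]) + b"3D starts!"
print(read_transition_message(payload))  # 3D starts!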
  • the 3D video stream descriptor according to an exemplary embodiment is not limited to the syntax shown in Table 7 and may be suitably changed when a 3D video is expanded into a multi-view image or used for predetermined purposes.
  • the TS generation unit 120 of the digital broadcasting stream transmitting apparatus 100 of FIG. 1 may insert a multi-view video stream descriptor for accurate distinction among multi-view video streams into the PMT information for the current video stream.
  • the ES extraction unit 220 of the digital broadcasting stream receiving apparatus 200 of FIG. 2 may extract the multi-view video stream descriptor from the PMT information for the current video stream.
  • the multi-view video stream descriptor includes additional information used by the reproduction unit 230 of the digital broadcasting stream receiving apparatus 200 of FIG. 2 to accurately restore and reproduce multi-view video streams.
  • a multi-view video stream descriptor according to an exemplary embodiment may be set as in Table 16.
  • Parameter ‘number_of_views’ represents information about the number of views of a multi-view video stream.
  • Parameter ‘mv_numbering’ represents video index information of the multi-view video stream.
  • the value of parameter ‘mv_numbering’ may be set to start at 0 for a leftmost view video stream and increase by 1 with each video stream to the right of the leftmost view video stream.
  • the value of parameter ‘mv_numbering’ of the leftmost view video stream may be set to be 0, and the value of parameter ‘mv_numbering’ of the rightmost view video stream may be set to be a value obtained by subtracting 1 from the value of parameter ‘number_of_views’.
  • a location of a base-view video stream in a multi-view video stream may be predicted from parameter ‘linking_priority’ of parameter ‘linking_descriptor’.
  • the value of parameter ‘linking_priority’ of the base-view video stream may be set to be 10, and the values of parameter ‘linking_priority’ of the other-view video streams may be all set to be 11.
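  • A minimal sketch of how the view ordering and base-view identification rules above could be applied on a receiving side is shown below; the dictionary keys ‘pid’, ‘mv_numbering’, and ‘linking_priority’ are hypothetical stand-ins for values already parsed from the multi-view video stream descriptor and the link descriptor.

```python
def order_views(streams):
    """Sort multi-view streams from leftmost to rightmost using mv_numbering.

    `streams` is a list of dicts with hypothetical keys 'pid', 'mv_numbering',
    and 'linking_priority' (binary value as a string; '10' marks the base view).
    """
    ordered = sorted(streams, key=lambda s: s["mv_numbering"])
    # The base-view video stream is the one whose linking_priority is '10'.
    base = next(s for s in ordered if s["linking_priority"] == "10")
    return ordered, base

views = [
    {"pid": 0x1012, "mv_numbering": 2, "linking_priority": "11"},
    {"pid": 0x1010, "mv_numbering": 0, "linking_priority": "10"},
    {"pid": 0x1011, "mv_numbering": 1, "linking_priority": "11"},
]
ordered, base = order_views(views)
print([hex(v["pid"]) for v in ordered], hex(base["pid"]))
```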
  • the digital broadcasting stream transmitting apparatus 100 of FIG. 1 may set a partial link descriptor and a 3D video stream descriptor of a left-view video stream corresponding to the base-view video stream as in Table 17 and set a partial link descriptor and a 3D video stream descriptor of a right-view video stream corresponding to an additional-view video stream as in Table 18.
  • the digital broadcasting stream receiving apparatus 200 of FIG. 2 may determine that, as the values of parameter ‘linked_stream_coding_mode’ for the first and second video streams are 000, the first and second video streams have been encoded according to an encoding method independent from that used to encode the opponent video stream, and that as the value of parameter ‘full_image_size_indicator’ is 1, the original image size is maintained.
  • FIGS. 14, 15, 16, 17, and 18 are schematic views of reproduction units of digital broadcasting stream receiving apparatuses according to exemplary embodiments.
  • a method in which the reproduction unit 230 of the digital broadcasting stream receiving apparatus 200 of FIG. 2 restores and reproduces 3D or 2D video data from a video stream will now be described with reference to FIGS. 14 through 18.
  • left-view video data 1415 and right-view video data 1425 may be restored by stream decoders 1410 and 1420, respectively.
  • the left-view video data 1415 and the right-view video data 1425 may be converted into a 3D reproduction format that can be reproduced as a 3D video, by a 3D formatter or renderer 1430.
  • a signal of the 3D reproduction format may be reproduced by a 3D display device 1440.
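  • As a conceptual sketch of the pipeline of FIG. 14, the following code wires two decoding stages into a 3D formatting stage; the decoder and formatter are simple stand-in callables, not actual codec or rendering implementations.

```python
def reproduce_stereo(left_es, right_es, decode, format_3d):
    """Decode the left-view and right-view ESs separately, then combine them
    into a 3D reproduction format, mirroring the structure of FIG. 14."""
    left_video = decode(left_es)     # stream decoder 1410
    right_video = decode(right_es)   # stream decoder 1420
    return format_3d(left_video, right_video)  # 3D formatter or renderer 1430

# Stand-in stages for demonstration only; real decoders and formatters differ.
frame = reproduce_stereo(
    b"L-ES", b"R-ES",
    decode=lambda es: es.decode(),
    format_3d=lambda l, r: {"format": "frame-packed", "views": (l, r)},
)
print(frame)
```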
  • 2D video data 1515 and depth/disparity video 1525 may be restored by stream decoders 1510 and 1520, respectively.
  • the 2D video data 1515 and the depth/disparity video 1525 may be converted into the 3D reproduction format by a 3D renderer 1530.
  • a signal of the 3D reproduction format may be reproduced by a 3D display device 1540.
  • 2D video data 1615 and a 3D composite video 1625 may be restored by stream decoders 1610 and 1620, respectively.
  • the 3D composite video 1625 is converted into the 3D reproduction format by a 3D formatter 1630.
  • the 2D video data 1615 and a signal of the 3D reproduction format may undergo a reproduction mode conversion process 1640 based on a user input or an automatic reproduction mode conversion algorithm and then may be selectively reproduced by a 3D display device 1650.
  • main 2D video data 1715 and sub 2D video data 1725 may be restored by stream decoders 1710 and 1720, respectively.
  • the main 2D video data 1715 and the sub 2D video data 1725 may undergo a video conversion process 1730 based on a user input or a main/sub video selection algorithm and then may be selectively reproduced by a 2D display device 1740.
  • main 2D video data 1815 and sub 2D video data 1825 may be restored by stream decoders 1810 and 1820.
  • the main 2D video data 1815 and the sub 2D video data 1825 may be reproduced in a PIP mode by a 2D display device 1830.
  • the main 2D video data 1815 may be reproduced on a screen 1840 of the 2D display device 1830 to cover the screen 1840 entirely, and the sub 2D video data 1825 may be reproduced on a partial screen 1850 of the 2D display device 1830.
  • a main 2D video and a sub 2D video are not videos for achieving a stereoscopic effect of 3D contents but may be videos of mutually associated contents.
  • a 2D video of a main view may contain contents of a baseball game scene
  • a 2D video of a sub view may contain contents of additional information of a baseball game, such as stand scenes, analysis information of the pitching posture of a current pitcher, batting average information of the current pitcher, and the like.
  • the digital broadcasting stream transmitting apparatus 100 of FIG. 1 may set and transmit link information and 3D video stream information for the main 2D video and the sub 2D video
  • the digital broadcasting stream receiving apparatus 200 of FIG. 2 may restore the main 2D video and the sub 2D video linked with each other by using the link information and the 3D video stream information and selectively reproduce the main 2D video and the sub 2D video or reproduce the same in the PIP mode. Accordingly, a user may watch a variety of information about a baseball game in a sub view while continuously watching the baseball game in a main view.
  • the digital broadcasting stream transmitting apparatus 100 of FIG. 1 transmits and the digital broadcasting stream receiving apparatus 200 of FIG. 2 receives link information between mutually associated video streams for a data stream that provides 3D video and 2D video, thereby realizing a 3D digital video contents broadcasting system while securing compatibility with digital broadcasting systems for 2D contents services.
  • the digital broadcasting stream transmitting apparatus 100 of FIG. 1 and the digital broadcasting stream receiving apparatus 200 of FIG. 2 provide 3D moving picture services that are not restricted by time and place, by providing 2D digital contents and 3D digital contents while securing compatibility with media such as digital video discs (DVDs).
  • FIG. 19 is a flowchart of a digital broadcasting stream transmitting method capable of providing 3D video services, according to an exemplary embodiment.
  • a plurality of ESs for a plurality of pieces of video information including at least one of information about a base-view video of a 3D video, information about an additional-view video of the 3D video, and a 2D video having a different view from that of the 3D video are generated.
  • link information between the plurality of pieces of video information and the plurality of ESs are multiplexed to generate at least one TS.
  • the at least one TS is transmitted via at least one channel.
  • FIG. 20 is a flowchart of a digital broadcasting stream receiving method capable of providing 3D video services, according to an exemplary embodiment.
  • At least one TS is received via at least one channel.
  • the at least one TS may include a plurality of pieces of video information including at least one of information about a base-view video of a 3D video, information about an additional-view video of the 3D video, and a 2D video having a different view from that of the 3D video.
  • the at least one received TS is demultiplexed to extract linking information between pieces of video information and at least one ES for the plurality of video information pieces from the TS.
  • At least one of 3D video data and 2D video data restored by decoding the extracted at least one ES is reproduced in consideration of a link represented by the linking information.
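  • The receiving steps above may be summarized as a short driver loop; the helper names (tune, demultiplex, decode_es, render) are hypothetical placeholders for the stages of FIG. 20 rather than a defined API.

```python
def receive_and_reproduce(channels, tune, demultiplex, decode_es, render):
    """Hypothetical driver for the receiving method of FIG. 20."""
    for channel in channels:
        ts = tune(channel)                        # receive a TS via the channel
        linking_info, es_list = demultiplex(ts)   # extract linking info and ESs
        videos = [decode_es(es) for es in es_list]
        render(videos, linking_info)              # reproduce using the link

# Toy usage with stand-in stages.
receive_and_reproduce(
    channels=[8],
    tune=lambda ch: {"es": [b"left view", b"right view"],
                     "link": "streams are stereo linked"},
    demultiplex=lambda ts: (ts["link"], ts["es"]),
    decode_es=lambda es: es.decode(),
    render=lambda videos, link: print(videos, link),
)
```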
  • the digital broadcasting stream transmitting method of FIG. 19 and the digital broadcasting stream receiving method of FIG. 20 may each be performed by a computer program. In this manner, operations of the digital broadcasting stream transmitting apparatus 100 and the digital broadcasting stream receiving apparatus 200 that are performed according to the exemplary embodiments described above with reference to FIGS. 1, 2, 5-12, and 14-18 may be implemented.
  • Exemplary embodiments can be written as computer programs and can be implemented in general-use digital computers that execute the programs using a computer readable recording medium.
  • Examples of the computer readable recording medium include magnetic storage media (e.g., ROM, floppy disks, hard disks, etc.) and optical recording media (e.g., CD-ROMs, or DVDs).
  • one or more units of the apparatuses 100, 200, 300, 400, 600, 800, 900, and 1000 can include a processor or microprocessor executing a computer program stored in a computer-readable medium.

Abstract

A digital broadcasting stream transmitting method and a digital broadcasting stream receiving method and apparatus for providing three-dimensional (3D) video services are provided. The transmitting method includes: generating a plurality of elementary streams (ESs) for a plurality of pieces of video information including at least one of information about a base-view video of a 3D video, information about an additional-view video corresponding to the base-view video, and a two-dimensional (2D) video having a different view from views of the 3D video; multiplexing the plurality of ESs with link information for identifying at least one piece of video information linked with the plurality of pieces of video information, to generate at least one transport stream (TS); and transmitting the generated at least one TS via at least one channel.

Description

    CROSS-REFERENCE TO RELATED PATENT APPLICATIONS
  • This application is a continuation of U.S. application Ser. No. 13/016,339 filed Jan. 28, 2011, which claims the benefit of U.S. Provisional Patent Application No. 61/299,121, filed on Jan. 28, 2010 in the U.S. Patent and Trademark Office, and priority from Korean Patent Application No. 10-2010-0044056, filed on May 11, 2010 in the Korean Intellectual Property Office, the disclosures of which are incorporated herein in their entireties by reference.
  • BACKGROUND
  • 1. Field
  • Apparatuses and methods consistent with exemplary embodiments relate to digital data transmitting and receiving for providing two-dimensional (2D) contents or three-dimensional (3D) contents.
  • 2. Description of the Related Art
  • In a Moving Picture Experts Group (MPEG) Transport Stream (TS) based digital broadcasting method, a transmission terminal encodes uncompressed video data and an uncompressed audio stream into respective elementary streams (ESs), multiplexes the ESs to generate a TS, and transmits the TS via a channel.
  • The TS includes program specification information (PSI) together with the ESs. The PSI representatively includes a program association table (hereinafter, referred to as PAT information) and a program map table (hereinafter, referred to as PMT information). PMT information providing single-program information describes a Packet Identifier (PID) for each ES, and PAT information describes a PID for each PMT information.
  • A reception terminal receives a TS via a channel and extracts an ES from the TS through a process reverse to the process performed in a transmission terminal. Digital contents contained in the ES are restored and reproduced by a display device.
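  • As a rough illustration of the PSI indirection described above, the sketch below resolves a program's elementary PIDs by following the PAT to the corresponding PMT; the table contents are hypothetical examples.

```python
# Hypothetical PSI tables: PAT maps program_number -> PMT PID,
# and each PMT maps its PID -> list of (stream_type, elementary PID).
PAT = {1: 0x0100, 2: 0x0200}
PMT = {
    0x0100: [("video", 0x0101), ("audio", 0x0104)],
    0x0200: [("video", 0x0201), ("audio", 0x0204)],
}

def elementary_pids(program_number: int):
    """Follow the PAT to the program's PMT and return its ES PIDs."""
    pmt_pid = PAT[program_number]
    return PMT[pmt_pid]

print(elementary_pids(1))  # [('video', 257), ('audio', 260)]
```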
  • SUMMARY
  • According to an aspect of an exemplary embodiment, there is provided a digital broadcasting stream transmitting method for providing 3D video services, the method including: generating a plurality of ESs for a plurality of pieces of video information including at least one of information about a base-view video of a 3D video, information about an additional-view video corresponding to the base-view video, and a 2D video having a different view from views of the 3D video; generating at least one TS by multiplexing the plurality of ESs with link information for identifying at least one piece of video information linked with each of the plurality of pieces of video information; and transmitting the generated at least one TS via at least one channel.
  • The generating the at least one TS may include multiplexing each of the ESs to generate at least one TS, and the transmitting the at least one TS may include transmitting the at least one multi-program transport stream via different channels.
  • The generating the at least one TS may include multiplexing each of the ESs to generate at least one TS, and the transmitting the at least one TS may include transmitting the at least one TS via a single channel.
  • The generating the at least one TS may include multiplexing the ESs to generate a single TS, and the transmitting the at least one TS may include transmitting the single TS via a single channel.
  • According to an aspect of another exemplary embodiment, there is provided a digital broadcasting stream receiving method for providing 3D video services, the method including: receiving at least one TS for a plurality of pieces of video information including at least one of information about a base-view video of a 3D video, information about an additional-view video of the 3D video, and a 2D video having a different view from views of the 3D video, via at least one channel; demultiplexing the received at least one TS to extract, from the at least one TS, linking information for identifying at least one piece of video information linked with the plurality of pieces of video information and at least one ES for the plurality of pieces of video information; and decoding the extracted at least one ES to reproduce at least one of the 3D video and the 2D video restored by the decoding based on the linking information.
  • The receiving may include receiving a plurality of TSs by decoding each of the at least one channel and receiving a single TS from each channel, and the extracting may include extracting the at least one ES by demultiplexing each of the received multi-program TSs.
  • The receiving may include receiving a plurality of TSs via a channel from among the at least one channel by decoding the channel, and the extracting may include extracting the at least one ES by demultiplexing each of the plurality of TSs.
  • The receiving may include receiving a TS of a channel from among the at least one channel by decoding the channel, and the extracting may include extracting the at least one ES by demultiplexing the transport stream.
  • The linking information may include a link identifier that represents whether mutually linked pieces of video information exist in the plurality of pieces of video information included in the at least one TS, and a link descriptor that includes information about a link between mutually linked pieces of video information from among the plurality of pieces of video information included in the at least one TS.
  • The link identifier may be included in a program association table for the at least one TS.
  • The link descriptor may be included in a program map table for the ESs for the mutually linked pieces of video information.
  • A 3D video stream descriptor including additional information used to reproduce current video information of a current TS may be included in a program map table for the current video information.
  • According to an aspect of another exemplary embodiment, there is provided a digital broadcasting stream transmitting apparatus for providing 3D video services, the apparatus including: an ES generation unit which generates a plurality of ESs for a plurality of pieces of video information including at least one of information about a base-view video of a 3D video, information about an additional-view video corresponding to the base-view video, and a 2D video having a different view from views of the 3D video; a TS generation unit which generates at least one TS by multiplexing the generated plurality of ESs with link information for identifying at least one piece of video information linked with the plurality of pieces of video information; and a transmission unit which transmits the at least one TS via at least one channel.
  • According to an aspect of another exemplary embodiment, there is provided a digital broadcasting stream receiving apparatus for providing 3D video services, the apparatus including: a TS receiving unit which receives at least one TS for a plurality of pieces of video information including at least one of information about a base-view video of a 3D video, information about an additional-view video of the 3D video, and a 2D video having a different view from views of the 3D video, via at least one channel; an ES extraction unit which demultiplexes the received at least one TS to extract, from the at least one TS, linking information for identifying at least one piece of video information linked with the plurality of pieces of video information and at least one ES for the plurality of pieces of video information; and a reproduction unit which reproduces at least one of the 3D video data and 2D video data restored by decoding the extracted at least one ES based on a link represented by the linking information.
  • According to an aspect of another exemplary embodiment, there is provided a computer-readable recording medium having recorded thereon a program for the above-described digital broadcasting stream transmitting method for providing 3D video services.
  • According to an aspect of another exemplary embodiment, there is provided a computer-readable recording medium having recorded thereon a program for the above-described digital broadcasting stream receiving method for providing 3D video services.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • The above and/or other aspects will become more apparent by describing in detail exemplary embodiments with reference to the attached drawings in which:
  • FIG. 1 is a block diagram of a digital broadcasting stream transmitting apparatus according to an exemplary embodiment;
  • FIG. 2 is a block diagram of a digital broadcasting stream receiving apparatus according to an exemplary embodiment;
  • FIG. 3 is a block diagram of a digital TV transmitting system for 2D contents services according to an exemplary embodiment;
  • FIG. 4 is a block diagram of a digital TV receiving system for 2D contents services according to an exemplary embodiment;
  • FIG. 5 illustrates an example of a distribution of a channel frequency band in which a plurality of video streams are transmitted and received via a plurality of channels, according to a first exemplary embodiment;
  • FIG. 6 is a block diagram of a digital broadcasting stream transmitting apparatus that transmits a 3D video stream including linking information via a plurality of channels according to the first exemplary embodiment illustrated in FIG. 5;
  • FIG. 7 illustrates an example of a distribution of a channel frequency band in which a plurality of TSs for a 3D video stream and a 2D video stream can be transmitted and received via a single channel, according to a second exemplary embodiment;
  • FIG. 8 is a block diagram of a digital broadcasting stream transmitting apparatus that transmits a plurality of TSs having linking information via a single channel according to the second exemplary embodiment illustrated in FIG. 7;
  • FIG. 9 is a block diagram of a digital broadcasting stream transmitting apparatus that transmits a single TS including a 3D video stream and a 2D video stream via a single channel according to a third exemplary embodiment;
  • FIG. 10 is a block diagram of a digital broadcasting stream receiving apparatus that receives a plurality of TSs via a plurality of channels corresponding to a plurality of transmission network systems according to a fourth exemplary embodiment;
  • FIG. 11 illustrates an example of a distribution of a channel frequency band in which a plurality of TSs for a base-view video stream of a 3D video can be transmitted and received via a plurality of channels and in which a single TS for an additional-view video stream for the 3D video can be transmitted and received via a single channel, according to a fifth exemplary embodiment;
  • FIG. 12 illustrates an example of a distribution of a channel frequency band in which a plurality of TSs for a multi-view video stream can be transmitted and received via a single channel, according to an exemplary embodiment;
  • FIG. 13 illustrates an example in which down-scaling method information for 3D composite formats is used, according to an exemplary embodiment;
  • FIGS. 14, 15, 16, 17, and 18 are schematic views of reproduction units of a digital broadcasting stream receiving apparatus according to exemplary embodiments;
  • FIG. 19 is a flowchart of a digital broadcasting stream transmitting method capable of providing 3D video services, according to an exemplary embodiment; and
  • FIG. 20 is a flowchart of a digital broadcasting stream receiving method capable of providing 3D video services, according to an exemplary embodiment.
  • DETAILED DESCRIPTION OF EXEMPLARY EMBODIMENTS
  • Hereinafter, exemplary embodiments will be described more fully with reference to the accompanying drawings. It is understood that expressions such as “at least one of,” when preceding a list of elements, modify the entire list of elements and do not modify the individual elements of the list. Further, a “unit” as used herein may be embodied as a hardware component and/or a software component that is executed by a computer or a hardware processor.
  • A digital broadcasting stream transmitting apparatus and method capable of providing 3D video services and a digital broadcasting stream receiving apparatus and method capable of providing 3D video services according to exemplary embodiments will now be described in detail with reference to FIGS. 1 through 20.
  • FIG. 1 is a block diagram of a digital broadcasting stream transmitting apparatus 100 according to an exemplary embodiment.
  • Referring to FIG. 1, the digital broadcasting stream transmitting apparatus 100 includes an elementary stream (ES) generation unit 110, a transport stream (TS) generation unit 120, and a transmission unit 130.
  • The ES generation unit 110 generates a plurality of ESs for a plurality of pieces of video information including at least one of information about a base-view video of a 3D video, information about an additional-view video of the 3D video, and a two-dimensional (2D) video.
  • A single ES may be generated for a single piece of video information. The 3D video may be a combination of a single base-view video and a single additional-view video corresponding to the single base-view video, such as a stereo video, or a combination of a single base-view video and a plurality of additional-view videos corresponding to the single base-view video.
  • The information about the additional-view video represents an additional-view video corresponding to a base-view video, and thus may be additional-view video data itself or at least one of depth information, disparity information, and the like relative to the base-view video data.
  • The 2D video is not a one-view video of the 3D video, but rather a 2D video having a view separate from the views of the 3D video.
  • The TS generation unit 120 receives the plurality of ESs generated by the ES generation unit 110. The TS generation unit 120 multiplexes linking information between the plurality of pieces of video information with the plurality of ESs to generate at least one TS for the plurality of ESs. The transmission unit 130 transmits the at least one TS via at least one channel.
  • The linking information includes identifiers for indicating at least one video information piece associated with each of the plurality of video information pieces.
  • The TS generation unit 120 may generate at least one TS by multiplexing each of the plurality of ESs, and the transmission unit 130 may transmit the at least one TS via different channels.
  • The TS generation unit 120 may generate at least one TS by individually multiplexing the plurality of ESs, and the transmission unit 130 may transmit the at least one TS via a single channel.
  • The TS generation unit 120 may generate a single TS by multiplexing the plurality of ESs, and the transmission unit 130 may transmit the single TS via a single channel.
  • An operation of the TS generation unit 120 will now be described in detail.
  • The TS generation unit 120 may packetize the ESs individually to generate packetized elementary stream (PES) packets, and multiplex the PES packets with a program map table (hereinafter, referred to as PMT information) including the linking information so as to generate a single-program transport stream (SP TS). The TS generation unit 120 may generate an SP TS by multiplexing a video ES, an audio ES, additional data, and the like. The PMT information may be generated for each SP.
  • The TS generation unit 120 may generate a multi-program transport stream (MP TS) by multiplexing at least one SP TS with a program association table (hereinafter, referred to as PAT information).
  • In this case, a plurality of MP TSs, the number of which is not greater than the total number of SP TSs, may be generated by multiplexing the SP TSs into several groups.
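  • A minimal sketch of the two multiplexing stages described above is given below, with simple dictionaries standing in for PES packets, PMT information, and PAT information; it models only the grouping into SP TSs and an MP TS, not actual MPEG-2 TS packetization.

```python
def build_sp_ts(pes_packets, linking_info):
    """Multiplex PES packets with PMT information (carrying the linking info)
    into one single-program transport stream."""
    pmt = {"linking_descriptor": linking_info,
           "pids": [p["pid"] for p in pes_packets]}
    return {"pmt": pmt, "packets": pes_packets}

def build_mp_ts(sp_ts_list):
    """Multiplex one or more SP TSs with PAT information into an MP TS."""
    pat = {"programs": list(range(1, len(sp_ts_list) + 1))}
    return {"pat": pat, "sp_ts": sp_ts_list}

# Toy example: a stereo pair whose PMTs point at each other via linking info.
left_sp = build_sp_ts([{"pid": 0x101, "data": b"left video"}],
                      linking_info={"linked_view": "right", "channel": 8})
right_sp = build_sp_ts([{"pid": 0x201, "data": b"right video"}],
                       linking_info={"linked_view": "left", "channel": 6})
mp_ts = build_mp_ts([left_sp, right_sp])
print(mp_ts["pat"], [sp["pmt"]["linking_descriptor"] for sp in mp_ts["sp_ts"]])
```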
  • First through fifth exemplary embodiments in which the TS generation unit 120 and the transmission unit 130 transmit a plurality of pieces of video information via different ESs, different TSs, or different channels will now be described.
  • According to the first exemplary embodiment, the TS generation unit 120 may generate at least one SP TS by multiplexing at least one ES individually, and generate at least one MP TS by multiplexing the at least one SP TS individually. According to the first exemplary embodiment, the transmission unit 130 may transmit the at least one MP TS generated by the TS generation unit 120 according to the first exemplary embodiment via channels based on the same transmission network system, respectively.
  • According to the second exemplary embodiment, the TS generation unit 120 may generate at least one SP TS by multiplexing at least one ES individually, and generate a single MP TS by multiplexing the at least one SP TS together. According to the second exemplary embodiment, the transmission unit 130 may transmit the single MP TS generated by the TS generation unit 120 according to the second exemplary embodiment via a single channel.
  • According to the third exemplary embodiment, the TS generation unit 120 may generate a single SP TS by multiplexing at least one ES together, and generate a single MP TS by multiplexing the single SP TS. According to the third embodiment, the transmission unit 130 may transmit the single MP TS generated by the TS generation unit 120 according to the third exemplary embodiment via a single channel.
  • According to the fourth exemplary embodiment, the TS generation unit 120 may generate at least one SP TS and at least one MP TS from at least one ES as in the first exemplary embodiment. However, in contrast with the first exemplary embodiment, the transmission unit 130 according to the fourth exemplary embodiment may transmit the at least one MP TS via channels based on individual kinds of transmission network systems, respectively.
  • According to the fifth exemplary embodiment, the TS generation unit 120 may individually multiplex an ES for at least one base-view video from among at least one ES to generate at least one SP TS for the at least one base-view video. According to the fifth exemplary embodiment, the TS generation unit 120 may individually multiplex the at least one SP TS for the at least one base-view video to generate at least one base-view MP TS.
  • In addition, according to the fifth exemplary embodiment, the TS generation unit 120 may individually multiplex an ES for at least one additional-view video corresponding to the at least one base-view video to generate at least one SP TS for the at least one additional-view video. According to the fifth exemplary embodiment, the TS generation unit 120 may multiplex the at least one SP TS for the at least one additional-view video together to generate a single additional-view MP TS.
  • According to the fifth exemplary embodiment, the transmission unit 130 may transmit the at least one base-view MP TS and the single additional-view MP TS generated by the TS generation unit 120 according to the fifth exemplary embodiment via different channels, respectively.
  • Referring back to FIG. 1, the TS generation unit 120 may include linking information between a plurality of pieces of video information in the at least one TS by inserting the linking information into program specification information (PSI).
  • The linking information may be classified into a link identifier and a link descriptor.
  • The link identifier indicates whether associated pieces of video information exist in the plurality of pieces of video information included in the at least one TS. The TS generation unit 120 may include the link identifier in the PAT information about the at least one TS. In this case, the link identifier may indicate whether pieces of PMT information identified by the PAT information are linked to each other.
  • The link descriptor may include information about a link between the associated pieces of video information existing in the plurality of pieces of video information included in the at least one TS. The TS generation unit 120 may insert the link descriptor into a descriptor region of the PMT information.
  • The TS generation unit 120 may insert not only the linking information but also a 3D video authentication descriptor for representing whether a current ES is for a 3D video, into the PMT information. For example, at least one of a 3D video start descriptor including 3D video information start information representing a location where additional information about the 3D video starts to be inserted, and a 3D video registration descriptor including format identifier information of the 3D video may be inserted into PMT information about the current ES.
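  • To make the placement concrete, the following sketch appends a link descriptor and a 3D video authentication descriptor to a PMT's descriptor region; the descriptor tag values and field names are illustrative assumptions, not values defined by this disclosure or by the MPEG-2 systems specification.

```python
HYPOTHETICAL_LINK_TAG = 0xB0       # illustrative private descriptor tag
HYPOTHETICAL_3D_AUTH_TAG = 0xB1    # illustrative private descriptor tag

def add_descriptors(pmt: dict, linked_channel: int, linked_program: int) -> dict:
    """Append a link descriptor and a 3D video authentication descriptor to the
    PMT's descriptor region; tag values and field names are assumptions."""
    pmt.setdefault("descriptors", []).extend([
        {"tag": HYPOTHETICAL_LINK_TAG,
         "linked_channel": linked_channel,
         "linked_program": linked_program},
        {"tag": HYPOTHETICAL_3D_AUTH_TAG, "is_3d_video": True},
    ])
    return pmt

pmt_left = {"program_number": 1, "descriptors": []}
print(add_descriptors(pmt_left, linked_channel=8, linked_program=1))
```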
  • The link identifier will be described in greater detail below with reference to Table 1, the link descriptor will be described in greater detail below with reference to Tables 4, 5, and 6, and the 3D video authentication descriptor will be described in greater detail below with reference to Tables 2 and 3.
  • The TS generation unit 120 may insert not only the linking information but also a 3D video stream descriptor including additional information used for reproducing current video information of a current TS, into PMT information for the current video information.
  • The 3D video stream descriptor may include information about conversion of a 2D/3D reproduction mode that occurs on a current video stream.
  • The 3D video stream descriptor may include information about the views of a 3D video used to set view information individually, for example, for children and adults.
  • The 3D video stream descriptor will be described in greater detail below with reference to Tables 7 through 16.
  • Accordingly, the digital broadcasting stream transmitting apparatus 100 of FIG. 1 may insert 3D video information into different channels, different TSs, or different ESs to transmit the 3D video information. As for associated pieces of video information, linking information for identifying locations into which their opponent pieces of video information have been inserted may be set. The linking information may be inserted into PMT information of TSs for the associated video information pieces and transmitted.
  • FIG. 2 is a block diagram of a digital broadcasting stream receiving apparatus 200 according to an exemplary embodiment.
  • Referring to FIG. 2, the digital broadcasting stream receiving apparatus 200 includes a TS receiving unit 210, an ES extraction unit 220, and a reproduction unit 230.
  • The TS receiving unit 210 receives at least one TS including a plurality of pieces of video information via at least one channel. The received at least one TS may include at least one of information about a base-view video of a 3D video, information about an additional-view video corresponding to the base-view video, and a 2D video. The 2D video may be video information of a view different from the views of the 3D video. The information about the additional-view video corresponding to the base-view video may be additional-view video data itself or may be at least one of disparity information and depth information that allows the additional-view video data to be restored based on the base-view video data.
  • The ES extraction unit 220 receives the at least one TS from the TS receiving unit 210 and demultiplexes the TS to extract at least one ES for the plurality of video information pieces from the TS. The ES extraction unit 220 also extracts linking information between video information pieces from the demultiplexed TS.
  • The TS receiving unit 210 may receive an MP TS via at least one channel. A single MP TS may be received via each channel.
  • The TS receiving unit 210 may receive a plurality of TSs by individually decoding a plurality of channels and receiving a single TS from each of the channels. The ES extraction unit 220 may extract a plurality of ESs by demultiplexing a plurality of MP TSs individually.
  • The TS receiving unit 210 may receive a plurality of TSs by decoding a single channel, and the ES extraction unit 220 may extract a plurality of ESs by demultiplexing the plurality of TSs.
  • The TS receiving unit 210 may receive a single TS by decoding a single channel, and the ES extraction unit 220 may extract a plurality of ESs by demultiplexing the single TS.
  • An operation of the ES extraction unit 220 will now be described in greater detail.
  • The ES extraction unit 220 may demultiplex an MP TS to extract at least one SP TS together with PAT information from the demultiplexed MP TS. The ES extraction unit 220 may demultiplex each SP TS to extract PES packets and PMT information.
  • The PMT information may include linking information about a link between a plurality of ESs included in the at least one SP TS. The PES packets may be depacketized into the ESs.
  • Each ES may include base-view video information for a 3D video, additional-view video information for the 3D video, a 3D composite format of the base-view video information and the additional-view video information, or 2D video information. The ES extraction unit 220 may also extract an audio ES from the demultiplexed SP TS.
  • Since a single MP TS may be demultiplexed into at least one SP TS, at least as many SP TSs as MP TSs may be extracted.
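  • The extraction order described above (MP TS to SP TSs, SP TSs to PES packets, PES packets to ESs) can be sketched as follows, reusing the toy dictionary layout from the transmitting-side sketch; it models only the nesting, not actual TS packet parsing.

```python
def extract_elementary_streams(mp_ts: dict):
    """Demultiplex an MP TS into SP TSs, then each SP TS into its PES packets,
    and finally depacketize the PES packets into elementary streams."""
    elementary_streams = []
    for sp_ts in mp_ts["sp_ts"]:                  # PAT identifies each SP TS
        linking_info = sp_ts["pmt"].get("linking_descriptor")
        for pes in sp_ts["packets"]:              # PMT identifies each PES PID
            es = pes["data"]                      # depacketize (payload only here)
            elementary_streams.append((pes["pid"], es, linking_info))
    return elementary_streams

mp_ts = {"sp_ts": [{"pmt": {"linking_descriptor": {"linked_view": "right"}},
                    "packets": [{"pid": 0x101, "data": b"left video"}]}]}
print(extract_elementary_streams(mp_ts))
```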
  • The TS receiving unit 210 and the ES extraction unit 220 respectively receive and demultiplex a TS generated and transmitted respectively by the TS generation unit 120 and the transmission unit 130. Accordingly, operations of the TS receiving unit 210 and the ES extraction unit 220 to receive the at least one TS and extract an ES from the TS according to the aforementioned first through fifth exemplary embodiments are as follows.
  • According to the first exemplary embodiment, the TS receiving unit 210 may receive at least one MP TS by decoding at least one channel individually and receiving a single MP TS from each of the channels. According to the first exemplary embodiment, the ES extraction unit 220 may demultiplex the at least one MP TS individually to extract a single SP TS from each of the MP TSs, and demultiplex each of the SP TSs to extract a single ES from each of the SP TSs. Thus, the TS receiving unit 210 and the ES extraction unit 220 according to the first exemplary embodiment may finally extract ESs for 3D video information or 2D video information from at least one TS received for at least one channel.
  • According to the second exemplary embodiment, the TS receiving unit 210 may decode a single channel to extract a single MP TS. According to the second exemplary embodiment, the ES extraction unit 220 may demultiplex the single MP TS to extract at least one SP TS, and demultiplex the at least one SP TS individually to extract at least one ES. Thus, the TS receiving unit 210 and the ES extraction unit 220 according to the second exemplary embodiment may extract at least one SP TS via a single channel and thus extract ESs for 3D video information or 2D video information.
  • According to the third exemplary embodiment, the TS receiving unit 210 may decode a single channel to extract a single MP TS. According to the third exemplary embodiment, the ES extraction unit 220 may demultiplex the single MP TS to extract a single SP TS, and demultiplex the single SP TS to extract at least one ES. Accordingly, the TS receiving unit 210 and the ES extraction unit 220 according to the third exemplary embodiment may finally extract at least one ES including 3D video information or 2D video information via a single channel.
  • According to the fourth exemplary embodiment, in contrast with the first exemplary embodiment, the TS receiving unit 210 may individually decode channels based on individual kinds of transmission network systems and receive a single MP TS for each channel to thereby receive at least one MP TS. According to the fourth exemplary embodiment, the ES extraction unit 220 may demultiplex the at least one MP TS individually to extract a single SP TS from each of the at least one MP TS, and demultiplex each SP TS to extract a single ES from each of the SP TSs. Thus, the TS receiving unit 210 and the ES extraction unit 220 according to the fourth exemplary embodiment may finally extract ESs for 3D video information or 2D video information from TSs respectively received via a plurality of channels based on individual kinds of transmission network systems.
  • According to the fifth exemplary embodiment, the TS receiving unit 210 may decode a plurality of channels to receive at least one base-view MP TS for a 3D video from at least one of the plurality of channels and receive a single additional-view MP TS for the 3D video from one of the plurality of channels.
  • According to the fifth exemplary embodiment, the ES extraction unit 220 may demultiplex the at least one base-view MP TS individually to extract at least one base-view SP TS, and demultiplex the at least one base-view SP TS individually to extract at least one base-view ES. According to the fifth exemplary embodiment, the ES extraction unit 220 may also demultiplex the single additional-view MP TS to extract at least one additional-view SP TS, and demultiplex the at least one additional-view SP TS individually to extract at least one additional-view ES.
  • Accordingly, the TS receiving unit 210 and the ES extraction unit 220 according to the fifth exemplary embodiment may finally extract a plurality of ESs for 3D video information or 2D video information via a plurality of channels.
  • Referring back to FIG. 2, the linking information extracted by the ES extraction unit 220 includes an identifier for indicating at least one piece of video information associated with each of a plurality of pieces of video information. The linking information may be used by the reproduction unit 230 to search for other video data corresponding to predetermined video data and reproduce the video data while maintaining a link therebetween.
  • The ES extraction unit 220 may extract linking information between a plurality of pieces of video information from PSI of a TS. The linking information may include a link identifier and a link descriptor. The ES extraction unit 220 may extract the link identifier of the linking information from PAT information of the TS. In this case, the link identifier may represent whether pieces of PMT information identified by the PAT information are linked to each other. The ES extraction unit 220 may extract the link descriptor of the linking information from a descriptor region of the PMT information.
  • The ES extraction unit 220 may extract not only the linking information but also a 3D video authentication descriptor from the PMT information. Whether 3D video information is included in a current ES may be checked from at least one of a 3D video start descriptor and a 3D video registration descriptor extracted as the 3D authentication descriptor.
  • The link identifier will be described in greater detail below with reference to Table 1, the link descriptor will be described in greater detail below with reference to Tables 4, 5, and 6, and the 3D video authentication descriptor will be described in greater detail below with reference to Tables 2 and 3.
  • The ES extraction unit 220 may extract not only the linking information but also a 3D video stream descriptor from PMT information for current video information. The ES extraction unit 220 may extract, from the 3D video stream descriptor, information related to conversion of a 2D/3D reproduction mode that occurs on a current video stream, information related to the views of a 3D video used to set view information individually, for example, for children and adults, and other information. The 3D video stream descriptor will be described in greater detail below with reference to Tables 7 through 16.
  • The reproduction unit 230 decodes the at least one ES extracted by the ES extraction unit 220 to restore 3D video data or 2D video data. The reproduction unit 230 may reproduce at least one of the restored 3D video data and the restored 2D video data in consideration of a link represented by the linking information.
  • For example, if the reproduction unit 230 receives an ES for base-view video information of a 3D video and an ES for additional-view video information of the 3D video from the ES extraction unit 220, the reproduction unit 230 may restore base-view video data and additional-view video data of the 3D video from the ESs. In particular, the reproduction unit 230 may search for and restore the base-view video data and the additional-view video data based on the linking information, and may convert and reproduce the base-view video data and the additional-view video data in a 3D reproduction format that can be reproduced by a 3D display device.
  • If the reproduction unit 230 receives an ES for 2D video information and an ES for disparity information or depth information for additional-view video from the ES extraction unit 220, the reproduction unit 230 may restore the base-view video data and additional-view video data of the 3D video from the ESs. In particular, the reproduction unit 230 may search for disparity information or depth information corresponding to 2D video data based on the linking information to restore the additional-view video data, and may convert and reproduce the additional-view video data into a 3D reproduction format that can be reproduced by a 3D display device.
  • In addition, an ES for a 3D video of a 3D composite format and an ES for a 2D video may be input to the reproduction unit 230. The reproduction unit 230 may restore video information of a 3D composite format into 3D video data based on the linking information and may convert the 3D video data into base-view video data and additional-view video data that may both be reproduced by a 3D display device. Since the reproduction unit 230 may also restore 3D video data, the reproduction unit 230 may selectively reproduce the base-view video data, the additional-view video data, and the 2D video data.
  • If associated ESs for a 2D video are input to the reproduction unit 230, the reproduction unit 230 may restore corresponding items of 2D video data based on the linking information and selectively and independently reproduce the 2D video data items. The reproduction unit 230 may selectively reproduce the corresponding 2D video data items according to the linking information in a Picture-in-Picture (PIP) mode.
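  • The reproduction choices described in the preceding paragraphs can be summarized as a dispatch on the kinds of the two linked streams; the kind labels used below are hypothetical shorthand, not terminology defined in this disclosure.

```python
def choose_reproduction_path(primary_kind: str, linked_kind: str) -> str:
    """Pick a reproduction path from the kinds of the two linked streams.

    Hypothetical kind labels: 'base_view', 'additional_view', 'depth',
    'composite_3d', '2d'.
    """
    pair = {primary_kind, linked_kind}
    if pair == {"base_view", "additional_view"}:
        return "render as 3D from two views"
    if pair == {"2d", "depth"}:
        return "synthesize additional view from depth, then render as 3D"
    if "composite_3d" in pair:
        return "unpack composite format, then render as 3D (2D also available)"
    if pair == {"2d"}:
        return "reproduce selectively or in PIP mode"
    return "reproduce primary stream in 2D"

print(choose_reproduction_path("2d", "depth"))
```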
  • The ES extraction unit 220 of the digital broadcasting stream receiving apparatus 200 may detect associated pieces of video information transmitted via different channels, different SP TSs, or different ESs by using the linking information to extract only associated ESs. The reproduction unit 230 of the digital broadcasting stream receiving apparatus 200 may recognize the existence of different video streams associated with an SP video stream selected by a user according to the linking information and may reproduce the associated video streams.
  • The digital broadcasting stream transmitting apparatus 100 and the digital broadcasting stream receiving apparatus 200 may provide digital contents services of a 3D video and digital contents services of a 2D video while maintaining compatibility with digital broadcasting systems that use Moving Picture Experts Group (MPEG) based TSs.
  • Since PMT information includes information about a single video ES, a transmission terminal inserts, into each piece of PMT information, association information for identifying an ES, a TS, or a channel in which opponent video information is located, and thus a reception terminal may reproduce associated items of video data according to the association information between ESs, TSs, or channels in a 3D reproduction mode or a 2D reproduction mode.
  • In addition, since not only linking information but also information about a conversion between a 2D video reproduction method and a 3D video reproduction method and additional information about 3D video characteristics are transmitted and received via PMT information, the digital broadcasting stream transmitting apparatus 100 and the digital broadcasting stream receiving apparatus 200 may smoothly provide digital contents services of 3D video while maintaining compatibility with digital broadcasting systems for 2D contents services.
  • Without needing to newly generate PSI or a TS for the PSI, linking information between a plurality of pieces of video information and additional information such as a 3D video stream descriptor may be included in PMT information or PAT information. In addition, additional information may be included in a reserved field without needing to add a packet identifier (PID) to the PMT information.
  • FIG. 3 is a block diagram of a digital TV transmitting system 300 for 2D contents services according to an exemplary embodiment.
  • Referring to FIG. 3, the digital TV transmitting system 300 generates an SP TS including a single video ES and a single audio ES by using a single-program encoder 310 and multiplexes at least one SP TS generated by a plurality of single-program encoders by using a multiplexer (MUX) 380 to generate and transmit an MP TS.
  • The single-program encoder 310 includes a video encoder 320, an audio encoder 330, packetizers 340 and 350, and a multiplexer (MUX) 360.
  • The video encoder 320 and the audio encoder 330 encode uncompressed video data and uncompressed audio data, respectively, to generate and output a video ES and an audio ES, respectively. The packetizers 340 and 350 of the single-program encoder 310 packetize the video ES and the audio ES, respectively, and insert PES headers into the packetized video ES and the packetized audio ES, respectively, to generate a video PES packet and an audio PES packet, respectively.
  • The MUX 360 multiplexes the video PES packet, the audio PES packet, and a variety of additional data to generate a first single-program transport stream SP TS1. PMT information may be multiplexed with the video PES packet and the audio PES packet and included in the first single-program transport stream SP TS1. PMT information is inserted into each single-program transport stream and describes a PID of each ES or additional data.
  • The MUX 380 multiplexes a plurality of single-program transport streams SP TS1, SP TS2, . . . with PAT information to generate a multi-program transport stream MP TS.
  • The PMT information and the PAT information are generated by a PSI and Program and System Information Protocol (PSIP) generator 370.
  • The multi-program transport stream (MP TS) may include the PAT information and a PSIP. The PAT information describes PIDs of PMT information about single-program transport streams included in a multi-program transport stream.
  • FIG. 4 is a block diagram of a digital TV receiving system 400 for 2D contents services according to an exemplary embodiment.
  • The digital TV receiving system 400 receives a digital broadcasting stream and extracts video data, audio data, and additional data from the digital broadcasting stream.
  • A digital TV (DTV) tuner 410 is tuned to a wave frequency of a channel selected according to a physical channel select signal input by a viewer to selectively extract a signal received via a wave corresponding to the wave frequency. A channel decoder and demodulator 420 extracts a multi-program transport stream MP TS from a channel signal. The multi-program transport stream MP TS is demultiplexed into a plurality of single-program transport streams SP TS1, SP TS2, . . . and a PSIP by a demultiplexer (DEMUX) 430.
  • A first single-program transport stream SP TS1 selected by a program select signal input by a viewer is decoded by a single-program decoder 440. The single-program decoder 440 operates reversely to the single-program encoder 310. A video PES packet, an audio PES packet, and additional data are restored from the first single-program transport stream SP TS1. The video PES packet and the audio PES packet are restored to ESs by depacketizers 460 and 465, respectively, and then the ESs are restored to video data and audio data by a video decoder 470 and an audio decoder 475, respectively. The video data may be converted into a displayable format by a display processor 480.
  • A clock recovery and audio/video (AV) synchronization device 490 may synchronize a video data reproduction time with an audio data reproduction time by using program clock reference (PCR) information and time stamp information extracted from the first single-program transport stream SP TS1.
  • A program guide database 445 may receive the program select signal input by a viewer, search for a channel and a program corresponding to the program select signal input by the viewer by comparing the program select signal with the PSIP extracted from the multi-program transport stream MP TS, and then transmit a channel selection input signal to the digital TV tuner 410 and a program selection input signal to the DEMUX 430. The program guide database 445 may also transmit on-screen display information to the display processor 480 and support an on-screen display operation.
  • Various exemplary embodiments in which the digital broadcasting stream transmitting apparatus 100 and the digital broadcasting stream receiving apparatus 200 transmit and receive 3D video information, linking information, and 3D additional information through a plurality of channels or a plurality of ESs in order to provide a 3D digital broadcasting service while maintaining compatibility with MPEG TS-based digital broadcasting systems for providing 2D contents services will now be described with reference to FIGS. 5 through 11.
  • Hereinafter, a video stream may generally refer to video information such as video data, disparity information, depth information, or the like, as well as to an ES, a PES packet, a single-program transport stream, or a multi-program transport stream obtained by converting such information at different stages.
  • In the exemplary embodiments described below with reference to FIGS. 5 through 11, a stereo video stream including a left view video and a right view video is transmitted and received for convenience of explanation. However, the digital broadcasting stream transmitting apparatus 100 and the digital broadcasting stream receiving apparatus 200 may also be applied to multi-view video streams each including a reference view and at least one additional view, as in an exemplary embodiment of FIG. 12.
  • FIG. 5 illustrates an example of a distribution of a channel frequency band 500 in which a plurality of video streams can be transmitted and received via a plurality of channels, according to the above-described first exemplary embodiment.
  • In the channel frequency band 500, a frequency band is allocated to each of the plurality of channels. For example, the channel frequency band 500 includes a frequency band 510 for channel 6, a frequency band 520 for channel 7, a frequency band 530 for channel 8, a frequency band 540 for channel 9, and a frequency band 550 for channel 10.
  • A TS for left-view video information and a TS for right-view video information may be transmitted and received as a first stereo video through the frequency band 510 for channel 6 and the frequency band 530 for channel 8, respectively. In this case, linking information indicating existence of an association between videos of a stereo video, namely, indicating that the videos are stereo linked, may be set for channel 6 and channel 8. The linking information may be set between channels, TSs, or ESs.
  • A TS for left-view video information and a TS for right-view video information may be transmitted and received as a second stereo video through the frequency band 520 for channel 7 and the frequency band 550 for channel 10, respectively. In this case, linking information indicating existence of an association between videos of a stereo video, namely, indicating that the videos are stereo linked, may be set for channel 7 and channel 10.
  • Accordingly, in a method of allocating a stereo video stream to channels according to the first exemplary embodiment, a TS for a left-view video and a TS for a right-view video are transmitted via different channels, respectively.
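  • A small sketch of the channel allocation of FIG. 5 is shown below, assuming a simple table that records which view each channel carries and which channel carries its stereo-linked partner; the structure is illustrative only.

```python
# Each entry: channel -> (content carried, channel of the stereo-linked partner).
CHANNEL_MAP = {
    6: ("first stereo video, left view", 8),
    8: ("first stereo video, right view", 6),
    7: ("second stereo video, left view", 10),
    10: ("second stereo video, right view", 7),
    9: ("2D video", None),
}

def stereo_partner(channel: int):
    """Return the channel carrying the other view of the same stereo video."""
    return CHANNEL_MAP[channel][1]

print(stereo_partner(6))   # 8
print(stereo_partner(9))   # None (no linked view)
```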
  • FIG. 6 is a block diagram of a digital broadcasting stream transmitting apparatus 600 that transmits a 3D video stream having linking information via a plurality of channels according to the first exemplary embodiment described above with reference to FIG. 5.
  • The digital broadcasting stream transmitting apparatus 600 corresponds to a block diagram of the digital broadcasting stream transmitting apparatus 100 that is constructed according to the first exemplary embodiment of FIG. 5. Operations of single-program encoders 610, 630, and 650 and MUXes 620, 640, and 660 of the digital broadcasting stream transmitting apparatus 600 correspond to operations of the ES generation unit 110 and the TS generation unit 120 of the digital broadcasting stream transmitting apparatus 100 that are performed according to the first exemplary embodiment. Operations of channel encoder and modulators 625, 645, and 665 and a DTV transmitter 670 of the digital broadcasting stream transmitting apparatus 600 correspond to an operation of the transmission unit 130 of the digital broadcasting stream transmitting apparatus 100 that is performed according to the first exemplary embodiment.
  • The single-program encoders 610, 630, and 650 may receive a left-view video sequence Left Seq., a 2D video sequence 2D Seq., and a right-view video sequence Right Seq. to generate and output a first single-program transport stream SP TS1, a second single-program transport stream SP TS2, and a third single-program transport stream SP TS3, respectively.
  • The first single-program transport stream SP TS1 for the left-view video sequence may be multiplexed by the MUX 620 to generate a first multi-program transport stream MP TS1. Similarly, the second single-program transport stream SP TS2 for the 2D video sequence and the third single-program transport stream SP TS3 for the right-view video sequence may be respectively multiplexed by the MUXes 640 and 660 to respectively generate a second multi-program transport stream MP TS2 and a third multi-program transport stream MP TS3.
  • The first multi-program transport stream MP TS1 may be encoded and modulated according to channel 6 (or channel 7) by the channel encoder and modulator 625. The third multi-program transport stream MP TS3 may be encoded and modulated according to channel 8 (or channel 10) by the channel encoder and modulator 665. The second multi-program transport stream MP TS2 may be encoded and modulated according to channel 9 by the channel encoder and modulator 645.
  • The DTV transmitter 670 may transmit a broadcasting video stream allocated to the plurality of channels. Thus, the digital broadcasting stream transmitting apparatus 600 and the digital broadcasting stream transmitting apparatus 100 according to the first exemplary embodiment may generate a single-program transport stream and a multi-program transport stream from each of a left-view video and a right-view video of a stereo video and a 2D video and transmit the single-program transport streams and the multi-program transport streams via a plurality of channels, respectively.
  • The single program encoder 610 for the left-view video may set linking information 615 for identifying the right-view video as corresponding to the left-view video. The single program encoder 650 for the right-view video may set linking information 655 for identifying the left-view video as corresponding to the right-view video. If a video associated with the 2D video exists, the single program encoder 630 for the 2D video may set linking information 635 for identifying the associated video. Each piece of linking information may be inserted into PMT information of the corresponding single-program transport stream.
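  • As an illustrative, non-normative sketch (the class names, field names, and helper function below are assumptions chosen for clarity, not structures defined by the encoders 610, 630, and 650 or by any broadcasting standard), such reciprocal linking information could be modeled and attached to the PMT information of each single-program transport stream as follows.
    # Illustrative sketch only: a simplified model of reciprocal linking information
    # for the first exemplary embodiment (left view and right view on different channels).
    from dataclasses import dataclass, field
    from typing import Optional

    @dataclass
    class LinkingInfo:
        linked_channel: Optional[int]     # channel carrying the opponent-view video
        linked_ts_id: Optional[int]       # identifier of the multi-program TS of the opponent view
        linked_stream_pid: Optional[int]  # PID of the opponent-view video stream

    @dataclass
    class PmtInfo:
        program_number: int
        descriptors: list = field(default_factory=list)

    def set_stereo_links(left_pmt: PmtInfo, right_pmt: PmtInfo,
                         left_loc: LinkingInfo, right_loc: LinkingInfo) -> None:
        """Insert reciprocal linking descriptors: the left-view PMT points to the
        right-view stream and vice versa, so a receiver tuned to either channel
        can locate the other view of the stereo pair."""
        left_pmt.descriptors.append(("linking_descriptor", right_loc))
        right_pmt.descriptors.append(("linking_descriptor", left_loc))

    if __name__ == "__main__":
        left_pmt = PmtInfo(program_number=1)
        right_pmt = PmtInfo(program_number=1)
        # Hypothetical locations: left view on channel 6, right view on channel 8.
        left_loc = LinkingInfo(linked_channel=6, linked_ts_id=0x10, linked_stream_pid=0x101)
        right_loc = LinkingInfo(linked_channel=8, linked_ts_id=0x20, linked_stream_pid=0x201)
        set_stereo_links(left_pmt, right_pmt, left_loc, right_loc)
        print(left_pmt.descriptors)   # left-view PMT now references the right-view stream location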
  • According to the first exemplary embodiment, the digital broadcasting stream receiving apparatus 200 may selectively receive a TS for a left-view video of a stereo video, a TS for a right-view video of the stereo video, and a TS for a 2D video transmitted via different channels, and restore video data of a desired TS.
  • According to the first exemplary embodiment, the ES extraction unit 220 of the digital broadcasting stream receiving apparatus 200 may extract linking information. The reproduction unit 230 according to the first exemplary embodiment may identify a right-view video sequence corresponding to a left-view video sequence by using the linking information and thus perform 3D reproduction. The linking information may also be used by the ES extraction unit 220 when searching for a third single-program transport stream corresponding to a first single-program transport stream to extract ESs from the first and third single-program transport streams, respectively.
  • FIG. 7 illustrates an example of a distribution of a channel frequency band 700 in which a plurality of TSs for a 3D video stream and a 2D video stream can be transmitted and received via a single channel, according to the above-described second exemplary embodiment.
  • The channel frequency band 700 includes a frequency band 710 for channel 8, a frequency band 720 for channel 9, and a frequency band 730 for channel 10.
  • A TS 740 for left-view video information and a TS 760 for right-view video information may be transmitted and received as a stereo video through the frequency band 710 for channel 8. In this case, linking information indicating existence of an association between videos of a stereo video, namely, indicating that the videos are stereo linked, may be set between an ES for the left-view video information and an ES for the right-view video information or between an SP TS for the left-view video information and an SP TS for the right-view video information.
  • A TS 750 for a normal 2D video and another TS 770 may be transmitted and received through the frequency band 710 for channel 8.
  • Accordingly, in a method of allocating a stereo video stream to channels according to the second exemplary embodiment, a TS for a left-view video and a TS for a right-view video are transmitted via a single channel.
  • FIG. 8 is a block diagram of a digital broadcasting stream transmitting apparatus 800 that transmits a plurality of TSs having linking information via a single channel according to the second exemplary embodiment described above with reference to FIG. 7.
  • The digital broadcasting stream transmitting apparatus 800 corresponds to a block diagram of the digital broadcasting stream transmitting apparatus 100 that is constructed according to the second exemplary embodiment. In other words, operations of single-program encoders 810, 820, and 830 and a MUX 840 of the digital broadcasting stream transmitting apparatus 800 correspond to operations of the ES generation unit 110 and the TS generation unit 120 of the digital broadcasting stream transmitting apparatus 100 that are performed according to the second exemplary embodiment. Operations of a channel encoder and modulator 850 and a DTV transmitter 860 of the digital broadcasting stream transmitting apparatus 800 correspond to an operation of the transmission unit 130 of the digital broadcasting stream transmitting apparatus 100 that is performed according to the second exemplary embodiment.
  • The single-program encoders 810, 820, and 830 may receive a left-view video sequence Left Seq., a 2D video sequence 2D Seq., and a right-view video sequence Right Seq. to generate and output a first single-program transport stream SP TS1, a second single-program transport stream SP TS2, and a third single-program transport stream SP TS3, respectively.
  • The first single-program transport stream SP TS1, the second single-program transport stream SP TS2, and the third single-program transport stream SP TS3 may be multiplexed by the MUX 840 to generate a multi-program transport stream MP TS. In other words, the first, second, and third single-program transport streams SP TS1, SP TS2, and SP TS3 for the left-view video sequence, the 2D video sequence, and the right-view video sequence may be multiplexed together to generate a single multi-program transport stream MP TS.
  • The multi-program transport stream MP TS may be encoded and modulated according to channel 8 by the channel encoder and modulator 850. The DTV transmitter 860 may transmit a broadcasting video stream allocated to channel 8. Thus, the digital broadcasting stream transmitting apparatus 800 and the digital broadcasting stream transmitting apparatus 100 according to the second exemplary embodiment may multiplex a single-program transport stream for each of a left-view video of a stereo video, a right-view video of the stereo video, and a 2D video into a single multi-program transport stream and transmit the single multi-program transport stream via a single channel.
  • The single program encoder 810 for the left-view video may set linking information 815 for identifying the right-view video as corresponding to the left-view video and may insert the linking information 815 into PMT information of the first single-program transport stream SP TS1. The single program encoder 830 for the right-view video may set linking information 835 for identifying the left-view video as corresponding to the right-view video and may insert the linking information 835 into PMT information of the third single-program transport stream SP TS3. If a video associated with the 2D video exists, the single program encoder 820 for the 2D video may set linking information 825 for identifying the associated video and may insert the linking information 825 into PMT information of the second single-program transport stream SP TS2.
  • The single program encoders 810, 820, and 830 according to the second exemplary embodiment may follow individual digital data communication systems. For example, an Advanced Television Systems Committee (ATSC) terrestrial broadcasting communication method supports Enhanced Vestigial Sideband (E-VSB) technology. In the E-VSB technology, a TS may be constructed in a different way from that in the MPEG technology. For example, since linking information according to an exemplary embodiment is inserted into a TS to be transmitted, a base-view video stream may be transmitted in the form of an MPEG transport stream, and an additional-view video stream may be transmitted in the form of an E-VSB transport stream.
  • The single program encoders 810, 820, and 830 according to the second exemplary embodiment may follow individual video encoding systems. Since the linking information according to an exemplary embodiment is inserted into PMT information of a TS, a base-view video may be encoded according to an MPEG-2 video encoding method, and an additional-view video may be encoded according to an MPEG Advanced Video Coding (AVC)/H.264 video encoding method.
  • According to the second exemplary embodiment, the digital broadcasting stream receiving apparatus 200 may receive a single multi-program transport stream for a left-view video of a stereo video, a right-view video of the stereo video, and a 2D video transmitted via a single channel, demultiplex the single multi-program transport stream into a single-program transport stream for the left-view video, a single-program transport stream for the right-view video, and a single-program transport stream for the 2D video, and extract at least one from the single-program transport streams, thereby restoring desired video sequence data.
  • According to the second exemplary embodiment, the ES extraction unit 220 of the digital broadcasting stream receiving apparatus 200 may extract linking information. The reproduction unit 230 according to the second exemplary embodiment may identify a right-view video sequence corresponding to a left-view video sequence by using the linking information and thus perform 3D reproduction. The linking information may also be used by the ES extraction unit 220 when searching for a third single-program transport stream corresponding to a first single-program transport stream and extracting ESs from the first and third single-program transport streams, respectively.
  • FIG. 9 is a block diagram of a digital broadcasting stream transmitting apparatus 900 that transmits a single TS including a 3D video stream and a 2D video stream via a single channel according to the above-described third exemplary embodiment.
  • The digital broadcasting stream transmitting apparatus 900 corresponds to a block diagram of the digital broadcasting stream transmitting apparatus 100 that is constructed according to the third exemplary embodiment. In other words, operations of a single-program encoder 910 and a MUX 980 of the digital broadcasting stream transmitting apparatus 900 correspond to operations of the ES generation unit 110 and the TS generation unit 120 of the digital broadcasting stream transmitting apparatus 100 that are performed according to the third exemplary embodiment. Operations of a channel encoder and modulator 990 and a DTV transmitter 995 of the digital broadcasting stream transmitting apparatus 900 correspond to an operation of the transmission unit 130 of the digital broadcasting stream transmitting apparatus 100 that is performed according to the third exemplary embodiment.
  • The single-program encoder 910 may receive a left-view video, a right-view video, and a 2D video and generate a first video elementary stream Video ES1, a second video elementary stream Video ES2, and a third video elementary stream Video ES3 by using video encoders 920, 930, and 940, respectively. The first, second, and third video elementary streams Video ES1, Video ES2, and Video ES3 are packetized into a first video PES packet Video PES1, a second video PES packet Video PES2, and a third video PES packet Video PES3 by packetizers 925, 935, and 945, respectively.
  • The single program encoder 910 may receive audio, convert the audio into an audio elementary stream Audio ES by using an audio encoder 950, and convert the audio elementary stream Audio ES into an audio PES packet Audio PES by using a packetizer 955.
  • A MUX 960 of the single program encoder 910 may multiplex the first, second, and third video PES packets and the audio PES packet into a first single-program transport stream SP TS1. The single program encoder 910 may also receive PMT information generated by a PSI and PSIP generator 970 and a variety of additional data DATA and multiplex the PMT information and the additional data DATA together with the first, second, and third video PES packets and the audio PES packet by using the MUX 960, so that the PMT information and the additional data DATA are inserted into the first single-program transport stream SP TS1. The first single-program transport stream SP TS1 may then be output.
  • At least one of the 3D video data and other 2D video data may be multiplexed with PMT information into a second single-program transport stream SP TS2. The PSI and PSIP generator 970 may generate PAT information including PIDs of the PMT information included in the first and second single-program transport streams SP TS1 and SP TS2, and a PSIP about various programs and system information. The MUX 980 may multiplex the first and second single-program transport streams SP TS1 and SP TS2, the PAT information, and the PSIP into a single multi-program transport stream MP TS.
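  • The nesting described above can be summarized with a simplified, hypothetical data model (the dictionary keys and function names below are assumptions used only for readability and do not reflect actual MPEG-2 systems data structures): PES packets and PMT information are grouped into a single-program transport stream, and the single-program transport streams are grouped, together with PAT information and a PSIP, into one multi-program transport stream.
    # Illustrative sketch of the multiplexing hierarchy of the third exemplary embodiment.
    def build_sp_ts1(video_pes1, video_pes2, video_pes3, audio_pes, pmt, extra_data):
        # One single-program TS carrying all three video views plus audio and its PMT.
        return {"pmt": pmt, "pes": [video_pes1, video_pes2, video_pes3, audio_pes],
                "data": extra_data}

    def build_mp_ts(sp_ts_list, pat, psip):
        # One multi-program TS carrying the single-program TSs and the PSI/PSIP tables.
        return {"pat": pat, "psip": psip, "programs": sp_ts_list}

    if __name__ == "__main__":
        sp_ts1 = build_sp_ts1("Video PES1 (left)", "Video PES2 (right)",
                              "Video PES3 (2D)", "Audio PES", "PMT1", "DATA")
        sp_ts2 = {"pmt": "PMT2", "pes": ["other 3D or 2D video PES"], "data": None}
        mp_ts = build_mp_ts([sp_ts1, sp_ts2],
                            pat={"programs": {1: "PID of PMT1", 2: "PID of PMT2"}},
                            psip="system information")
        print(len(mp_ts["programs"]))   # 2 single-program transport streams in one MP TS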
  • The multi-program transport stream MP TS may be encoded and modulated according to a channel by the channel encoder and modulator 990. The DTV transmitter 995 may transmit a broadcasting video stream allocated to the channel. Thus, the digital broadcasting stream transmitting apparatus 900 and the digital broadcasting stream transmitting apparatus 100 according to the third exemplary embodiment may multiplex ESs for all of a left-view video of a stereo video, a right-view video of the stereo video, and a 2D video into a single single-program transport stream, and transmit a single multi-program transport stream via a single channel.
  • The single program encoder 910 may set linking information indicating that the left-view video and the right-view video correspond to each other as videos of a stereo video, namely, that the videos are stereo linked, for the first and second video PES packets, respectively, and may insert the linking information into PMT information of the first single-program transport stream SP TS1.
  • The single program encoder 910 according to the third exemplary embodiment may generate a TS according to independent digital data communication methods. For example, the first single-program transport stream SP TS1 may be generated in the form of an MPEG transport stream, the second single-program transport stream SP TS2 may be transmitted in the form of an E-VSB transport stream, and linking information may be inserted into each of the PMT information of the first and second single-program transport streams SP TS1 and SP TS2. The video encoders 920 and 930 according to the third exemplary embodiment may follow independent video encoding methods. For example, a base-view video may be encoded according to an MPEG-2 video encoding method, an additional-view video may be encoded according to an MPEG AVC/H.264 video encoding method, and linking information may be inserted into each of the PMT information of the first and second single-program transport streams SP TS1 and SP TS2.
  • According to the third exemplary embodiment, the digital broadcasting stream receiving apparatus 200 may extract a single multi-program transport stream for a left-view video of a stereo video, a right-view video of the stereo video, a 2D video, and audio transmitted via a single channel, and demultiplex the single multi-program transport stream, thereby selecting and extracting a desired single-program transport stream. In addition, the digital broadcasting stream receiving apparatus 200 according to the third exemplary embodiment may select and extract a video ES for the left-view video of the stereo video, a video ES for the right-view video of the stereo video, and a video ES for the 2D video from the extracted single-program transport stream, thereby restoring desired video data.
  • According to the third exemplary embodiment, the ES extraction unit 220 of the digital broadcasting stream receiving apparatus 200 may extract linking information. The reproduction unit 230 according to the third exemplary embodiment may identify right-view video data corresponding to left-view video data by using the linking information and thus accurately reproduce a 3D video. The linking information may also be used by the ES extraction unit 220 when searching for and extracting a second video ES corresponding to a first video ES.
  • FIG. 10 is a block diagram of a digital broadcasting stream receiving apparatus 1000 that receives a plurality of TSs via a plurality of channels based on a plurality of transmission network systems according to the above-described fourth exemplary embodiment.
  • According to the fourth exemplary embodiment, the digital broadcasting stream transmitting apparatus 100 transmits a TS for a left-view video and a TS for a right-view video via different channels that are based on individual transmission network systems, for example, a terrestrial system 1010, a satellite TV system 1012, a cable TV system 1014, an Internet Protocol Television (IPTV) system 1016, and the like.
  • The digital broadcasting stream receiving apparatus 1000 corresponds to a block diagram of the digital broadcasting stream receiving apparatus 200 that is constructed according to the fourth exemplary embodiment. In other words, operations of either a terrestrial digital tuner 1020 or a satellite digital tuner 1060 and channel decoder and demodulators 1030 and 1070 of the digital broadcasting stream receiving apparatus 1000 may correspond to an operation of the TS receiving unit 210 of the digital broadcasting stream receiving apparatus 200 that is performed according to the fourth exemplary embodiment, and operations of TS DEMUXes 1040 and 1080 and single-program decoders 1050 and 1090 of the digital broadcasting stream receiving apparatus 1000 may correspond to an operation of the ES extraction unit 220 of the digital broadcasting stream receiving apparatus 200 that is performed according to the fourth exemplary embodiment.
  • The digital broadcasting stream receiving apparatus 1000 may be a digital TV receiving system. The digital broadcasting stream receiving apparatus 1000 may receive broadcasting streams via channels corresponding to the terrestrial system 1010, the satellite TV system 1012, the cable TV system 1014, and the IPTV system 1016.
  • The terrestrial digital tuner 1020 and the channel decoder and demodulator 1030 are tuned to a terrestrial channel to extract a multi-program transport stream received via terrestrial waves. In this case, a TS for a left-view video of a stereo video may be received via the terrestrial channel.
  • The satellite digital tuner 1060 and the channel decoder & demodulator 1070 are tuned to a satellite channel to extract a multi-program transport stream received via satellite waves. In this case, a TS for a right-view video of the stereo video may be received via the satellite channel.
  • The multi-program transport streams may be demultiplexed into single-program transport streams by the TS DEMUXes 1040 and 1080. The single-program transport streams may be restored to a left-view video and a right-view video by the single-program decoders 1050 and 1090, respectively.
  • In this case, the single-program transport stream for the left-view video received and extracted via the terrestrial channel may include linking information 1055 about the right-view video constituting a remaining view of a stereo image. In this case, linking information of the left-view video may include an identifier indicating a channel, a TS, or an ES of the right-view video received via the satellite channel.
  • The single-program transport stream for the right-view video received and extracted via the satellite channel may include linking information 1095 about the left-view video. Linking information of the right-view video may include an identifier indicating a channel, a TS, or an ES of the left-view video received via the terrestrial channel.
  • The digital broadcasting stream receiving apparatus 1000 may detect the left-view video and the right-view video received via the terrestrial channel and the satellite channel, respectively, by using the linking information to reproduce a 3D video, thereby providing 3D video broadcasting services.
  • FIG. 11 illustrates an example of a distribution of a channel frequency band 1100 in which a plurality of TSs for a left-view video stream of a 3D video can be transmitted and received via a plurality of channels and in which a TS for a right-view video stream for the 3D video can be transmitted and received via a single channel, according to the above-described fifth exemplary embodiment.
  • The channel frequency band 1100 includes a frequency band 1110 for channel 8, a frequency band 1120 for channel 9, and a frequency band 1130 for channel 10.
  • According to the fifth exemplary embodiment, a left-view video Left Video 1 of a first stereo video and a left-view video Left Video 2 of a second stereo video are allocated to the frequency band 1120 for channel 9 and the frequency band 1130 for channel 10, respectively. A TS 1140 for a right-view video Right Video 1 of the first stereo video and a TS 1150 for a right-view video Right Video 2 of the second stereo video may be transmitted and received through the frequency band 1110 for channel 8.
  • In this case, linking information indicating existence of an association between videos of a stereo video, namely, indicating that the videos are stereo linked, may be set between channels (video streams) of the left-view video Left Video 1 and the right-view video Right Video 1 of the first stereo video and between channels (video streams) of the left-view video Left Video 2 and the right-view video Right Video 2 of the second stereo video.
  • Although not shown in FIG. 11, if another left-view video is allocated to another channel, a TS 1160 for another right-view video may be transmitted via the frequency band 1110 for channel 8. If the channel frequency band 1100 is sufficiently large, a TS 1170 for other data may be further transmitted.
  • Accordingly, the digital broadcasting stream transmitting apparatus 100 according to the fifth exemplary embodiment may multiplex an ES for at least one left-view video to generate at least one single-program transport stream for the at least one left-view video, and generate at least one left-view multi-program transport stream from the at least one single-program transport stream. The digital broadcasting stream transmitting apparatus 100 according to the fifth exemplary embodiment may also multiplex an ES for at least one right-view video corresponding to the at least one left-view video to generate at least one single-program transport stream for the at least one right-view video, and generate a single right-view multi-program transport stream from the at least one single-program transport stream. Accordingly, the digital broadcasting stream transmitting apparatus 100 according to the fifth exemplary embodiment may transmit the at least one left-view multi-program transport stream and the single right-view multi-program transport stream via different channels.
  • The digital broadcasting stream receiving apparatus 200 according to the fifth exemplary embodiment may decode a plurality of channels to receive a left-view multi-program transport stream of a 3D video from at least one of the channels and receive a right-view multi-program transport stream of the 3D video from one of the channels. The digital broadcasting stream receiving apparatus 200 according to the fifth exemplary embodiment may demultiplex the at least one left-view multi-program transport stream individually to extract at least one left-view single-program transport stream, and demultiplex the at least one left-view single-program transport stream individually to extract at least one left-view elementary stream. The ES extraction unit 220 according to the fifth exemplary embodiment may demultiplex a single right-view multi-program transport stream to extract at least one right-view single-program transport stream, and demultiplex the at least one right-view single-program transport stream individually to extract at least one right-view elementary stream.
  • Accordingly, the digital broadcasting stream receiving apparatus 200 according to the fifth exemplary embodiment may extract a plurality of ESs for 3D video information or 2D video information via a plurality of channels. In addition, the digital broadcasting stream receiving apparatus 200 according to the fifth exemplary embodiment may reproduce mutually associated stereo videos three-dimensionally by using linking information.
  • The first through fifth exemplary embodiments of the digital broadcasting stream transmitting apparatus 100 and the digital broadcasting stream receiving apparatus 200 have been described above with reference to FIGS. 5 through 11 in relation to a stereo video of a 3D video. However, it is understood that another exemplary embodiment is not limited to stereo images. For example, a 3D video may be a multi-view video including at least three view videos.
  • According to the above-described first through fifth exemplary embodiments, a plurality of video streams associated with one another may be transmitted or received via at least one channel and at least one TS.
  • According to the second and third exemplary embodiments, a left-view video stream and a right-view video stream associated with each other may be transmitted or received via a single channel. For example, suppose that the second or third exemplary embodiment is implemented so as to maintain compatibility with related art 2D digital terrestrial broadcasting systems. When a related art 2D digital terrestrial broadcasting system transmits and receives a single video stream at a first bitrate within a channel capacity of around 19.38 Mbps, the second or third exemplary embodiment may be implemented by transmitting and receiving a base-view video stream at a second bitrate lower than the first bitrate and an additional-view video stream at the remaining bitrate.
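  • Purely as an arithmetic illustration (the 70/30 split below is an assumption, not a value specified by any exemplary embodiment), the payload of such a channel could be divided between the two view streams as follows.
    # Illustrative only: dividing an approximately 19.38 Mbps terrestrial channel payload
    # between a base-view stream and an additional-view stream. The 70/30 ratio is an
    # arbitrary example, not a value defined by the exemplary embodiments.
    CHANNEL_BITRATE_MBPS = 19.38
    BASE_VIEW_SHARE = 0.7                 # assumed share for the base-view stream

    base_view_mbps = CHANNEL_BITRATE_MBPS * BASE_VIEW_SHARE
    additional_view_mbps = CHANNEL_BITRATE_MBPS - base_view_mbps
    print(f"base view: {base_view_mbps:.2f} Mbps, additional view: {additional_view_mbps:.2f} Mbps")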
  • According to the second and third exemplary embodiments, both base-view video information and additional-view video information are transmitted and received via a single channel, and thus compatibility with existing broadcasting systems is possible without using additional channels. When video data is compressed using a high-compressibility encoding and decoding method, data loss due to compression of a base-view video stream is minimized, and the base-view video stream may be transmitted and received via a single channel together with additional-view video information.
  • According to the first, fourth, and fifth exemplary embodiments, a base-view video stream and an additional-view video stream may be transmitted and received via a plurality of channels. For example, if the first, fourth, or fifth exemplary embodiment is implemented to allow maintaining of compatibility with related art 2D digital terrestrial broadcasting systems, a base-view video stream may be transmitted via an existing channel, and an additional-view video stream may be transmitted via an additional channel.
  • According to the first, fourth, and fifth exemplary embodiments, base-view video information and additional-view video information are transmitted and received via different channels, and thus if the first, fourth, and fifth exemplary embodiments are compatible with related art broadcasting systems on a channel-by-channel basis, multi-view digital contents broadcasting services may be provided without degradation of the quality of a displayed image.
  • FIG. 12 illustrates an example of a distribution of a channel frequency band 1200 in which a plurality of TSs for a multi-view video stream can be transmitted and received via a single channel, according to an exemplary embodiment.
  • The channel frequency band 1200 includes a frequency band 1210 for channel 8, a frequency band 1220 for channel 9, and the like.
  • According to the present exemplary embodiment, a TS 1230 for first view video information Video 1 of a multi-view video, a TS 1240 for second view video information Video 2 of the multi-view video, a TS 1250 for third view video information Video 3 of the multi-view video, a TS 1260 for fourth view video information Video 4 of the multi-view video, and a TS 1270 for fifth view video information Video 5 of the multi-view video may be transmitted and received through the frequency band 1210 for channel 8.
  • If the third view video information Video 3 is a main view video, linking information ‘3d linked’ for representing that a link exists between each of the first, second, fourth, and fifth view video information Video 1, Video 2, Video 4, and Video 5 and the third view video information Video 3, may be set.
  • The digital broadcasting stream transmitting apparatus 100 according to the present exemplary embodiment for achieving multi-view video services may convert ESs of videos of a plurality of views that constitute a multi-view video, into single-program transport streams, respectively, multiplex the single-program transport streams for the respective views into a single multi-program transport stream, and transmit the single multi-program transport stream via a single channel. Hereinafter, the videos of the plurality of views are referred to as a plurality of view videos.
  • The digital broadcasting stream receiving apparatus 200 according to the present exemplary embodiment for achieving multi-view video services may decode a single channel to extract a single multi-program transport stream, and may demultiplex the single multi-program transport stream to extract respective single-program transport streams for a plurality of view videos. The plurality of view videos constitute a multi-view video, and respective ESs for the view videos may be extracted from the single-program transport streams for the view videos. Accordingly, the digital broadcasting stream receiving apparatus 200 according to the present exemplary embodiment may finally extract the single-program transport streams of the view videos and the ESs of the view videos via a single channel and thus reproduce restored view videos.
  • Since the digital broadcasting stream transmitting apparatus 100 according to the exemplary embodiment of FIG. 1 transmits mutually associated pieces of video information via different channels, TSs, or ESs, the digital broadcasting stream receiving apparatus 200 according to the exemplary embodiment of FIG. 2 may check what channel, TS, or ES video information associated with extracted video information has been received through. To this end, the digital broadcasting stream transmitting apparatus 100 and the digital broadcasting stream receiving apparatus 200 according to the exemplary embodiments of FIGS. 1 and 2 use linking information representing association between 3D videos such as stereo videos or multi-view videos.
  • Since the digital broadcasting stream transmitting apparatus 100 according to the exemplary embodiment of FIG. 1 converts mutually associated pieces of 3D video information into a transmission format and transmits the mutually associated 3D video information pieces in the transmission format, the digital broadcasting stream receiving apparatus 200 according to the exemplary embodiment of FIG. 2 may ascertain the features of a 3D video in order to properly restore the mutually associated 3D video information pieces into a 3D reproduction format and reproduce the mutually associated 3D video information pieces in the 3D reproduction format. Accordingly, the digital broadcasting stream transmitting apparatus 100 and the digital broadcasting stream receiving apparatus 200 according to the exemplary embodiments of FIGS. 1 and 2 use a 3D video start descriptor or a 3D video registration descriptor for representing existence or absence of 3D video information, and a 3D video stream descriptor for accurate restoration and reproduction of the 3D video information.
  • Exemplary embodiments in which the digital broadcasting stream transmitting apparatus 100 of FIG. 1 and the digital broadcasting stream receiving apparatus 200 of FIG. 2 transmit and receive linking information, a 3D video start descriptor or a 3D video registration descriptor, and a 3D video stream descriptor by using PSI will now be described with reference to Tables 1 through 18.
  • The linking information according to an exemplary embodiment may include a link identifier representing whether associated pieces of video information exist in a plurality of pieces of video information included in a TS, and a link descriptor used to identify an ES, a TS, or a channel of video information associated with each video information.
  • The TS generation unit 120 of the digital broadcasting stream transmitting apparatus 100 of FIG. 1 may insert the link identifier according to an exemplary embodiment into PAT information. The ES extraction unit 220 of the digital broadcasting stream receiving apparatus 200 of FIG. 2 may check a link identifier included in PAT information extracted from a TS and predict a link between pieces of PMT information included in the TS. Mutually associated pieces of PMT information include a link descriptor that defines a link between pieces of video information.
  • For example, parameter ‘linked_indicator’ including a link identifier may be additionally set for each PMT information within the PAT information. If parameter ‘linked_indicator’ of current PMT information is 000, no PMT information from among at least one piece of opponent PMT information indicated by the PAT information is associated with the current PMT information. On the other hand, if parameter ‘linked_indicator’ of the current PMT information is one among 001 to 111, PMT information including parameter ‘linked_indicator’ with the same value as that of the current PMT information, from among the at least one piece of opponent PMT information indicated by the PAT information, is associated with the current PMT information. In other words, PMT information pieces including parameters ‘linked_indicator’ with the same value may each include a link descriptor that defines a link therebetween.
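  • A minimal sketch of this check, assuming a PAT that has already been parsed into (program number, ‘linked_indicator’) pairs (the data layout and function name below are assumptions for illustration), could group PMT information pieces that share the same nonzero ‘linked_indicator’ value as follows.
    # Illustrative sketch: pairing PMT entries by the 'linked_indicator' value carried in
    # PAT information. A value of 0b000 means the entry is not linked to any other entry;
    # equal nonzero values mean the entries are linked to one another.
    from collections import defaultdict

    def group_linked_pmts(pat_entries):
        """pat_entries: list of (program_number, linked_indicator) pairs (assumed layout)."""
        groups = defaultdict(list)
        for program_number, linked_indicator in pat_entries:
            if linked_indicator != 0b000:
                groups[linked_indicator].append(program_number)
        # Only groups with at least two members actually describe a link.
        return {ind: progs for ind, progs in groups.items() if len(progs) > 1}

    if __name__ == "__main__":
        pat = [(1, 0b001), (2, 0b000), (3, 0b001)]   # programs 1 and 3 are linked
        print(group_linked_pmts(pat))                # {1: [1, 3]}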
  • The TS generation unit 120 of the digital broadcasting stream transmitting apparatus 100 of FIG. 1 may insert a link descriptor according to an exemplary embodiment into PMT information. The link descriptor may be set for each piece of video information. The ES extraction unit 220 of the digital broadcasting stream receiving apparatus 200 of FIG. 2 may check a link descriptor included in PMT information extracted from a single-program transport stream and determine a location of opponent video information corresponding to the video information included in the single-program transport stream. Thus, the digital broadcasting stream receiving apparatus 200 of FIG. 2 may detect mutually associated pieces of video information and restore video data from the mutually associated video information pieces.
  • Table 1 shows an example of a syntax of parameter ‘program_stream_map( )’ of PMT information. The link descriptor according to an exemplary embodiment is included in parameter ‘linking_descriptor’. The parameter ‘linking_descriptor’ may be included in parameter ‘descriptor( )’ as a descriptor region following parameter ‘elementary_stream_info_length’ included in parameter ‘program_stream_map( )’.
  • TABLE 1
    Syntax
    Program_stream_map( ) {
     packet_start_code_prefix
     map_stream_id
     program_stream_map_length
     current_next_indicator
     reserved
     program_stream_map_version
     reserved
     marker_bit
     program_stream_info_length
     for (i = 0; i < N; i++) {
      descriptor( )
     }
     elementary_stream_map_length
     for (i = 0; i < N1; i++) {
      stream_type
      elementary_stream_id
      elementary_stream_info_length
      for (i = 0; i < N2; i++){
       descriptor( )
       }
     }
      CRC_32
    }
  • The TS generation unit 120 of the digital broadcasting stream transmitting apparatus 100 of FIG. 1 may insert a 3D video start descriptor or a 3D video registration descriptor according to an exemplary embodiment, as 3D video authentication information, into a PMT. The ES extraction unit 220 of the digital broadcasting stream receiving apparatus 200 of FIG. 2 may check a 3D video start descriptor or a 3D video registration descriptor included in PMT information extracted from a single-program transport stream, and predict whether video information of an ES for the single-program transport stream is information about a 3D video. In addition, if it is determined based on the 3D video start descriptor or the 3D video registration descriptor that a 3D video stream descriptor is included in the PMT information, the ES extraction unit 220 may extract the 3D video stream descriptor from the PMT information and extract 3D video information from the ES.
  • Parameter ‘3d_start_descriptor’ used to set a 3D video start descriptor or parameter ‘3d_registration_descriptor’ used to set a 3D video registration descriptor may be included in parameter ‘descriptor( )’, which is a descriptor region following parameter ‘program_stream_info_length’ included in the PMT of Table 1.
  • Table 2 shows an example of a syntax of parameter ‘3d_start_descriptor’ that represents the 3D video start descriptor.
  • TABLE 2
    Syntax
    3d_start_descriptor( ){
     descriptor_tag
     descriptor_length
     threed_info_start_code
    }
  • Descriptors defined by a user may be inserted into a descriptor region of an MPEG TS by using parameter ‘descriptor_tag’ with a value between 64 and 255. For example, the TS generation unit 120 of the digital broadcasting stream transmitting apparatus 100 of FIG. 1 may insert parameter ‘3d_start_descriptor’, representing the 3D video start descriptor, into a descriptor region having parameter ‘descriptor_tag’ with a value of 0xF0.
  • Parameter ‘descriptor_length’ represents the number of bytes that follow parameter ‘descriptor_length’. ASCII code ‘3DAV’ may be set as the value of parameter ‘threed_info_start_code’ to represent that 3D video information is included in the single-program transport stream and that a 3D video stream descriptor is included in PMT information.
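  • A byte-level sketch of such a descriptor is shown below, under the values stated above (tag 0xF0 and the ASCII start code ‘3DAV’); the one-byte tag and one-byte length framing follows the usual MPEG-2 descriptor convention, and the function name is an assumption for illustration.
    # Illustrative sketch: serializing a '3d_start_descriptor' as a generic MPEG-2
    # descriptor (1-byte tag, 1-byte length, payload).
    import struct

    def build_3d_start_descriptor(tag: int = 0xF0, start_code: bytes = b"3DAV") -> bytes:
        payload = start_code
        return struct.pack("BB", tag, len(payload)) + payload

    if __name__ == "__main__":
        desc = build_3d_start_descriptor()
        print(desc.hex())   # prints 'f00433444156': tag 0xF0, length 4, ASCII '3DAV'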
  • Table 3 shows an example of a syntax of parameter ‘3d_registration_descriptor’ that represents a 3D video registration descriptor.
  • TABLE 3
    Syntax
    3d_registration_descriptor( ){
      descriptor_tag
      descriptor_length
      format_identifier
      for(i = 0 ; i < N; i++){
       private_data_byte
      }
    }
  • ASCII code ‘3DAV’ may be set as the value of parameter ‘format_identifier’ so that parameter ‘format_identifier’ represents that 3D video format data is included in the single-program transport stream and a 3D video stream descriptor is included in the PMT.
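  • Conversely, a receiver-side sketch could detect the ‘3DAV’ registration as follows; the one-byte tag/length framing and the placement of ‘format_identifier’ in the first four payload bytes are assumptions for illustration, as is the tag value used in the example.
    # Illustrative sketch: checking whether a descriptor payload carries the '3DAV'
    # registration, assuming a generic 1-byte tag / 1-byte length framing and a
    # four-byte 'format_identifier' at the start of the payload.
    def has_3dav_registration(descriptor: bytes) -> bool:
        if len(descriptor) < 2:
            return False
        length = descriptor[1]
        payload = descriptor[2:2 + length]
        return payload[:4] == b"3DAV"

    if __name__ == "__main__":
        example = bytes([0xF1, 0x04]) + b"3DAV"   # 0xF1 is a hypothetical tag value
        print(has_3dav_registration(example))      # True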
  • Table 4 shows an example of a syntax of parameter ‘linking_descriptor’ that represents a link descriptor.
  • TABLE 4
    Syntax
    linking_descriptor( ){
      descriptor_tag
      descriptor_length
      linking_priority
      distribution_indicator_flag
      channel_indicator_flag
      pmt_indicator_flag
      simulcast_flag
      Reserved
      if(distribution_indicator_flag){
       linked_distribution_method
      }
      if(channel_indicator_flag){
        linked_ts_id
       linked_channel
      }
      if(pmt_indicator_flag){
       linked_pmt_pid
       Reserved
      }
      linked_stream_PID
      if(simulcast_flag){
       linked_service_identification_time
      }
      Reserved
    }
  • Parameter ‘linking_priority’ represents linking priority information that indicates existence or absence of a link between a plurality of video streams and priority between associated video streams. An exemplary embodiment of the values of parameter ‘linking_priority’ that define a link between a current video stream and an opponent video stream associated with the current video stream is based on Table 5.
  • TABLE 5
    linking_priority            Description
    00 no linking               The two video streams are not linked with each other.
    01 linking_no_priority      The two video streams are linked with each other, but they are equal to each other without priority.
    10 linking_high_priority    The current video stream is linked with the opponent video stream and has higher priority than the opponent video stream.
    11 linking_low_priority     The current video stream is linked with the opponent video stream and has lower priority than the opponent video stream.
  • If the value of parameter ‘linking_priority’ is 00, the two video streams are not linked to each other.
  • If the value of parameter ‘linking_priority’ is 01, the two video streams are equally linked to each other without priorities. In this case, when two video streams are independently encoded to have an equal relationship as in the first exemplary embodiment, the value of parameter ‘linking_priority’ may be set to be 01 for each channel.
  • When a reference view and an additional view of a stereo video are distinguished from each other, the value of parameter ‘linking_priority’ may be set to be 10 or 11. When the value of parameter ‘linking_priority’ is 10, the current video stream has higher priority than the opponent video stream. When the value of parameter ‘linking_priority’ is 11, the current video stream has lower priority than the opponent video stream. In other words, when the current video stream includes video information of the reference view, the value of parameter ‘linking_priority’ may be set to be 10. When the current video stream includes video information of the additional view, the value of parameter ‘linking_priority’ may be set to be 11.
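  • As a small sketch of how a receiver might interpret this field (the function name and description strings are assumptions for illustration), the two-bit ‘linking_priority’ value could be mapped to the role of the current stream as follows.
    # Illustrative sketch: interpreting the two-bit 'linking_priority' value of the
    # current video stream, following Table 5.
    def describe_linking_priority(linking_priority: int) -> str:
        return {
            0b00: "not linked to the opponent stream",
            0b01: "linked, equal priority (independently coded views)",
            0b10: "linked, current stream is the reference (higher-priority) view",
            0b11: "linked, current stream is the additional (lower-priority) view",
        }[linking_priority]

    if __name__ == "__main__":
        print(describe_linking_priority(0b10))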
  • Parameter ‘distribution_indicator_flag’ represents same transmission network method information that indicates whether mutually associated video streams are transmitted and received using the same transmission network method.
  • For example, when the value of parameter ‘distribution_indicator_flag’ is 0, it may indicate that the mutually associated video streams are transmitted and received using the same transmission network method. When the value of parameter ‘distribution_indicator_flag’ is 1, it may indicate that the mutually associated video streams are transmitted and received using different transmission network methods.
  • Parameter ‘channel_indicator_flag’ represents same channel information that indicates whether mutually associated pieces of video information are transmitted via the same channel.
  • For example, when the value of parameter ‘channel_indicator_flag’ is 0, it may indicate that the mutually associated video streams use the same channel. When the value of parameter ‘channel_indicator_flag’ is 1, it may indicate that the mutually associated video streams use different channels. In the second and third exemplary embodiments where left-view video information and right-view video information linked to each other are transmitted via a single channel as described above with reference to FIGS. 7, 8, and 9, the value of parameter ‘channel_indicator_flag’ may be set to be 0. In the other exemplary embodiments, the value of parameter ‘channel_indicator_flag’ may be set to be 1.
  • Parameter ‘pmt_indicator_flag’ represents same single-program transport stream information that represents whether the mutually associated video streams exist within the same single-program transport stream.
  • For example, when the value of parameter ‘pmt_indicator_flag’ is 0, it may indicate that the mutually associated video streams such as ESs, PESs, or the like exist within the same single-program transport stream. When the value of parameter ‘pmt_indicator_flag’ is 1, it may indicate that the mutually associated video streams such as ESs, PESs, or the like exist within different single-program transport streams, respectively. In the third exemplary embodiment described above with reference to FIG. 9, where left-view video information and right-view video information linked to each other are included in a single single-program transport stream, the value of parameter ‘pmt_indicator_flag’ may be set to be 0. In other exemplary embodiments, the value of parameter ‘pmt_indicator_flag’ may be set to be 1.
  • Parameter ‘simulcast_flag’ represents same view information that represents whether mutually associated video streams of the same view exist.
  • For example, when the value of parameter ‘simulcast_flag’ is 0, it may indicate that mutually associated video streams exist as video information of the same view. When the value of parameter ‘simulcast_flag’ is 1, it may indicate that mutually associated video streams do not exist at the same point of time but data may be provided later.
  • Parameter ‘linked_distribution_method’ represents linked video transmission network method information that represents the transmission network method of a channel through which the opponent video stream is transmitted. When the value of parameter ‘distribution_indicator_flag’ representing the same transmission network method information is 1, the linked video transmission network method information may be set to define a transmission network method for the opponent video stream. Table 6 shows an example of the linked video transmission network method information.
  • TABLE 6
    linked_distribution_method Description
    0x00 terrestrial broadcasting
    0x01 satellite broadcasting
    0x02 cable broadcasting
    0x03 IPTV broadcasting
    0x04~0xFF reserved
  • Parameter ‘linked_ts_id’ represents linked TS PID information that indicates a PID of a multi-program TS including mutually associated video streams.
  • Parameter ‘linked_channel’ represents linked video channel information that represents a channel through which an opponent video stream from among the mutually associated video streams is transmitted and received. When the value of parameter ‘channel_indicator_flag’ representing the same channel information is 1, the linked video channel information may be set.
  • In satellite broadcasting, terrestrial broadcasting, and cable broadcasting, frequency information may be provided as the linked video channel information. In IPTV broadcasting, Uniform Resource Locator (URL) information may be provided as the linked video channel information.
  • Parameter ‘linked_pmt_pid’ represents linked video PID information that indicates a PID of a PMT of a single-program transport stream through which the opponent video stream is transmitted. When the value of parameter ‘pmt_indicator_flag’ representing the same single-program transport stream information is 1, linked video packet identifier information may be defined.
  • Parameter ‘linked_stream_PID’ represents linked video stream PID information that indicates a PID of an opponent video stream from among the mutually associated video streams.
  • Parameter ‘linked_service_identification_time’ represents linked video service identifying time information that represents when a program for the opponent video stream from among the mutually associated video streams is provided. For example, the linked video service identifying time information may indicate, in units of months, days, hours, and minutes, when a linked program is provided.
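  • Field widths are not reproduced here, so the sketch below only models which fields of ‘linking_descriptor’ are present as a function of the flags described above; the dictionary layout, parameter defaults, and example values are assumptions for illustration, not a serialization of the descriptor.
    # Illustrative sketch of the conditional structure of 'linking_descriptor' (Table 4):
    # optional fields are included only when the corresponding flag is set.
    def build_linking_descriptor(linking_priority, distribution_indicator_flag,
                                 channel_indicator_flag, pmt_indicator_flag,
                                 simulcast_flag, linked_stream_pid,
                                 linked_distribution_method=None, linked_ts_id=None,
                                 linked_channel=None, linked_pmt_pid=None,
                                 linked_service_identification_time=None):
        desc = {"linking_priority": linking_priority,
                "linked_stream_PID": linked_stream_pid}
        if distribution_indicator_flag:
            desc["linked_distribution_method"] = linked_distribution_method
        if channel_indicator_flag:
            desc["linked_ts_id"] = linked_ts_id
            desc["linked_channel"] = linked_channel
        if pmt_indicator_flag:
            desc["linked_pmt_pid"] = linked_pmt_pid
        if simulcast_flag:
            desc["linked_service_identification_time"] = linked_service_identification_time
        return desc

    if __name__ == "__main__":
        # Hypothetical left-view descriptor: the opponent (right-view) stream travels on a
        # different channel of the same transmission network and in a different
        # single-program transport stream.
        d = build_linking_descriptor(linking_priority=0b10,
                                     distribution_indicator_flag=0,
                                     channel_indicator_flag=1,
                                     pmt_indicator_flag=1,
                                     simulcast_flag=0,
                                     linked_stream_pid=0x201,
                                     linked_ts_id=0x20, linked_channel=8,
                                     linked_pmt_pid=0x200)
        print(sorted(d))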
  • The digital broadcasting stream transmitting apparatus 100 of FIG. 1 may transmit linking information such as a link descriptor, a link identifier, and the like for identifying existence and locations of the mutually associated video streams, together with the mutually associated video streams. The digital broadcasting stream receiving apparatus 200 of FIG. 2 may extract the linking information and video streams from a received TS and also extract mutually associated video streams according to the linking information, thereby properly reproducing a 3D video.
  • An example in which a link descriptor is used between non-linked video streams is as follows. When the value of parameter ‘linking_priority’ representing link priority information is 00, since information used to identify an opponent video stream linked to a current video stream is not defined, the values of parameters ‘distribution_indicator_flag’, ‘channel_indicator_flag’, and ‘pmt_indicator_flag’ are set to be 0. The value of parameter ‘linked_stream_PID’ may be set to be a PID value newly defined for video information for an additional view video of a 3D video.
  • The PMT information according to an exemplary embodiment may include parameter ‘linked_network_id’ representing service provider information or broadcasting station information that provides a TS. In a data broadcasting system based on a Digital Video Broadcasting (DVB) method, a TS is identified using parameter ‘transport_stream_id’, and the service provider information or the broadcasting station information is identified using parameter ‘original_network_id’. Thus, programs inserted into the TS may be uniquely distinguished from each other. Accordingly, when the digital broadcasting stream receiving apparatus 200 of FIG. 2 follows the DVB method, stereo link information may be accurately determined using parameter ‘linked_network_id’ instead of using parameter ‘linked_channel’. The value of parameter ‘linked_channel’ denotes a channel of the opponent video information associated with the current TS, and may vary according to a broadcasting method or the type of a broadcasting system.
  • The link descriptor according to an exemplary embodiment is not limited to that shown in Table 4, and may be appropriately changed according to a case where the link descriptor is expanded to include a multi-view image or used for a predetermined purpose.
  • The linking information described up to now may be set for each piece of video information. However, when linking information about a link from an additional view to a reference view is not used, for example, when the single program that a user can select according to user inputs, system environments, communication environments, and the like is fixed to the reference view only, unidirectional linking information in which linking information is set only for base-view video information may be used.
  • The TS generation unit 120 of the digital broadcasting stream transmitting apparatus 100 of FIG. 1 may insert a 3D video stream descriptor for a current video stream into PMT information for the current video stream. The ES extraction unit 220 of the digital broadcasting stream receiving apparatus 200 of FIG. 2 may extract the 3D video stream descriptor from the PMT information for the current video stream. The 3D video stream descriptor includes additional information that is used when the reproduction unit 230 of the digital broadcasting stream receiving apparatus 200 of FIG. 2 performs 3D rendering to accurately restore and reproduce video data of the current video stream.
  • Table 7 shows parameter ‘3d_video_stream_descriptor’ including a 3D video stream descriptor according to an exemplary embodiment.
  • TABLE 7
    Syntax
    3d_video_stream_descriptor( ){
     descriptor_tag
     descriptor_length
     3d_video_property
     linked_stream_coding_mode
     full_image_size_indicator
     if(3d_video_property == 0x0f){
      3d_composite_format
      is_left_first
     }
     if(full_image_size_indicator == 0){
      additional_view_image_size
      scaling_method
      scaling_order
     }
     if(3d_video_property == 3D) {
      Is_Main
      picture_display_order
      view_info
     }
     else if(3d_video_property == 2D_Multi){
      view_index
     }
     es_icon_indicator
     transition_indicator
     transition_time_stamp
     transition_message
     Reserved
    }
  • Parameter ‘3d_video_property’ represents 3D video property information that represents video properties of the current video stream when a 3D video is constructed.
  • 3D video property information according to an exemplary embodiment may be defined with reference to Table 8 below.
  • TABLE 8
    3d_video_property Description
    0x00 Left video
    0x01 Right video
    0x02 2D video
    0x03 depth
    0x04 disparity
    0x05 3D_DOT
    0x06 2D_Multi
    0x07~0x0e reserved
    0x0f 3d_composite_format
  • When the value of parameter ‘3d_video_property’ of the current video stream is 0x00, the current video stream is a left-view video of a stereo video. When the value of parameter ‘3d_video_property’ of the current video stream is 0x01, the current video stream is a right-view video of the stereo video.
  • When the value of parameter ‘3d_video_property’ of the current video stream is 0x02, the current video stream is a 2D video. When the value of parameter ‘3d_video_property’ of the current video stream is 0x03 or 0x04, the current video stream is depth information or disparity information, respectively, of an additional-view video for the left-view video of the stereo video.
  • If the stereo video is constructed with 2D video information and depth information, the value of parameter ‘3d_video_property’ of a video stream including the 2D video information may be set to be 0x02, and the value of parameter ‘3d_video_property’ of a video stream including the depth information may be set to be 0x03.
  • When the value of parameter ‘3d_video_property’ of the current video stream is 0x05, the 3D video is an image based on a 3D dot method or a random dot stereogram method. When the value of parameter ‘3d_video_property’ of the current video stream is 0x06, the current video stream is one of a plurality of 2D video streams that constitute a multi-view video.
  • When the value of parameter ‘3d_video_property’ of the current video stream is 0x0f, the current video stream is a video stream in a 3D composite format that is obtained by composing a left-view image and a right-view image in a single frame, such as in a side-by-side format or a top-and-bottom format.
  • The other fields not defined in Table 8 may be reserved as reserved fields. Video formats of a new video property may be set in the reserved fields as a user demands. For example, a 3D composite format and a video format of a combination of depth information and disparity information may be allocated to reserved parameters as needed.
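  • A compact sketch of this mapping is shown below as a plain lookup table that restates Table 8; the constant name, function name, and description strings are assumptions for illustration.
    # Illustrative sketch: mapping '3d_video_property' codes (Table 8) to the video
    # property of the current stream.
    THREE_D_VIDEO_PROPERTY = {
        0x00: "left-view video of a stereo video",
        0x01: "right-view video of a stereo video",
        0x02: "2D video",
        0x03: "depth information of an additional-view video",
        0x04: "disparity information of an additional-view video",
        0x05: "3D_DOT (random dot stereogram) video",
        0x06: "2D_Multi (one of multiple 2D streams of a multi-view video)",
        0x0F: "3D composite format (left and right views composed in one frame)",
    }

    def describe_3d_video_property(code: int) -> str:
        return THREE_D_VIDEO_PROPERTY.get(code, "reserved")

    if __name__ == "__main__":
        print(describe_3d_video_property(0x02))   # 2D video
        print(describe_3d_video_property(0x0A))   # reserved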
  • Parameter ‘linked_stream_coding_mode’ represents linked video encoding method information that represents a compressive encoding method between mutually associated pieces of video information. Linked video encoding method information according to an exemplary embodiment is set with reference to Table 9.
  • TABLE 9
    linked_stream_coding_mode Description
    000 independent coding
    001 scalable coding
    010 differential image coding
    011~111 reserved
  • When the value of parameter ‘linked_stream_coding_mode’ is 000, two mutually associated items of video data are independently encoded. In this case, the digital broadcasting stream transmitting apparatus 100 of FIG. 1 may compress the two video data items by using two independent video encoders, respectively, and transmit the two video data items in the form of two video streams. The digital broadcasting stream receiving apparatus 200 of FIG. 2 may receive the two video streams and restore video data from each of the two video streams by using two independent video decoders.
  • When the value of parameter ‘linked_stream_coding_mode’ is 001, the mutually associated video data items are encoded using a scalable coding method. When the value of parameter ‘linked_stream_coding_mode’ is 010, a differential image between a left-view image and a right-view image is encoded as additional-view video information.
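  • A toy sketch of the differential-image case (value 010) is shown below, assuming same-sized grayscale frames represented as nested lists and ignoring any actual prediction, transform, or entropy coding; the additional view is reconstructed by adding the differential image back to the base-view image.
    # Illustrative sketch only: differential image coding between a base-view image and an
    # additional-view image ('linked_stream_coding_mode' == 0b010).
    def differential_image(base, additional):
        return [[a - b for a, b in zip(row_a, row_b)]
                for row_a, row_b in zip(additional, base)]

    def reconstruct_additional(base, diff):
        return [[b + d for b, d in zip(row_b, row_d)]
                for row_b, row_d in zip(base, diff)]

    if __name__ == "__main__":
        left  = [[10, 12], [14, 16]]              # base-view samples (hypothetical)
        right = [[11, 12], [13, 17]]              # additional-view samples (hypothetical)
        diff = differential_image(left, right)
        assert reconstruct_additional(left, diff) == right
        print(diff)   # [[1, 0], [-1, 1]]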
  • Parameter ‘full_image_size_indicator’ represents size indicator information that indicates whether a current video stream transmits current video information at the original size of the original video information. For example, the size indicator information may represent a rate at which video data of the current video stream is scaled with respect to the original size of the original video data. When the value of parameter ‘full_image_size_indicator’ is 0, the size of current video data is not the same as the full size of the original video data. When the value of parameter ‘full_image_size_indicator’ is 1, the size of current video data is the same as the full size of the original video data.
  • Some polarization-type display devices halve the vertical resolution of an image and display an image with halved vertical resolution. Even when receiving 3D video data at a full resolution, the polarization type display devices halve the vertical resolution of the 3D video data and display an image with halved vertical resolution. Since full-resolution base-view video data and full-resolution additional-view video data do not need to be provided to these display devices, providing half-size video data obtained by halving a vertical resolution is efficient in terms of the amount of transmission data and a data processing rate.
  • A base-view video is transmitted at a full resolution in consideration of compatibility with broadcasting systems for 2D contents services, and an additional-view video is transmitted and received at a half size because it is used during 3D video reproduction, thereby efficiently transmitting and receiving a video stream. In addition, depth information and disparity information may be transmitted and received at a ½ or ¼ size, and not at a full size.
  • In a side-by-side format from among 3D composite formats, the size of a composite frame is an original size of a frame, and the sizes of a left-view frame and a right-view frame that constitute the composite frame may be reduced to a half size. Thus, in this case, the value of parameter ‘full_image_size_indicator’ is not 1. If each of the left-view frame and the right-view frame is constructed to have the full size of the original frame and thus the size of a 3D composite format frame is twice the size of the original frame, the value of parameter ‘full_image_size_indicator’ may be set to be 1.
  • When 3D video property information of the current video stream is a 3D composite format, that is, when the value of parameter ‘3d_video_property’ represents ‘3d_composite_format’, parameter ‘3d_composite_format’ and parameter ‘is_left_first’ may be set as below.
  • Parameter ‘3d_composite_format’ represents 3D composite format information that represents a method of constructing 3D composite format images by composing images corresponding to a left-view video and a right-view video. The 3D composite format information corresponds to the types of 3D composite formats as in Table 10.
  • TABLE 10
    3d_composite_format Description
    0x00 Side by side format
    0x01 Top and bottom format
    0x02 vertical line interleaved format
    0x03 horizontal line interleaved format
    0x04 frame sequential format
    0x05 field sequential format
    0x06 checker board format
    0x07~0x7f reserved
  • When the value of parameter ‘3d_composite_format’ of the current video stream is 0x00, the 3D composite format of current video data is a side-by-side format. Similarly, when the value of parameter ‘3d_composite_format’ of the current video stream is 0x01, 0x02, 0x03, 0x04, 0x05, or 0x06, the 3D composite format of the current video data is a top-and-bottom format, a vertical line interleaved format, a horizontal line interleaved format, a frame sequential format, a field sequential format, or a checker board format, respectively.
  • The side-by-side format is an image format in which a left-view image and a right-view image respectively corresponding to a left region and a right region of a 3D composite format image are arranged side by side. The top-and-bottom format is an image format in which a left-view image and a right-view image respectively corresponding to an upper region and a lower region of the 3D composite format image are arranged.
  • The vertical line interleaved format is an image format in which a left-view image and a right-view image respectively corresponding to odd-numbered vertical lines and even-numbered vertical lines of the 3D composite format image are arranged. The horizontal line interleaved format is an image format in which a left-view image and a right-view image respectively corresponding to odd-numbered horizontal lines and even-numbered horizontal lines of the 3D composite format image are arranged.
  • The frame sequential format is an image format in which a left-view image and a right-view image respectively corresponding to odd-numbered frames and even-numbered frames of the 3D composite format image are arranged. The field sequential format is an image format in which a left-view image and a right-view image respectively corresponding to odd-numbered fields and even-numbered fields of the 3D composite format image are arranged.
  • The checker board format is an image format in which a left-view image and a right-view image respectively corresponding to pixels in a horizontal direction and pixels in a vertical direction of the 3D composite format image are arranged alternately in units of pixels.
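  • As a concrete illustration of two of the formats in Table 10, the following sketch composes a side-by-side frame from views that were already down-scaled to half width (an analogous helper is shown for the top-and-bottom format); the use of Python with numpy, the array shapes, and the helper names are assumptions for illustration, and the arrangement shown places the left view in the first region, which corresponds to the case described next for parameter ‘is_left_first’:
    import numpy as np

    def compose_side_by_side(left_view, right_view):
        # Each view keeps the original height and half the original width.
        return np.concatenate([left_view, right_view], axis=1)

    def compose_top_and_bottom(left_view, right_view):
        # Each view keeps the original width and half the original height.
        return np.concatenate([left_view, right_view], axis=0)

    h, w = 1080, 1920
    left = np.zeros((h, w // 2), dtype=np.uint8)        # horizontally halved left view
    right = np.full((h, w // 2), 255, dtype=np.uint8)   # horizontally halved right view
    frame = compose_side_by_side(left, right)
    assert frame.shape == (h, w)                        # the composite frame keeps the original size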
  • Parameter ‘is_left_first’ represents format arrangement sequence information that represents a sequence in which a left-view image and a right-view image of the 3D composite format image are arranged. Parameter ‘is_left_first’ may represent which region is a left-view image of the 3D composite format image of the current video stream and which region is a right-view image thereof. Format arrangement sequence information according to an exemplary embodiment is linked with the 3D composite format information with reference to Table 11, so that positions of a left-view image and a right-view image of the 3D composite format image may be set as follows.
  • TABLE 11
    identification                      is_left_first = 0          is_left_first = 1
                                        Left view    Right view    Left view    Right view
    Side by side format                 Left side    Right side    Right side   Left side
    Top and bottom format               Upper side   Lower side    Lower side   Upper side
    Vertical line interleaved format    Odd line     Even line     Even line    Odd line
    Horizontal line interleaved format  Odd line     Even line     Even line    Odd line
    Frame sequential format             Odd frame    Even frame    Even frame   Odd frame
    Field sequential format             Odd field    Even field    Even field   Odd field
    Checker board format                Odd pixel    Even pixel    Even pixel   Odd pixel
  • When the value of parameter ‘is_left_first’ is 0, left video data is arranged in a left region of a side by side format image, an upper region of a top and bottom format image, odd-numbered lines of a vertical line interleaved format image, odd-numbered lines of a horizontal line interleaved format image, odd-numbered frames of a frame sequential format image, odd-numbered fields of a field sequential format image, and odd-numbered pixels of a checker board format image. Thus, right-view video data is arranged in a region opposite to the region of each of the 3D composite format images where the left-view video data is arranged.
  • When the value of parameter ‘is_left_first’ is 1, the right-view video data and the left-view video data are arranged in a manner opposite to the arrangement manner when the value of parameter ‘is_left_first’ is 0.
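  • The arrangement rule of Table 11 may be sketched as a receiver-side split, limited here to the side-by-side and top-and-bottom cases; the numpy usage and the function name are assumptions for illustration:
    import numpy as np

    def split_composite(frame, composite_format, is_left_first):
        # Split a 3D composite frame into (left_view, right_view) using
        # 3d_composite_format (Table 10) and is_left_first (Table 11).
        if composite_format == 0x00:    # side by side
            first, second = frame[:, :frame.shape[1] // 2], frame[:, frame.shape[1] // 2:]
        elif composite_format == 0x01:  # top and bottom
            first, second = frame[:frame.shape[0] // 2, :], frame[frame.shape[0] // 2:, :]
        else:
            raise NotImplementedError("only side-by-side and top-and-bottom are shown here")
        # is_left_first == 0: the left view occupies the left or upper region.
        return (first, second) if is_left_first == 0 else (second, first)

    frame = np.arange(64).reshape(8, 8)
    left, right = split_composite(frame, 0x00, is_left_first=1)  # here the right view sits on the left side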
  • When the size indicator information of the current video stream represents that left-view and right-view video information is transmitted at a size reduced from the full size of the original left-view and right-view video information, that is, when the value of parameter ‘full_image_size_indicator’ is 0, parameter ‘additional_view_image_size’, parameter ‘scaling_method’, and parameter ‘scaling_order’ may be set as follows.
  • Parameter ‘additional_view_image_size’ includes additional-view image size information that represents a rate at which the image size of additional-view video information of the current video stream is enlarged or reduced from the original image size. Additional-view video information according to an exemplary embodiment may be set as follows with reference to Table 12.
  • TABLE 12
    additional_view_image_size Description
    0x00 horizontal half
    0x01 vertical half
    0x02 quarter size
    0x03~0x0f reserved
  • When the value of parameter ‘additional_view_image_size’ is 0x00, it represents a method of halving the image size of the additional-view video information in the horizontal direction. This additional-view image reducing method can be used efficiently in 3D display devices, such as parallax barrier display devices, that halve input video data in the horizontal direction before reproducing it.
  • When the value of parameter ‘additional_view_image_size’ is 0x01, it represents a method of halving the image size of the additional-view video information in the vertical direction. This additional-view image reducing method can be used efficiently in display devices that halve the vertical resolution of an image before reproducing it, such as display devices that change the polarization angle in units of horizontal lines of a 3D video during reproduction.
  • When the value of parameter ‘additional_view_image_size’ is 0x02, it represents a method of halving the image size in both the vertical and horizontal directions, thereby reducing the image size to ¼ overall. This additional-view image reducing method may be applied to depth information or disparity information rather than to video data, in order to reduce loss and increase compression efficiency.
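  • The three size codes of Table 12 reduce the original dimensions by fixed factors, as the following illustrative sketch shows (the helper name and the dictionary layout are assumptions):
    # Table 12: additional_view_image_size -> (horizontal factor, vertical factor)
    ADDITIONAL_VIEW_IMAGE_SIZE = {
        0x00: (0.5, 1.0),   # horizontal half
        0x01: (1.0, 0.5),   # vertical half
        0x02: (0.5, 0.5),   # quarter size: half in each direction
    }

    def additional_view_dimensions(orig_width, orig_height, additional_view_image_size):
        h_factor, v_factor = ADDITIONAL_VIEW_IMAGE_SIZE[additional_view_image_size]
        return int(orig_width * h_factor), int(orig_height * v_factor)

    print(additional_view_dimensions(1920, 1080, 0x02))  # (960, 540)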
  • Parameter ‘scaling_method’ represents down-scaling method information that represents a method of down-scaling the left-view image and the right-view image of a 3D composite format image. A 3D composite format scales down the video data of a frame for each of a plurality of views so that a single frame includes data pieces for the plurality of views. The down-scaling may be performed using any of various down-scaling methods. However, since the down-scaling method is performed in a pre-processing process occurring before compression, if the image size is restored according to an up-scaling method not corresponding to the down-scaling method during video decoding, artifacts may be generated on a restored image. An example where artifacts may be generated during image down-scaling and image restoration will now be described with reference to FIG. 13.
  • FIG. 13 illustrates an example in which down-scaling method information for 3D composite formats is used, according to an exemplary embodiment.
  • For example, only even-numbered lines 1314 and 1318 from among lines 1312, 1314, 1316, and 1318 of an original image 1310 are sub-sampled and arranged in an upper region 1320 of a top-and-bottom format image. In other words, lines 1322 and 1324 of the upper region 1320 of the top-and-bottom format image are the same as the even-numbered lines 1314 and 1318 of the original image 1310.
  • However, if the lines 1322 and 1324 of the top-and-bottom format image are arranged at locations of odd-numbered lines 1332 and 1336 of a restored image 1330, and not at locations of even-numbered lines 1334 and 1338 of the restored image 1330, during an up-scaling operation performed during image decoding, one-pixel mismatch in a vertical direction may occur in all pixels of the restored image 1330. If a left-view image and a right-view image are each mismatched in units of one pixel, mismatch in units of a maximum of two pixels may occur in a 3D video.
  • Thus, to achieve accurate restoration of a 3D video, the digital broadcasting stream transmitting apparatus 100 of FIG. 1 and the digital broadcasting stream receiving apparatus 200 of FIG. 2 may transmit and receive the down-scaling method information for 3D composite formats. Down-scaling method information according to an exemplary embodiment may be set as parameter ‘scaling_method’ as in Table 13.
  • TABLE 13
    scaling_method Description
    0x00 sampling method
    0x01 averaging method
    0x02~0x03 Reserved
  • When the value of parameter ‘scaling_method’ is 0x00, a current 3D composite format may include left-view image information and right-view image information that have been down-scaled according to a sampling method in which one of every two consecutive lines is selected and extracted.
  • When the value of parameter ‘scaling_method’ is 0x01, the current 3D composite format may include left-view image information and right-view image information that have been down-scaled by replacing every two consecutive lines of the original left-view image and the original right-view image with a single line obtained by an arithmetic operation performed on those two lines. A representative example is an averaging method, in which the average of the pixel values of two consecutive lines is used as the pixel value of the single replacement line of the down-scaled left-view and right-view images that form the 3D composite format.
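  • The two down-scaling methods of Table 13, applied here in the vertical direction, may be sketched as follows (an illustrative numpy sketch; the helper names are assumptions):
    import numpy as np

    def downscale_sampling(image, keep_odd_lines=True):
        # Sampling method (scaling_method == 0x00): keep one of every two lines.
        # Lines are counted from 1, so odd-numbered lines are rows 0, 2, 4, ... of the array.
        start = 0 if keep_odd_lines else 1
        return image[start::2, :]

    def downscale_averaging(image):
        # Averaging method (scaling_method == 0x01): replace each pair of
        # consecutive lines with their per-pixel average.
        return ((image[0::2, :].astype(np.uint16) + image[1::2, :]) // 2).astype(image.dtype)

    img = np.arange(16, dtype=np.uint8).reshape(4, 4)
    print(downscale_sampling(img).shape)   # (2, 4)
    print(downscale_averaging(img).shape)  # (2, 4)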
  • Parameter ‘scaling_order’ represents down-scaling sampling order information that represents the sampling order of a 3D composite format image down-scaled according to the sampling method. When the value of parameter ‘scaling_method’ is 0x00, parameter ‘scaling_order’ indicates which lines, from among the odd-numbered and even-numbered lines of a left-view image and a right-view image, are sampled to constitute the 3D composite format. An example of using the down-scaling sampling order information according to an exemplary embodiment is shown in Table 14.
  • TABLE 14
    scaling_order left view right view
    0x00 odd line even line
    0x01 even line odd line
    0x02 odd line odd line
    0x03 even line even line
  • When the value of parameter ‘scaling_order’ is 0x00, an odd-numbered line of a left-view image is sampled and an even-numbered line of a right-view image is sampled to construct a 3D composite format image. Similarly, when the value of parameter ‘scaling_order’ is 0x01, an even-numbered line of a left-view image is sampled and an odd-numbered line of a right-view image is sampled to construct a 3D composite format image. When the value of parameter ‘scaling_order’ is 0x02, an odd-numbered line of a left-view image and an odd-numbered line of a right-view image are sampled to construct a 3D composite format image. When the value of parameter ‘scaling_order’ is 0x03, an even-numbered line of a left-view image and an even numbered line of a right-view image are sampled to construct a 3D composite format image.
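  • Continuing the example of FIG. 13, the following sketch restores the full height by writing each received line back at the row position indicated by the sampled-line designation of Table 14, so that the one-pixel vertical shift described above is avoided; simple line duplication stands in for interpolation, and the helper names are assumptions:
    import numpy as np

    # scaling_order (Table 14): which lines of the original left/right views were
    # sampled. "odd" means lines 1, 3, 5, ... counted from 1.
    SCALING_ORDER = {0x00: ("odd", "even"), 0x01: ("even", "odd"),
                     0x02: ("odd", "odd"),  0x03: ("even", "even")}

    def upscale_by_line_duplication(half_image, sampled_lines):
        # Write each received line back at its original row position and
        # duplicate it into the missing neighbouring row.
        h, w = half_image.shape
        full = np.empty((2 * h, w), dtype=half_image.dtype)
        offset = 0 if sampled_lines == "odd" else 1   # odd-numbered lines sit at even array indices
        full[offset::2, :] = half_image
        full[1 - offset::2, :] = half_image
        return full

    left_sampled, right_sampled = SCALING_ORDER[0x00]
    left_full = upscale_by_line_duplication(np.arange(8).reshape(2, 4), left_sampled)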
  • Hereinafter, parameter ‘Is_Main’, parameter ‘picture_display_order’, and parameter ‘view_info’ are set when the video property information of the current video stream represents 3D video data, that is, when parameter ‘3d_video_property’ is ‘Left Video’ or ‘Right Video’.
  • Parameter ‘Is_Main’ represents base-view indicator information that represents whether the current video stream is a base-view video stream. For example, when parameter ‘3d_video_property’ representing the video property information of the current video stream is ‘Left video’, parameter ‘Is_Main’ may be set as in Table 15.
  • TABLE 15
    Is_Main Description
    0x0 Sub Video
    0x1 Main Video
  • That is, when the value of parameter ‘Is_Main’ is 0, left-view video data is set to be a sub video. When the value of parameter ‘Is_Main’ is 1, the left-view video data is set to be a main video.
  • Parameter ‘picture_display_order’ represents display order information that represents an order in which ESs of left-view and right-view are displayed.
  • Parameter ‘view_info’ represents 3D-video view-related information in which, if the current video stream is a 3D video, view information is set differently for children and adults. Because children and adults differ in binocular parallax, view-related information such as depth and disparity may be set differently for them. For example, an image for adults has a relatively large binocular parallax, and an image for children has a relatively small binocular parallax. By using parameter ‘view_info’, images for adults may be distinguished from images for children, and selective 3D reproduction may also be performed according to the screen size of a 3D display device. Parameter ‘view_info’ may be used so that, if the screen size of a 3D display device is relatively large, a 3D image having a relatively small binocular parallax is reproduced, and if the screen size is relatively small, a 3D image having a relatively large binocular parallax is reproduced. When 3D video contents are produced in consideration of this, a uniform stereoscopic effect may be provided to viewers regardless of the size of the 3D display.
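  • The screen-size-dependent selection described above may be sketched as follows; the record layout, the field values, and the screen-size threshold are purely illustrative assumptions, since a concrete encoding of parameter ‘view_info’ is not fixed here:
    # Hypothetical per-stream records carrying a parallax class derived from view_info.
    streams = [
        {"pid": 0x101, "view_info": "large_parallax"},  # e.g. intended for adults / smaller screens
        {"pid": 0x102, "view_info": "small_parallax"},  # e.g. intended for children / larger screens
    ]

    def pick_stream_for_display(streams, screen_diagonal_inches, large_screen_threshold=50):
        # Assumption: treat displays above the threshold as "large" screens.
        wanted = "small_parallax" if screen_diagonal_inches >= large_screen_threshold else "large_parallax"
        return next(s for s in streams if s["view_info"] == wanted)

    print(hex(pick_stream_for_display(streams, 65)["pid"]))  # 0x102 on a large display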
  • Parameter ‘view_index’ represents video index information that indicates the current view from among multiple views when the current video stream is one of a plurality of 2D videos that together constitute the multiple views. In other words, parameter ‘view_index’ may be set when parameter ‘3d_video_property’ representing the video property information of the current video stream is ‘2D_Multi’. The value ‘2D_Multi’ may be used when a service including a plurality of 2D videos is received; in this case, parameter ‘view_index’ may be used to distinguish the 2D moving pictures from one another.
  • Parameter ‘es_icon_indicator’ represents a 3D video service notification indicator that indicates that 3D service-related icons are provided by a contents provider. A notification of the 3D video service may be inserted into the contents through the 3D video service notification indicator so that it does not overlap with an icon provided by a set-top box or a TV.
  • The digital broadcasting stream transmitting apparatus 100 of FIG. 1 and the digital broadcasting stream receiving apparatus 200 of FIG. 2 may transmit and receive information associated with conversion of a 2D or 3D reproduction mode in the current video stream. Hereinafter, a reproduction mode transition indicator, reproduction mode conversion time information, or the like may be set as the information associated with reproduction mode conversion.
  • Parameter ‘transition_indicator’ includes a reproduction mode transition indicator that indicates whether a 2D/3D video reproduction mode different from that set in current PMT information is set in PMT information following the current PMT information. The reproduction mode transition indicator indicates that, when different pieces of reproduction mode information are set in the following PMT information and the current PMT information, a reproduction mode transition has occurred.
  • Parameter ‘transition_time_stamp’ represents reproduction mode transition time information that represents, in units of Presentation Time Stamps (PTSs), the time at which the reproduction mode transition occurs. The reproduction mode transition time information may be set as an absolute period of time or as a relative period of time starting from a predetermined reference point of time.
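  • Because the transition time is signaled in PTS units, it can be related to wall-clock time through the 90 kHz clock used for MPEG-2 presentation time stamps; the sketch below assumes an absolute 33-bit PTS value and hypothetical variable names:
    PTS_CLOCK_HZ = 90_000   # MPEG-2 presentation time stamps tick at 90 kHz
    PTS_WRAP = 1 << 33      # PTS values are 33 bits wide and wrap around

    def seconds_until_transition(transition_time_stamp, current_pts):
        # Seconds remaining until the reproduction mode transition, with wrap-around handling.
        delta = (transition_time_stamp - current_pts) % PTS_WRAP
        return delta / PTS_CLOCK_HZ

    print(seconds_until_transition(transition_time_stamp=900_000, current_pts=0))  # 10.0 seconds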
  • Parameter ‘transition_message’ includes a variety of message information, such as a text, an image, an audio, and the like, for notifying viewers of a reproduction mode transition. For example, parameter ‘transition_message’ may include size information and message information of a reproduction mode transition message, and the reproduction mode transition message may be represented as an 8-bit character string by using a for statement that is repeated a number of times corresponding to the message size information.
  • The 3D video stream descriptor according to an exemplary embodiment is not limited to the syntax shown in Table 7 and may be suitably changed when a 3D video is expanded into a multi-view image or used for predetermined purposes.
  • The TS generation unit 120 of the digital broadcasting stream transmitting apparatus 100 of FIG. 1 may insert a multi-view video stream descriptor for accurate distinction among multi-view video streams into the PMT information for the current video stream. The ES extraction unit 220 of the digital broadcasting stream receiving apparatus 200 of FIG. 2 may extract the multi-view video stream descriptor from the PMT information for the current video stream. The multi-view video stream descriptor includes additional information used by the reproduction unit 230 of the digital broadcasting stream receiving apparatus 200 of FIG. 2 to accurately restore and reproduce multi-view video streams.
  • A multi-view video stream descriptor according to an exemplary embodiment may be set as in Table 16.
  • TABLE 16
    Syntax
    multiview_video_stream_descriptor( ){
     descriptor_tag
     descriptor_length
     number_of_views
     mv_numbering
    }
  • Parameter ‘number_of_views’ represents information about the number of views of a multi-view video stream.
  • Parameter ‘mv_numbering’ represents video index information of the multi-view video stream. For example, the value of parameter ‘mv_numbering’ may be set to start at 0 for the leftmost-view video stream and increase by 1 for each video stream to the right of it. In this case, the value of parameter ‘mv_numbering’ of the leftmost-view video stream is 0, and the value of parameter ‘mv_numbering’ of the rightmost-view video stream is the value obtained by subtracting 1 from the value of parameter ‘number_of_views’.
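  • A minimal parser for the descriptor of Table 16 might look as follows; the descriptor tag value and the one-byte field widths are assumptions, since they are not fixed by the table above:
    # Assumed byte layout: tag, length, number_of_views, then one mv_numbering byte per view.
    def parse_multiview_video_stream_descriptor(payload: bytes):
        descriptor_tag = payload[0]
        descriptor_length = payload[1]
        number_of_views = payload[2]
        mv_numbering = list(payload[3:3 + number_of_views])
        return {"tag": descriptor_tag, "length": descriptor_length,
                "number_of_views": number_of_views, "mv_numbering": mv_numbering}

    example = bytes([0xA0, 0x04, 0x03, 0x00, 0x01, 0x02])  # hypothetical 3-view descriptor
    print(parse_multiview_video_stream_descriptor(example))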
  • A location of a base-view video stream in a multi-view video stream may be predicted from parameter ‘linking_priority’ of the link descriptor ‘linking_descriptor’. The value of parameter ‘linking_priority’ of the base-view video stream may be set to 10, and the values of parameter ‘linking_priority’ of the other-view video streams may all be set to 11.
  • By referring to the aforementioned link descriptor and the 3D video stream descriptor, the digital broadcasting stream transmitting apparatus 100 of FIG. 1 may set a partial link descriptor and a 3D video stream descriptor of a left-view video stream corresponding to the base-view video stream as in Table 17 and set a partial link descriptor and a 3D video stream descriptor of a right-view video stream corresponding to an additional-view video stream as in Table 18.
  • TABLE 17
    Syntax Value
    linking_priority 10
    3d_video_property 0000
    linked_stream_coding_mode 000
    full_image_size_indicator 1
  • TABLE 18
    Syntax value
    linking_priority 11
    3d_video_property 0001
    linked_stream_coding_mode 000
    full_image_size_indicator 1
  • If the digital broadcasting stream receiving apparatus 200 of FIG. 2 extracts a link descriptor and a 3D video stream descriptor for a first video stream set as in Table 17, since link priority information for the first video stream is ‘linking_priority==10’ and 3D video property information for the first video stream is ‘3d_video_property==0000’, it may be determined that the first video stream is a left-view video stream of a reference view having higher priority than the opponent video stream associated with the first video stream.
  • If the digital broadcasting stream receiving apparatus 200 of FIG. 2 extracts a link descriptor and a 3D video stream descriptor for a second video stream set as in Table 18, since the link priority information for the second video stream is ‘linking_priority==11’ and the 3D video property information for the second video stream is ‘3d_video_property==0001’, it may be determined that the second video stream is a right-view video stream of an additional view having lower priority than the opponent video stream associated with the second video stream.
  • The digital broadcasting stream receiving apparatus 200 of FIG. 2 may determine that, as the values of parameter ‘linked_stream_coding_mode’ for the first and second video streams are 000, the first and second video streams have been encoded according to an encoding method independent from that used to encode the opponent video stream, and that as the value of parameter ‘full_image_size_indicator’ is 1, the original image size is maintained.
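  • The determination described for Tables 17 and 18 may be expressed as a small decision function; the dictionary layout is an assumption, while the field values follow the two tables:
    # Descriptor values as set in Tables 17 and 18 (binary fields shown as strings).
    first_stream = {"linking_priority": "10", "3d_video_property": "0000",
                    "linked_stream_coding_mode": "000", "full_image_size_indicator": 1}
    second_stream = {"linking_priority": "11", "3d_video_property": "0001",
                     "linked_stream_coding_mode": "000", "full_image_size_indicator": 1}

    def describe(stream):
        view = "left-view (base view)" if stream["3d_video_property"] == "0000" else "right-view (additional view)"
        priority = "higher" if stream["linking_priority"] == "10" else "lower"
        coding = "independently coded" if stream["linked_stream_coding_mode"] == "000" else "jointly coded"
        size = "full original size" if stream["full_image_size_indicator"] == 1 else "reduced size"
        return view + ", " + priority + " priority than its opponent stream, " + coding + ", " + size

    print(describe(first_stream))
    print(describe(second_stream))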
  • FIGS. 14, 15, 16, 17, and 18 are schematic views of reproduction units of digital broadcasting stream receiving apparatuses according to exemplary embodiments. A method in which the reproduction unit 230 of the digital broadcasting stream receiving apparatus 200 of FIG. 2 restores and reproduces 3D or 2D video data from a video stream will now be described with reference to FIGS. 14 through 18.
  • According to a first exemplary embodiment 1400 of the reproduction unit 230, left-view video data 1415 and right-view video data 1425 may be restored by stream decoders 1410 and 1420, respectively. The left-view video data 1415 and the right-view video data 1425 may be converted into a 3D reproduction format that can be reproduced as a 3D video, by a 3D formatter or renderer 1430. A signal of the 3D reproduction format may be reproduced by a 3D display device 1440.
  • According to a second exemplary embodiment 1500 of the reproduction unit 230, 2D video data 1515 and depth/disparity video 1525 may be restored by stream decoders 1510 and 1520, respectively. The 2D video data 1515 and the depth/disparity video 1525 may be converted into the 3D reproduction format by a 3D renderer 1530. A signal of the 3D reproduction format may be reproduced by a 3D display device 1540.
  • According to a third exemplary embodiment 1600 of the reproduction unit 230, 2D video data 1615 and a 3D composite video 1625 may be restored by stream decoders 1610 and 1620, respectively. The 3D composite video 1625 is converted into the 3D reproduction format by a 3D formatter 1630. The 2D video data 1615 and a signal of the 3D reproduction format may undergo a reproduction mode conversion process 1640 based on a user input or an automatic reproduction mode conversion algorithm and then may be selectively reproduced by a 3D display device 1650.
  • According to a fourth exemplary embodiment 1700 of the reproduction unit 230, main 2D video data 1715 and sub 2D video data 1725 may be restored by stream decoders 1710 and 1720, respectively. The main 2D video data 1715 and the sub 2D video data 1725 may undergo a video conversion process 1730 based on a user input or a main/sub video selection algorithm and then may be selectively reproduced by a 2D display device 1740.
  • According to a fifth exemplary embodiment 1800 of the reproduction unit 230, main 2D video data 1815 and sub 2D video data 1825 may be restored by stream decoders 1810 and 1820. The main 2D video data 1815 and the sub 2D video data 1825 may be reproduced in a PIP mode by a 2D display device 1830. In other words, the main 2D video data 1815 may be reproduced on a screen 1840 of the 2D display device 1830 to cover the screen 1840 entirely, and the sub 2D video data 1825 may be reproduced on a partial screen 1850 of the 2D display device 1830.
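  • The PIP composition described above may be sketched as follows; the window position, the scale factor, and the margin are arbitrary illustrative assumptions:
    import numpy as np

    def compose_pip(main_frame, sub_frame, scale=4, margin=16):
        # Nearest-neighbour shrink of the sub video, overlaid in the lower-right corner.
        small = sub_frame[::scale, ::scale]
        out = main_frame.copy()
        h, w = small.shape[:2]
        out[-(h + margin):-margin, -(w + margin):-margin] = small
        return out

    main = np.zeros((1080, 1920, 3), dtype=np.uint8)      # main 2D video frame covering the screen
    sub = np.full((1080, 1920, 3), 255, dtype=np.uint8)   # sub 2D video frame shown on a partial screen
    pip = compose_pip(main, sub)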
  • A main 2D video and a sub 2D video are not videos for achieving a stereoscopic effect of 3D contents but may be videos of mutually associated contents. For example, a 2D video of a main view may contain contents of a baseball game scene, and a 2D video of a sub view may contain contents of additional information of the baseball game, such as stand scenes, analysis information of the pitching posture of the current pitcher, batting average information of the current pitcher, and the like.
  • In this case, the digital broadcasting stream transmitting apparatus 100 of FIG. 1 may set and transmit link information and 3D video stream information for the main 2D video and the sub 2D video, and the digital broadcasting stream receiving apparatus 200 of FIG. 2 may restore the main 2D video and the sub 2D video linked with each other by using the link information and the 3D video stream information and selectively reproduce the main 2D video and the sub 2D video or reproduce the same in the PIP mode. Accordingly, a user may watch a variety of information about a baseball game in a sub view while continuously watching the baseball game in a main view.
  • The digital broadcasting stream transmitting apparatus 100 of FIG. 1 transmits, and the digital broadcasting stream receiving apparatus 200 of FIG. 2 receives, link information between mutually associated video streams for a data stream that provides 3D video and 2D video, thereby realizing a 3D digital video contents broadcasting system while securing compatibility with digital broadcasting systems for 2D contents services.
  • The digital broadcasting stream transmitting apparatus 100 of FIG. 1 and the digital broadcasting stream receiving apparatus 200 of FIG. 2 provide 3D moving picture services that are not restricted by time and place, by providing 2D digital contents and 3D digital contents while securing compatibility with media such as digital video discs (DVDs).
  • FIG. 19 is a flowchart of a digital broadcasting stream transmitting method capable of providing 3D video services, according to an exemplary embodiment.
  • In operation 1910, a plurality of ESs for a plurality of pieces of video information including at least one of information about a base-view video of a 3D video, information about an additional-view video of the 3D video, and a 2D video having a different view from that of the 3D video are generated.
  • In operation 1920, link information between the plurality of pieces of video information and the plurality of ESs are multiplexed to generate at least one TS.
  • In operation 1930, the at least one TS is transmitted via at least one channel.
  • FIG. 20 is a flowchart of a digital broadcasting stream receiving method capable of providing 3D video services, according to an exemplary embodiment.
  • In operation 2010, at least one TS is received via at least one channel. The at least one TS may include a plurality of pieces of video information including at least one of information about a base-view video of a 3D video, information about an additional-view video of the 3D video, and a 2D video having a different view from that of the 3D video.
  • In operation 2020, the at least one received TS is demultiplexed to extract linking information between pieces of video information and at least one ES for the plurality of video information pieces from the TS.
  • In operation 2030, at least one of 3D video data and 2D video data restored by decoding the extracted at least one ES is reproduced in consideration of a link represented by the linking information.
  • The digital broadcasting stream transmitting method of FIG. 19 and the digital broadcasting stream receiving method of FIG. 20 may each be performed by interacting computer programs. In this way, the operations of the digital broadcasting stream transmitting apparatus 100 and the digital broadcasting stream receiving apparatus 200 that are performed according to the exemplary embodiments described above with reference to FIGS. 1, 2, 5-12, and 14-18 may be implemented.
  • Exemplary embodiments can be written as computer programs and can be implemented in general-use digital computers that execute the programs using a computer readable recording medium. Examples of the computer readable recording medium include magnetic storage media (e.g., ROM, floppy disks, hard disks, etc.) and optical recording media (e.g., CD-ROMs, or DVDs). Moreover, one or more units of the apparatuses 100, 200, 300, 400, 600, 800, 900, and 1000 can include a processor or microprocessor executing a computer program stored in a computer-readable medium.
  • While exemplary embodiments have been particularly shown and described above, it will be understood by those of ordinary skill in the art that various changes in form and details may be made therein without departing from the spirit and scope of the present inventive concept as defined by the following claims.

Claims (6)

What is claimed is:
1. A method of transmitting a datastream for providing a stereoscopic video broadcasting service, the method comprising:
generating a first elementary stream including base view video data;
generating a second elementary stream including additional view video data corresponding to the base view video data;
generating a program map table (PMT) including stereo service information indicating whether the stereoscopic service is included in the first and second elementary streams, view identification information indicating which one of the base-view video data and additional-view video data is included in the first elementary stream or the second elementary stream, and which one of the base-view video data and additional-view video data is included in the other, and additional-view resolution information indicating a first scaling ratio in a horizontal direction and a second scaling ratio in a vertical direction for the additional-view video data; and
transmitting the first elementary stream, the second elementary stream and the PMT.
2. The method of claim 1,
wherein the stereo service information is included in a first descriptor in the PMT; and
wherein the view identification information and the additional-view resolution information are included in a second descriptor in the PMT.
3. The method of claim 1,
wherein the base view video data is encoded using an MPEG2 coding technique; and
wherein the additional view video data is encoded using an H.264/AVC coding technique.
4. A method of receiving a datastream for providing a stereoscopic video broadcasting service, the method comprising:
receiving a first elementary stream and a second elementary stream for providing the stereoscopic video broadcasting service, and a Program Map Table (PMT) for the first elementary stream and the second elementary stream;
extracting stereo service information indicating whether the stereoscopic service is included in the first and second elementary streams, from the PMT;
extracting view identification information indicating which one of base-view video data and additional-view video data the first elementary stream and second elementary stream respectively comprise, from a predetermined descriptor in the PMT;
extracting additional-view resolution information indicating a first scaling ratio in a horizontal direction and a second scaling ratio in a vertical direction for the additional-view video data, from the predetermined descriptor in the PMT; and
decoding the base-view video data and the additional-view video data respectively from the first and second elementary streams based on the stereo service information, the view identification information and the additional-view resolution information, and rendering the stereoscopic service using the extracted base- and additional-view video data.
5. The method of claim 4,
wherein the stereo service information is extracted from a first descriptor in the PMT; and
wherein the view identification information and the additional-view resolution information are extracted from a second descriptor in the PMT.
6. The method of claim 4, wherein the rendering the extracted base- and additional-view video data comprises:
decoding the base-view video data extracted from the first elementary stream using an MPEG2 coding technique; and
decoding the additional-view video data extracted from the second elementary stream using an H.264/AVC coding technique.
US14/028,190 2010-01-28 2013-09-16 Method and apparatus for transmitting digital broadcasting stream using linking information about multi-view video stream, and method and apparatus for receiving the same Abandoned US20140015927A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US14/028,190 US20140015927A1 (en) 2010-01-28 2013-09-16 Method and apparatus for transmitting digital broadcasting stream using linking information about multi-view video stream, and method and apparatus for receiving the same

Applications Claiming Priority (5)

Application Number Priority Date Filing Date Title
US29912110P 2010-01-28 2010-01-28
KR10-2010-0044056 2010-05-11
KR1020100044056A KR101694821B1 (en) 2010-01-28 2010-05-11 Method and apparatus for transmitting digital broadcasting stream using linking information of multi-view video stream, and Method and apparatus for receiving the same
US13/016,339 US9055280B2 (en) 2010-01-28 2011-01-28 Method and apparatus for transmitting digital broadcasting stream using linking information about multi-view video stream, and method and apparatus for receiving the same
US14/028,190 US20140015927A1 (en) 2010-01-28 2013-09-16 Method and apparatus for transmitting digital broadcasting stream using linking information about multi-view video stream, and method and apparatus for receiving the same

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
US13/016,339 Continuation US9055280B2 (en) 2010-01-28 2011-01-28 Method and apparatus for transmitting digital broadcasting stream using linking information about multi-view video stream, and method and apparatus for receiving the same

Publications (1)

Publication Number Publication Date
US20140015927A1 true US20140015927A1 (en) 2014-01-16

Family

ID=44926961

Family Applications (2)

Application Number Title Priority Date Filing Date
US13/016,339 Expired - Fee Related US9055280B2 (en) 2010-01-28 2011-01-28 Method and apparatus for transmitting digital broadcasting stream using linking information about multi-view video stream, and method and apparatus for receiving the same
US14/028,190 Abandoned US20140015927A1 (en) 2010-01-28 2013-09-16 Method and apparatus for transmitting digital broadcasting stream using linking information about multi-view video stream, and method and apparatus for receiving the same

Family Applications Before (1)

Application Number Title Priority Date Filing Date
US13/016,339 Expired - Fee Related US9055280B2 (en) 2010-01-28 2011-01-28 Method and apparatus for transmitting digital broadcasting stream using linking information about multi-view video stream, and method and apparatus for receiving the same

Country Status (7)

Country Link
US (2) US9055280B2 (en)
EP (1) EP2517386A4 (en)
JP (1) JP5775884B2 (en)
KR (1) KR101694821B1 (en)
CN (1) CN102835047B (en)
MX (1) MX2012008818A (en)
WO (1) WO2011093677A2 (en)

Families Citing this family (29)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102257823B (en) 2008-12-18 2017-03-08 Lg电子株式会社 For showing digital broadcast receiving method and the digital broacast receiver using the method for stereo-picture
KR20110088334A (en) * 2010-01-28 2011-08-03 삼성전자주식회사 Method and apparatus for generating datastream to provide 3-dimensional multimedia service, method and apparatus for receiving the same
KR101694821B1 (en) * 2010-01-28 2017-01-11 삼성전자주식회사 Method and apparatus for transmitting digital broadcasting stream using linking information of multi-view video stream, and Method and apparatus for receiving the same
CA2797619C (en) * 2010-04-30 2015-11-24 Lg Electronics Inc. An apparatus of processing an image and a method of processing thereof
KR101910192B1 (en) * 2010-11-12 2018-10-22 한국전자통신연구원 Method and apparatus determining image compression format of the 3dtv
KR20120058700A (en) 2010-11-27 2012-06-08 전자부품연구원 Method for transmission format providing of digital broadcasting
KR20120058702A (en) 2010-11-27 2012-06-08 전자부품연구원 Method for providing of service compatible mode in digital broadcasting
IT1403450B1 (en) * 2011-01-19 2013-10-17 Sisvel S P A VIDEO FLOW CONSISTING OF COMBINED FRAME VIDEO, PROCEDURE AND DEVICES FOR ITS GENERATION, TRANSMISSION, RECEPTION AND REPRODUCTION
JPWO2012157443A1 (en) * 2011-05-16 2014-07-31 ソニー株式会社 Image processing apparatus and image processing method
US20130047186A1 (en) * 2011-08-18 2013-02-21 Cisco Technology, Inc. Method to Enable Proper Representation of Scaled 3D Video
WO2013032221A1 (en) * 2011-08-31 2013-03-07 엘지전자 주식회사 Digital broadcast signal processing method and device
US20130070051A1 (en) * 2011-09-20 2013-03-21 Cheng-Tsai Ho Video encoding method and apparatus for encoding video data inputs including at least one three-dimensional anaglyph video, and related video decoding method and apparatus
US9432653B2 (en) * 2011-11-07 2016-08-30 Qualcomm Incorporated Orientation-based 3D image display
KR101998892B1 (en) * 2012-03-01 2019-07-10 소니 주식회사 Transmitter, transmission method and receiver
WO2013149655A1 (en) * 2012-04-04 2013-10-10 Naxos Finance Sa System for generating and receiving a stereoscopic-2d backward compatible video stream, and method thereof
US9584793B2 (en) 2012-04-09 2017-02-28 Intel Corporation Signaling three-dimensional video information in communication networks
US9596450B2 (en) * 2012-05-24 2017-03-14 Panasonic Corporation Video transmission device, video transmission method, and video playback device
US8730300B1 (en) * 2013-01-15 2014-05-20 Byron J. Willner Three dimensional television encoding and broadcasting method
CN104284191B (en) * 2013-07-03 2018-12-18 乐金电子(中国)研究开发中心有限公司 Video coding-decoding method and Video Codec
KR20150090432A (en) * 2014-01-29 2015-08-06 한국전자통신연구원 Broadcasting transmission apparatus and method thereof for simulcast broadcasting
US10271094B2 (en) * 2015-02-13 2019-04-23 Samsung Electronics Co., Ltd. Method and device for transmitting/receiving media data
BR112018009070A8 (en) * 2015-11-11 2019-02-26 Sony Corp coding and decoding apparatus, and methods for coding by a coding apparatus and for decoding by a decoding apparatus.
US20170208315A1 (en) * 2016-01-19 2017-07-20 Symbol Technologies, Llc Device and method of transmitting full-frame images and sub-sampled images over a communication interface
EP3223524A1 (en) * 2016-03-22 2017-09-27 Thomson Licensing Method, apparatus and stream of formatting an immersive video for legacy and immersive rendering devices
US10771791B2 (en) * 2016-08-08 2020-09-08 Mediatek Inc. View-independent decoding for omnidirectional video
CN107645647A (en) * 2017-09-21 2018-01-30 京信通信系统(中国)有限公司 A kind of multichannel audio-video frequency transmission method and device
CN111435991B (en) * 2019-01-11 2021-09-28 上海交通大学 Point cloud code stream packaging method and system based on grouping
CN112929636A (en) * 2019-12-05 2021-06-08 北京芯海视界三维科技有限公司 3D display device and 3D image display method
CN112383816A (en) * 2020-11-03 2021-02-19 广州长嘉电子有限公司 ATSC system signal analysis method and system based on android system intervention

Family Cites Families (26)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH09247119A (en) * 1996-03-11 1997-09-19 Oki Electric Ind Co Ltd Multiplexer
GB9930787D0 (en) * 1999-12-30 2000-02-16 Koninkl Philips Electronics Nv Method and apparatus for convrerting data streams
JP4497497B2 (en) 2001-01-22 2010-07-07 日本テレビ放送網株式会社 3D video signal transmission method and system
EP1457697B1 (en) 2002-10-23 2005-08-17 BorgWarner Inc. Method of making a friction disc
JP4190357B2 (en) 2003-06-12 2008-12-03 シャープ株式会社 Broadcast data transmitting apparatus, broadcast data transmitting method, and broadcast data receiving apparatus
KR100585966B1 (en) * 2004-05-21 2006-06-01 한국전자통신연구원 The three dimensional video digital broadcasting transmitter- receiver and its method using Information for three dimensional video
KR20060069959A (en) 2004-12-20 2006-06-23 엘지노텔 주식회사 Base station antenna apparatus for mobile communication system
WO2006075902A1 (en) 2005-01-14 2006-07-20 Samsung Electronics Co., Ltd. Method and apparatus for category-based clustering using photographic region templates of digital photo
KR100790867B1 (en) 2005-01-14 2008-01-03 삼성전자주식회사 Method and apparatus for category-based photo clustering using photographic region templates of digital photo
KR100657322B1 (en) * 2005-07-02 2006-12-14 삼성전자주식회사 Method and apparatus for encoding/decoding to implement local 3d video
KR100818933B1 (en) 2005-12-02 2008-04-04 한국전자통신연구원 Method for 3D Contents Service based Digital Broadcasting
KR100747598B1 (en) * 2005-12-09 2007-08-08 한국전자통신연구원 System and Method for Transmitting/Receiving Three Dimensional Video based on Digital Broadcasting
KR100810318B1 (en) 2006-02-08 2008-03-07 삼성전자주식회사 Digital multimedia broadcasting conditional access system and method thereof
KR101328946B1 (en) * 2007-03-26 2013-11-13 엘지전자 주식회사 method for transmitting/receiving a broadcast signal and apparatus for receiving a broadcast signal
MY162861A (en) 2007-09-24 2017-07-31 Koninl Philips Electronics Nv Method and system for encoding a video data signal, encoded video data signal, method and system for decoding a video data signal
KR101390810B1 (en) * 2007-10-04 2014-05-26 삼성전자주식회사 Method and apparatus for receiving image data stream comprising parameters for displaying local three dimensional image, and method and apparatus for generating image data stream comprising parameters for displaying local three dimensional image
KR100993428B1 (en) * 2007-12-12 2010-11-09 한국전자통신연구원 Method and Apparatus for stereoscopic data processing based on digital multimedia broadcasting
EP2088789A3 (en) * 2008-02-05 2012-08-15 Samsung Electronics Co., Ltd. Apparatus and method for generating and displaying media files
KR101506219B1 (en) 2008-03-25 2015-03-27 삼성전자주식회사 Method and apparatus for providing and reproducing 3 dimensional video content, and computer readable medium thereof
KR101154051B1 (en) 2008-11-28 2012-06-08 한국전자통신연구원 Apparatus and method for multi-view video transmission and reception
US8290338B2 (en) * 2009-05-27 2012-10-16 Panasonic Corporation Recording medium, playback device, encoding device, integrated circuit, and playback output device
RU2535443C2 (en) * 2009-09-25 2014-12-10 Панасоник Корпорэйшн Recording medium, playback device and integrated circuit
US8462197B2 (en) * 2009-12-17 2013-06-11 Motorola Mobility Llc 3D video transforming device
KR101694821B1 (en) * 2010-01-28 2017-01-11 삼성전자주식회사 Method and apparatus for transmitting digital broadcasting stream using linking information of multi-view video stream, and Method and apparatus for receiving the same
WO2011122914A2 (en) * 2010-04-02 2011-10-06 삼성전자 주식회사 Method and apparatus for transmitting digital broadcast content for providing two-dimensional and three-dimensional content, and method and apparatus for receiving digital broadcast content
JP5577823B2 (en) 2010-04-27 2014-08-27 ソニー株式会社 Transmitting apparatus, transmitting method, receiving apparatus, and receiving method

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20100110162A1 (en) * 2006-09-29 2010-05-06 Electronics And Telecomunications Research Institute Method and apparatus for providing 3d still image service over digital broadcasting
US20110254920A1 (en) * 2008-11-04 2011-10-20 Electronics And Telecommunications Research Institute Apparatus and method for synchronizing stereoscopic image, and apparatus and method for providing stereoscopic image based on the same
US20110261158A1 (en) * 2008-12-30 2011-10-27 Lg Electronics Inc. Digital broadcast receiving method providing two-dimensional image and 3d image integration service, and digital broadcast receiving device using the same
US20110012992A1 (en) * 2009-07-15 2011-01-20 General Instrument Corporation Simulcast of stereoviews for 3d tv
US20110090306A1 (en) * 2009-10-13 2011-04-21 Lg Electronics Inc. Broadcast receiver and 3d video data processing method thereof

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2015152635A1 (en) * 2014-04-02 2015-10-08 엘지전자 주식회사 Apparatus for transmitting and receiving signal and method for transmitting and receiving signal
KR101838081B1 (en) * 2014-04-02 2018-04-26 엘지전자 주식회사 Apparatus for transmitting and receiving signal and method for transmitting and receiving signal
WO2017171391A1 (en) * 2016-03-30 2017-10-05 엘지전자 주식회사 Method and apparatus for transmitting and receiving broadcast signals

Also Published As

Publication number Publication date
JP2013518506A (en) 2013-05-20
EP2517386A4 (en) 2015-08-05
US9055280B2 (en) 2015-06-09
CN102835047B (en) 2016-02-24
WO2011093677A2 (en) 2011-08-04
KR20110088332A (en) 2011-08-03
MX2012008818A (en) 2012-09-28
WO2011093677A3 (en) 2012-01-05
JP5775884B2 (en) 2015-09-09
KR101694821B1 (en) 2017-01-11
EP2517386A2 (en) 2012-10-31
CN102835047A (en) 2012-12-19
US20110181694A1 (en) 2011-07-28

Similar Documents

Publication Publication Date Title
US9055280B2 (en) Method and apparatus for transmitting digital broadcasting stream using linking information about multi-view video stream, and method and apparatus for receiving the same
JP5785193B2 (en) Data stream generating method and apparatus for providing 3D multimedia service, data stream receiving method and apparatus for providing 3D multimedia service
JP6034420B2 (en) Method and apparatus for generating 3D video data stream in which additional information for playback of 3D video is inserted and apparatus thereof, and method and apparatus for receiving 3D video data stream in which additional information for playback of 3D video is inserted
JP6181848B2 (en) Method and apparatus for processing 3D broadcast signals
US9210354B2 (en) Method and apparatus for reception and transmission
US20110273532A1 (en) Apparatus and method of transmitting stereoscopic image data and apparatus and method of receiving stereoscopic image data
US20140078248A1 (en) Transmitting apparatus, transmitting method, receiving apparatus, and receiving method
US8953019B2 (en) Method and apparatus for generating stream and method and apparatus for processing stream
US9270972B2 (en) Method for 3DTV multiplexing and apparatus thereof
WO2013069608A1 (en) Transmitting apparatus, transmitting method, receiving apparatus and receiving method
KR20140000136A (en) Image data transmitter, image data transmission method, image data receiver, and image data reception method
WO2013054775A1 (en) Transmission device, transmission method, receiving device and receiving method
JP5928118B2 (en) Transmitting apparatus, transmitting method, receiving apparatus, and receiving method
JP5961717B2 (en) Receiving device, receiving method, and transmitting / receiving method
JP2011254277A (en) Reception device, reception method and transmission/reception method

Legal Events

Date Code Title Description
STCB Information on status: application discontinuation

Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION