WO2012052610A1 - Multiplexing data over multiple transmission channels with time synchronization - Google Patents

Publication number
WO2012052610A1
WO2012052610A1 (application PCT/FI2011/050787)
Authority
WO
WIPO (PCT)
Prior art keywords
data
synchronization
service
service data
transmission
Prior art date
Application number
PCT/FI2011/050787
Other languages
French (fr)
Inventor
Imed Bouazizi
Lukasz Kondrad
Original Assignee
Nokia Corporation
Priority date
Filing date
Publication date
Application filed by Nokia Corporation filed Critical Nokia Corporation
Publication of WO2012052610A1 publication Critical patent/WO2012052610A1/en

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04H BROADCAST COMMUNICATION
    • H04H 20/00 Arrangements for broadcast or for distribution combined with broadcast
    • H04H 20/42 Arrangements for resource management
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04H BROADCAST COMMUNICATION
    • H04H 20/00 Arrangements for broadcast or for distribution combined with broadcast
    • H04H 20/53 Arrangements specially adapted for specific applications, e.g. for traffic information or for mobile receivers
    • H04H 20/57 Arrangements specially adapted for mobile receivers
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N 21/43 Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N 21/4302 Content synchronisation processes, e.g. decoder synchronisation
    • H04N 21/4305 Synchronising client clock from received content stream, e.g. locking decoder clock with encoder clock, extraction of the PCR packets
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/60 Network structure or processes for video distribution between server and client or between remote clients; Control signalling between clients, server and network components; Transmission of management data between server and client, e.g. sending from server to client commands for recording incoming content stream; Communication details between server and client
    • H04N 21/63 Control signaling related to video distribution between client, server and network components; Network processes for video distribution between server and clients or between remote clients, e.g. transmitting basic layer and enhancement layers over different transmission paths, setting up a peer-to-peer communication via Internet between remote STB's; Communication protocols; Addressing
    • H04N 21/631 Multimode Transmission, e.g. transmitting basic layers and enhancement layers of the content over different transmission paths or transmitting with different error corrections, different keys or with different transmission protocols
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/80 Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N 21/85 Assembly of content; Generation of multimedia applications
    • H04N 21/854 Content authoring
    • H04N 21/8547 Content authoring involving timestamps for synchronizing content

Definitions

  • Broadcast or multicast systems support the delivery of data services such as digital video broadcasting (DVB).
  • Data services delivered over broadcast or multicast systems may include separate service components, including audio components, video components, enhanced video components, data components, and so forth.
  • Multiple video components may provide different quality levels through scalable video coding (SVC), or the left and right views of multi-view video coding (MVC), when delivering a service to a user.
  • A DVB system may packetize the data into frames, for example, frames having a preamble with P1 and P2 symbols used for Layer 1 (L1) signaling.
  • a transmitter may use different physical layer pipes (PLPs) to carry the service data.
  • PLPs may denote different physical layer time division multiplex (TDM) channels that are carried by specified slices, where a slice is a group of cells from a single PLP, which before frequency interleaving, is transmitted on active Orthogonal Frequency Division Multiplexing (OFDM) cells with consecutive addresses over a single radio frequency (RF) channel.
  • Different PLPs may carry data that has been modulated using schemes based on different constellations or other modulation parameters, where data in different PLPs may be coded using different Forward Error Correction (FEC) schemes.
  • Embodiments may include apparatuses, computer media, and methods for receiving data over a plurality of transmission channels, and storing the received data in a plurality of synchronization buffers corresponding to the plurality of transmission channels.
  • the data may be received at, for example, a mobile device receiving service data via broadcast or multicast transmissions (e.g., DVB Handheld (DVB-H) or DVB Next Generation Handheld (DVB-NGH)) over a plurality of physical layer pipes (PLPs).
  • the receiver may also receive a synchronization tolerance value and may set a synchronization buffer delay timer based on the synchronization tolerance value. After the synchronization buffer delay timer has elapsed, the receiver may then forward the data from the synchronization buffers to a service data processing application.
  • Additional embodiments may include apparatuses, computer media, and methods for receiving service data at a transmitter, splitting the service data into a plurality of service components, and assigning each of the plurality of service components to a different transmission channel buffer.
  • the transmitter may be, for example, a broadcast or multicast transmitter providing service data (e.g., media data such as DVB-H, DVB-NGH, etc.) over multiple transmission channels (e.g., physical layer pipes (PLPs)).
  • the transmitter may determine a synchronization schedule, based on a synchronization tolerance value, for transmitting the plurality of service components from the plurality of transmission channel buffers to a receiver, may transmit the synchronization tolerance value to the receiver, and may transmit the plurality of service components to the receiver in accordance with the synchronization schedule.
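The transmitter-side behavior described above can be sketched as a small scheduling helper. This is a minimal illustration, not the patented method: the component names, the 100 ms tolerance, and the even slot spacing are all assumptions introduced for the example.

```python
def build_schedule(components, sync_tolerance_s, slot_spacing_s):
    """Assign each service component's burst a start offset so that the
    skew between the earliest and latest burst stays within the
    synchronization tolerance signaled to the receiver."""
    if (len(components) - 1) * slot_spacing_s > sync_tolerance_s:
        raise ValueError("slot spacing would exceed the synchronization tolerance")
    # One time slot per component, in order of assignment to channel buffers.
    return {comp: i * slot_spacing_s for i, comp in enumerate(components)}

# Hypothetical values: three components, 100 ms tolerance, 40 ms slots.
schedule = build_schedule(["base", "enh1", "enh2"],
                          sync_tolerance_s=0.100, slot_spacing_s=0.040)
# Skew between first and last burst is 80 ms, within the 100 ms tolerance.
```

The key invariant is only that the burst start times for one service spread by no more than the advertised tolerance; how the slots are actually placed is up to the multiplexer.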
  • FIG. 1 is a diagram of an example digital broadcast system in which one or more embodiments may be implemented.
  • FIG. 2 is a block diagram of an example communication device.
  • FIG. 3 is a block diagram illustrating data transmission over multiple physical layer pipes (PLPs) in accordance with at least some embodiments.
  • FIG. 4 is a block diagram illustrating data reception over multiple physical layer pipes (PLPs) in accordance with at least some embodiments.
  • FIG. 5 is a flow diagram illustrating a method of receiving and processing data over multiple physical layer pipes (PLPs) in accordance with at least some embodiments.
  • FIG. 6 is a flow diagram illustrating a method of transmitting a service over multiple physical layer pipes (PLPs) in accordance with at least some embodiments.
  • FIG. 7 shows an example of L1-post dynamic signaling content.
  • FIG. 8 is a flow diagram illustrating a method of receiving and processing L1-post dynamic signaling content in accordance with at least some embodiments.
  • FIG. 1 illustrates a suitable digital broadband broadcast system 102 in which one or more illustrative embodiments may be implemented.
  • Systems such as the one illustrated here may utilize a digital broadband broadcast technology, for example Digital Video Broadcast (DVB) - Handheld (DVB-H) or next generation DVB networks (DVB-NGH).
  • Examples of other digital broadcast standards which digital broadband broadcast system 102 may utilize include Digital Video Broadcast - Terrestrial (DVB-T), second generation digital terrestrial television broadcasting system (DVB-T2), Integrated Services Digital Broadcasting - Terrestrial (ISDB-T), Advanced Television Systems Committee (ATSC) Data Broadcast Standard, Digital Multimedia Broadcast-Terrestrial (DMB-T), Terrestrial Digital Multimedia Broadcasting (T-DMB), Satellite Digital Multimedia Broadcasting (S-DMB), Forward Link Only (FLO), Digital Audio Broadcasting (DAB), and Digital Radio Mondiale (DRM).
  • Other digital broadcasting standards and techniques now known or later developed, may also be used.
  • Aspects of the example embodiments may also be applicable to other multicarrier digital broadcast systems such as, for example, T-DAB, T/S-DMB, ISDB-T, and ATSC, proprietary systems such as QUALCOMM MediaFLO / FLO, and non-traditional systems such as 3GPP MBMS (Multimedia Broadcast/Multicast Services) and 3GPP2 BCMCS (Broadcast/Multicast Service).
  • Digital content sources 104 may create and/or provide service data, that is, digital content corresponding to a data service.
  • Service data may include any digital content, for example, media data such as video signals, audio signals, data, metadata, and so forth.
  • Digital content sources 104 may provide service data to digital transmitter 103 in the form of digital packets, for example, Internet Protocol (IP) packets.
  • a group of related data packets sharing a certain unique data address (for example, IP address) or other source identifier may be referred to as a data flow.
  • The data flows may be data streams such as, for example, IP streams.
  • Digital transmitter 103 may receive, process, and forward for transmission the service data, for example, as a transport stream that may include multiple data flows from multiple digital content sources 104.
  • the transmitter 103 may be a communication terminal including at least one processor 120 and at least one memory 122 or other computer readable media configured to store instructions that, when executed by the processor, are configured to cause the transmitter 103 to perform the operations described herein.
  • the transport stream may then be passed, for example, through a specific DVB transmitter 124, to digital broadcast tower 105 (or other physical transmission component) for wireless transmission.
  • other communication terminals such as mobile terminals or devices 112, or other types of receivers, may selectively receive and consume digital content originating from digital content sources 104.
  • transport streams may deliver compressed audio and video and data to a mobile device 112 via third party delivery networks.
  • The Moving Picture Experts Group (MPEG) transport stream is a technology by which encoded video, audio, and data within a single program are multiplexed, with other programs, into the transport stream.
  • the transport stream may be a packetized data stream, with fixed length packets, including a header.
  • The individual elements of a program, such as audio and video, are each carried within packets having a unique packet identifier (PID).
  • Electronic Service Guide (ESG) fragments describe one or several aspects of currently available (or future) services or broadcast programs. Such aspects may include, for example: free text description, schedule, geographical availability, price, purchase method, genre, and supplementary information such as preview images or clips. Audio, video and other types of data including the ESG fragments may be transmitted through a variety of types of networks according to many different protocols. For example, data can be transmitted through a collection of networks usually referred to as the "Internet" using protocols of the Internet protocol suite, such as Internet Protocol (IP) and User Datagram Protocol (UDP). Data is often transmitted through the Internet addressed to a single user. It can, however, be addressed to a group of users, a practice commonly known as multicasting. In the case in which the data is addressed to all users, it is called broadcasting.
  • DVB-H is designed to deliver 10 Mbps of data to a battery-powered terminal device.
  • ESG fragments may be transported by IP datacasting (IPDC) over a network, such as, for example, DVB-H, to destination devices.
  • the DVB-H may include, for example, separate audio, video and data flows.
  • the mobile device 112 may determine the ordering of the ESG fragments upon receipt and assemble them into useful information.
  • Multi-Protocol Encapsulation (MPE) may form an MPE section by encapsulating protocol data units (PDUs), for example, IP data packets. Each MPE section may be sent as a series of transport stream packets in a single transport stream. MPE may support data broadcast services that require transmission of datagrams or communication protocols via DVB compliant broadcast networks.
  • The Generic Stream Encapsulation (GSE) protocol may provide network layer packet encapsulation and fragmentation functions over generic streams.
  • GSE may provide efficient encapsulation of IP datagrams over variable length Layer 2 packets, which may then be directly scheduled on the physical layer (for example, base-band frames in DVB-T2).
  • GSE may support encapsulation of multiple protocols (Internet Protocol version 4 (IPv4), Internet Protocol version 6 (IPv6), Moving Picture Experts Group (MPEG) streams, asynchronous transfer mode (ATM), Ethernet, etc.) and may permit inclusion of new protocol types.
  • GSE also supports several addressing modes. IP datagrams may be encapsulated in one or more GSE Packets.
  • The apparatus (e.g., mobile device or other communication terminal) may include at least one processor 128 connected to user interface 130, at least one memory 134 and/or other storage, and display 136, which may be used for displaying video content, service guide information, and the like to a mobile-device user.
  • Mobile device 112 may also include battery 150, speaker 152 and antennas 154.
  • User interface 130 may further include a keypad, touch screen, voice interface, one or more arrow keys, joy-stick, data glove, mouse, roller ball, or the like. In some embodiments, there may be several processors and/or memories in mobile device 112.
  • Computer executable instructions and data used by processor 128 and other components within mobile device 112 may be stored in a computer readable memory 134.
  • the memory may be implemented with any combination of read only memory modules or random access memory modules, optionally including both volatile and nonvolatile memory.
  • Software 140 may be stored within memory 134 and/or storage to provide instructions to processor 128 for enabling mobile device 112 to perform various functions as described herein.
  • some or all of mobile device 112 computer executable instructions may be embodied in hardware or firmware (not shown).
  • Mobile device 112 may be configured to receive, decode and process digital broadband broadcast transmissions that are based, for example, on the Digital Video Broadcast (DVB) standard, such as DVB-H (ETSI EN 302 304, V1.1.1 (2004-11)) or DVB-NGH or DVB-T (ETSI EN 300 744, V1.6.1 (2009-01)) or DVB-T2 (ETSI EN 302 755, V1.1.1 (2009-09)), the contents of all of which are incorporated herein by reference in their entireties, through a specific DVB receiver 141.
  • the mobile device 112 may also be provided with other types of receivers for digital broadband broadcast and/or multicast transmissions.
  • mobile device 112 may also be configured to receive, decode and process transmissions through frequency modulated (FM) / amplitude modulated (AM) radio receiver 142, wireless local area network (WLAN) transceiver 143, and telecommunications transceiver 144.
  • Mobile device 112 may receive Radio Data System (RDS) messages.
  • the digital transmission may be time sliced, such as in DVB-H or DVB-NGH technology.
  • Time slicing may reduce the average power consumption of the mobile device 112 and may enable smooth and seamless handover.
  • Time-slicing entails sending data in bursts using a higher instantaneous bit rate as compared to the bit rate required if the data were transmitted using a traditional streaming mechanism.
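The power-saving arithmetic behind time-slicing can be made concrete. The bit rates below are illustrative assumptions, not figures from the specification:

```python
def receiver_duty_cycle(stream_bitrate_bps, burst_bitrate_bps):
    """Fraction of time the receiver front end must stay on to collect a
    stream delivered in bursts at a higher instantaneous bit rate."""
    return stream_bitrate_bps / burst_bitrate_bps

# A hypothetical 400 kbps service component sent in 8 Mbps bursts keeps
# the receiver active only about 5% of the time.
duty = receiver_duty_cycle(400_000, 8_000_000)  # 0.05
```

The higher the burst bit rate relative to the stream's average bit rate, the shorter the fraction of time the receiver must stay powered on.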
  • The mobile device 112 may have one or more buffer memories for storing the decoded time sliced transmission before presentation.
  • Referring now to FIG. 3, an example is shown of a time-sliced data service transmitted over three physical layer pipes (PLPs) 301-303 using the DVB-NGH digital broadcast standard.
  • the data service has been split by the digital transmitter 103 into three separate service components 311-313.
  • the first service component 311 may comprise a base component that provides audio and a base layer of SVC video (e.g., QVGA data format)
  • the second component 312 may comprise a first enhancement layer to increase the video resolution from QVGA to VGA format
  • the third component 313 may comprise a second enhancement layer to improve the fidelity of the VGA video.
  • Any number of PLPs may be used (e.g., 4, 5, 6, 7, and so on) to transmit different service components of a service.
  • the service components 311-313 may be delivered to the DVB- NGH transmitter 124 as continuous data streams over separate real-time transport protocol (RTP) sessions using multiple session transmission (MST) mode.
  • each PLP may have different coding and modulation parameters for transmitting data. Therefore, the data streams for service components 311-313 may be separately processed by the DVB-NGH transmitter 124 in accordance with the coding and modulation schemes used by the corresponding PLPs 301-303.
  • the data transmissions may be scheduled by the DVB-NGH system in time sliced mode.
  • the data streams / RTP sessions 311-313 associated with a service may be stored in separate PLP buffers associated with the service within the DVB-NGH transmitter 124 or elsewhere within the memory 122 of the transmitter 103.
  • The service component data for each PLP 301-303 may then be time-sliced and sent over the transmission channels as data bursts at higher bit rates than the bit rates of the service components being transmitted.
  • the bit rates of transmission bursts 321-323 are higher than the bit rates of the data streams / RTP sessions 311-313 (e.g., audio-video streams) from which the bursts are generated.
  • Time slicing enables the DVB receiver 141 to stay active only during a fraction of the time while receiving bursts of a requested service, thus potentially saving power / battery life at the receiver device 112.
  • the bursts 321-323 are sent over PLPs 301-303 at different times by buffering and scheduling transmissions within the DVB-NGH transmitter 124.
  • If the receiver device 112 only requests a level of service (i.e., operation point) corresponding to the base service component (e.g., audio and QVGA data), the DVB-NGH receiver 141 need only stay active to receive transmissions over PLP1 301.
  • If the receiver device 112 requests a higher operation point, then the DVB-NGH receiver 141 may stay active longer to receive additional service components, for example, a first enhancement layer 312 over PLP2 (302) and/or a second enhancement layer 313 over PLP3 (303).
  • Referring now to FIG. 4, an example is shown corresponding to FIG. 3 in which a receiver device 112 buffers and processes a time-sliced service transmitted as three separate service components over three PLPs 301-303.
  • the DVB-NGH receiver 141 of the receiver device 112 receives three different data bursts 321-323 corresponding to different service components of a single service.
  • the data in bursts 321-323 may correspond to the same or overlapping time periods within the service.
  • bursts 321-323 may contain different service components (e.g., audio track, base video layer, enhanced video layers, etc.) for the same time window within the service broadcast program.
  • different service components e.g., audio track, base video layer, enhanced video layers, etc.
  • FIG. 4 illustrates an example of re-synchronizing the service components at the receiver 112 that were time-sliced by the transmitter 103 in FIG. 3.
  • the receiver 112 includes three synchronization buffers 411-413 associated with the three PLPs 301-303.
  • FIG. 4 illustrates the arrival of the data bursts 321-323, and the input / output of the synchronization buffers 411-413. Burst 321 is received and stored in buffer 411 at time tl, burst 322 is received and stored in buffer 412 at time t2, and burst 323 is received and stored in buffer 413 at time t3.
  • Each of the synchronization buffers 411-413 outputs its respective data burst 321-323 to a service data processing application (e.g., upper layer processing 420) on the receiver device 112. Therefore, using the example described above in FIGS. 3 and 4, data bursts 321-323 from different service components that correspond to the same time window within the service broadcast program may be time-sliced and then re-synchronized so that the synchronization process is transparent to the upper layer processing 420 of the receiver 112.
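The buffering behavior of FIG. 4 can be sketched as follows. This is a simplified model, not the patented implementation: the PLP numbers and arrival times mirror the figure's example, while the 120 ms delay value and the API shape are assumptions.

```python
class SyncBuffers:
    """Per-PLP synchronization buffers that hold incoming bursts and
    release them together once the synchronization buffer delay timer
    has elapsed."""

    def __init__(self, plp_ids, delay_s):
        self.buffers = {plp: [] for plp in plp_ids}
        self.delay_s = delay_s
        self.timer_start = None

    def store(self, plp, burst, now_s):
        if self.timer_start is None:
            self.timer_start = now_s  # first stored burst starts the timer
        self.buffers[plp].append(burst)

    def release(self, now_s):
        """Return all buffered bursts once the timer has elapsed;
        return None while still waiting."""
        if self.timer_start is None or now_s - self.timer_start < self.delay_s:
            return None
        out = {plp: bursts[:] for plp, bursts in self.buffers.items()}
        for bursts in self.buffers.values():
            bursts.clear()
        return out

sync = SyncBuffers([301, 302, 303], delay_s=0.120)
sync.store(301, "burst321", now_s=0.000)  # t1: base component
sync.store(302, "burst322", now_s=0.040)  # t2: first enhancement layer
sync.store(303, "burst323", now_s=0.080)  # t3: second enhancement layer
assert sync.release(now_s=0.100) is None  # timer not yet elapsed
released = sync.release(now_s=0.130)      # all components released together
```

Because all three buffers drain at once, the upper layers see time-aligned service components regardless of the staggered burst arrivals.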
  • In step 501, a receiver 112 receives or determines/selects an operation point for receiving one or more services on the device.
  • Certain embodiments allow different receivers 112 to receive services at different operation points, corresponding to different levels of media quality and/or other limitations (e.g., temporal or spatial restrictions) on a received service.
  • services may be split into multiple service components to allow for quality scalability, for example, using Coarse Granular Scalability (CGS) or Medium Granular Scalability (MGS) quality scalability systems.
  • a first receiver may select a lower operation point to receive fewer service components and have a lower video quality, while a second receiver may select a higher operation point to receive more service components and have a higher video quality.
  • Operation points may correspond to quality levels of video (e.g., QVGA, VGA, etc.) or temporal or spatial limitations of video (e.g., full-screen, reduced screen) in a received service, and may also apply to other data components of the service such as audio, data, metadata, etc.
  • a mobile receiver 112 without audio capability may select an operation point that does not include an audio track, while another mobile receiver 112 with audio capability may select an operation point that includes audio.
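A receiver's mapping from operation point to the set of PLPs it must stay awake for could be represented as a simple table. The operation-point names and PLP numbers below echo the FIG. 3 example but are otherwise hypothetical:

```python
# Hypothetical operation-point table for one service: each entry lists
# the PLPs the receiver must receive for that quality level.
OPERATION_POINTS = {
    "audio+qvga": [301],            # base service component only
    "vga":        [301, 302],       # base + first enhancement layer
    "vga-hq":     [301, 302, 303],  # base + both enhancement layers
}

def plps_for_operation_point(point):
    """Return the PLPs the receiver must stay active for."""
    return OPERATION_POINTS[point]
```

A receiver selecting a lower operation point simply drops the trailing enhancement-layer PLPs from the set and can power down sooner in each burst cycle.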
  • the selection or determination of an operation point in step 501 may be based on, for example, a Session Description Protocol (SDP) file (IP use case), or program-specific information (PSI/SI) tables (transport stream use case) stored in the memory 134 of the receiver 112.
  • The operation point also may be determined automatically based on the hardware capabilities of the receiver 112, or may be selected by a user of the receiver device 112 based on the desired quality level for receiving the service.
  • Service data (e.g., a DVB media program) arrives at a receiver 112 from a plurality of PLPs.
  • The data may be broadcast or multicast data transmitted using the DVB-NGH digital broadcast standard, and the plurality of PLPs (e.g., 301-303) may correspond to different service components (e.g., audio and video components, video enhancement layers, etc.) of the service.
  • the receiver 112 determines which PLPs should be received based on the operation point selected in step 501, and receives those PLPs for the incoming service.
  • The receiver 112 may use frame headers, metadata, or other signaling data transmitted with the service, for example, the L1 or L2 signaling of the DVB-NGH standard broadcast system.
  • An example signaling technique using L1-post dynamic signaling is discussed below in reference to FIGS. 7 and 8.
  • the receiver 112 may also identify (using signaling or other techniques) a base PLP that corresponds to a lowest quality base service component (e.g., audio and QVGA video).
  • the base PLP may be required for all receivers 112 that receive the service, and all other PLPs for the same service may correspond to service components that build on and improve the quality of the required base PLP.
  • In step 503, the receiver 112 receives a data unit from one of the PLPs and determines if that PLP is the base PLP for the service.
  • the individual data units transmitted over the PLPs may take any number of different forms, for example, IP datagrams, RTP sessions for SVC data, packetized elementary stream (PES) packets for MPEG-2 Transport Streams, and other well-known data transmission techniques. Accordingly, the determination of whether a PLP is a base PLP may differ depending on the type of transmitted data.
  • A PLP identifier may be transmitted with the service data, for example, in frame headers, metadata, or signaling data of a media frame. In the L1-post signaling example discussed below in FIGS. 7 and 8, the receiver 112 may determine whether a PLP is the base PLP by comparing a PLP_ID field to a PLP_INIT_SYNC_BASE_PLP_ID field. If the service data was received from the base PLP (503: Yes), then the example method proceeds to step 504. In step 504, the receiver 112 determines whether a synchronization buffer delay timer has been initialized for this service at the receiver 112.
  • the synchronization buffer delay timer defines a period of time during which all of the synchronization buffers for the service (e.g., 411-413) will continue to process and buffer service data.
  • If the timer has not yet been initialized, the method in this example will proceed to initialize the timer in step 505.
  • the duration of the synchronization buffer delay timer may depend on the transmission schedule of the PLPs from the transmitter 103. As shown in the examples of FIGS. 3 and 4, a transmitter 103 may time-slice the transmission of service data between PLPs 301-303, so that a first data burst is transmitted over the base PLP 301 at time tl, a second data burst is transmitted over a second PLP 302 at time t2, and so on.
  • the synchronization buffer delay timer may be initialized for an amount of time sufficient for the receiver 112 to receive and buffer a single data burst from each PLP.
  • the receiver 112 may determine this amount of time from the transmitter 103, for example, using the frame headers, metadata, or other signaling data transmitted with the service data.
  • The receiver 112 may calculate the length of the synchronization buffer delay timer using the PLP_INIT_SYNC_BUF fields for the PLPs of the receiver's selected operation point.
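A receiver-side calculation of the delay timer from per-PLP signaling could look like the sketch below. The field name comes from the text, but the tick-based encoding, the tick rate, and the additive guard for transmission and processing delays are all assumptions, not the L1-post format:

```python
def delay_timer_from_signaling(plp_init_sync_buf, ticks_per_second, guard_s=0.0):
    """Derive the synchronization buffer delay timer from the largest
    PLP_INIT_SYNC_BUF value among the PLPs of the selected operation
    point, plus a guard interval for reception/processing delays.
    The encoding in ticks is a hypothetical representation."""
    return max(plp_init_sync_buf.values()) / ticks_per_second + guard_s

# Illustrative values: PLP 303 signals the largest offset (800 ticks at
# an assumed 10 kHz tick rate), and a 20 ms guard is added.
timer_s = delay_timer_from_signaling({301: 0, 302: 400, 303: 800},
                                     ticks_per_second=10_000,
                                     guard_s=0.020)
```

Taking the maximum over the selected PLPs ensures the timer is long enough to capture one burst from every component of the chosen operation point.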
  • the transmitter 103 may use other techniques to provide to the receiver 112 a synchronization tolerance (or 'synchronization skew') value to inform the receiver 112 of the synchronization delay that may be used when transmitting the service components over the different PLPs. Since the synchronization skew time at the transmitter 103 might not necessarily equal the difference in reception time at the receiver 112, the receiver 112 may further adjust the synchronization buffer delay timer to account for potential delays during physical transmission of the data, delays during processing and buffering at the receiver 112, and other possible delays.
  • The receiver 112 need not receive the synchronization skew value from the transmitter 103, but might calculate this value independently by measuring the time differences of received data bursts (e.g., 321-323) over the different PLPs (e.g., t3 minus t1).
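That independent measurement is just the spread of burst arrival times across PLPs. The arrival times below are illustrative assumptions matching the FIG. 4 example:

```python
def measured_skew_s(burst_arrival_times):
    """Receiver-side estimate of the synchronization skew: the spread
    between the earliest and latest burst arrivals across the PLPs of a
    service (e.g., t3 minus t1 in FIG. 4)."""
    times = list(burst_arrival_times.values())
    return max(times) - min(times)

# Hypothetical arrivals: bursts on PLPs 301-303 at t1=0, t2=40 ms, t3=80 ms.
skew = measured_skew_s({301: 0.000, 302: 0.040, 303: 0.080})  # 0.080 s
```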
  • In step 506, the receiver 112 determines again whether the synchronization buffer delay timer has been initialized for this service at the receiver 112. If the synchronization buffer delay timer has not yet been initialized (506: No), then the data received from the non-base PLP is dropped by the receiver 112 in step 507. Thus, in this example, it is the arrival of data from the base PLP 301 that triggers the timer that synchronizes the synchronization buffers 411-413 at the receiver 112.
  • the service data received from the base PLP 301 as well as the other PLPs 302-303 may be stored in the respective synchronization buffers 411-413 as shown in the example of FIG. 4.
  • The process of receiving service data at the receiver 112 and processing and buffering the data in the appropriate synchronization buffer 411-413 may continue while the synchronization buffer delay timer has not yet elapsed (509: No).
  • Once the synchronization buffer delay timer has elapsed, the receiver 112 may empty the synchronization buffers 411-413 for all PLPs 301-303 being received for the service.
  • the data from the synchronization buffers 411-413 may be transferred to a service data processing application, that is, any software component configured to receive and process digital content (e.g., upper layer processing 420), so that the multiple components of the service data can be processed together.
  • the receiver 112 may empty each synchronization buffer 411-413 at its own average bit rate, which may be calculated by the receiver 112 based on the size of the data bursts for a PLP and the length of time in between data bursts for that PLP.
  • the size of data bursts 321-323 and length of time between data bursts may be different for different PLPs / service components of a service.
  • the average bit rate of a low-quality audio service component may be much lower than the average bit rate of a high-quality enhanced video service component.
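The per-PLP average bit rate calculation described above may be sketched as follows (hypothetical Python; the names are illustrative only):

```python
def average_bit_rate(burst_size_bits, inter_burst_time_s):
    """Average bit rate of a PLP, derived from the size of its data
    bursts and the length of time between consecutive data bursts.
    The receiver may empty each synchronization buffer at this rate.
    """
    return burst_size_bits / inter_burst_time_s
```

For example, a 2,000,000-bit burst arriving every 2 seconds corresponds to an average rate of 1,000,000 bits per second; a low-quality audio PLP would typically yield a much lower value than an enhanced video PLP.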
  • the time slicing and transmission schedules used in the synchronization process may be transparent to the upper layers 420 processing the service data.
  • certain embodiments may provide means for the multiplexer to verify the conformance of a transmission scheduling algorithm.
  • transmitter 103 receives an incoming service (e.g., DVB broadcast or multicast programming) from a content source 104, and splits the service into a plurality of service components.
  • service components may correspond to different aspects of the service, for example, content types, audio or video quality levels, spatial or temporal boundaries, etc.
  • a service corresponding to a video broadcast may be split into two service components: an audio component and a video component.
  • a transmitter 103 using the scalable video coding (SVC) standard may split a video broadcast service into a base service component comprising an audio track and a base video quality level, and one or more enhancement service components that build on the base service component and provide higher quality audio and/or video for the same broadcast.
  • Other service components may correspond to spatial or temporal enhancements for a DVB video broadcast or other service, and other service components may provide metadata or other types of data associated with the service.
  • the transmitter 103 may assign the different service components to different physical layer pipes (PLPs). For example, the transmitter 103 may create a set of dedicated PLP buffers for each service component of each service, and may route the different service components to their respective buffers. For example, if the transmitter 103 split a video broadcast service into audio and video components, then the audio track may be routed to a first buffer for one PLP, while the video track may be routed to a second buffer for a different PLP. It should be understood that transmitters 103 may support multiple services simultaneously. Further, PLP buffers at the transmitter 103 may be dedicated to a single service component of a single service only, or may be used to buffer multiple different service components from one or more different services.
  • the transmitter 103 synchronizes a transmission schedule for the PLPs to transmit the various components of the service.
  • the transmission of service data may be time-sliced, such as in DVB-H or DVB-NGH technology, so that data bursts from different service components are transmitted at different times.
  • Time slicing may enable receivers 112 to reduce average power consumption by only staying active when necessary to receive a desired subset of the transmitted service components.
  • the transmitter 103 may use a synchronization tolerance (or 'synchronization skew') value to synchronize the transmission schedule of the service components over their respective PLPs.
  • the synchronization tolerance value may correspond to a time delay constant value used to stagger the data bursts sent by PLPs.
  • the transmitter 103 may select a synchronization tolerance value of 500ms, and may implement an algorithm scheduling the PLPs to transmit their data bursts in succession at 500ms increments.
  • a synchronization tolerance value may relax the constraints on the transmission scheduler(s).
  • the synchronization tolerance may also be limited by the size of the synchronization buffers and the different PLP bit rates.
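The staggered transmission schedule described above may be sketched as follows (an illustrative Python sketch; the function and parameter names are hypothetical):

```python
def staggered_schedule(plp_ids, start_time_ms, sync_tolerance_ms):
    """Assign each PLP a burst transmission time, staggering the
    bursts of successive PLPs by the synchronization tolerance
    (e.g., 500 ms increments, as in the example above)."""
    return {plp: start_time_ms + i * sync_tolerance_ms
            for i, plp in enumerate(plp_ids)}
```

With a tolerance of 500 ms, three PLPs would be scheduled to transmit at 0 ms, 500 ms and 1000 ms, in succession.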
  • an additional buffering component for re-synchronization at the receiver 112 may be introduced, and may be reflected by the additional initial synchronization buffering delay.
  • the initial synchronization buffering delay at the receiver 112 may be equivalent to the tolerated synchronization skew that is introduced during the transmission at the transmitter 103.
  • the transmitter 103 may have to reconstruct the presentation timeline for each media session. Therefore, the transmitter 103 may be aware of the synchronization information for each of the media streams. For example, the Real-Time Control Protocol (RTCP) Sender Reports, as well as a description of the multimedia session, may be made available to the scheduler in IP-based media transmission, or timestamps of packetized elementary stream (PES) streams, which are relative to a program clock reference (PCR) of the transport stream (TS), may be made available to the scheduler in transport stream based media transmission.
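For the RTCP case, placing RTP timestamps of different media streams onto one wall-clock presentation timeline using the NTP/RTP timestamp pair of a Sender Report may be sketched as follows (hypothetical Python; the function name and the 90 kHz video clock rate are illustrative assumptions):

```python
def rtp_to_wallclock(rtp_ts, sr_ntp_time_s, sr_rtp_ts, clock_rate_hz):
    """Map an RTP timestamp onto the sender wall-clock timeline using
    the (NTP time, RTP timestamp) pair carried in an RTCP Sender
    Report, so that streams carried over different PLPs can be placed
    on a common presentation timeline by the scheduler."""
    return sr_ntp_time_s + (rtp_ts - sr_rtp_ts) / clock_rate_hz
```

For a 90 kHz media clock, an RTP timestamp 4500 ticks after the Sender Report's RTP timestamp corresponds to 50 ms after the report's NTP time.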
  • a service may have N service components, and each service component may be transmitted over a different PLP. Additionally, the transmissions over the PLPs may be controlled by separate dedicated PLP schedulers.
  • the PLP schedulers may use the example message passing algorithm illustrated by the following steps to implement the synchronized transmission scheduling in step 603:
  • Scheduler S_i schedules data unit j with timestamp TS_j to be transmitted at time T_j
  • Scheduler S_i informs all other schedulers S_1 to S_n (excluding itself) of (TS_j, T_j)
  • the transmitter 103 may synchronize the PLP transmission schedules using a synchronization tolerance value. Additionally, transmission schedules among the multiple schedulers may be adjusted to respect synchronization skew constraints by the lagging schedulers.
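The message passing between the PLP schedulers, including the adjustment by lagging schedulers, may be sketched as follows (an illustrative Python sketch under simplifying assumptions; the class and method names are hypothetical and not part of any standard):

```python
class PLPScheduler:
    """Sketch of one per-PLP scheduler. When scheduler S_i schedules
    data unit j (media timestamp TS_j) for transmission at time T_j,
    it informs all other schedulers of (TS_j, T_j); a lagging
    scheduler clamps its own transmission time so the synchronization
    skew constraint is respected."""

    def __init__(self, plp_id, sync_tolerance_ms, peers):
        self.plp_id = plp_id
        self.tolerance = sync_tolerance_ms
        self.peers = peers        # the other PLPScheduler instances
        self.peer_times = {}      # TS_j -> earliest peer T_j seen

    def notify(self, ts, t):
        # Record the earliest transmission time any peer chose for TS_j.
        self.peer_times[ts] = min(t, self.peer_times.get(ts, t))

    def schedule(self, ts, proposed_t):
        # A lagging scheduler pulls its transmission time forward so it
        # stays within the tolerance of the earliest peer transmission.
        earliest = self.peer_times.get(ts, proposed_t)
        t = min(proposed_t, earliest + self.tolerance)
        for peer in self.peers:
            peer.notify(ts, t)
        return t
```

For example, with a 500 ms tolerance, if one scheduler transmits data for timestamp 0 at time 0 and a second scheduler proposes time 800, the second scheduler would be clamped to time 500.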
  • the transmitter 103 transmits the synchronization tolerance value to the receiver 112.
  • the transmitter 103 may transmit the synchronization tolerance value within the service data itself, for example, in a transmission channel data frame, that is, data frames transmitted from or associated with specific transmission channels (e.g., PLPs).
  • the transmitter 103 may also transmit the synchronization tolerance value as a separate transmission to the receiver 112.
  • the transmitter 103 provides the synchronization tolerance value to the receiver 112 by setting the PLP_INIT_SYNC_BUF field in the L1-post dynamic signaling data structure. As discussed above in reference to FIG.
  • the receiver 112 may use this synchronization tolerance value to set the synchronization buffer delay timer and control the input / output of service data from the synchronization buffers 411-413.
  • the transmitter 103 transmits the different service components to the receiver 112 in accordance with the synchronized transmission schedule(s) determined in step 603.
  • the transmission of service components corresponds to the data bursts (e.g., 321-323) sent over PLPs 301-303 according to the time-slicing schedule at staggered times t1, t2, t3, and so on.
  • the scheduler(s) at the transmitter 103 may coordinate so that the base PLP data is transmitted before the other PLPs; therefore, the receiver 112 will initialize the synchronization buffer delay timer without any dropped service data (see FIG. 5).
  • certain embodiments may include transmitting additional data relating to the time synchronization of the service components over the multiple transmission channels.
  • the transmitter 103 may designate a specific transmission channel (e.g., PLP) as the base PLP, and may use that PLP to transmit a base service component.
  • the transmitter 103 may inform the receiver 112 of the base PLP to allow the receiver 112 to control its synchronization buffers 411-413 using the base PLP.
  • the transmitter 103 may use a synchronization tolerance (or 'synchronization skew') value to synchronize the delivery of time-sliced data bursts over the PLPs. Accordingly, the transmitter 103 may inform the receiver 112 of the synchronization tolerance value to allow the receiver 112 to set a synchronization buffer delay timer and control the input / output of the service data from its synchronization buffers 411-413. Additionally, as described above in step 501, the receiver 112 may determine or select an operation point that determines which PLPs the receiver 112 should receive for the incoming service. Accordingly, in certain embodiments, the receiver 112 may transmit the operation point to the transmitter 103 and/or the transmitter 103 may transmit PLP identifier data to the receiver 112 allowing the receiver 112 to determine the desired PLPs for the receiver's operation point.
  • the transmitter 103 and receiver 112 may use any of the data communication techniques well known to one of ordinary skill. Additionally, this data may be transmitted as part of the service data, for example, within frame headers, metadata, or other signaling data transmitted with the service.
  • in the DVB-NGH broadcast standard, for example, additional parameters may be defined in the L1-post dynamic signaling content to indicate the base PLP and the synchronization tolerance value used for scheduling PLP transmissions.
  • the two new parameters relating to time synchronization over multiple PLPs in this example are PLP_INIT_SYNC_BUF (16 bits) and PLP_INIT_SYNC_BASE_PLP_ID (8 bits).
  • These two signaling fields may be populated by the transmitter 103 and used by the receiver 112 to synchronize the service component data transmissions over the multiple PLPs.
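Under the illustrative assumption that the two fields are serialized contiguously and in network byte order, their packing and unpacking may be sketched as follows (hypothetical Python; the function names are not part of the DVB-NGH signaling definition):

```python
import struct

def pack_sync_signaling(init_sync_buf, base_plp_id):
    """Pack the two example fields: PLP_INIT_SYNC_BUF (16 bits)
    followed by PLP_INIT_SYNC_BASE_PLP_ID (8 bits), big-endian."""
    return struct.pack(">HB", init_sync_buf, base_plp_id)

def unpack_sync_signaling(data):
    """Recover (PLP_INIT_SYNC_BUF, PLP_INIT_SYNC_BASE_PLP_ID)."""
    return struct.unpack(">HB", data)
```

The packed fields occupy three bytes; a 16-bit PLP_INIT_SYNC_BUF allows tolerance values up to 65535.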
  • the receiver 112 receives a selected operation point for the reception of the service from the transmitter 103.
  • the receiver's 112 operation point may determine the set of PLPs that the receiver 112 should receive for a service, and may correspond to a level of service quality (e.g., broadcast audio or video quality, temporal or spatial restrictions / enhancements, etc.).
  • the receiver 112 begins to receive the service data over the multiple PLPs, and may receive and inspect the following fields of the L1-post dynamic signaling: PLP_ID, PLP_INIT_SYNC_BUF, and PLP_INIT_SYNC_BASE_PLP_ID.
  • a service may comprise three service components: a base component with audio and video, a first enhanced video component, and a second enhanced video component. Each service component may be transmitted over a different PLP having a different PLP_ID, as shown in the example signaling data below: PLP1:
  • the receiver 112 may process data from PLP1 only, and may ignore data from PLP2 and PLP3 (e.g., audio and base layer of SVC video). If level "2" is the selected operation point, then the receiver 112 may process data from PLP1 and PLP2, and may ignore PLP3 (e.g., audio, base layer, and first enhancement layer of SVC video). If level "3" is the selected operation point, then the receiver 112 may process data from all three PLPs (e.g., audio, base layer, first enhancement layer, and second enhancement layer of SVC video).
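The mapping from the selected operation point to the set of PLPs the receiver processes may be sketched as follows (an illustrative Python sketch for the three-component example above; the PLP identifiers 1-3 are hypothetical):

```python
def plps_for_operation_point(operation_point):
    """Return the set of PLPs the receiver should process for the
    example service: level 1 -> PLP1 (audio + base layer),
    level 2 -> PLP1-2 (+ first enhancement layer),
    level 3 -> PLP1-3 (+ second enhancement layer)."""
    plp_for_level = {1: 1, 2: 2, 3: 3}  # level -> PLP carrying that layer
    return {plp for level, plp in plp_for_level.items()
            if level <= operation_point}
```

For operation point 2, the receiver would process PLP1 and PLP2 and ignore PLP3, matching the example above.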
  • the receiver 112 determines the synchronization tolerance (or 'synchronization skew') value set by the transmitter 103 based on the selected operation point.
  • the appropriate synchronization tolerance value (i.e., the PLP_INIT_SYNC_BUF field) may depend on the selected operation point.
  • the appropriate synchronization tolerance value for the receiver 112 may be 10 milliseconds.
  • the appropriate synchronization tolerance value for the receiver 112 may be 510 milliseconds.
  • the appropriate synchronization tolerance value for the receiver 112 may be 1010 milliseconds.
  • the signaled buffering delays for each PLP may correspond to the maximum required initial buffering delays for the corresponding PLP from the group of the associated PLPs, that is, the PLPs that convey service components of one service.
  • the receiver 112 may use these values to set a synchronization buffer delay timer and control the input / output of the service data from its synchronization buffers 411-413.
  • the receiver 112 may simply initialize the synchronization buffer delay timer equal to the highest PLP_INIT_SYNC_BUF value of its processed PLPs.
  • the receiver may perform additional calculations to account for transmission delays, processing delays, etc., to determine the exact length of the synchronization buffer delay timer.
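This timer initialization may be sketched as follows (hypothetical Python; the per-PLP values 10 / 510 / 1010 ms echo the example above, and the extra delay parameter stands in for any receiver-specific padding):

```python
def init_sync_buffer_delay(init_sync_buf_by_plp, processed_plps,
                           extra_delay_ms=0):
    """Initialize the synchronization buffer delay timer as the highest
    PLP_INIT_SYNC_BUF value among the PLPs the receiver processes,
    optionally padded to account for transmission and processing
    delays at the receiver."""
    highest = max(init_sync_buf_by_plp[plp] for plp in processed_plps)
    return highest + extra_delay_ms
```

For signaled values {PLP1: 10 ms, PLP2: 510 ms, PLP3: 1010 ms}, a receiver processing PLP1 and PLP2 would initialize the timer to 510 ms, and one processing all three PLPs to 1010 ms.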
  • the receiver 112 identifies which of the PLPs processed by the receiver 112 is the base PLP (i.e., the PLP relative to which the synchronization tolerance value is calculated).
  • the base PLP may be identified using the base PLP identifier (i.e., the PLP_INIT_SYNC_BASE_PLP_ID field).
  • the receiver 112 may use the base PLP to initialize the synchronization buffer delay timer, which allows the PLP service data to be stored in the synchronization buffers 411-413.

Abstract

Systems and methods are disclosed for synchronizing the transmission and reception of data over a plurality of transmission channels. Service data, for example, broadcast or multicast programming data, may be split into multiple service components and transmitted over separate transmission channels using a synchronized transmission schedule based on a synchronization tolerance. The received data may be stored in a plurality of buffers, synchronized in accordance with the synchronization tolerance of the transmitter, and forwarded for processing at the receiver.

Description

MULTIPLEXING DATA OVER MULTIPLE TRANSMISSION CHANNELS WITH TIME SYNCHRONIZATION
BACKGROUND OF THE INVENTION
Broadcast or multicast systems support the delivery of data services such as digital video broadcasting (DVB). Data services delivered over broadcast or multicast systems may include separate service components, including audio components, video components, enhanced video components, data components, and so forth. For example, multiple video components may provide different quality levels through scalable video coding (SVC), or left and right views of multi-view video coding (MVC), when delivering a service to a user. When transmitting data services for broadcast or multicast transmissions, a DVB system may packetize the data into frames, for example, frames having a preamble with P1 and P2 symbols used for Layer 1 (L1) signaling.
When transmitting broadcast or multicast transmissions, a transmitter may use different physical layer pipes (PLPs) to carry the service data. Different PLPs may denote different physical layer time division multiplex (TDM) channels that are carried by specified slices, where a slice is a group of cells from a single PLP, which before frequency interleaving, is transmitted on active Orthogonal Frequency Division Multiplexing (OFDM) cells with consecutive addresses over a single radio frequency (RF) channel. Different PLPs may carry data that has been modulated using schemes based on different constellations or other modulation parameters, where data in different PLPs may be coded using different Forward Error Correction (FEC) schemes.
BRIEF SUMMARY
The following presents a simplified summary in order to provide a basic understanding of some aspects of at least some example embodiments. The summary is not an extensive overview. It is neither intended to identify key or critical elements nor to delineate the claim scope. The following summary merely presents some concepts in a simplified form as a prelude to the more detailed description below. Embodiments may include apparatuses, computer media, and methods for receiving data over a plurality of transmission channels, and storing the received data in a plurality of synchronization buffers corresponding to the plurality of transmission channels. The data may be received at, for example, a mobile device receiving service data via broadcast or multicast transmissions (e.g., DVB Handheld (DVB-H) or DVB Next Generation Handheld (DVB-NGH)) over a plurality of physical layer pipes (PLPs). The receiver may also receive a synchronization tolerance value and may set a synchronization buffer delay timer based on the synchronization tolerance value. After the synchronization buffer delay timer has elapsed, the receiver may then forward the data from the synchronization buffers to a service data processing application.
Additional embodiments may include apparatuses, computer media, and methods for receiving service data at a transmitter, splitting the service data into a plurality of service components, and assigning each of the plurality of service components to a different transmission channel buffer. The transmitter may be, for example, a broadcast or multicast transmitter providing service data (e.g., media data such as DVB-H, DVB-NGH, etc.) over multiple transmission channels (e.g., physical layer pipes (PLPs)). The transmitter may determine a synchronization schedule, based on a synchronization tolerance value, for transmitting the plurality of service components from the plurality of transmission channel buffers to a receiver, may transmit the synchronization tolerance value to the receiver, and may transmit the plurality of service components to the receiver in accordance with the synchronization schedule.
BRIEF DESCRIPTION OF THE DRAWINGS
Certain embodiments are illustrated by way of example and not by way of limitation in the accompanying figures, in which like reference numerals indicate similar elements and in which:
FIG. 1 is a diagram of an example digital broadcast system in which one or more embodiments may be implemented.
FIG. 2 is a block diagram of an example communication device.
FIG. 3 is a block diagram illustrating data transmission over multiple physical layer pipes (PLPs) in accordance with at least some embodiments.
FIG. 4 is a block diagram illustrating data reception over multiple physical layer pipes (PLPs) in accordance with at least some embodiments.
FIG. 5 is a flow diagram illustrating a method of receiving and processing data over multiple physical layer pipes (PLPs) in accordance with at least some embodiments.
FIG. 6 is a flow diagram illustrating a method of transmitting a service over multiple physical layer pipes (PLPs) in accordance with at least some embodiments.
FIG. 7 shows an example of L1-post dynamic signaling content.
FIG. 8 is a flow diagram illustrating a method of receiving and processing L1-post dynamic signaling content in accordance with at least some embodiments.
DETAILED DESCRIPTION OF THE INVENTION
In the following description of the various embodiments, reference is made to the accompanying drawings, which form a part hereof, and in which are shown by way of illustration various embodiments in which the disclosure may be practiced. It is to be understood that other embodiments may be utilized and structural and functional modifications may be made without departing from the scope and spirit of the present disclosure.
FIG. 1 illustrates a suitable digital broadband broadcast system 102 in which one or more illustrative embodiments may be implemented. Systems such as the one illustrated here may utilize a digital broadband broadcast technology, for example Digital Video Broadcast (DVB) - Handheld (DVB-H) or next generation DVB networks (DVB-NGH). Examples of other digital broadcast standards which digital broadband broadcast system 102 may utilize include Digital Video Broadcast - Terrestrial (DVB-T), second generation digital terrestrial television broadcasting system (DVB-T2), Integrated Services Digital Broadcasting - Terrestrial (ISDB-T), Advanced Television Systems Committee (ATSC) Data Broadcast Standard, Digital Multimedia Broadcast-Terrestrial (DMB-T), Terrestrial Digital Multimedia Broadcasting (T-DMB), Satellite Digital Multimedia Broadcasting (S-DMB), Forward Link Only (FLO), Digital Audio Broadcasting (DAB), and Digital Radio Mondiale (DRM). Other digital broadcasting standards and techniques, now known or later developed, may also be used. Aspects of the example embodiments may also be applicable to other multicarrier digital broadcast systems such as, for example, T-DAB, T/S-DMB, ISDB-T, and ATSC, proprietary systems such as QUALCOMM MediaFLO / FLO, and non-traditional systems such as 3GPP MBMS (Multimedia Broadcast/Multicast Services) and 3GPP2 BCMCS (Broadcast/Multicast Service).
Digital content sources 104 may create and/or provide service data, that is, digital content corresponding to a data service. Service data may include any digital content, for example, media data such as video signals, audio signals, data, metadata, and so forth. Digital content sources 104 may provide service data to digital transmitter 103 in the form of digital packets, for example, Internet Protocol (IP) packets. A group of related data packets sharing a certain unique data address (for example, IP address) or other source identifier may be referred to as a data flow. In various embodiments, the data flows may be data streams such as, for example, IP streams.
Digital transmitter 103 may receive, process, and forward for transmission the service data, for example, as a transport stream that may include multiple data flows from multiple digital content sources 104. The transmitter 103 may be a communication terminal including at least one processor 120 and at least one memory 122 or other computer readable media configured to store instructions that, when executed by the processor, are configured to cause the transmitter 103 to perform the operations described herein. The transport stream may then be passed, for example, through a specific DVB transmitter 124, to digital broadcast tower 105 (or other physical transmission component) for wireless transmission. Ultimately, other communication terminals such as mobile terminals or devices 112, or other types of receivers, may selectively receive and consume digital content originating from digital content sources 104.
In certain embodiments, transport streams may deliver compressed audio and video and data to a mobile device 112 via third party delivery networks. Moving Picture Experts Group (MPEG) is a technology by which encoded video, audio, and data within a single program is multiplexed, with other programs, into the transport stream. The transport stream may be a packetized data stream, with fixed length packets, including a header. The individual elements of a program, audio and video, are each carried within packets having a unique packet identification (PID). To enable the mobile device 112 to locate the different elements of a particular program within the transport stream, Program Specific Information (PSI), which is embedded into the transport stream, is supplied. In addition, additional Service Information (SI), a set of tables adhering to the MPEG private section syntax, may be incorporated into the transport stream. This enables the mobile device 112 to correctly process the data contained within the transport stream.

The transport stream may include an Electronic Service Guide (ESG) to provide program or service related information to the mobile device 112. Generally, an Electronic Service Guide (ESG) enables the transmitter 103 to communicate what services are available to end users and how the services may be accessed. The ESG includes independently existing pieces of ESG fragments. Traditionally, ESG fragments include XML and/or binary documents, but more recently, they have encompassed a vast array of items, such as, for example, an SDP (Session Description Protocol) description, textual file, or an image. The ESG fragments describe one or several aspects of currently available (or future) service or broadcast programs. Such aspects may include, for example: free text description, schedule, geographical availability, price, purchase method, genre, and supplementary information such as preview images or clips.
Audio, video and other types of data including the ESG fragments may be transmitted through a variety of types of networks according to many different protocols. For example, data can be transmitted through a collection of networks usually referred to as the "Internet" using protocols of the Internet protocol suite, such as Internet Protocol (IP) and User Datagram Protocol (UDP). Data is often transmitted through the Internet addressed to a single user. It can, however, be addressed to a group of users, commonly known as multicasting. In the case in which the data is addressed to all users, it is called broadcasting.
One way of broadcasting or multicasting data is to use an IP datacasting (IPDC) network. IPDC is a combination of digital broadcast and Internet Protocol. Through such an IP- based broadcasting network, one or more service providers can supply different types of IP services including on-line newspapers, radio, and television. These IP services are organized into one or more data flows in the form of audio, video and/or other types of data. To determine when and where these data flows occur, users refer to an electronic service guide (ESG).
One type of DVB is Digital Video Broadcasting-handheld (DVB-H). DVB-H is designed to deliver 10 Mbps of data to a battery-powered terminal device. ESG fragments may be transported by IPDC over a network, such as, for example, DVB-H to destination devices. A DVB-H transmission may include, for example, separate audio, video and data flows. The mobile device 112 may determine the ordering of the ESG fragments upon receipt and assemble them into useful information. In DVB systems, data services (for example, IP services) may be carried in the transport stream over Multi-Protocol Encapsulation (MPE) Sections or over Generic Stream Encapsulation (GSE) Protocol. MPE may form an MPE section by encapsulating protocol data units (PDUs, for example, IP data packets). Each MPE section may be sent as a series of transport stream packets in a single transport stream. MPE may support data broadcast services that require transmission of datagrams or communication protocols via DVB compliant broadcast networks.
The GSE protocol may provide network layer packet encapsulation and fragmentation functions over generic streams. GSE may provide efficient encapsulation of IP datagrams over variable length Layer 2 packets, which may then be directly scheduled on the physical layer (for example, base-band frames in DVB-T2). GSE may support encapsulation of multiple protocols (Internet Protocol version 4 (IPv4), Internet Protocol version 6 (IPv6), Moving Picture Experts Group (MPEG), asynchronous transfer mode (ATM), Ethernet, etc.) and permits inclusion of new protocol types. GSE also supports several addressing modes. IP datagrams may be encapsulated in one or more GSE Packets. The encapsulation process may delineate a start and end of each network-layer PDU, add control information such as the network protocol type and address label, and provide an overall integrity check when needed. Data broadcast specifications may support a standard mechanism for signaling data services deployed within DVB networks and enable the implementation of DVB mobile devices 112 that are completely self-tuning when accessing data flows on one or more transport streams.

As shown in FIG. 2, apparatus (e.g., mobile device or other communication terminal) 112 may include at least one processor 128 connected to user interface 130, at least one memory 134 and/or other storage, and display 136, which may be used for displaying video content, service guide information, and the like to a mobile-device user. Mobile device 112 may also include battery 150, speaker 152 and antennas 154. User interface 130 may further include a keypad, touch screen, voice interface, one or more arrow keys, joy-stick, data glove, mouse, roller ball, or the like. In some embodiments, there may be several processors and/or memories in mobile device 112.
Computer executable instructions and data used by processor 128 and other components within mobile device 112 may be stored in a computer readable memory 134. The memory may be implemented with any combination of read only memory modules or random access memory modules, optionally including both volatile and nonvolatile memory. Software 140 may be stored within memory 134 and/or storage to provide instructions to processor 128 for enabling mobile device 112 to perform various functions as described herein. Alternatively, some or all of mobile device 112 computer executable instructions may be embodied in hardware or firmware (not shown).
Mobile device 112 may be configured to receive, decode and process digital broadband broadcast transmissions that are based, for example, on the Digital Video Broadcast (DVB) standard, such as DVB-H (ETSI EN 302 304, V1.1.1 (2004-11)) or DVB-NGH or DVB-T (ETSI EN 300 744, V1.6.1 (2009-01)) or DVB-T2 (ETSI EN 302 755, V1.1.1 (2009-09)), the contents of all of which are incorporated herein by reference in their entireties, through a specific DVB receiver 141. The mobile device 112 may also be provided with other types of receivers for digital broadband broadcast and/or multicast transmissions. Additionally, mobile device 112 may also be configured to receive, decode and process transmissions through frequency modulated (FM) / amplitude modulated (AM) radio receiver 142, wireless local area network (WLAN) transceiver 143, and telecommunications transceiver 144. In one aspect, mobile device 112 may receive radio data stream (RDS) messages.
Additionally, the digital transmission may be time sliced, such as in DVB-H or DVB-NGH technology. Time slicing may reduce the average power consumption of the mobile device 112 and may enable smooth and seamless handover. Time-slicing entails sending data in bursts using a higher instantaneous bit rate as compared to the bit rate required if the data were transmitted using a traditional streaming mechanism. In this case, the mobile device 112 may have one or more buffer memories for storing the decoded time sliced transmission before presentation.

Referring now to FIG. 3, an example is shown of a time-sliced data service transmitted over three physical layer pipes (PLPs) 301-303 using the DVB-NGH digital broadcast standard. In this example, the data service has been split by the digital transmitter 103 into three separate service components 311-313. For instance, the first service component 311 may comprise a base component that provides audio and a base layer of SVC video (e.g., QVGA data format), the second component 312 may comprise a first enhancement layer to increase the video resolution from QVGA to VGA format, and the third component 313 may comprise a second enhancement layer to improve the fidelity of the VGA video. Additionally, although only three PLPs are shown in this example, it should be understood that many more PLPs may be used (e.g., 4, 5, 6, 7, and so on) to transmit different service components of a service.
In certain embodiments, the service components 311-313 may be delivered to the DVB- NGH transmitter 124 as continuous data streams over separate real-time transport protocol (RTP) sessions using multiple session transmission (MST) mode. As discussed above, each PLP may have different coding and modulation parameters for transmitting data. Therefore, the data streams for service components 311-313 may be separately processed by the DVB-NGH transmitter 124 in accordance with the coding and modulation schemes used by the corresponding PLPs 301-303.
After receiving and processing the data streams (e.g., RTP sessions) for the service components 311-313, the data transmissions may be scheduled by the DVB-NGH system in time sliced mode. Prior to transmitting the data over PLPs 301-303, the data streams / RTP sessions 311-313 associated with a service may be stored in separate PLP buffers associated with the service within the DVB-NGH transmitter 124 or elsewhere within the memory 122 of the transmitter 103.
The service component data for each PLP 301-303 may then be time-sliced and sent over the transmission channels as data bursts at higher bit rates than the bit rates of the service components being transmitted. For example, as shown in FIG. 3, the bit rates of transmission bursts 321-323 are higher than the bit rates of the data streams / RTP sessions 311-313 (e.g., audio-video streams) from which the bursts are generated. Time slicing enables the DVB receiver 141 to stay active only during a fraction of the time while receiving bursts of a requested service, thus potentially saving power / battery life at the receiver device 112. For example, the bursts 321-323 are sent over PLPs 301-303 at different times by buffering and scheduling transmissions within the DVB-NGH transmitter 124. Thus, if the receiver device 112 only requests a level of service (i.e., operation point) corresponding to the base service component (e.g., audio and QVGA data), then the DVB-NGH receiver 141 need only stay active to receive transmissions over PLP1 301. However, if the receiver device 112 requests a higher operation point, then the DVB-NGH receiver 141 may stay active longer to receive additional service components, for example, a first enhancement layer 312 over PLP2 (302) and/or a second enhancement layer 313 over PLP3 (303). Referring now to FIG. 4, an example is shown corresponding to FIG. 3 in which a receiver device 112 buffers and processes a time-sliced service transmitted as three separate service components over three PLPs 301-303. In this example, the DVB-NGH receiver 141 of the receiver device 112 receives three different data bursts 321-323 corresponding to different service components of a single service. As discussed above in reference to FIG. 3, the data in bursts 321-323 may correspond to the same or overlapping time periods within the service.
For example, bursts 321-323 may contain different service components (e.g., audio track, base video layer, enhanced video layers, etc.) for the same time window within the service broadcast program. However, through the processes discussed above in FIG. 3, the bursts 321-323 may be transmitted over different PLPs 301-303 and during different time slices. Therefore, FIG. 4 illustrates an example of re-synchronizing the service components at the receiver 112 that were time-sliced by the transmitter 103 in FIG. 3. In this example, the receiver 112 includes three synchronization buffers 411-413 associated with the three PLPs 301-303. FIG. 4 illustrates the arrival of the data bursts 321-323, and the input / output of the synchronization buffers 411-413. Burst 321 is received and stored in buffer 411 at time t1, burst 322 is received and stored in buffer 412 at time t2, and burst 323 is received and stored in buffer 413 at time t3. Then, at time t4, each of the synchronization buffers 411-413 outputs its respective data burst 321-323 to a service data processing application (e.g., upper layer processing 420) on the receiver device 112. Therefore, using the example described above in FIGS. 3 and 4, data bursts 321-323 from different service components that correspond to the same time window within the service broadcast program may be time-sliced and then re-synchronized so that the synchronization process is transparent to the upper layer processing 420 of the receiver 112.
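The buffering behavior just described can be sketched in code. The following Python is illustrative only; the class name, burst labels, and buffer numbering are taken from the figures, not from any DVB-NGH specification:

```python
class SynchronizationBuffers:
    """Holds per-PLP data bursts until the synchronization delay elapses,
    then releases them together so upper-layer processing sees aligned data."""

    def __init__(self, plp_ids):
        self.buffers = {plp_id: [] for plp_id in plp_ids}

    def store_burst(self, plp_id, burst):
        # e.g., burst 321 into buffer 411 at t1, burst 322 into 412 at t2, ...
        self.buffers[plp_id].append(burst)

    def release_all(self):
        # At t4 every buffer is emptied toward the service data processing
        # application, making the time slicing transparent to upper layers.
        released = {plp_id: list(bursts) for plp_id, bursts in self.buffers.items()}
        for bursts in self.buffers.values():
            bursts.clear()
        return released


bufs = SynchronizationBuffers([1, 2, 3])
bufs.store_burst(1, "burst-321")   # arrives at t1
bufs.store_burst(2, "burst-322")   # arrives at t2
bufs.store_burst(3, "burst-323")   # arrives at t3
aligned = bufs.release_all()       # t4: all components delivered together
```

The key point of the sketch is that no burst leaves its buffer individually; release happens once for all PLPs of the service.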
Referring now to FIG. 5, a flow diagram is shown illustrating a method of receiving and processing data over multiple physical layer pipes (PLPs) in accordance with at least some embodiments. In step 501, a receiver 112, for example, a mobile device, receives or determines/selects an operation point for receiving one or more services on the device. Certain embodiments allow different receivers 112 to receive services at different operation points, corresponding to different levels of media quality and/or other limitations (e.g., temporal or spatial restrictions) on a received service. As discussed above, services may be split into multiple service components to allow for quality scalability, for example, using Coarse Granular Scalability (CGS) or Medium Granular Scalability (MGS) quality scalability systems. Using quality scalability, a first receiver may select a lower operation point to receive fewer service components and have a lower video quality, while a second receiver may select a higher operation point to receive more service components and have a higher video quality. Operation points may correspond to quality levels of video (e.g., QVGA, VGA, etc.) or temporal or spatial limitations of video (e.g., full-screen, reduced screen) in a received service, and may also apply to other data components of the service such as audio, data, metadata, etc. For example, a mobile receiver 112 without audio capability may select an operation point that does not include an audio track, while another mobile receiver 112 with audio capability may select an operation point that includes audio. The selection or determination of an operation point in step 501 may be based on, for example, a Session Description Protocol (SDP) file (IP use case), or program-specific information (PSI/SI) tables (transport stream use case) stored in the memory 134 of the receiver 112. 
The operation point also may be determined automatically based on the hardware capabilities of the receiver 112, or may be selected by a user of the receiver device 112 based on the desired quality level for receiving the service.
In step 502, service data (e.g., a DVB media program) arrives at a receiver 112 from a plurality of PLPs. As in the above examples, the data may be broadcast or multicast data transmitted using the DVB-NGH digital broadcast standard, and the plurality of PLPs (e.g., 301-303) may correspond to different service components (e.g., audio and video components, video enhancement layers, etc.) of the service. In this step, the receiver 112 determines which PLPs should be received based on the operation point selected in step 501, and receives those PLPs for the incoming service.
In order to determine which PLPs should be received for the selected operation point, the receiver 112 may use frame headers, metadata, or other signaling data transmitted with the service, for example, the L1 or L2 signaling of the DVB-NGH standard broadcast system. An example signaling technique using L1-post dynamic signaling is discussed below in reference to FIGS. 7 and 8. In step 502, the receiver 112 may also identify (using signaling or other techniques) a base PLP that corresponds to a lowest quality base service component (e.g., audio and QVGA video). In some examples, the base PLP may be required for all receivers 112 that receive the service, and all other PLPs for the same service may correspond to service components that build on and improve the quality of the required base PLP.
In step 503, the receiver 112 receives a data unit from one of the PLPs and determines if that PLP is the base PLP for the service. The individual data units transmitted over the PLPs may take any number of different forms, for example, IP datagrams, RTP sessions for SVC data, packetized elementary stream (PES) packets for MPEG-2 Transport Streams, and other well-known data transmission techniques. Accordingly, the determination of whether a PLP is a base PLP may differ depending on the type of transmitted data. A PLP identifier may be transmitted with the service data, for example, in the frame headers, metadata, or signaling data of a media frame. In the L1-post signaling example discussed below in FIGS. 7 and 8, the receiver 112 may determine whether a PLP is the base PLP by comparing a PLP_ID field to a PLP_INIT_SYNC_BASE_PLP_ID field. If the service data was received from the base PLP (503:Yes), then the example method proceeds to step 504. In step 504, the receiver 112 determines whether a synchronization buffer delay timer has been initialized for this service at the receiver 112. The synchronization buffer delay timer, described in more detail below, defines a period of time during which all of the synchronization buffers for the service (e.g., 411-413) will continue to process and buffer service data.
If the synchronization buffer delay timer has not yet been initialized (504:No), then the method in this example will proceed to initialize the timer in step 505. The duration of the synchronization buffer delay timer may depend on the transmission schedule of the PLPs from the transmitter 103. As shown in the examples of FIGS. 3 and 4, a transmitter 103 may time-slice the transmission of service data between PLPs 301-303, so that a first data burst is transmitted over the base PLP 301 at time t1, a second data burst is transmitted over a second PLP 302 at time t2, and so on. Thus, the synchronization buffer delay timer may be initialized for an amount of time sufficient for the receiver 112 to receive and buffer a single data burst from each PLP. The receiver 112 may determine this amount of time from the transmitter 103, for example, using the frame headers, metadata, or other signaling data transmitted with the service data. In the L1-post signaling example discussed below in FIGS. 7 and 8, the receiver 112 may calculate the length of the synchronization buffer delay timer using the PLP_INIT_SYNC_BUF fields for the PLPs of the receiver's selected operation point. In other examples, the transmitter 103 may use other techniques to provide to the receiver 112 a synchronization tolerance (or 'synchronization skew') value to inform the receiver 112 of the synchronization delay that may be used when transmitting the service components over the different PLPs. Since the synchronization skew time at the transmitter 103 might not necessarily equal the difference in reception time at the receiver 112, the receiver 112 may further adjust the synchronization buffer delay timer to account for potential delays during physical transmission of the data, delays during processing and buffering at the receiver 112, and other possible delays.
In still other examples, the receiver 112 need not receive the synchronization skew value from the transmitter 103, but might calculate this value independently by measuring the time differences of received data bursts (e.g., 321-323) over the different PLPs (e.g., t3 minus tl).
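A minimal sketch of such an independent measurement, assuming the receiver records the arrival time of the first burst on each PLP (the helper name and data layout are hypothetical, not taken from any specification):

```python
def estimate_sync_skew(first_arrival_by_plp):
    """Estimate the synchronization skew as the spread between the latest
    and earliest first-burst arrival times across the PLPs (e.g., t3 - t1)."""
    times = list(first_arrival_by_plp.values())
    return max(times) - min(times)


# Bursts 321-323 first arriving at t1 = 0.0 s, t2 = 0.5 s, t3 = 1.0 s
# give a measured skew of 1.0 s between PLP1 and PLP3.
skew = estimate_sync_skew({1: 0.0, 2: 0.5, 3: 1.0})
```

As noted above, the receiver may still pad this measured value to cover transmission and processing delays.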
Returning to step 503, if the service data was not received from the base PLP (503:No), then the example method proceeds to step 506. In step 506, the receiver 112 determines again whether the synchronization buffer delay timer has been initialized for this service at the receiver 112. If the synchronization buffer delay timer has not yet been initialized (506:No), then the data received from the non-base PLP is dropped from the receiver 112 in step 507. Thus, in this example, it is the arrival of data from the base PLP 301 that triggers the timer to synchronize the synchronization buffers 411-413 at the receiver 112.
After the synchronization buffer delay timer has been initialized (step 505), the service data received from the base PLP 301 as well as the other PLPs 302-303 may be stored in the respective synchronization buffers 411-413 as shown in the example of FIG. 4. The process of receiving service data at the receiver 112 and processing and buffering the data in the appropriate synchronization buffer 411-413 may continue while the synchronization buffer delay timer has not yet elapsed (509:No).
Once the synchronization buffer delay timer has elapsed (509:Yes), the receiver 112 may empty the synchronization buffers 411-413 for all PLPs 301-303 being received for the service. Thus, in step 510, the data from the synchronization buffers 411-413 may be transferred to a service data processing application, that is, any software component configured to receive and process digital content (e.g., upper layer processing 420), so that the multiple components of the service data can be processed together. Further, the receiver 112 may empty each synchronization buffer 411-413 at its own average bit rate, which may be calculated by the receiver 112 based on the size of the data bursts for a PLP and the length of time in between data bursts for that PLP. This may be advantageous, because the size of data bursts 321-323 and length of time between data bursts may be different for different PLPs / service components of a service. For example, the average bit rate of a low-quality audio service component may be much lower than the average bit rate of a high-quality enhanced video service component. However, regardless of the different bit rates for PLPs / service components in a service, the time slicing and transmission schedules used in the synchronization process may be transparent to the upper layers 420 processing the service data. Additionally, certain embodiments may provide means for the multiplexer to verify the conformance of a transmission scheduling algorithm.
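The per-buffer drain rate described in step 510 might be computed as sketched below, assuming the burst size and burst interval for a PLP are known to the receiver (the function and its parameters are illustrative, not part of the standard):

```python
def average_drain_bit_rate(burst_size_bits, burst_interval_s):
    """Average bit rate for emptying one synchronization buffer: the size of
    a data burst for the PLP divided by the time between its data bursts."""
    if burst_interval_s <= 0:
        raise ValueError("burst interval must be positive")
    return burst_size_bits / burst_interval_s


# A 1,000,000-bit burst arriving every 2 s drains at 500,000 bit/s on
# average, so the buffer is roughly empty when the next burst arrives.
rate = average_drain_bit_rate(1_000_000, 2.0)
```

Because each PLP has its own burst size and interval, each synchronization buffer would be drained at its own rate, as the description notes.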
Referring now to FIG. 6, a flow diagram is shown illustrating a method of transmitting a service over multiple physical layer pipes (PLPs) in accordance with at least some embodiments. In step 601, transmitter 103 receives an incoming service (e.g., DVB broadcast or multicast programming) from a content source 104, and splits the service into a plurality of service components. As discussed above, service components may correspond to different aspects of the service, for example, content types, audio or video quality levels, spatial or temporal boundaries, etc. For example, a service corresponding to a video broadcast may be split into two service components: an audio component and a video component. As another example, a transmitter 103 using the scalable video coding (SVC) standard may split a video broadcast service into a base service component comprising an audio track and a base video quality level, and one or more enhancement service components that build on the base service component and provide higher quality audio and/or video for the same broadcast. Other service components may correspond to spatial or temporal enhancements for a DVB video broadcast or other service, and other service components may provide metadata or other types of data associated with the service.
In step 602, the transmitter 103 may assign the different service components to different physical layer pipes (PLPs). For example, the transmitter 103 may create a set of dedicated PLP buffers for each service component of each service, and may route the different service components to their respective buffers. For example, if the transmitter 103 split a video broadcast service into audio and video components, then the audio track may be routed to a first buffer for one PLP, while the video track may be routed to a second buffer for a different PLP. It should be understood that transmitters 103 may support multiple services simultaneously. Further, PLP buffers at the transmitter 103 may be dedicated to a single service component of a single service only, or may be used to buffer multiple different service components from one or more different services.
In step 603, the transmitter 103 synchronizes a transmission schedule for the PLPs to transmit the various components of the service. As discussed above (see FIG. 3), the transmission of service data may be time-sliced, such as in DVB-H or DVB-NGH technology, so that data bursts from different service components are transmitted at different times. Time slicing may enable receivers 112 to reduce average power consumption by only staying active when necessary to receive a desired subset of the transmitted service components. Thus, in order to provide receivers 112 with an accessible service and the advantages of time slicing, the transmitter 103 may use a synchronization tolerance (or 'synchronization skew') value to synchronize the transmission schedule of the service components over their respective PLPs. The synchronization tolerance value may correspond to a time delay constant value used to stagger the data bursts sent by PLPs. For example, the transmitter 103 may select a synchronization tolerance value of 500ms, and may implement an algorithm scheduling the PLPs to transmit their data bursts in succession at 500ms increments. A synchronization tolerance value may relax the constraints on the transmission scheduler(s). The synchronization tolerance may also be limited by the size of the synchronization buffers and the different PLP bit rates. As discussed above, an additional buffering component for re-synchronization at the receiver 112 may be introduced, and may be reflected by the additional initial synchronization buffering delay. The initial synchronization buffering delay at the receiver 112 may be equivalent to the tolerated synchronization skew that is introduced during the transmission at the transmitter 103.
An example of synchronizing the PLP transmission schedules is illustrated by the message passing algorithm below. In order for the transmitter 103 to implement a proposed scheduling algorithm, it may have to reconstruct the presentation timeline for each media session. Therefore, the transmitter 103 may be aware of the synchronization information for each of the media streams. For example, the Real-Time Control Protocol (RTCP) Sender Reports, as well as a description of the multimedia session, may be made available to the scheduler in IP-based media transmission, or timestamps of packetized elementary stream (PES) streams which are relative to a program clock reference (PCR) of the transport stream (TS) may be made available to the scheduler in transport stream based media transmission.
In the below example algorithm, a service may have N service components, and each service component may be transmitted over a different PLP. Additionally, the transmissions over the PLPs may be controlled by separate dedicated PLP schedulers. The PLP schedulers may use the example message passing algorithm illustrated by the following steps to implement the synchronized transmission scheduling in step 603:
1. Scheduler S_i schedules data unit j with timestamp TS_j to be transmitted at time T_j
2. Scheduler S_i informs all other schedulers S_1 to S_n (excluding itself) of (TS_j, T_j)
3. Scheduler S_k schedules data units with timestamp TS_m < TS_j to be transmitted at time T_m, where T_m <= (TS_m - TS_j) + T_j + synchronization tolerance value
Thus, in step 603, using either the above example algorithm or another algorithm, the transmitter 103 may synchronize the PLP transmission schedules using a synchronization tolerance value. Additionally, transmission schedules among the multiple schedulers may be adjusted by the lagging schedulers to respect the synchronization skew constraints.
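The message passing steps above might be sketched as follows. This Python is a sketch under stated assumptions: each scheduler keeps only the most recently announced (TS_j, T_j) pair, and the tolerance is read as bounding a lagging unit's transmission time relative to that announced reference; the class and method names are hypothetical:

```python
class PLPScheduler:
    """One dedicated scheduler per PLP; peers exchange (timestamp, send time)
    pairs so lagging schedulers can respect the synchronization skew."""

    def __init__(self, name, tolerance_s):
        self.name = name
        self.tolerance_s = tolerance_s
        self.peers = []
        self.reference = None  # latest (TS_j, T_j) announced by a peer

    def announce(self, ts_j, t_j):
        # Steps 1-2: schedule data unit j at time T_j and inform
        # all other schedulers of the pair (TS_j, T_j).
        for peer in self.peers:
            peer.reference = (ts_j, t_j)

    def latest_allowed_send_time(self, ts_m):
        # Step 3: a data unit with timestamp TS_m < TS_j must be sent by
        # (TS_m - TS_j) + T_j + tolerance (one reading of the constraint).
        ts_j, t_j = self.reference
        return (ts_m - ts_j) + t_j + self.tolerance_s


s1 = PLPScheduler("base", tolerance_s=0.5)
s2 = PLPScheduler("enh1", tolerance_s=0.5)
s1.peers, s2.peers = [s2], [s1]

s1.announce(ts_j=10.0, t_j=100.0)                  # base unit sent at T_j = 100 s
deadline = s2.latest_allowed_send_time(ts_m=9.8)   # earlier-stamped lagging unit
```

In this sketch the enhancement scheduler must send its earlier-stamped unit no later than 100.3 s, i.e., within the tolerance of the timeline anchored by the base scheduler.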
In step 604, the transmitter 103 transmits the synchronization tolerance value to the receiver 112. The transmitter 103 may transmit the synchronization tolerance value within the service data itself, for example, in a transmission channel data frame, that is, data frames transmitted from or associated with specific transmission channels (e.g., PLPs). The transmitter 103 may also transmit the synchronization tolerance value as a separate transmission to the receiver 112. In the L1-post signaling example discussed below in FIGS. 7 and 8, the transmitter 103 provides the synchronization tolerance value to the receiver 112 by setting the PLP_INIT_SYNC_BUF field in the L1-post dynamic signaling data structure. As discussed above in reference to FIG. 5, the receiver 112 may use this synchronization tolerance value to set the synchronization buffer delay timer and control the input / output of service data from the synchronization buffers 411-413. In step 605, the transmitter 103 transmits the different service components to the receiver 112 in accordance with the synchronized transmission schedule(s) determined in step 603. For example, as shown in FIG. 3, the transmission of service components corresponds to the data bursts (e.g., 321-323) sent over PLPs 301-303 according to the time-slicing schedule at staggered times t1, t2, t3, and so on. In certain examples, the scheduler(s) at the transmitter 103 may coordinate so that the base PLP data is transmitted first among the PLPs, so that the receiver 112 will initialize the synchronization buffer delay timer without any dropped service data (see FIG. 5).
As discussed in the various examples above, in addition to transmitting the service data (e.g., DVB programming) from the transmitter 103 to the receiver(s) 112, certain embodiments may include transmitting additional data relating to the time synchronization of the service components over the multiple transmission channels. For example, the transmitter 103 may designate a specific transmission channel (e.g., PLP) as the base PLP, and may use that PLP to transmit a base service component. Thus, the transmitter 103 may inform the receiver 112 of the base PLP to allow the receiver 112 to control its synchronization buffers 411-413 using the base PLP. As another example, the transmitter 103 may use a synchronization tolerance (or 'synchronization skew') value to synchronize the delivery of time-sliced data bursts over the PLPs. Accordingly, the transmitter 103 may inform the receiver 112 of the synchronization tolerance value to allow the receiver 112 to set a synchronization buffer delay timer and control the input / output of the service data from its synchronization buffers 411-413. Additionally, as described above in step 501, the receiver 112 may determine or select an operation point that determines which PLPs the receiver 112 should receive for the incoming service. Accordingly, in certain embodiments, the receiver 112 may transmit the operation point to the transmitter 103 and/or the transmitter 103 may transmit PLP identifier data to the receiver 112 allowing the receiver 112 to determine the desired PLPs for the receiver's operation point.
For these examples and other data transmissions, the transmitter 103 and receiver 112 may use any of the data communication techniques well known to one of ordinary skill. Additionally, this data may be transmitted as part of the service data, for example, within frame headers, metadata, or other signaling data transmitted with the service. Referring now to FIG. 7, an example is shown of L1-post dynamic signaling content (e.g., for the DVB-NGH broadcast standard) including additional parameters defined to indicate the base PLP, and the synchronization tolerance value used for scheduling PLP transmissions. Specifically, the two new parameters relating to time synchronization over multiple PLPs in this example are PLP_INIT_SYNC_BUF (16 bits) and PLP_INIT_SYNC_BASE_PLP_ID (8 bits). These two signaling fields may be populated by the transmitter 103 and used by the receiver 112 to synchronize the service component data transmissions over the multiple PLPs.
Referring now to FIG. 8, a flow diagram is shown illustrating a method of receiving and processing the L1-post dynamic signaling content parameters shown in FIG. 7. In step 801, the receiver 112 receives a selected operation point for the reception of the service from the transmitter 103. As discussed above in step 501, the operation point of the receiver 112 may determine the set of PLPs that the receiver 112 should receive for a service, and may correspond to a level of service quality (e.g., broadcast audio or video quality, temporal or spatial restrictions / enhancements, etc.).
In step 802, the receiver 112 begins to receive the service data over the multiple PLPs, and may receive and inspect the following fields of the L1-post dynamic signaling: PLP_ID, PLP_INIT_SYNC_BUF, and PLP_INIT_SYNC_BASE_PLP_ID.
In step 803, the receiver 112 uses the received operation point and the PLP_IDs from the different PLPs to determine which PLPs the receiver 112 should process and buffer for the service. As in the example of FIG. 3, a service may comprise three service components: a base component with audio and video, a first enhanced video component, and a second enhanced video component. Each service component may be transmitted over a different PLP having a different PLP_ID, as shown in the example signaling data below:
PLP1:
PLP_ID = 1
PLP_INIT_SYNC_BUF = 10
PLP_INIT_SYNC_BASE_PLP_ID = 1
PLP2:
PLP_ID = 2
PLP_INIT_SYNC_BUF = 510
PLP_INIT_SYNC_BASE_PLP_ID = 1
PLP3:
PLP_ID = 3
PLP_INIT_SYNC_BUF = 1010
PLP_INIT_SYNC_BASE_PLP_ID = 1
In this example, if level "1" is the selected operation point, then the receiver 112 may process data from PLP1 only, and may ignore data from PLP2 and PLP3 (e.g., audio and base layer of SVC video). If level "2" is the selected operation point, then the receiver 112 may process data from PLP1 and PLP2, and may ignore PLP3 (e.g., audio, base layer, and first enhancement layer of SVC video). If level "3" is the selected operation point, then the receiver 112 may process data from all three PLPs (e.g., audio, base layer, first enhancement layer, and second enhancement layer of SVC video). In step 804, the receiver 112 determines the synchronization tolerance (or 'synchronization skew') value set by the transmitter 103 based on the selected operation point. Using the above example signaling data, if level "1" is the selected operation point, then the appropriate synchronization tolerance value (i.e., PLP_INIT_SYNC_BUF) for the receiver 112 may be 10 milliseconds. If level "2" is the selected operation point, then the appropriate synchronization tolerance value for the receiver 112 may be 510 milliseconds. If level "3" is the selected operation point, then the appropriate synchronization tolerance value for the receiver 112 may be 1010 milliseconds. The signaled buffering delays for each PLP may correspond to the maximum required initial buffering delays for the corresponding PLP from the group of the associated PLPs, that is, the PLPs that convey service components of one service. As discussed above, the receiver 112 may use these values to set a synchronization buffer delay timer and control the input / output of the service data from its synchronization buffers 411-413. In certain examples, the receiver 112 may simply initialize the synchronization buffer delay timer equal to the highest PLP_INIT_SYNC_BUF value of its processed PLPs.
However, in other examples, the receiver may perform additional calculations to account for transmission delays, processing delays, etc., to determine the exact length of the synchronization buffer delay timer. In step 805, the receiver 112 identifies which of the PLPs processed by the receiver 112 is the base PLP (i.e., the PLP relative to which the synchronization tolerance value is calculated). In this example, the base PLP identifier (i.e., PLP_INIT_SYNC_BASE_PLP_ID) indicates that the PLP with a PLP_ID field equal to "1" is the base PLP. As discussed above, the receiver 112 may use the base PLP to initialize the synchronization buffer delay timer, which allows the PLP service data to be stored in the synchronization buffers 411-413.
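Using the example signaling data above, the receiver-side selection in steps 803-805 might be sketched as follows. This Python is illustrative only: the dictionary layout, the equation of operation point levels with PLP_IDs, and the rule of taking the largest PLP_INIT_SYNC_BUF are assumptions drawn from this example, not requirements of the signaling format:

```python
SIGNALING = [
    {"PLP_ID": 1, "PLP_INIT_SYNC_BUF": 10,   "PLP_INIT_SYNC_BASE_PLP_ID": 1},
    {"PLP_ID": 2, "PLP_INIT_SYNC_BUF": 510,  "PLP_INIT_SYNC_BASE_PLP_ID": 1},
    {"PLP_ID": 3, "PLP_INIT_SYNC_BUF": 1010, "PLP_INIT_SYNC_BASE_PLP_ID": 1},
]


def configure_reception(operation_point, signaling):
    """Pick the PLPs for the operation point, the base PLP identifier, and
    the synchronization buffer delay (largest PLP_INIT_SYNC_BUF, in ms)."""
    # In this example, operation point level N maps to PLP_IDs 1..N.
    selected = [s for s in signaling if s["PLP_ID"] <= operation_point]
    delay_ms = max(s["PLP_INIT_SYNC_BUF"] for s in selected)
    base_plp = selected[0]["PLP_INIT_SYNC_BASE_PLP_ID"]
    return [s["PLP_ID"] for s in selected], base_plp, delay_ms


plps, base, delay = configure_reception(2, SIGNALING)
# operation point "2": receive PLP1 and PLP2, base PLP 1, 510 ms delay
```

As the description notes, a real receiver might then pad the returned delay for transmission and processing latencies rather than use it directly.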
While the disclosure has been described with respect to specific examples, those skilled in the art will appreciate that there are numerous variations and permutations of the above described systems and techniques that fall within the spirit and scope of the disclosure. Additionally, numerous other embodiments, modifications and variations within the scope and spirit of the disclosure will occur to persons of ordinary skill in the art.

Claims

1. A method, comprising:
receiving at a communication terminal a synchronization tolerance value;
setting a synchronization buffer delay timer for an amount of time based on the received synchronization tolerance value;
receiving service data at the communication terminal, wherein said service data is received over a plurality of transmission channels;
storing the received service data in a plurality of synchronization buffers corresponding to the plurality of transmission channels;
determining that the synchronization buffer delay timer has elapsed; and
in response to the synchronization buffer delay timer elapsing, forwarding the service data from the plurality of synchronization buffers to a service data processing application.
2. The method of embodiment 1, wherein the plurality of transmission channels are different physical layer pipes (PLPs), and wherein the received service data is transmitted over the different PLPs during different time slices.
3. The method of embodiment 1, further comprising:
receiving at the communication terminal a selected operation point for the reception of the service data;
receiving a plurality of transmission channel data frames, wherein each transmission channel data frame comprises an operation point identifier and a channel synchronization tolerance value;
identifying a first transmission channel data frame wherein the operation point identifier of the first transmission channel data frame corresponds to the selected operation point for the reception of the service data; and
selecting the synchronization tolerance value as the channel synchronization tolerance value of the first transmission channel data frame.
4. The method of embodiment 3, wherein the selected operation point corresponds to a selected quality level of viewing for the service data.
5. The method of embodiment 3, further comprising:
selecting the plurality of transmission channels from a larger set of transmission channels available to the communication terminal for receiving the service data, wherein the plurality of transmission channels are selected based on the selected operation point.
6. The method of embodiment 1, further comprising:
receiving a base transmission channel identifier corresponding to a first transmission channel in the plurality of transmission channels;
receiving a portion of said service data in a first data transmission over the first transmission channel; and
starting the synchronization buffer delay timer in response to receiving the first data transmission over the first transmission channel.
7. The method of embodiment 1, wherein forwarding the service data from the plurality of synchronization buffers comprises:
determining a first bit rate for forwarding service data from a first synchronization buffer based on a size of a plurality of data bursts in the first synchronization buffer and a length of time between the arrival of the plurality of the data bursts in the first synchronization buffer.
8. An apparatus comprising:
at least one processor; and
at least one memory storing computer-executable instructions that, when executed, cause the apparatus at least to:
receive at the apparatus a synchronization tolerance value;
set a synchronization buffer delay timer for an amount of time based on the received synchronization tolerance value;
receive service data at the apparatus, wherein said service data is received over a plurality of transmission channels;
store the received service data in a plurality of synchronization buffers corresponding to the plurality of transmission channels;
determine that the synchronization buffer delay timer has elapsed; and
in response to the synchronization buffer delay timer elapsing, forward the service data from the plurality of synchronization buffers to a service data processing application.
9. The apparatus of embodiment 8, wherein the plurality of transmission channels are different physical layer pipes (PLPs), and wherein the received service data is transmitted over the different PLPs during different time slices.
10. The apparatus of embodiment 8, wherein the computer-executable instructions, when executed, cause the apparatus at least to:
receive a selected operation point for the reception of the service data;
receive a plurality of transmission channel data frames, wherein each transmission channel data frame comprises an operation point identifier and a channel synchronization tolerance value;
identify a first transmission channel data frame wherein the operation point identifier of the first transmission channel data frame corresponds to the selected operation point for the reception of the service data; and
select the synchronization tolerance value as the channel synchronization tolerance value of the first transmission channel data frame.
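The tolerance selection of embodiment 10 reduces to a lookup keyed by operation point. In this sketch, transmission channel data frames are modelled as `(operation_point_id, channel_sync_tolerance)` pairs, a simplifying assumption, since the real frames carry further fields:

```python
def select_sync_tolerance(channel_data_frames, selected_op_point):
    """Find the frame whose operation point identifier matches the
    selected operation point and return its channel synchronization
    tolerance (embodiment 10). Frame modelling is illustrative."""
    for op_point_id, tolerance in channel_data_frames:
        if op_point_id == selected_op_point:
            return tolerance
    raise LookupError("no frame matches the selected operation point")

# Operation point 2 (e.g. a higher quality level) signals a 100 ms tolerance.
tolerance_s = select_sync_tolerance([(1, 0.05), (2, 0.10)], 2)
```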
11. The apparatus of embodiment 10, wherein the selected operation point corresponds to a selected quality level of viewing for the service data.
12. The apparatus of embodiment 10, wherein the computer-executable instructions, when executed, cause the apparatus at least to:
select the plurality of transmission channels from a larger set of transmission channels available to the apparatus for receiving the service data, wherein the plurality of transmission channels are selected based on the selected operation point.
13. The apparatus of embodiment 8, wherein the computer-executable instructions, when executed, cause the apparatus at least to:
receive a base transmission channel identifier corresponding to a first transmission channel in the plurality of transmission channels;
receive a portion of said service data in a first data transmission over the first transmission channel; and
start the synchronization buffer delay timer in response to receiving the first data transmission over the first transmission channel.
14. The apparatus of embodiment 8, wherein forwarding the service data from the plurality of synchronization buffers comprises:
determining a first bit rate for forwarding service data from a first synchronization buffer based on a size of a plurality of data bursts in the first synchronization buffer and a length of time between the arrival of the plurality of the data bursts in the first synchronization buffer.
15. A method, comprising:
receiving service data at a communication terminal;
splitting the service data into a plurality of service components;
assigning each of the plurality of service components to a different transmission channel buffer;
determining a synchronization schedule for transmitting the plurality of service components from the plurality of transmission channel buffers to a receiver, the synchronization schedule based on a synchronization tolerance value;
transmitting the synchronization tolerance value to the receiver; and
transmitting the plurality of service components to the receiver in accordance with the synchronization schedule.
16. The method of embodiment 15, wherein transmitting the synchronization tolerance value to the receiver comprises:
transmitting a plurality of transmission channel data frames, each of the plurality of transmission channel data frames having a different operation point identifier and a different channel synchronization tolerance value.
17. The method of embodiment 16, wherein each of the different operation point identifiers corresponds to a selected quality level of viewing for the service data.
18. The method of embodiment 15, wherein transmitting the plurality of service components in accordance with the synchronization schedule comprises:
transmitting a first data unit of a first service component during a first time slice; and
transmitting a second data unit of a second service component during a second time slice different from the first time slice,
wherein the first data unit and the second data unit correspond to a same time window within the service data.
19. The method of embodiment 18, wherein the first service component corresponds to a base service component of the service data and the second service component corresponds to an enhanced video quality component of the service data.
20. The method of embodiment 15, wherein each of the plurality of transmission channel buffers is associated with a different physical layer pipe (PLP).
21. The method of embodiment 20, wherein a different scheduler is used for each PLP, and wherein transmitting the plurality of service components to the receiver in accordance with the synchronization schedule comprises:
at a first scheduler, determining a first transmission time for a first data unit having a first timestamp within the service data;
notifying a second scheduler of the first transmission time and the first timestamp; and
at the second scheduler, determining a transmission time range for a second data unit having a second timestamp within the service data, wherein determining the transmission time range comprises calculating a time difference between the first timestamp and the second timestamp and adding the synchronization tolerance value to said time difference.
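The inter-scheduler calculation of embodiment 21 can be sketched as below. This is one reading of the claim wording (the claim fixes the sum of timestamp difference and tolerance, but leaves the exact window endpoints open), and all names are illustrative:

```python
def transmission_time_range(first_tx_time, first_ts, second_ts, sync_tolerance):
    """Transmission time window for the second scheduler (embodiment 21):
    starting from the first unit's transmission time, the second unit may
    be sent within the timestamp difference plus the synchronization
    tolerance. One possible interpretation, not the definitive method."""
    time_difference = abs(second_ts - first_ts)
    max_offset = time_difference + sync_tolerance
    return (first_tx_time, first_tx_time + max_offset)

# First unit sent at t=10 s; timestamps 1 s apart; 0.5 s tolerance
# gives the second scheduler a window ending at t=11.5 s.
window = transmission_time_range(10.0, 0.0, 1.0, 0.5)
```

Bounding the second PLP's transmission this way keeps the two components close enough in time that the receiver's synchronization buffers (embodiment 8) can realign them.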
22. An apparatus comprising:
at least one processor; and
at least one memory storing computer-executable instructions that, when executed, cause the apparatus at least to:
receive service data;
split the service data into a plurality of service components;
assign each of the plurality of service components to a different transmission channel buffer;
determine a synchronization schedule for transmitting the plurality of service components from the plurality of transmission channel buffers to a receiver, the synchronization schedule based on a synchronization tolerance value;
transmit the synchronization tolerance value to the receiver; and
transmit the plurality of service components to the receiver in accordance with the synchronization schedule.
23. The apparatus of embodiment 22, wherein transmitting the synchronization tolerance value to the receiver comprises:
transmitting a plurality of transmission channel data frames, each of the plurality of transmission channel data frames having a different operation point identifier and a different channel synchronization tolerance value.
24. The apparatus of embodiment 23, wherein each of the different operation point identifiers corresponds to a selected quality level of viewing for the service data.
25. The apparatus of embodiment 22, wherein transmitting the plurality of service components in accordance with the synchronization schedule comprises:
transmitting a first data unit of a first service component during a first time slice; and
transmitting a second data unit of a second service component during a second time slice different from the first time slice,
wherein the first data unit and the second data unit correspond to a same time window within the service data.
26. The apparatus of embodiment 25, wherein the first service component corresponds to a base service component of the service data and the second service component corresponds to an enhanced video quality component of the service data.
27. The apparatus of embodiment 22, wherein each of the plurality of transmission channel buffers is associated with a different physical layer pipe (PLP).
28. The apparatus of embodiment 27, wherein a different scheduler is used for each PLP, and wherein transmitting the plurality of service components to the receiver in accordance with the synchronization schedule comprises:
at a first scheduler, determining a first transmission time for a first data unit having a first timestamp within the service data;
notifying a second scheduler of the first transmission time and the first timestamp; and
at the second scheduler, determining a transmission time range for a second data unit having a second timestamp within the service data, wherein determining the transmission time range comprises calculating a time difference between the first timestamp and the second timestamp and adding the synchronization tolerance value to said time difference.
PCT/FI2011/050787 2010-10-19 2011-09-14 Multiplexing data over multiple transmission channels with time synchronization WO2012052610A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US39442210P 2010-10-19 2010-10-19
US61/394,422 2010-10-19

Publications (1)

Publication Number Publication Date
WO2012052610A1 true WO2012052610A1 (en) 2012-04-26

Family

ID=45974742

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/FI2011/050787 WO2012052610A1 (en) 2010-10-19 2011-09-14 Multiplexing data over multiple transmission channels with time synchronization

Country Status (1)

Country Link
WO (1) WO2012052610A1 (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP1914932A1 (en) * 2006-10-19 2008-04-23 Thomson Licensing, Inc. Method for optimising the transmission of DVB-IP service information by partitioning into several multicast streams
WO2009078546A1 (en) * 2007-12-18 2009-06-25 Electronics And Telecommunications Research Institute Scalable video broadcasting apparatus and method over multiband satellite channel
US20100262708A1 (en) * 2009-04-08 2010-10-14 Nokia Corporation Method and apparatus for delivery of scalable media data

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
KONDRAD, L. ET AL.: "Optimizing the DVB-T2 system for mobile broadcast", INT. SYMP. ON BROADBAND MULTIMEDIA SYSTEMS AND BROADCASTING (BMSB'09), 24 March 2010 (2010-03-24) - 26 March 2010 (2010-03-26), SHANGHAI, CHINA, 5 pp. *
SCHIERL, T. ET AL.: "Mobile video transmission using scalable video coding", IEEE TRANS. ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY, vol. 17, no. 9, September 2007 (2007-09-01), pages 1204 - 1217 *

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 11833914

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 11833914

Country of ref document: EP

Kind code of ref document: A1