GB2551674A - Adaptive data streaming method with push messages control - Google Patents

Adaptive data streaming method with push messages control

Info

Publication number
GB2551674A
Authority
GB
United Kingdom
Prior art keywords
client
video segment
data
server
video
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
GB1714236.5A
Other versions
GB201714236D0 (en)
GB2551674B (en)
Inventor
Fablet Youenn
Bellessort Romain
Maze Frédéric
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Canon Inc
Original Assignee
Canon Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Canon Inc
Priority to GB1714236.5A
Priority claimed from GB1312561.2A (GB2516116B)
Publication of GB201714236D0
Publication of GB2551674A
Application granted
Publication of GB2551674B
Legal status: Active (current)
Anticipated expiration

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00 Network arrangements or protocols for supporting network services or applications
    • H04L67/50 Network services
    • H04L67/55 Push-based network services
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L65/00 Network arrangements, protocols or services for supporting real-time applications in data packet communication
    • H04L65/80 Responding to QoS
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L47/00 Traffic control in data switching networks
    • H04L47/10 Flow control; Congestion control
    • H04L47/25 Flow control; Congestion control with rate being modified by the source upon detecting a change of network conditions
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L47/00 Traffic control in data switching networks
    • H04L47/10 Flow control; Congestion control
    • H04L47/27 Evaluation or update of window size, e.g. using information derived from acknowledged [ACK] packets
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L65/00 Network arrangements, protocols or services for supporting real-time applications in data packet communication
    • H04L65/60 Network streaming of media packets
    • H04L65/61 Network streaming of media packets for supporting one-way streaming services, e.g. Internet radio
    • H04L65/612 Network streaming of media packets for supporting one-way streaming services, e.g. Internet radio for unicast
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L65/00 Network arrangements, protocols or services for supporting real-time applications in data packet communication
    • H04L65/60 Network streaming of media packets
    • H04L65/61 Network streaming of media packets for supporting one-way streaming services, e.g. Internet radio
    • H04L65/613 Network streaming of media packets for supporting one-way streaming services, e.g. Internet radio for the control of the source by the destination
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L65/00 Network arrangements, protocols or services for supporting real-time applications in data packet communication
    • H04L65/60 Network streaming of media packets
    • H04L65/65 Network streaming protocols, e.g. real-time transport protocol [RTP] or real-time control protocol [RTCP]
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L65/00 Network arrangements, protocols or services for supporting real-time applications in data packet communication
    • H04L65/60 Network streaming of media packets
    • H04L65/70 Media network packetisation
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L65/00 Network arrangements, protocols or services for supporting real-time applications in data packet communication
    • H04L65/60 Network streaming of media packets
    • H04L65/75 Media network packet handling
    • H04L65/762 Media network packet handling at the source
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20 Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/23 Processing of content or additional data; Elementary server operations; Server middleware
    • H04N21/234 Processing of video elementary streams, e.g. splicing of video streams, manipulating MPEG-4 scene graphs
    • H04N21/2343 Processing of video elementary streams, e.g. splicing of video streams, manipulating MPEG-4 scene graphs involving reformatting operations of video signals for distribution or compliance with end-user requests or end-user device requirements
    • H04N21/234354 Processing of video elementary streams, e.g. splicing of video streams, manipulating MPEG-4 scene graphs involving reformatting operations of video signals for distribution or compliance with end-user requests or end-user device requirements by altering signal-to-noise ratio parameters, e.g. requantization
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20 Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/23 Processing of content or additional data; Elementary server operations; Server middleware
    • H04N21/234 Processing of video elementary streams, e.g. splicing of video streams, manipulating MPEG-4 scene graphs
    • H04N21/2343 Processing of video elementary streams, e.g. splicing of video streams, manipulating MPEG-4 scene graphs involving reformatting operations of video signals for distribution or compliance with end-user requests or end-user device requirements
    • H04N21/234363 Processing of video elementary streams, e.g. splicing of video streams, manipulating MPEG-4 scene graphs involving reformatting operations of video signals for distribution or compliance with end-user requests or end-user device requirements by altering the spatial resolution, e.g. for clients with a lower screen resolution
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20 Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/23 Processing of content or additional data; Elementary server operations; Server middleware
    • H04N21/234 Processing of video elementary streams, e.g. splicing of video streams, manipulating MPEG-4 scene graphs
    • H04N21/2343 Processing of video elementary streams, e.g. splicing of video streams, manipulating MPEG-4 scene graphs involving reformatting operations of video signals for distribution or compliance with end-user requests or end-user device requirements
    • H04N21/23439 Processing of video elementary streams, e.g. splicing of video streams, manipulating MPEG-4 scene graphs involving reformatting operations of video signals for distribution or compliance with end-user requests or end-user device requirements for generating different versions
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20 Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/23 Processing of content or additional data; Elementary server operations; Server middleware
    • H04N21/24 Monitoring of processes or resources, e.g. monitoring of server load, available bandwidth, upstream requests
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20 Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/25 Management operations performed by the server for facilitating the content distribution or administrating data related to end-users or client devices, e.g. end-user or client device authentication, learning user preferences for recommending movies
    • H04N21/262 Content or additional data distribution scheduling, e.g. sending additional data at off-peak times, updating software modules, calculating the carousel transmission frequency, delaying a video stream transmission, generating play-lists
    • H04N21/26258 Content or additional data distribution scheduling, e.g. sending additional data at off-peak times, updating software modules, calculating the carousel transmission frequency, delaying a video stream transmission, generating play-lists for generating a list of items to be played back in a given order, e.g. playlist, or scheduling item distribution according to such list
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43 Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/438 Interfacing the downstream path of the transmission network originating from a server, e.g. retrieving MPEG packets from an IP network
    • H04N21/4383 Accessing a communication channel
    • H04N21/4384 Accessing a communication channel involving operations to reduce the access time, e.g. fast-tuning for reducing channel switching latency
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/60 Network structure or processes for video distribution between server and client or between remote clients; Control signalling between clients, server and network components; Transmission of management data between server and client, e.g. sending from server to client commands for recording incoming content stream; Communication details between server and client
    • H04N21/63 Control signaling related to video distribution between client, server and network components; Network processes for video distribution between server and clients or between remote clients, e.g. transmitting basic layer and enhancement layers over different transmission paths, setting up a peer-to-peer communication via Internet between remote STB's; Communication protocols; Addressing
    • H04N21/637 Control signals issued by the client directed to the server or network components
    • H04N21/6377 Control signals issued by the client directed to the server or network components directed to server
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/80 Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N21/83 Generation or processing of protective or descriptive data associated with content; Content structuring
    • H04N21/845 Structuring of content, e.g. decomposing content into time segments
    • H04N21/8456 Structuring of content, e.g. decomposing content into time segments by decomposing the content in the time domain, e.g. in time segments
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L69/00 Network arrangements, protocols or services independent of the application payload and not provided for in the other groups of this subclass
    • H04L69/24 Negotiation of communication capabilities

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Databases & Information Systems (AREA)
  • Information Transfer Between Computers (AREA)

Abstract

A method for managing streaming over communication networks. A server pushes an initial video segment to the client to initiate the streaming. In response to feedback from the client, a later video segment is sent by the server at an optimum quality level. The quality level may correspond to a video segment of a particular resolution or bit rate. The feedback may be an acknowledgement or window_update message from the client. The arrangement may allow a fast start for video streaming wherein a low resolution segment is initially sent and the quality is subsequently adapted based on the determined conditions to send later segments at a higher quality level. The optimum/suitable quality may be based on the acknowledgement message or on the timing of receipt of the acknowledgement message. The initial video segment may be pushed to the client before the Media Presentation Description (MPD) is transmitted.

Description

ADAPTIVE DATA STREAMING METHOD WITH PUSH MESSAGES
CONTROL
The present invention relates to data streaming over HTTP communication networks.
More particularly, the present invention relates to adaptive data streaming for satisfying network constraints. The invention may have applications in DASH networks. DASH (acronym for Dynamic Adaptive Streaming over HTTP) is a communication standard allowing media content streaming (typically audio/video content) over HTTP. According to DASH, media presentations are described as XML files, called “media presentation description” files (MPD in what follows). MPD files provide client devices with information allowing them to request and control the delivery of media contents.
The general principle of media streaming over HTTP is illustrated in Figure 3. Most of the new protocols and standards for adaptive media streaming over HTTP are based on this principle. A media server 300 streams data to a client 310. The media server stores media presentations. For example, media presentation 301 contains audio and video data. Audio and video may be interleaved in a same file. The way the media presentation is built is described in what follows with reference to Figure 4a. The media presentation is temporally split into small independent and consecutive temporal segments 302a, 302b and 302c, such as MP4 segments, that can be addressed and downloaded independently. The downloading addresses (HTTP URLs) of the media content for each of these temporal segments are provided by the server to the client. Each temporal segment of the audio/video media content is associated with one HTTP address.
The media server also stores a manifest file document 304 (described in what follows with reference to Figure 5) that describes the content of the media presentation including the media content characteristics (e.g. the type of media: audio, video, audio-video, text etc.), the encoding format (e.g. the bitrate, the timing information etc.), the list of temporal media segments and associated URLs. Alternatively, the document contains template information that makes it possible to rebuild the explicit list of the temporal media segments and associated URLs. This document may be written using the extensible Markup Language (XML).
The manifest file is sent to the client. Upon receipt of the manifest file during a step 305, the client is informed of the association between temporal segments of the media contents and HTTP addresses. Also, the manifest file provides the client with the information concerning the content of the media presentation (interleaved audio/video in the present example). The information may include the resolution, the bit-rate etc.
Based on the information received, the HTTP client module 311 of the client can emit HTTP requests 306 for downloading the temporal segments of the media content described in the manifest file. The server’s HTTP responses 307 convey the requested temporal segments. The HTTP client module 311 extracts the temporal media segments from the responses and provides them to the input buffer 307 of the media engine 312. Finally, the media segments can be decoded and displayed during respective steps 308 and 309.
The media engine 312 interacts with the DASH control engine 313 in order to have the requests for the next temporal segments issued at the appropriate time. The next segment is identified from the manifest file. The time at which the request is issued depends on whether or not the reception buffer 307 is full. The DASH control engine 313 controls the buffer in order to prevent it from being overloaded or completely empty.
The generation of the media presentation and the manifest file is described with reference to Figure 4a. During steps 400 and 401, audio and video data are acquired. Next, the audio data are compressed during step 402. For example, the MP3 standard can be used. Also, the video data are compressed in parallel during step 403. Video compression algorithms such as MPEG4, MPEG/AVC, SVC, HEVC or scalable HEVC can be used. Once compression of the audio and video data is performed, audio and video elementary streams 404, 405 are available. The elementary streams are encapsulated during a step 406 into a global media presentation. For example, the ISO BMFF standard (or the extension of the ISO BMFF standard to AVC, SVC, HEVC, the scalable extension of HEVC etc.) can be used for describing the content of the encoded audio and video elementary streams as a global media presentation. The encapsulated media presentation 407 thereby obtained is used for generating, during step 408, an XML manifest file 409. Several representations of video data 401 and audio data 400 can be acquired, compressed, encapsulated and described in the media presentation 407.
For the specific case of the MPEG/DASH streaming protocol illustrated in Figure 4b, the manifest file is called “Media Presentation Description” (or “MPD” file). The root element of the file is the MPD element, which contains attributes applying to the whole presentation plus DASH information like profile or schema. The media presentation is split into temporal periods represented by a Period element. The MPD file 410 contains all the data related to each temporal period. By receiving this information, the client is aware of the content for each period of time. For each Period 411, AdaptationSet elements are defined. A possible organization is to have one or more AdaptationSet per media type contained in the presentation. An AdaptationSet 412 related to video contains information about the different possible representations of the encoded videos available at the server. Each representation is described in a Representation element. For example, a first representation can be a video encoded with a spatial resolution of 640x480 and compressed with a bit rate of 500 kbits/s. A second representation can be the same video but compressed with a bit rate of 250 kbits/s. Each video can then be downloaded by HTTP requests if the client knows the HTTP addresses related to the video. The association between the content of each representation and the HTTP addresses is done by using an additional level of description: the temporal segments. Each video representation is split into temporal segments 413 (typically a few seconds long). Each temporal segment comprises content stored at the server that is accessible via an HTTP address (URL or URL with one byte range). Several elements can be used for describing the temporal segments in the MPD file: SegmentList, SegmentBase or SegmentTemplate. In addition, a specific segment is available: the initialization segment. The initialization segment contains MP4 initialization information (if the video has been encapsulated using the ISO BMFF or extensions thereof) that describes the encapsulated video stream. For example, it helps the client to instantiate the decoding algorithms related to the video. The HTTP addresses of the initialization segment and the media segments are indicated in the MPD file.
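By way of illustration only, a minimal manifest reflecting the MPD > Period > AdaptationSet > Representation > SegmentList hierarchy described above could be parsed as follows. The element names follow the MPD schema, but the URLs, durations and bit rates are invented for this sketch and do not come from the figures.

```python
# Illustrative sketch: a minimal MPD following the hierarchy described above.
# All attribute values (durations, bit rates, segment names) are made up.
import xml.etree.ElementTree as ET

MPD_EXAMPLE = """<?xml version="1.0" encoding="UTF-8"?>
<MPD xmlns="urn:mpeg:dash:schema:mpd:2011" type="static"
     mediaPresentationDuration="PT30S" minBufferTime="PT2S">
  <BaseURL>http://cdn1.example.com/</BaseURL>
  <Period duration="PT30S">
    <AdaptationSet mimeType="video/mp4">
      <Representation id="video-low" bandwidth="250000" width="640" height="480">
        <SegmentList duration="2">
          <Initialization sourceURL="video_low_init.mp4"/>
          <SegmentURL media="video_low_seg1.mp4"/>
          <SegmentURL media="video_low_seg2.mp4"/>
        </SegmentList>
      </Representation>
      <Representation id="video-high" bandwidth="500000" width="640" height="480">
        <SegmentList duration="2">
          <Initialization sourceURL="video_high_init.mp4"/>
          <SegmentURL media="video_high_seg1.mp4"/>
          <SegmentURL media="video_high_seg2.mp4"/>
        </SegmentList>
      </Representation>
    </AdaptationSet>
  </Period>
</MPD>
"""

NS = {"dash": "urn:mpeg:dash:schema:mpd:2011"}
root = ET.fromstring(MPD_EXAMPLE)
# List the alternative representations the client can choose between.
for rep in root.findall(".//dash:Representation", NS):
    print(rep.get("id"), rep.get("bandwidth"), "bits/s")
```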
In Figure 5, there is shown an exemplary MPD file. Two media are described in the MPD file shown. The first one is an English audio stream and the second one is a video stream. The English audio stream is introduced using the AdaptationSet tag 500. Two alternative representations are available for this audio stream:
• the first representation 501 is an MP4 encapsulated elementary audio stream with a bit-rate of 64000 bits/sec. The codec to be used for handling this elementary stream (after MP4 parsing) is defined in the standard by the attribute codecs having the value ‘mp4a.0x40’. The stream is accessible via a request at the address formed by the concatenation of the BaseURL elements in the segment hierarchy: <BaseURL>7657412348.mp4</BaseURL>, which is a relative URL. The <BaseURL> defined at the top level in the MPD element, ‘http://cdn1.example.com/’ or ‘http://cdn2.example.com/’ (two servers are available for streaming the same content), provides the absolute part of the URL. The client can then request the English audio stream by sending a request to the address ‘http://cdn1.example.com/7657412348.mp4’ or to the address ‘http://cdn2.example.com/7657412348.mp4’.
• the second representation 502 is an MP4 encapsulated elementary audio stream with a bit-rate of 32000 bits/sec.
The adaptation set 503 related to the video contains six representations. These representations contain videos with different spatial resolutions (320x240, 640x480, 1280x720) and with different bit rates (from 256000 to 2048000 bits per second). For each of these representations, a respective URL is associated through a BaseURL element. The client can therefore choose between these alternative representations of the same video according to different criteria like estimated bandwidth, screen resolution etc.
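The choice between alternative representations described above can be sketched as a simple selection rule. The structure below is a hypothetical simplification (a real client combines more criteria), and the bit rates are illustrative values of the same order of magnitude as the example:

```python
# Sketch: pick, among alternative representations of one AdaptationSet, the
# highest bit rate that fits both the estimated bandwidth and the screen.
def select_representation(representations, estimated_bandwidth_bps, screen_width):
    """representations: list of dicts with 'bandwidth' and 'width' keys."""
    candidates = [r for r in representations
                  if r["bandwidth"] <= estimated_bandwidth_bps
                  and r["width"] <= screen_width]
    if not candidates:
        # Fall back to the lowest bit rate if nothing fits.
        return min(representations, key=lambda r: r["bandwidth"])
    return max(candidates, key=lambda r: r["bandwidth"])

reps = [{"id": "r1", "bandwidth": 256_000, "width": 320},
        {"id": "r2", "bandwidth": 512_000, "width": 640},
        {"id": "r3", "bandwidth": 2_048_000, "width": 1280}]
print(select_representation(reps, estimated_bandwidth_bps=800_000,
                            screen_width=1280)["id"])
```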
The current DASH version does not provide description of Region-Of-Interest within the manifest files. Several approaches have been proposed for such description.
In particular, components of media contents can be described using SubRepresentation elements. These elements describe the properties of one or several components that are embedded in a Representation. In Figure 6, there is shown an example of a DASH manifest file describing tile tracks as components of a video. For the sake of conciseness and clarity, only one Period 600 is represented. However, subsequent Period elements would be organized in the same fashion. In part 601, a first adaptation set element is used for describing a base layer of the scalable video. For example, the video is encoded according to SVC or scalable HEVC. In part 602, a second adaptation set is used for describing the highest resolution layer of the scalable video. For non-scalable video, only the second adaptation set 602 would be present, without dependency to the base layer, i.e. without the dependencyId attribute. In this second adaptation set 602, a single representation 603 is described, namely the one that corresponds to the displayable video. The representation is described as a list of segments 610 with respective URLs for client requests.
Thus, the representation depends on another representation identified by ‘R1’, actually the base layer representation from the first adaptation set 601. The dependency forces the streaming client to first request the current segment for the base layer before getting the current segment for the enhancement layer. This cannot be used to express dependencies with respect to tile tracks because the tracks that would be referenced this way would be automatically loaded by the client. This is something to be avoided, since it is up to the user to select the tiles of interest for him at any time during the media presentation. Therefore, in order to indicate the dependencies between the composite track and the tile tracks, the SubRepresentation element is used. The displayable video is described as a list of sub-representations 604 to 608. Each sub-representation actually represents a track in the encapsulated MP4 file. Thus, there is one sub-representation per tile (four tiles in the present example) plus one sub-representation for the composite track 608. Each sub-representation is described by a content component element 614 to 618 in order to indicate whether it corresponds to a tile track 614, 615, 616 and 617 or to the composite track 618. The Role descriptor type available in DASH/MPD is used with a specific scheme for tiling. The Role descriptor also indicates the position of the tile in the full-frame video. For example, the component 614 describes the tile located at the top left of the video (1:1 for first in row and first in column). The dimensions of the tiles, width and height, are specified as attributes of the sub-representation as made possible by MPD. Bandwidth information can also be put here for helping the DASH client in the determination of the number of tiles and the selection of the tiles, according to its bandwidth. Concerning the composite track, it has to be signalled in a different way than the tile tracks since it is mandatory to be able, at the end of the download, to build a video stream that can be decoded. To that purpose, two elements are added to the description. Firstly, the descriptor in the related content component 618 indicates that it is the main component among all the components. Secondly, in the sub-representation, a new attribute ‘required’ is added in order to indicate to the client that the corresponding data have to be requested. All requests for the composite track or for one or more of the tile tracks are computed from the URL provided in the segment list 610 (one per time interval). In the example, “URL_X” combined with “BaseURL” at the beginning of the MPD provides a complete URL which the client can use for performing an HTTP GET request. With this request, the client would get the data for the composite track and all the data for all the tile tracks. In order to optimize the transmission, instead of this request, the client can first request the segment index information (typically the “ssix” and/or “sidx” information), using the data available from the index_range attribute 620. This index information makes it possible to determine the byte ranges for each of the components. The DASH client can then send as many HTTP GET requests with appropriate byte ranges as selected tracks (including the required composite track).
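The byte-range strategy just described can be illustrated with a short sketch. The byte ranges, track names and URL below are hypothetical stand-ins for values that a real client would extract from the “sidx”/“ssix” index data:

```python
# Sketch: once the segment index has given one byte range per track, the
# client issues one HTTP GET with a Range header per selected track.
# Byte ranges and URL are invented; a real client parses them from the index.
def build_range_requests(segment_url, track_ranges, selected_tracks):
    """track_ranges maps a track name to an inclusive (first_byte, last_byte) pair."""
    requests = []
    for track in selected_tracks:
        first, last = track_ranges[track]
        requests.append({
            "method": "GET",
            "url": segment_url,
            "headers": {"Range": f"bytes={first}-{last}"},
        })
    return requests

track_ranges = {"composite": (0, 1023), "tile1": (1024, 8191), "tile2": (8192, 15359)}
# The composite track is always required; here the user selected tile1 only.
for req in build_range_requests("http://cdn1.example.com/URL_1.mp4",
                                track_ranges, ["composite", "tile1"]):
    print(req["headers"]["Range"], req["url"])
```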
When starting a streaming session, a DASH client requests the manifest file. Once it is received, the client analyzes the manifest file and selects a set of AdaptationSets suitable for its environment. Next, the client selects in the MPD, within each AdaptationSet, a Representation compatible with its bandwidth, decoding and rendering capabilities. Next, it builds in advance the list of segments to be requested, starting with the initialization information for the media decoders. When the initialization information is received by the decoders, they are initialized and the client requests the first media data and buffers a minimum data amount before actually starting the display.
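The start-up sequence just described can be summarised by the following sketch. The network layer is stubbed out so that the example is self-contained, and the file names, segment durations and buffer threshold are assumptions rather than values from the standard:

```python
# Sketch of the DASH start-up sequence: request the manifest, select a
# representation, fetch initialization data, then buffer a minimum amount of
# media before playback starts. fetch() stands in for an HTTP GET.
def fetch(url):
    return f"<data for {url}>"

def start_session(mpd_url, min_buffer_seconds=4.0):
    manifest = fetch(mpd_url)               # 1. manifest request (parsing omitted here)
    representation = {                      # 2. selection normally derived from the MPD
        "init": "video_init.mp4",
        "segments": [("video_seg%d.mp4" % i, 2.0) for i in range(1, 6)],
    }
    decoder_state = fetch(representation["init"])   # 3. initialization segment first
    buffer, buffered_seconds = [], 0.0
    for url, duration in representation["segments"]:
        buffer.append(fetch(url))           # 4. request the first media segments
        buffered_seconds += duration
        if buffered_seconds >= min_buffer_seconds:
            break                           # 5. enough data buffered: display can start
    return decoder_state, buffer

print(len(start_session("http://cdn1.example.com/manifest.mpd")[1]),
      "segments buffered before playback")
```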
These multiple requests/responses may introduce delay in the start-up of the streaming session. The risk is for service providers to see their clients leaving the service without starting to watch the video. The time between the initial HTTP request for the first media data chunk, performed by the client, and the time when that media data chunk actually starts playing is commonly called the start-up delay. It depends on the network round-trip time but also on the size of the media segments.
Server push is a useful feature for decreasing web resource loading time. This feature is discussed with reference to Figures 1a to 1e.
In Figure 1b, there is shown that in HTTP/2.0 exchanges without server push, a request must be sent for every resource needed: resources R1 to R4 and sub-resources A to I. However, when the push feature is used by servers, as illustrated in Figure 1c, the number of requests is limited to elements R1 to R4. Elements A to I are “pushed” by the server to the client, thereby making the associated requests unnecessary.
Thus, as illustrated in Figures 1b and 1c, when servers use the push feature, the number of HTTP round-trips (request + response) necessary for loading a resource with its sub-resources is reduced. This is particularly interesting for high-latency networks such as mobile networks. HTTP is the protocol used for sending web resources, typically web pages. HTTP implies a client and a server:
• The client sends a request to the server;
• The server replies to the client’s request with a response that contains a representation of the web resource.
Requests and responses are messages comprising various parts, notably the HTTP headers. An HTTP header comprises a name along with a value. For instance, “Host: en.wikipedia.org” is the “Host” header, and its value is “en.wikipedia.org”. It is used for indicating the host of the queried resource (for instance, the Wikipedia page describing HTTP is available at http://en.wikipedia.org/wiki/HTTP). HTTP headers appear in client requests and server responses. HTTP/2.0 makes it possible to exchange requests/responses through streams. A stream is created inside an HTTP/2.0 connection for every HTTP request and response. Frames are exchanged within a stream in order to convey the content and headers of the requests and responses. HTTP/2.0 defines a limited set of frames with different meanings, such as:
- HEADERS: which is provided for transmission of HTTP headers
- DATA: which is provided for transmission of HTTP message content
- PUSH_PROMISE: which is provided for announcing pushed content
- PRIORITY: which is provided for setting the priority of a stream
- WINDOW_UPDATE: which is provided for updating the value of the control flow window
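The stream/frame model introduced above can be illustrated with a minimal sketch. This is a simplified in-memory representation for explanation only, not the binary wire format defined by HTTP/2.0:

```python
# Simplified model of HTTP/2.0 streams and typed frames, for illustration.
from dataclasses import dataclass
from enum import Enum

class FrameType(Enum):
    HEADERS = "HEADERS"              # transmission of HTTP headers
    DATA = "DATA"                    # transmission of HTTP message content
    PUSH_PROMISE = "PUSH_PROMISE"    # announcement of pushed content
    PRIORITY = "PRIORITY"            # priority of a stream
    WINDOW_UPDATE = "WINDOW_UPDATE"  # update of the control flow window

@dataclass
class Frame:
    stream_id: int       # the stream carrying one request/response exchange
    frame_type: FrameType
    payload: dict

request = [
    Frame(1, FrameType.HEADERS, {":method": "GET", ":path": "/index.html",
                                 "host": "en.wikipedia.org"}),
]
response = [
    Frame(1, FrameType.HEADERS, {":status": "200"}),
    Frame(1, FrameType.DATA, {"body": b"<html>...</html>"}),
]
print([f.frame_type.value for f in request + response])
```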
Push by servers has been introduced in HTTP/2.0 for allowing servers to send unsolicited web resource representations to clients. Web resources such as web pages generally contain links to other resources, which themselves may contain links to other resources. To fully display a web page, all the linked and sub-linked resources generally need to be retrieved by a client. This incremental discovery may lead to a slow display of a web page, especially on high latency networks such as mobile networks.
When receiving a request for a given web page, the server may know which other resources are needed for the full processing of the requested resource. By sending the requested resource and the linked resources at the same time, the server reduces the load time of the web page. Thus, using the push feature, a server may send additional resource representations at the time a given resource is requested from it.
With reference to the flowchart of Figure 1e, an exemplary mode of operation of a server implementing the push feature is described.
During step 100, the server receives an initial request. Next, the server identifies during step 101 the resources to push as part of the response and starts sending the content response during step 102. In parallel, the server sends push promise messages to the client during step 103. These messages identify the other resources that the server is planning to push. These messages are sent in order to let the client know in advance which pushed resources will be sent. In particular, this reduces the risk that a client sends a request for a resource that is being pushed at the same time. In order to further reduce this risk, a server should send a push promise message before sending any part of the response referring to the resource described in the push promise. This also allows clients to cancel the promised resources if clients do not want those resources. Next, the server sends the response and all promised resources during step 104. The process ends during a step 105.
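A highly simplified sketch of the server-side flow of Figure 1e is given below. Frames are modelled as plain tuples and the table of sub-resources is hypothetical; the point is only that the push promises are emitted before the response body that references them:

```python
# Sketch of the server-side push flow (steps 100 to 105 of Figure 1e).
SUB_RESOURCES = {"/index.html": ["/style.css", "/app.js"]}   # step 101: what to push

def handle_request(path, next_stream_id=2):
    frames = [(1, "HEADERS", {":status": "200"})]            # step 102: response starts
    promised = []
    for sub in SUB_RESOURCES.get(path, []):                  # step 103: push promises
        frames.append((1, "PUSH_PROMISE",
                       {"promised_stream_id": next_stream_id, ":path": sub}))
        promised.append((next_stream_id, sub))
        next_stream_id += 2
    frames.append((1, "DATA", f"<contents of {path}>"))      # step 104: response body...
    for stream_id, sub in promised:                          # ...and promised resources
        frames.append((stream_id, "HEADERS", {":status": "200"}))
        frames.append((stream_id, "DATA", f"<contents of {sub}>"))
    return frames                                            # step 105: end of process

for stream_id, frame_type, _ in handle_request("/index.html"):
    print(stream_id, frame_type)
```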
The flowchart of Figure 1d illustrates the process on the client side.
When the client has identified a resource to retrieve from the server, it first checks during a step 106 whether or not the corresponding data is already in its cache memory. In case the resource is already in the cache memory (Yes), it is retrieved from it during a step 107. Cached data may be either data retrieved from previous requests or data that was pushed by the server previously. In case it is not in the cache memory (No), the client sends a request during step 108 and waits for the server’s response. Upon receipt of a frame from the server, the client checks during step 109 whether or not the frame corresponds to a push promise. If the frame corresponds to a push promise (Yes), during step 110, the client processes the push promise. The client identifies the resource to be pushed. If the client does not wish to receive the resource, the client may send an error message to the server so that the server does not push that resource. Otherwise, the client stores the push promise until receiving the corresponding pushed content. The push promise is used so that the client does not request the promised resource while the server is pushing it. In case the frame does not correspond to a push promise (No), it is checked, during step 111, whether or not the frame is a data frame related to pushed data. In case it is related to pushed data (Yes), the client processes the pushed data during step 112. The pushed data is stored within the client cache. The client sends the response data to the application for further processing. In case the frame is not a data frame related to pushed data (No), it is checked, during step 113, whether it corresponds to a response received from the server. In case the frame corresponds to a response from the server (Yes), the response is processed during step 114. Otherwise (No), it is checked during step 115 whether or not the frame identifies the end of a response. If it does (Yes), the process is terminated during step 116. Otherwise, the process goes back to step 109.
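The client-side processing of Figure 1d may be sketched as a small dispatch loop. The frame names and payload layout below are simplified assumptions used only to mirror the steps of the flowchart:

```python
# Sketch of the client-side loop of Figure 1d: consult the cache, otherwise
# request the resource and dispatch incoming frames until the end of response.
def handle_frames(frames, cache, promised):
    for frame_type, payload in frames:
        if frame_type == "PUSH_PROMISE":                # step 110
            promised[payload["path"]] = None            # remember, do not re-request it
        elif frame_type == "PUSH_DATA":                 # step 112
            cache[payload["path"]] = payload["data"]    # pushed data goes to the cache
        elif frame_type == "RESPONSE_DATA":             # step 114
            yield payload["data"]                       # response goes to the application
        elif frame_type == "END_OF_RESPONSE":           # steps 115/116
            return

def get_resource(path, cache, server_frames):
    if path in cache:                                   # steps 106/107
        return cache[path]
    promised = {}                                       # step 108: request sent (implicit)
    return b"".join(handle_frames(server_frames, cache, promised))

cache = {}
frames = [("PUSH_PROMISE", {"path": "/style.css"}),
          ("RESPONSE_DATA", {"data": b"<html>...</html>"}),
          ("PUSH_DATA", {"path": "/style.css", "data": b"body{}"}),
          ("END_OF_RESPONSE", {})]
print(get_resource("/index.html", cache, frames), sorted(cache))
```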
Thus, it appears that the client receives the response and the promised resources. The promised resources are therefore generally stored in the client cache while the response is used by the application such as a browser displaying a retrieved web page. When a client application requests one of the resources that were pushed, the resource is immediately retrieved from the client cache, without incurring any network delay.
The storage of pushed resources in the cache is controlled using the cache control directives. The cache control directives are also used for controlling the caching of the responses. These directives are in particular applicable to proxies: any resource, pushed or not, may be stored by proxies or by the client only.
Figure 1a is a graph of a set of resources owned by a server, with their relationships. The set of resources is intertwined: R1, R2, R3 and R4 are resources that need to be downloaded together to be properly processed by a client. In addition, sub-resources A to I are defined. These sub-resources are related to 1, 2 or 3 resources. For instance, A is linked to R1 and C is linked to R1, R2 and R4.
Figure 1b, already discussed hereinabove, shows an HTTP exchange without using the server push feature: the client requests R1; next it discovers R2, A, B, C and D and requests them. After receiving them, the client requests R3, R4, F and G. Finally, the client requests the H and I sub-resources. This requires four round-trips to retrieve the whole set of resources.
Figure 1c, already discussed hereinabove, illustrates the HTTP exchange using the feature of pushing directly connected sub-resources by the server. After the client requests R1, the server sends R1 and pushes A, B, C and D. The client identifies R2 and requests it. The server sends R2 and pushes F and G. Finally, the client identifies R3 and R4 and requests these resources. The server sends R3 and R4 and pushes H and I. This process requires three round-trips to retrieve the whole set of resources.
In order to decrease the loading time of a set of resources, typically a web page and its sub-resources, HTTP/2.0 allows exchanging multiple requests and responses in parallel. As illustrated in Figure 2, a web page may require the download of several resources, like JavaScript files, images etc. During an initial HTTP exchange 200, the client retrieves an HTML file. This HTML file contains links to two JavaScript files (JS1, JS2), two images (IMG1, IMG2), one CSS file and one HTML file. During an exchange 201, the client sends a request for each file. The order given in the exchange 201 of Figure 2 is based on the web page order: the client sends a request as soon as a link is found. The server then receives requests for JS1, CSS, IMG1, HTML, IMG2 and JS2 and processes these requests in that order. The client then retrieves these resources in that order. HTTP priorities make it possible for the client to state which requests are more important and should be treated sooner than other requests. A particular use of priorities is illustrated in exchange 202. JavaScript files are assigned the highest priority, CSS and HTML files are assigned a medium priority and images are assigned a low priority. This approach allows receiving blocking files, or files that may contain references to other resources, sooner than other files. In response, the server is expected to try to send the JavaScript files first, the CSS and HTML files afterwards and the images at the end, as described in exchange 202. Servers are not mandated to follow client priorities.
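The prioritisation used in exchange 202 can be sketched as follows. The numeric priority scale is an assumption of this example (a higher value meaning more important), not a value mandated by HTTP/2.0:

```python
# Sketch of exchange 202: JavaScript highest, CSS/HTML medium, images low,
# so that the server can try to send the resources in that order.
PRIORITY_BY_TYPE = {"js": 3, "css": 2, "html": 2, "img": 1}   # assumed scale

def prioritise(requests):
    """requests: list of (name, resource_type); returns the expected send order."""
    return sorted(requests,
                  key=lambda item: PRIORITY_BY_TYPE[item[1]], reverse=True)

page_requests = [("JS1", "js"), ("CSS", "css"), ("IMG1", "img"),
                 ("HTML", "html"), ("IMG2", "img"), ("JS2", "js")]
print([name for name, _ in prioritise(page_requests)])
```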
In addition to priorities, HTTP/2.0 provides that the amount of data being exchanged simultaneously can be controlled. Client and server can specify the amount of data they can buffer, on a per-connection basis and on a per-stream basis. This is similar to TCP congestion control: a window size, which specifies an available buffer size, is initialized to a given value; each time the emitter sends data, the window size is decremented; the emitter must stop sending data whenever further sending would take the window size below zero. The receiver receives the data and sends messages to acknowledge that the data was received and removed from the buffer; each such message contains the amount of data that was removed from the buffer; the window size is then increased by that amount and the emitter can resume sending data.
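The window accounting described above can be sketched in a few lines; the window sizes are arbitrary illustrative values:

```python
# Sketch of flow-control accounting: decrement the window on each send, stop
# at zero, and re-open the window when the receiver acknowledges consumed data.
class FlowControlledSender:
    def __init__(self, initial_window=65_535):
        self.window = initial_window

    def send(self, payload_size):
        if payload_size > self.window:
            return False               # sending would take the window below zero: wait
        self.window -= payload_size    # decrement for every data frame sent
        return True

    def on_window_update(self, consumed):
        self.window += consumed        # receiver freed that much buffer space

sender = FlowControlledSender(initial_window=16_000)
print(sender.send(10_000), sender.send(10_000))   # the second send must wait
sender.on_window_update(10_000)                   # receiver acknowledged 10 000 bytes
print(sender.send(10_000))                        # sending can resume
```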
In view of the above, it appears that DASH is based on the assumption that the client leads the streaming since the client can generally select the best representation of the content for the purpose of the application it is performing. For instance, a client may know whether to request HD or SD content based on its form-factor and screen resolution.
Server-based streaming is typically done using RTP. Contrary to DASH, RTP does not use HTTP and cannot directly benefit from the web infrastructures, in particular proxies and caches. Web-socket-based media streaming has the same drawbacks. With HTTP/1.1, server-based streaming cannot be easily implemented since the server can generally only answer client requests. With HTTP/2.0, in particular with the introduction of the push feature, DASH-based servers can lead the streaming. Thus, servers can use their knowledge of the characteristics of the content they are streaming for optimizing the user experience. For instance, a server may push a film as SD (due to limited bandwidth) but advertisements as HD, since advertisements take only an additional limited amount of bandwidth. Another example is the case of a server that performs a fast start with a low-resolution video and switches to the best possible representation once the bandwidth is well estimated.
In order to enable a server to lead the streaming, one approach is to let the server push data (in particular DASH data) as it prefers. The client then uses whatever data is available to display the video. The server typically announces the push of several segments at once. The server then sends the segments in parallel or successively. A problem that occurs is that the client and the server may not know whether the promised data will be transmitted and received at the desired time: the client may not know when and in which order the video segments will be sent, and the server may not know whether its video segment ordering decisions match the client’s constraints.
Thus, there is a need for enhancing data streaming especially in the context of DASH-based communications.
The present invention lies within this context.
According to a first aspect of the invention there is provided a method of streaming media data by a server device to a client device, the method comprising the following steps: - receiving, from the client device, a request relating to first media data, - identifying second media data to be sent to the client device without having been requested, - transmitting to said client device, in response to said request, data relating to said first media data, and at least one announcement message respectively identifying said second media data, and wherein the method further comprises the following steps: - defining by the server device an order of transmission of the second media data, - transmitting information related to the order of transmission with said announcement messages, said information enabling the client device to determine the order of transmission defined by the server.
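Purely as an illustration of these steps (the message structure and field names below are assumptions of the sketch, not a format defined by the invention or by HTTP/2.0), a server could attach its chosen transmission order to the announcement messages as follows:

```python
# Sketch of the first aspect: in response to a request for first media data,
# announce the second (non-requested) media data together with information
# describing the order of transmission defined by the server.
def build_response(requested_segment, push_plan):
    """push_plan: list of (segment_name, priority) in the server's chosen order."""
    messages = [{"type": "RESPONSE", "segment": requested_segment}]
    for rank, (segment, priority) in enumerate(push_plan):
        messages.append({"type": "PUSH_PROMISE",       # announcement message
                         "segment": segment,
                         "priority": priority,          # order-of-transmission information
                         "transmission_rank": rank})
    return messages

plan = [("seg2_low.mp4", 16), ("seg3_low.mp4", 8), ("seg2_high.mp4", 4)]
for msg in build_response("seg1_low.mp4", plan):
    print(msg)
```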
The request relating to first media data may concern first media data and/or other data related to this first media data.
The second media data may be associated with said first media data, for example by the server device.
Embodiments of the invention provide a lightweight mechanism for server-guided streaming. Embodiments may be implemented in the context of DASH networks.
Server devices can make content recommendations to the client devices. Also, they can optimize the network usage.
Embodiments of the invention are compatible with existing HTTP/2.0 features. These features can advantageously be used for implementing embodiments of the invention.
Network performance is generally increased.
For example, the order of transmission of said second media is defined according to priority values according to the client device, the media data having the highest priority value being transmitted first.
Said priority values may be defined according to the HTTP/2.0 protocol.
According to embodiments, at least one priority value is associated with a network bandwidth estimation mechanism, and the method further comprises the following steps: - transmitting to the client device second media data with a priority value associated with said mechanism, - receiving from the client device, in response to said second media data, at least one control flow message, and - estimating an available bandwidth based on said at least one control flow message received.
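As a sketch of this bandwidth-estimation embodiment (with fabricated timestamps and frame sizes, and a deliberately simple estimator), the server could derive an estimate from the timing of the client's control flow messages as follows:

```python
# Sketch: the server pushes data frames of known (and different) sizes and
# estimates bandwidth from when the client's control flow acknowledgements
# (e.g. WINDOW_UPDATE messages) come back. All values below are fabricated.
def estimate_bandwidth(sent_frames, window_updates):
    """sent_frames: list of (send_time_s, size_bytes);
       window_updates: list of (receive_time_s, acknowledged_bytes)."""
    first_send = sent_frames[0][0]
    last_ack_time = window_updates[-1][0]
    acknowledged = sum(size for _, size in window_updates)
    elapsed = last_ack_time - first_send
    return 8 * acknowledged / elapsed if elapsed > 0 else float("inf")  # bits/s

sent = [(0.00, 20_000), (0.05, 40_000)]
acks = [(0.20, 20_000), (0.45, 40_000)]
print(round(estimate_bandwidth(sent, acks)), "bits/s estimated")
```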
For example, the server device transmits said second media data according to a plurality of data frames having respective and different sizes.
The method may further comprise defining by the server device, based on said bandwidth estimation, an updated order of transmission of the second media data.
According to embodiments said request from the client device comprises a request for receiving a description file related to media data comprising said first media data, the description file containing description information concerning said first media data, the method further comprising determining the second non-requested media data based on said description file.
For example, requested first media data are video segments.
The streaming may be performed according to the DASH standard.
For example, the method further comprises the following steps: - receiving, from the client device, an ordering update request, - defining, based on said ordering update request, a new order of transmission of the second media data and updating the information related to said new order of transmission of the second media data, and - transmitting said second media data to the client according to said updated information related to the order of transmission.
The method may further comprise transmitting to the client device, an ordering update confirmation message.
For example, said updated order is defined for the second media data for which transmission to the client device has not been initiated at the time of receipt of said ordering update request.
For example, said ordering update request comprises an ordering value for at least part of second media data.
According to embodiments, the order of transmission of said second media is defined according to priority values, and when a priority value is updated for at least part of a first media data, the priority values for at least part of second media data to be sent to the client device without having been requested and associated with said at least part of first media data, are updated accordingly.
For example, said first and second media are associated according to at least one of a temporal relationship, a spatial relationship and a quality relationship.
According to embodiments: - said second media data comprises enhancement data for enhancing quality of the first media data, and - when a priority value is updated for a media data of an enhancement layer, priority values are updated for all the media data of said enhancement layer.
For example, the first and second media data comprise video temporal segments, and the starting time of the enhancement media data is based on information related to a video content of the first media data.
For example, said information related to a video content of the first media data is stored in said description file.
For example, said order of transmission is based at least on decoding relationships between first and second media data.
For example, said order of transmission is based at least on statistical popularities of the media data.
For example, said order of transmission is based at least on a playing time of the media data on the client device’s end.
For example, said order of transmission is based at least on an estimated transmission time of the media data.
For example, said order of transmission is based at least on user-defined interests for the media data.
The method may further comprise the following steps: - receiving, from the client device, control messages, said control messages enabling the server device to identify media data currently being played, - defining by the server, based on said control messages, an updated order of transmission of the second media data, and - transmitting said second media data to the client according to said updated order of transmission.
The method may further comprise a step of transmitting to the client device, an ordering update confirmation message.
For example, said control messages relate to a use of a buffer memory of the client device, said buffer memory storing media data for them to be played by the client.
For example, the server device keeps record of first requested media data sent, and identification of the second media data is performed based on said use of the buffer memory and said record.
For example, said order of transmission information is transmitted within said announcement messages.
For example, said order of transmission information is transmitted within dedicated messages after said announcement messages.
According to a second aspect of the invention, there is provided a method of accessing by a client device, media data streamed by a server device, the method comprising the following steps: - transmitting, to the server device, a request relating to first media data, - receiving from said server device, in response to said request, data relating to said first media data, and at least one announcement message respectively identifying second media to be sent to the client device without having been requested, wherein the method further comprises the following step: - receiving information related to an order of transmission of the second media data with said announcement messages, said information enabling the client device to determine an order of transmission of the second media data defined by the server.
The method may further comprise determining by the client device whether the order of transmission of the second media data defined by the server device satisfies streaming constraints at the client device’s end, and if said constraints are not satisfied, transmitting, to the server device, an ordering update request.
For example, the order of transmission of said second media data is defined according to priority values according to the client device, the media data having the highest priority value being transmitted first.
For example, said priority values are defined according to the HTTP/2.0 protocol.
According to embodiments, at least one priority value is associated with a network bandwidth estimation mechanism, the method further comprises the following steps: - receiving from the server device second media data with a priority value associated with said mechanism, - transmitting to said server device, in response to said second media data, at least one control flow message, thereby enabling the server device to estimate an available bandwidth based on said at least one control flow message transmitted.
For example, the client device receives said second media data according to a plurality of data frames having respective and different sizes.
For example, an updated order of transmission of the second media data is defined, by the server device, based on said bandwidth estimation.
For example, said request from the client device comprises a request for receiving a description file related to media data comprising said first media data, the description file containing description information concerning said first media data, the method further comprising determining the second non-requested media data based on said description file.
For example, requested first media data are video segments.
For example, said streaming is performed according to the DASH standard.
The method may further comprise receiving said second media data from the server device according to updated information related to a new order of transmission of the second media data defined by the server device.
The method may further comprise a step of receiving from the server device, an ordering update confirmation message.
According to embodiments, said updated order is defined for the second media data for which transmission from the server device has not been initiated at the time of receipt of said ordering update request by the server device.
According to embodiments, said ordering update request comprises an ordering value for at least part of the second media data.
According to embodiments, the order of transmission of said second media is defined according to priority values, and when a priority value is updated for at least part of a first media data, the priority values for at least part of second media data to be sent to the client device without having been requested and associated with said at least part of first media data, are updated accordingly.
For example, said first and second media data are related according to at least one of a temporal relationship, a spatial relationship and a quality relationship.
According to embodiments: - said second media data comprise enhancement data for enhancing quality of the first media data, and - when a priority value is updated for at least part of first media data of an enhancement layer, priority values are updated for all the media data of said enhancement layer.
For example, the first and second media data comprise video temporal segments, and the starting time of the enhancement media data is based on information related to a video content of the first media data.
According to embodiments, said information related to a video content of the first media data is stored in said description file.
According to embodiments, said order of transmission is based at least on decoding relationships between first and second media data.
According to embodiments, said order of transmission is based at least on statistical popularities of the media data.
According to embodiments, said order of transmission is based at least on a playing time of the media data on the client device’s end.
According to embodiments, said order of transmission is based at least on an estimated transmission time of the media data.
According to embodiments, said order of transmission is based at least on user-defined interests for the media data.
The method may comprise the following steps: - transmitting, to the server device, control messages, said control messages enabling the server device to identify a media data currently being played, and - receiving said second media data from the server device according to an updated order of transmission defined, by the server device, based on said control messages.
The method may comprise a step of receiving from the server device, an ordering update confirmation message.
For example, said control messages relate to a use of a buffer memory of the client device, said buffer memory storing media data for them to be played by the client device.
According to embodiments, the server device keeps record of first media data sent, and identification of the media being currently played is performed based on said use of the buffer memory and said record.
For example, said order of transmission information is received within said announcement messages.
For example, said order of transmission information is received within dedicated messages after said announcement messages.
According to a third aspect of the invention, there is provided a method of managing, by a proxy server, data exchanges between client devices and server devices, the method comprising the following steps: - receiving, from a server implementing a method according to the first aspect, media data to be retransmitted to a client device, - determining, based on the order of transmission of the media data, a retransmission priority for the media data, and - performing retransmission of the media data received to the client device, based on said retransmission priority determined.
The method may further comprise storing said media data received, based on said retransmission priority determined.
The method may further comprise the following steps: - receiving, from a client device implementing a method according to the second aspect, an ordering update request, - updating said retransmission priority according to said ordering update request, if said request is related to a media data to be retransmitted, and - performing retransmission of the media data according to the updated retransmission priority.
The method may further comprise the following steps: - receiving from a first client device, a request to a first server device, for media data, wherein said media data is stored by the proxy server for retransmission to a second client device from a second server device, - determining priority values respectively associated with said media data by said first and second server devices, - updating said priority values according to respective streaming constraints for the first and second client devices, and - retransmitting said media data to said first and second client devices according to said updated priority values, wherein said first and second server devices implement a method according to the first aspect and said first and second client devices implement a method according to the second aspect.
The method may further comprise sending to the first and second server devices update notifications relating to the updated priority values.
According to a fourth aspect of the invention there is provided a method of streaming data between a server device and a client device comprising: - performing a method according to the first aspect by a server device, and - performing a method according to the second aspect by a client device.
The method may further comprise performing, by a proxy server, a method according to the third aspect.
According to a fifth aspect of the invention there are provided computer programs and computer program products comprising instructions for implementing methods according to the first, second and/or third aspect(s) of the invention, when loaded and executed on computer means of a programmable apparatus.
According to a sixth aspect of the invention, there is provided a server device configured for implementing methods according to the first aspect.
According to a seventh aspect of the invention, there is provided a client device configured for implementing methods according to the second aspect.
According to an eighth aspect of the invention, there is provided a proxy device configured for implementing methods according to the third aspect.
According to a ninth aspect of the invention, there is provided a system comprising at least one server device according to the sixth aspect and at least one client device according to the seventh aspect.
The system may further comprise a proxy device according to the eighth aspect.
The objects according to the second, third, fourth, fifth, sixth, seventh, eighth and ninth aspects of the invention provide at least the same advantages as those provided by the method according to the first aspect of the invention.
Other features and advantages of the invention will become apparent from the following description of non-limiting exemplary embodiments, with reference to the appended drawings, in which, in addition to Figures 1a to 6: - Figures 7a and 7b illustrate media segment reordering according to embodiments; - Figure 8 is a flowchart of exemplary steps performed by servers according to embodiments; - Figure 9 is a flowchart of exemplary steps performed by clients according to embodiments; - Figure 10 is a flowchart of exemplary steps performed by proxies according to embodiments; - Figure 11 illustrates bandwidth measurement according to embodiments; - Figure 12 illustrates video playing initialization according to embodiments; and - Figure 13 is a schematic illustration of devices according to embodiments.
In what follows, embodiments of the invention are described in the context of DASH-based networks implementing the HTTP 2.0 protocol. The data streamed is, for example, video data. Embodiments of the invention are not limited to DASH networks. A server device of a communication network that streams data to a client device implements a push feature according to which it can transmit data elements to the client without explicit requests from the client for the data elements transmitted. The server can indicate in its push promises, by which it announces transmission of the not explicitly requested data elements, ordering information concerning the order in which the server intends to transmit the data elements. The order of the data elements may be defined using priority values, for example the priority values according to HTTP/2.0.
Upon receipt of the push promises, the client device can determine in advance the order of transmission intended by the server, thereby enabling the client to react to the proposed order in case it does not match its own desired order. For example, the client device can update the priority values and send the updated priority values to the server. The server can thus change the transmission ordering based on the new priority values in order to better match the client’s needs. The server can take the updated priorities into account for future data transmissions.
According to embodiments, the client may request a full reordering or a partial reordering of the transmission of the data elements to the server.
Full reordering is described with reference to Figure 7a. A client requests, during a step 700, a Media Presentation Description (MPD hereinafter) from a server. The server retrieves the MPD to send back to the client and identifies corresponding data elements to push during a step 701. In the example of Figure 7a, the server identifies “Data 1.1”, “Data 1.2” and “Data 1.3” as data elements to push. These elements are for example data segments. Element “Data X.1” represents the base layer for data X, element “Data X.2” represents the enhancement layer for data X and “Data X.3” represents the additional enhancement layer for data X. The server defines a specific order of transmission for the data elements. The server associates respective priority values with the PUSH_PROMISE frames to be sent to the client for announcing the upcoming push data elements. The server then sends the PUSH_PROMISE frames “P1.1”, “P1.2” and “P1.3” with the associated priorities and the MPD during a step 702. Next, shortly after sending the MPD and the push promises, during a step 703, the server sends to the client a data frame corresponding to the “Data 1.1” element and PUSH_PROMISE messages “P2.1”, “P2.2” and “P2.3” respectively corresponding to the elements “Data 2.1”, “Data 2.2” and “Data 2.3”, which are segments following “Data 1.1”, “Data 1.2” and “Data 1.3” in the transmission order defined. In parallel to the receipt of the data frame and the push promises of step 703, the client decides, after receipt of the MPD and the “P1.1”, “P1.2” and “P1.3” PUSH_PROMISE frames, that the enhancement layer “Data 1.2” is of lower priority compared to the additional enhancement layer “Data 1.3”. Thus, the client sends a priority update frame to lower the priority of “Data 1.2” during a step 704. Upon receipt of the priority update request, the server changes the schedule of the transmission during a step 705. Hence, transmission of “Data 1.2” is postponed until after “Data 1.3” is transmitted. In addition, the server uses the MPD to link the segments associated with “Data 1.2”: it identifies “Data 2.2” and lowers its priority as well.
Partial reordering is described with reference to Figure 7b. Steps 710 to 714 of Figure 7b are substantially the same as steps 700 to 704 of Figure 7a. After receipt of the priority update frame, the server behaviour differs from step 705 previously described. During step 715, the server has already started transmission of “Data 1.2” and proceeds further with the transmission. For that segment, there is no change in the priority. The server nevertheless updates the priority of the connected segments, namely “Data 2.2” in the present example. In order to announce the fact that the priority change has been taken into account, the server may send a priority update message for “Data 2.2”. The client can thus be informed of the change.
Embodiments of the invention may be implemented in use cases wherein servers can push high quality video parts far enough in advance so that the whole part of the video can be played as high quality. For instance, the video can be split into a part 1, played as low quality, a part 2, played as high quality, and a part 3, played as low quality. The bandwidth between the client and server allows real-time streaming of the low quality but not the high quality. In that case, the server may interleave part 1 with the enhancement of part 2. Once part 1 has been played, the enhanced part 2 is also available and the server sends the low quality part 2, to be played as high quality jointly with the enhancement of part 2. Thus, the server makes sure that the whole part 2 is played as high quality. Quality flickering, which disturbs the user experience, can be alleviated and quality switching only occurs at a limited number of moments. The server is in the best position to know when to switch to a different quality level since it knows the video content.
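Purely by way of illustration, the interleaving described above could be sketched in Python as follows; the segment identifiers are invented for the example and do not correspond to any segment naming used by the embodiment.

```python
# Illustrative interleaving of part-1 base segments with part-2 enhancement
# segments; all segment names are assumptions made for this sketch.
from itertools import zip_longest

part1_base = ["part1_base_1", "part1_base_2", "part1_base_3"]
part2_enhancement = ["part2_enh_1", "part2_enh_2", "part2_enh_3"]
part2_base = ["part2_base_1", "part2_base_2", "part2_base_3"]

schedule = []
# While part 1 is played in low quality, its base layer is interleaved with
# the enhancement layer of part 2.
for base, enhancement in zip_longest(part1_base, part2_enhancement):
    if base is not None:
        schedule.append(base)
    if enhancement is not None:
        schedule.append(enhancement)
# Once part 1 has been played, the base layer of part 2 is sent and can be
# decoded together with the already buffered enhancement layer.
schedule.extend(part2_base)

print(schedule)
```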
Figure 8 is a flowchart of steps performed by a server implementing a push-based DASH media streaming according to embodiments. Steps 800 to 812 describe the general principles. Steps 820 to 827 more specifically deal with the management of the priority feedback from the client.
During a step 800, the client sends a request R to the server. This request identifies a specific media, typically by referring to an MPD file. Next, the server performs an iterative process comprising steps 801 to 810. The process comprises sending data according to a defined order. The order of transmission is updated according to the client’s feedback. Once the data is sent, it is received and played by the client. Next, the server identifies new data to send, and so on.
The first iteration starts with step 801, during which the data to be sent is identified. For the first performance of the iterative process, a fast start approach may be used in order to enable the client to start video playing as quickly as possible. In addition, the server may also identify the subdivision of the media into chapters. In case the server knows that the client generally navigates using chapters, the server may select not only the segments that correspond to the beginning of the media but also the segments corresponding to the start of the first chapters in the media. After the first performance of the iteration, the server may also detect that the connection can support the transmission of a higher quality representation of the media. Thus, the server may identify when the resolution or quality switch should be done.
Once the server has identified a list of segments to push, the server defines a transmission order for these segments. The transmission order is used for computing initial priority values for each pushed segment during a step 802. The ordering may be based on several parameters. A first parameter may be the relationships between the different segments: for example, some segments must be available for correctly decoding other segments. A second parameter may be the popularity of video segments, which may be gathered from past statistics. As an example, with YouTube URLs, specific times in a video may be addressed. When clicking on the links associated with these URLs, only the video data needed to start playing at the specified time is retrieved. In addition, if a video is divided into chapters, the beginning of each chapter is generally retrieved by users more often than segments between chapter starts. A third parameter may be the timeline: the priority of a video segment that is closer to being played is higher than the priority of a video segment that is to be played later. A fourth parameter may be the estimated time spent to actually transmit the segment: when a video segment is large, it takes a long time to be transmitted and transmission should therefore start as soon as possible.
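The four parameters above can, for instance, be combined into a single sort key per segment. The following Python sketch shows one possible combination; the field names, the weighting, the gap of 16 between consecutive priority values and the convention that a lower priority value means earlier transmission are assumptions chosen for illustration and are not mandated by the embodiment.

```python
# Sketch of an initial priority computation (step 802) combining the four
# parameters described above: decoding dependencies, popularity, timeline and
# estimated transmission time. All names and weights are assumptions.
def initial_priorities(segments, now, bandwidth_bps):
    """segments: list of dicts with 'is_dependency', 'play_time', 'size_bytes'
    and 'popularity'. Returns (segment, priority) pairs in transmission order."""
    def sort_key(seg):
        # Segments that others depend on (e.g. base layers) come first.
        dependency_rank = 0 if seg["is_dependency"] else 1
        # Then segments whose playback deadline is hardest to meet: time left
        # before playback minus the estimated transmission time.
        slack = (seg["play_time"] - now) - seg["size_bytes"] * 8 / bandwidth_bps
        # Popular segments (e.g. chapter starts) are slightly favoured.
        return (dependency_rank, slack, -seg["popularity"])

    ordered = sorted(segments, key=sort_key)
    # Leave gaps between consecutive priority values so that later updates can
    # insert a value in between without renumbering everything.
    return [(seg, (rank + 1) * 16) for rank, seg in enumerate(ordered)]
```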
In case two segments have identical priorities, the corresponding data frames can be interleaved during transmission.
In case regions of interest are identified in the media content, if the bandwidth is not large enough for a high quality representation but is large enough for a low quality representation, the server may select an enhancement layer only for the region of interest.
Once the priorities are computed, the server sends PUSH_PROMISE frames containing the priority values during step 803. Identification of all segments is not needed for starting transmission of PUSH_PROMISE frames. In case an MPD is to be sent for the segments to be pushed (step 804), the MPD is sent (step 805). The segment transmission starts in parallel during step 806.
Once PUSH_PROMISE frames are received by the client, the server may receive priority update changes and then change its transmission schedule accordingly (steps 807 to 808 and steps 820 to 828). While sending segments, the server awaits receipt of priority change messages. In case a priority change message is received (step 807), the server reorders the segments accordingly and continues the segment transmission (step 808). Once all segments are sent, the server restarts the iterative process in order to continue streaming the media until the end of the media. When the end of a media is reached (step 809), the server checks whether or not it should automatically start streaming another media (step 810). In case another media should be streamed (Yes), the server identifies the new media to stream (step 811) and restarts the process from step 801. In case no new media should be streamed, the process is stopped (step 812).
The management of the priority feedback from the client starts with the receipt of a priority update change message during step 820. The following steps may also be performed in case the client cancels a segment push: this case may be seen in practice as equivalent to assigning the lowest priority to that segment.
Upon receipt of the priority update change message, the server identifies the related segment during step 821. The server then proceeds with the reordering of the segment transmission (steps 822, 823). If the segment has already been transmitted, the process ends. If the segment is being transmitted, depending on the server implementation, it may refuse to change the transmission (for example because it is too complex) or it may actually reschedule the remaining data to be sent.
The rescheduling of the data may be performed as follows. The server stores a list of video segments to push (and/or video segments that are being pushed). This list is ordered according to the priorities set by the server. The server then sets the new priority value for the segment. The list is then reordered and the corresponding video segment transmission is made earlier or later accordingly.
Once the video segment is reordered, the server may actually decide to apply this priority change to other related video segments. If a client raised the priority of a video segment which is part of an enhancement layer, the server may raise the priority of all the segments of this enhancement layer. Conversely, if the client lowers the priority of a base layer video segment, the priority of all segments temporally related to this segment may be lowered. This process is described in steps 824 to 827. Based on the MPD and the rescheduled video segment, the server identifies a list of related segments (step 824). The relationship may be temporal, spatial, quality-based, etc. The MPD may be enhanced in order to better show the potential relationships. In particular, when the priority of an initialization segment (which is necessary to play more than one video segment) is lowered or raised, all related segments may be rescheduled. This can be the case as well for base layer segments and enhancement segments. For each identified related segment, the server tests whether or not the transmission of the related segment should be changed (step 825). In case it should be changed, the server computes the new priority value for each segment (step 826) and reschedules the segment transmission accordingly (step 827). The new priority value may be computed by adding to the old value the difference between the new priority value received during step 820 and the initial priority value of the segment identified during step 821. The process stops when each related segment has been tested (step 828).
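A minimal sketch of this rescheduling, assuming the server keeps its push list as a list of dictionaries holding the current and initial priority value of each segment, is given below; the data layout, the `related_ids` argument and the convention that a lower priority value is transmitted earlier are illustrative assumptions, not the actual server implementation.

```python
# Sketch of steps 821 to 828: apply a priority update to one segment and
# propagate the same delta to its related segments.
def apply_priority_update(push_list, segment_id, new_priority, related_ids):
    """push_list: list of dicts with 'id', 'priority' and 'initial_priority',
    ordered by current priority (lowest value transmitted first)."""
    by_id = {seg["id"]: seg for seg in push_list}
    target = by_id.get(segment_id)
    if target is None:
        return push_list  # segment already transmitted: nothing to reschedule

    delta = new_priority - target["initial_priority"]
    target["priority"] = new_priority

    # Propagate the same change to related segments (same layer, same
    # initialization segment, temporally linked segments, ...).
    for rid in related_ids:
        related = by_id.get(rid)
        if related is not None:
            related["priority"] = related["priority"] + delta

    # Reorder the transmission list; lower priority values are sent first.
    push_list.sort(key=lambda seg: seg["priority"])
    return push_list
```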
The server may also receive flow control messages. These messages may enable the server to identify what the client is currently playing. When some additional buffer space is available on the client’s end, it may be inferred that some data has been removed from the buffer, typically the oldest data. If the server keeps a history of the data sent, the server is able to identify which data has been removed. Thus, provided the server knows the client’s cache ordering, the server can have knowledge of which video segments the client is currently playing. This ordering may be based on the MPD, which makes it possible to order the cached data according to the timeline. A server may then detect client time skipping, for instance. The server may react by quickly sending the start of the next chapter in advance so that the client can continue skipping video chapters.
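One possible way of performing this inference, assuming the server records the sent segments in timeline order together with their sizes and that the client frees buffer space oldest-first, is sketched below; the function and field names are hypothetical.

```python
# Sketch: infer which segment the client is likely playing from the buffer
# space it has released so far. The oldest-first assumption and all names are
# illustrative only.
def infer_play_position(sent_history, freed_bytes):
    """sent_history: list of (segment_id, size_bytes) in timeline order.
    freed_bytes: cumulative buffer space released by the client so far.
    Returns the id of the segment assumed to be currently playing."""
    consumed = 0
    for segment_id, size in sent_history:
        consumed += size
        if consumed > freed_bytes:
            return segment_id  # not yet fully removed: likely being played
    return sent_history[-1][0] if sent_history else None
```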
It should be noted that the sending of a PUSH_PROMISE frame with priorities may be done in various ways. A PUSH_PROMISE frame must relate to an open stream which is initiated by the client. According to embodiments, the initial stream made by the client during step 800 may always be left open. According to other embodiments, a PUSH_PROMISE frame is sent within a stream opened by the server. In this case, the client considers the PUSH_PROMISE frame as if it were sent on the parent client-initiated stream. Thus, it can compute the right headers of the virtual request corresponding to the particular PUSH_PROMISE frame.
According to other embodiments, a priority message is sent jointly with a PUSH_PROMISE. A first possibility is to send it as a header within the PUSH_PROMISE frame. Another possibility is to send a PRIORITY frame with the stream ID reserved by the corresponding PUSH_PROMISE frame. A third possibility is to send the PUSH_PROMISE frame, then the corresponding HEADERS frame (to open the stream) and then the PRIORITY frame on this newly opened stream.
In order to further control the client’s buffer, the server may send a new representation of a segment cached by the client. Within the headers sent as part of this new representation, HTTP cache directives may be used for requesting the client to actually remove the segment, for instance by marking it as not cacheable. This may make it possible to recover buffer space on the client’s end. HTTP/2.0 flow control may be used. The server can then push additional data. A server may send priority values for each video segment. The server may also send priority values for specific segments. In case the server did not send a priority value for a current PUSH_PROMISE frame, the client can compute a priority value from the last priority value sent by the server. For instance, the client may increment the priority value each time a new PUSH_PROMISE frame is received with no priority value associated with it. Hence, PUSH_PROMISE frames can be grouped so that updating the priority of a specific segment will also update the priorities of all segments of the group.
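The following sketch illustrates one possible client-side bookkeeping for such implied priorities, where a PUSH_PROMISE without an explicit priority value is grouped with the last promise that carried one; the increment rule, the default value and all names are assumptions made for the example, not the behaviour required by the embodiment.

```python
# Sketch of implied priorities for PUSH_PROMISE frames received without an
# explicit priority value; the increment rule and default are assumptions.
class ImpliedPriorityTracker:
    def __init__(self):
        self.anchor_id = None       # stream id of the last explicit promise
        self.anchor_priority = 0    # assumed default before any explicit value
        self.offset = 0
        self.groups = {}            # anchor stream id -> promised stream ids

    def on_push_promise(self, stream_id, explicit_priority=None):
        """Return the priority value to associate with the promised stream."""
        if explicit_priority is not None or self.anchor_id is None:
            # Start a new group anchored on this promise.
            self.anchor_id = stream_id
            self.anchor_priority = explicit_priority if explicit_priority is not None else 0
            self.offset = 0
            self.groups[stream_id] = [stream_id]
            return self.anchor_priority
        # No priority sent: derive one from the last explicit value.
        self.offset += 1
        self.groups[self.anchor_id].append(stream_id)
        return self.anchor_priority + self.offset

    def group_of(self, anchor_stream_id):
        # A priority update on the anchor segment can be propagated to the
        # whole group at once.
        return self.groups.get(anchor_stream_id, [])
```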
The process on the client’s side is described with reference to Figure 9.
The client should be able to play the content available at a given time. However, the client has to cope with potential buffer limitations and processing time. The client has to check whether or not the transmission ordering proposed by the server matches the memory space available in the client’s buffer and matches the content currently played by the client.
During a first step 900, the client connects to the server and requests an MPD file. The client then retrieves the MPD file during a step 901 and waits (step 902) for the receipt of data. When data is received, the client checks (step 903) whether the data is a push promise. In case a push promise has been received, this means that a new video segment is being sent by the server. The client processes the push promise. In particular, the client may validate the priority values proposed by the server during step 904. In case the client wishes to change the priority values (step 905) for the current segment or another received segment, the client computes a new priority value and sends it to the server (step 906).
In case the client receives video data (step 907), the client links the video segment to the MPD file (step 908) and stores the video data (step 909). Linking the video data to the MPD file makes it possible for the client to retrieve the video segment when it will be further used for decoding the video (step 911). This may also provide efficient storage of the video data (step 909), for example if contiguous video segments are grouped.
The buffer storage constraints may further change the priority. Thus, the client may check again whether a priority value has to be changed and may communicate with the server if needed (steps 905, 906).
Once the client is ready to start or continue playing video (step 910), the client retrieves from its cache the video segments for the next time slot (step 911) and decodes and plays the video (step 912). As part of step 911, the client may query its cache in order to know which video segments are available. By default, the client may use all video segments available, in particular all enhancement segments if any. The client may let the server select the content: generally speaking, all segments should be used by the client. If some segments cannot be used jointly (like English and French audio tracks), the client should dismiss the unused segments in the first place. It should be noted that not all clients may get access to the cache state: web applications in particular do not usually have access to the web browser cache. In such a case, the server may directly send the list of pushed segments to the web application client. For instance, this information may be exchanged from the server to the client using a WebSocket connection.
As the video is played and decoded, the corresponding video segments may be removed from the buffer. Hence, the client updates its available buffer size using a WINDOW_UPDATE frame. The client may keep video segments that have been recently played in order to enable the user to rewind the video during a limited period of time. The flow control update mechanism may also be used when the user does a fast forward or time skip. The client may remove old stored video content to make room for new content and announce this change to the server using a WINDOW_UPDATE frame. When the server receives the WINDOW_UPDATE frame, the server may be able to compute which video segments were removed and then identify what the client is actually playing.
In what follows, step 904 is described in more detail.
The client holds a list of all push-promised video segments. This list is ordered according to the priority information found in the push promise frames. First, the list is checked for potential frozen-video issues. Based on an estimation of the available bandwidth and the ordered video segment list, the transmission beginning and end times of each segment can be estimated. Based on these times, it may be tested whether each video segment will be available at the time it should be used for video playing. If a promised video segment is expected to be delivered after its corresponding video playing use, its priority should be increased. Thus, the video segment is moved up in the push-promised video segment list order. In order to compute the exact priority value, a search is made for the position in the video segment list that makes it possible to have the video segment delivered on time and that is the closest to the current video segment position. The priority is then set to a value between the priorities of the video segments in the list that are before and after the video segment's new position.
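Step 904 may, for example, be implemented as follows; the segment fields, the bandwidth estimate and the convention that a lower priority value means earlier transmission are assumptions made for this sketch.

```python
# Sketch of step 904: detect a promised segment that would arrive after its
# playback time and compute a new priority placing it just early enough.
def reposition_late_segment(promised, index, bandwidth_bps, now):
    """promised: list of dicts with 'size_bytes', 'play_time' and 'priority',
    ordered by the announced transmission order. Returns a new priority value
    for promised[index], or None if it is expected on time."""
    def transmit_time(seg):
        return seg["size_bytes"] * 8 / bandwidth_bps

    segment = promised[index]
    arrival = now + sum(transmit_time(s) for s in promised[: index + 1])
    if arrival <= segment["play_time"]:
        return None  # expected on time, no priority change needed

    # Search, from the current position upwards, for the closest position at
    # which the segment would still be delivered before its playback time.
    for position in range(index - 1, -1, -1):
        arrival = (now + sum(transmit_time(s) for s in promised[:position])
                   + transmit_time(segment))
        if arrival <= segment["play_time"]:
            before = promised[position - 1]["priority"] if position > 0 else 0
            after = promised[position]["priority"]
            return (before + after) / 2  # value between its new neighbours
    return 0  # even the first position is late: use the most urgent value
```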
Other factors may also be used by the client for changing the video segment priorities. For instance, if the client is expecting to do some chapter switching, the client may actually increase the priority of all video segments that start the chapters, in particular the corresponding initialization segments.
According to embodiments, the client-side flow control comprises disabling the per-stream flow control and keeping only a per-connection flow control. The per-connection window size defines the maximum amount of video that a client may actually store at any given time. The client and the server may negotiate at initialization time and during the connection in order to decrease or increase this window size. If the server wants to push some HD content, the server may request the client to increase the window size. If the connection bandwidth is low, the server may need to anticipate well in advance the sending of HD content for a specific part of the video, in which case the buffer size should be made larger.
The order of transmission may be an important issue when the buffer has a limited size. In particular, as the buffer is filled with data, the priority ordering becomes more and more important. An important constraint is that the video never freezes. As long as the buffer is largely empty, the server may push various video segments, such as segments far in advance, in order to provide efficient fast forward or chapter skipping. Once the buffer is almost full, the video segments to push should be as close as possible to the video segments being played. This push behaviour may be implemented by the server if the server has accurate information concerning the client buffer. It may also be implemented by the client using the priority update mechanism.
In case of automated video switching, the flowchart of Figure 9 may be extended by detecting the push of a new MPD as part of the push promise check (step 903). When an MPD push is detected, the client may start receiving segments of a new video as part of step 908. The client must therefore identify the MPD related to the video data. Once the video playing is finished for a given MPD (step 902), the new MPD may be used for continuing video playing. The client may actually flush all video segments from the previous MPD.
With reference to Figure 10, the behaviour of a DASH-aware proxy is described. When receiving a segment pushed from a server, a proxy is not mandated to push it to the end-client. In case of DASH streaming though, it can be considered good practice (or default behaviour) to do so.
The proxy may be able to adjust the server and client behaviours, both in terms of priority processing and of pushed data to be sent. A proxy may in fact handle the priorities negotiated with the client independently from the priorities negotiated with the server. In addition, the server may push more data than needed for a given client and the proxy may retrieve that additional pushed data to fulfil requests from other clients. A server may push a video segment for several reasons. For example, a video segment may be pushed in case it is believed to be useful for the end-client. A video segment may also be pushed in case it is believed that the video segment can be used several times and that it is worth pushing it to proxies.
In the first case, proxies generally send the video segment to the client. Proxies may postpone its transmission in order to optimize the client or proxy network state, for instance the client radio state. An exemplary case may be the segment push for fast start video playing and bandwidth estimation, in which case data should be sent as fast as possible to the client. In case the server is interested in pushing data to proxies, proxies may not automatically send the video segment to the client, except if they have means to know that the video segment will be useful to the client. In order to make possible the identification of video segments that may not be sent to clients, a specific priority value may be used. Using a priority value makes it possible to have the proxy always check the priority value for optimizing the processing of the various frames that arrive.
Figure 10 comprises three flowcharts. One flowchart relates to the process of filtering pushed segments (steps 1000 to 1008). Another flowchart relates to the management of priority changes (steps 1010 to 1015). Another flowchart relates to the process performed when a segment is requested by a client while it is already promised to another client (steps 1020 to 1026).
The process of filtering pushed segments starts with the receipt (step 1000) of a pushed data event, typically when receiving a PUSH_PROMISE frame or a related DATA frame. The proxy checks whether the frame is of high priority or not (step 1001). A frame may be considered as of high priority if its priority value is much larger than priority values of other segments being transmitted. A frame may also be considered as of high priority if its priority value has a special meaning, such as fast start or bandwidth estimation. If the frame is of high priority, the data is sent as quickly as possible to the client (step 1002). The proxy then decides whether or not to store the data (steps 1003, 1004). This decision may be made once when receiving the corresponding PUSH_PROMISE frame or the corresponding HEADERS frame that opens the pushed data stream. This decision may also be based on the proxy cache state, the envisioned use of the video, the popularity of the video source or other criteria. The proxy stores the video segment if the segment is pushed while being requested by one or more clients at the same time. The video segments may also be stored if segments are identified as fast start.
If the data is not of high priority, the proxy checks whether it is of low priority (step 1005). Data of low priority may be data for which transmission to the client may be skipped but that are considered by the server as interesting for network intermediaries like proxies. The proxy first decides whether or not to send the data to the client (step 1006). This decision may be made once when receiving the corresponding PUSH_PROMISE frame or the corresponding HEADERS frame that opens the pushed data stream. If it is decided so, the proxy sends the corresponding frame to the client (step 1002). The process then stops after deciding whether or not to store the data.
The priority value negotiated between the server and proxy may be different from the priority value negotiated between the client and proxy. Therefore, in case the data is of usual priority (i.e. not of low priority and not of high priority), the proxy checks whether the segment priority value is managed by the proxy. As illustrated in Figure 10 (steps 1020 to 1026), the proxy uses the client-to-proxy value for scheduling the time when the data should be transmitted: the proxy holds a list of all to-be-transmitted video-related frames. These frames are ordered according to the priority values before being sent following that order.
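A condensed sketch of this filtering is given below; the decision callbacks, the frame representation and the in-memory queue are assumptions introduced for the example only.

```python
# Sketch of the filtering of steps 1000 to 1008: classify a pushed frame and
# decide whether to forward it, store it, or schedule it using the
# client-to-proxy priority. Callbacks and field names are assumptions.
def handle_pushed_frame(frame, is_high_priority, is_low_priority,
                        want_store, want_forward, send_to_client,
                        cache, transmit_queue):
    """is_high_priority / is_low_priority are predicates implementing the
    comparisons described above (special meanings such as fast start or
    bandwidth estimation, or a priority much more urgent than the other
    segments being transmitted)."""
    if is_high_priority(frame):
        send_to_client(frame)                      # steps 1001, 1002
        if want_store(frame):
            cache[frame["segment_id"]] = frame     # steps 1003, 1004
        return
    if is_low_priority(frame):
        if want_forward(frame):                    # steps 1005, 1006
            send_to_client(frame)
        if want_store(frame):
            cache[frame["segment_id"]] = frame
        return
    # Usual priority: queue the frame and send it in client-to-proxy priority
    # order, which may differ from the server-to-proxy order.
    transmit_queue.append(frame)
    transmit_queue.sort(key=lambda f: f["client_priority"])
```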
In case the proxy receives a priority update frame (step 1010), the proxy identifies the related video segment (step 1011). If its priority value is not being managed by the proxy (step 1012), the proxy forwards the priority update frame to the server (step 1013). Otherwise, the proxy stores this new priority value and reorders the video segment transmission (step 1014) accordingly. In case a potential conflict appears, in particular in case the video segment delivery from the server is expected to be too late for the client's needs, the proxy can then forward the priority value to the server.
Steps 1020 to 1026 relate to the case of a proxy that receives a request from a client for a video segment (step 1020) that is already promised by the server to another client (step 1021). Depending on the priority given to that request, the proxy computes the minimum proxy-to-server priority that would fulfil the client’s request (step 1022). This computation is done by computing the proxy-to-server priority value that will ensure that the server-to-proxy delivery time is earlier than the proxy-to-client expected delivery time. The priority is changed if the computed priority is below the currently set priority (step 1023), in which case the proxy sends a priority update message to the server (step 1024) and marks this video segment priority as managed by the proxy, so that the proxy sends the video segment to its two clients at the best time for their needs. Similarly, a proxy may receive several priority updates for the same segment from several clients, in which case the proxy may actually send the lowest priority value that satisfies all clients.
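The computation of steps 1022 to 1024 could, for example, take the following form; the delivery-time estimator and the ordering of the candidate priority values are assumptions made for the sketch.

```python
# Sketch of steps 1022 to 1024: find the least urgent proxy-to-server priority
# that still delivers the segment before every interested client needs it.
def required_server_priority(client_deadlines, estimate_delivery,
                             candidate_priorities):
    """client_deadlines: expected proxy-to-client delivery times, one per client.
    estimate_delivery(priority): estimated server-to-proxy delivery time when
    the segment is scheduled at that priority.
    candidate_priorities: possible priority values, most urgent first."""
    earliest_need = min(client_deadlines)
    suitable = [p for p in candidate_priorities
                if estimate_delivery(p) <= earliest_need]
    # The last suitable value is the least urgent one satisfying all clients;
    # if none is suitable, fall back to the most urgent value available.
    return suitable[-1] if suitable else candidate_priorities[0]
```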
With reference to Figure 11, there is described an embodiment according to which a client receives a pushed data event whose priority value indicates that the server wants to use it for measuring bandwidth. Measuring bandwidth may be done using TCP/IP packets through active or passive measurements for computing round trip times. Based on round trip times, the available bandwidth may be computed as described in the document by Saubhasik et al., “Bandwidth Estimation and Rate Control in BitVampire”. This computation may potentially take into account effects of HTTP/2.0 flow control. By making it possible to signal that some data frames are used for bandwidth estimation, the bandwidth available without HTTP/2.0 flow control can be estimated.
The process starts with step 1100, during which a pushed data frame is received from the server. Next, it is checked whether the associated priority of the stream indicates that the server is measuring bandwidth (step 1101). In that case, the dedicated buffer is maximized (step 1102). Alternatively, the stream flow control can be disabled. If the receiving node is a proxy (step 1103), it may forward the segment data. Otherwise, the client decides whether to store the segment (step 1104) and, if so, stores the pushed segment (step 1105). In any case, the client sends an acknowledgement to the server in the form of a WINDOW_UPDATE frame (step 1106) for the per-connection window. This acknowledgment will then be used by the server for estimating the connection bandwidth. In case the receiving node is a proxy, it forwards the pushed data (step 1108) as quickly as possible. When receiving an acknowledgment from the end-client, the proxy forwards it back to the server as well (steps 1109, 1110).
In order to estimate the available bandwidth, the server may use the round trip time of a sent data frame, which is computed as the difference between the sending time of the data frame and the reception time of the acknowledgment message, the pairing between the two being based, for instance, on the data frame size, which should be equal to the window size update. Round trip times can be computed from various data frames of one or more video segments. In order to increase accuracy, the data frames may have various sizes. Splitting a video segment into several DATA frames of different sizes can be performed by the server. The server only needs to ensure that the network layer will not split DATA frames into several TCP/IP packets (hence smaller DATA frames), nor buffer content to be sent and merge several DATA frames into a single TCP/IP packet. Based on those measurements, standard techniques can be used for computing the available bandwidth (an example can be found in the above-mentioned document), which the server may use to actually decide which video representation to use.
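A simplified sketch of such a measurement is shown below, pairing each probe DATA frame with the WINDOW_UPDATE acknowledging the same number of bytes; the pairing rule and the two-sample estimator are simplifying assumptions and do not reproduce the estimation technique of the cited document.

```python
# Sketch of a server-side bandwidth probe based on DATA frame round trip
# times; pairing by frame size and the two-sample estimate are assumptions.
import time

class BandwidthProbe:
    def __init__(self):
        self.pending = {}   # frame size in bytes -> send timestamp
        self.samples = []   # (size_bytes, round_trip_seconds)

    def on_data_frame_sent(self, size_bytes):
        self.pending[size_bytes] = time.monotonic()

    def on_window_update(self, increment_bytes):
        sent_at = self.pending.pop(increment_bytes, None)
        if sent_at is not None:
            self.samples.append((increment_bytes, time.monotonic() - sent_at))

    def estimated_bandwidth_bps(self):
        if len(self.samples) < 2:
            return None
        small, large = min(self.samples), max(self.samples)
        delta_t = large[1] - small[1]
        delta_bytes = large[0] - small[0]
        if delta_t <= 0 or delta_bytes <= 0:
            return None
        # The size-independent part of the round trip cancels out in the
        # difference, leaving an estimate of the transmission rate.
        return delta_bytes * 8 / delta_t
```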
With reference to Figure 12, there is described the case of an initial video playing. The server pushes data using the fast start priority. It is considered that the data probably have a low bit rate and that the client will receive those data and send acknowledgments to the server so that the server can estimate the bandwidth and switch to the optimal representation. The client-side process is described in steps 1200 to 1207. The server-side process is described in steps 1210 to 1215.
The client process starts with a step 1200 of receiving pushed data. The client then checks whether the priority has the fast start value (step 1201). In that case, the client typically maximizes the dedicated buffer (step 1202). This maximization is performed when receiving the PUSH_PROMISE of the pushed data. The data is then stored (step 1203) and the client sends an acknowledgement to the server using the WINDOW_UPDATE frame (step 1204). The client then checks whether enough data are available to start playing the video (step 1205). If so, the video playing starts (step 1206); otherwise, the client waits for more data until enough data are available to start playing.
The server process starts with a step 1210 of sending segment data frames with the fast start priority. The server then receives acknowledgments (step 1211) that will allow computing the available bandwidth (step 1212). Once enough measurements are obtained, the server selects the optimal representation (step 1213) and starts pushing optimal representation segments (step 1214). The server decides when to switch representation. This has at least two benefits. First, the server may know when the measurements are accurate enough and may switch from one resolution to another as soon as this is the case, whereas the client would need to handle some delay. Second, the server may decide to switch from one resolution to another at the time that is least disturbing for the user experience. Indeed, the server has knowledge of the video content. In particular, the MPD may be augmented with information on the times at which a resolution switch can be best envisioned.
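Once a bandwidth estimate is available, the representation switch of step 1213 may be as simple as the following selection; the representation description and the safety margin are assumptions made for the sketch.

```python
# Sketch of step 1213: pick the highest-bitrate representation that fits in
# the estimated bandwidth with a safety margin. Field names are assumptions.
def select_representation(representations, bandwidth_bps, margin=0.8):
    """representations: list of dicts with 'id' and 'bitrate_bps', sorted by
    increasing bitrate. Returns the chosen representation."""
    chosen = representations[0]
    for rep in representations:
        if rep["bitrate_bps"] <= bandwidth_bps * margin:
            chosen = rep
    return chosen
```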
Figure 13 is a schematic illustration of a device according to embodiments. The device may be a server, a client or a proxy. The device comprises a RAM memory 1302 which may be used as a working memory for a control unit 1301 configured for implementing a method according to embodiments. For example, the control unit may be configured to execute instructions of a computer program loaded from a ROM memory 1303. The program may also be loaded from a hard drive 1306. For example, the computer program is designed based on the flowcharts of figures 8-12 and the above description.
The device also comprises a network interface 1304 which may be a single network interface, or comprise a set of network interfaces (for instance several wireless interfaces, or several types of wired or wireless interfaces). The device may comprise a user interface 1305 for displaying information to a user and for receiving inputs from the user.
The device may also comprise an input/output module 1307 for receiving and/or sending data from/to external devices.
While the invention has been illustrated and described in detail in the drawings and foregoing description, such illustration and description are to be considered illustrative or exemplary and not restrictive, the invention being not restricted to the disclosed embodiment. Other variations to the disclosed embodiment can be understood and effected by those skilled in the art in practicing the claimed invention, from a study of the drawings, the disclosure and the appended claims.
In the claims, the word “comprising” does not exclude other elements or steps, and the indefinite article “a” or “an” does not exclude a plurality. A single processor or other unit may fulfill the functions of several items recited in the claims. The mere fact that different features are recited in mutually different dependent claims does not indicate that a combination of these features cannot be advantageously used. Any reference signs in the claims should not be construed as limiting the scope of the invention.

Claims (26)

1. A providing method for a plurality of video segments obtained by temporal dividing of video data, the method comprising: - pushing, to a client, an initial video segment among the plurality of video segments obtained by the temporal dividing of the video data; - determining a quality for a following video segment to be transmitted to the client later than the initial video segment; and - transmitting the following video segment of which a quality corresponds to the determined quality to the client.
2. The method according to claim 1, wherein the quality for the following video segment is determined after an acknowledgement relating to the initial video segment from the client is received.
3. The method according to claim 1, wherein the quality for the following video segment is determined based on a reception timing of an acknowledgement relating to the initial video segment from the client.
4. The method according to claim 1, wherein the initial video segment is pushed in response to a predetermined request from the client.
5. The method according to claim 1, wherein the initial video segment with priority information is pushed to the client.
6. The method according to claim 1, wherein the quality for the following video segment is higher than a quality for the initial video segment.
7. The method according to claim 1, wherein the initial video segment is pushed to the client before the MPD (Media Presentation Description) is transmitted to the client.
8. The method according to claim 1, wherein the initial video segment is pushed to the client before description data describing at least quality information for the following video segment is transmitted.
9. The method according to claim 1, wherein the quality is represented by at least one of bit-rate information and spatial resolution information.
10. The method according to claim 1, further comprising: - transmitting an announcement message relating to pushing of the following video segment, and - changing, in response to a request from the client which received the announcement message, a priority for pushing data.
11. The method according to claim 1, wherein a plurality of video segments used by the client for starting a reproduction of a video based on the video data are pushed to the client as the initial video segment.
12. The method according to claim 1, wherein the following video segment is pushed to the client in the transmitting step.
13. The method according to claim 1, wherein the following video segment is transmitted in response to a request from the client.
14. A device for providing a plurality of video segments obtained by temporal dividing of video data, the device comprising: - push means for pushing an initial video segment among the plurality of video segments to a client; - determination means for determining a quality for a following video segment to be transmitted to the client later than the initial video segment; - transmission means for transmitting the following video segment corresponding to the determined quality to the client.
15. The device according to claim 14, wherein the determination means determines the quality for the following video segment after an acknowledgement relating to the initial video segment from the client is received.
16. The device according to claim 14, wherein the determination means determines the quality for the following video segment based on a reception timing of an acknowledgement relating to the initial video segment from the client.
17. The device according to claim 14, wherein the push means pushes the initial video segment in response to a predetermined request from the client.
18. The device according to claim 14, wherein the initial video segment with priority information is pushed to the client.
19. The device according to claim 14, wherein the quality for the following video segment is higher than a quality for the initial video segment.
20. The device according to claim 14, wherein the push means pushes the initial video segment to the client before the MPD (Media Presentation Description) is transmitted to the client.
21. The device according to claim 14, wherein the push means pushes the initial video segment to the client before description data describing at least quality information for the following video segment is transmitted.
22. The device according to claim 14, wherein the quality is represented by at least one of bit-rate information and spatial resolution information.
23. The device according to claim 14, wherein: - the transmission means transmits an announcement message relating to pushing of the following video segment, and - the device further comprises changing means for changing, in response to a request from the client which received the announcement message, a priority for pushing data.
24. The device according to claim 14, wherein a plurality of video segments used by the client for starting a reproduction of a video based on the video data are pushed to the client as the initial video segment.
25. The device according to claim 14, wherein the following video segment is pushed to the client by the transmission means.
26. The device according to claim 14, wherein the transmission means transmits the following video segment in response to a request from the client.
GB1714236.5A 2013-07-12 2013-07-12 Adaptive data streaming method with push messages control Active GB2551674B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
GB1714236.5A GB2551674B (en) 2013-07-12 2013-07-12 Adaptive data streaming method with push messages control

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
GB1312561.2A GB2516116B (en) 2013-07-12 2013-07-12 Adaptive data streaming method with push messages control
GB1714236.5A GB2551674B (en) 2013-07-12 2013-07-12 Adaptive data streaming method with push messages control

Publications (3)

Publication Number Publication Date
GB201714236D0 GB201714236D0 (en) 2017-10-18
GB2551674A true GB2551674A (en) 2017-12-27
GB2551674B GB2551674B (en) 2018-04-11

Family

ID=60050527

Family Applications (1)

Application Number Title Priority Date Filing Date
GB1714236.5A Active GB2551674B (en) 2013-07-12 2013-07-12 Adaptive data streaming method with push messages control

Country Status (1)

Country Link
GB (1) GB2551674B (en)



Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP1523154A1 (en) * 2003-10-08 2005-04-13 France Telecom System and method for offering push services to a mobile user using a push proxy which monitors the state of the mobile user equipment
US20050262257A1 (en) * 2004-04-30 2005-11-24 Major R D Apparatus, system, and method for adaptive-rate shifting of streaming content
WO2011029065A1 (en) * 2009-09-04 2011-03-10 Echostar Advanced Technologies L.L.C. Controlling access to copies of media content by a client device
EP2820817A1 (en) * 2012-03-01 2015-01-07 Motorola Mobility LLC Managing adaptive streaming of data via a communication connection
EP2870770A2 (en) * 2012-07-09 2015-05-13 VID SCALE, Inc. Power aware video decoding and streaming

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
IEEE Internet Computing, March 2011, vol. 2, pages 54-63, "Watching video over the web: Part 1 Streaming protocols", Begen A. et al. *
IEEE Multimedia, April 2011, pages 62-67, vol. 4, "MPEG-DASH standard for multimedia streaming over the internet", Sodagar I. *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20180309840A1 (en) * 2017-04-19 2018-10-25 Comcast Cable Communications, Llc Methods And Systems For Content Delivery Using Server Push
US11659057B2 (en) * 2017-04-19 2023-05-23 Comcast Cable Communications, Llc Methods and systems for content delivery using server push

Also Published As

Publication number Publication date
GB201714236D0 (en) 2017-10-18
GB2551674B (en) 2018-04-11

Similar Documents

Publication Publication Date Title
US11375031B2 (en) Adaptive data streaming method with push messages control
GB2516116A (en) Adaptive data streaming method with push messages control
JP2016531466A5 (en)
CN112106375B (en) Differential media presentation description for video streaming
GB2538832B (en) Adaptive client-driven push of resources by a server device
US9621610B2 (en) Methods and arrangements for HTTP media stream distribution
US9253233B2 (en) Switch signaling methods providing improved switching between representations for adaptive HTTP streaming
GB2516112A (en) Methods for providing media data, method for receiving media data and corresponding devices
CN115136609A (en) Client-based storage of remote element parsing
GB2551674A (en) Adaptive data streaming method with push messages control
GB2575189A (en) Adaptive client-driven push of resources by a server device