EP3446477A1 - Media data streaming method and apparatus - Google Patents

Media data streaming method and apparatus

Info

Publication number
EP3446477A1
Authority
EP
European Patent Office
Prior art keywords
client
user device
streaming
video
video data
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
EP17719696.1A
Other languages
English (en)
French (fr)
Inventor
Manh Hung Peter Do
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Alfasage Ltd
Original Assignee
Orbital Multi Media Holdings Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Orbital Multi Media Holdings Corp filed Critical Orbital Multi Media Holdings Corp
Publication of EP3446477A1

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00Television systems
    • H04N7/16Analogue secrecy systems; Analogue subscription systems
    • H04N7/173Analogue secrecy systems; Analogue subscription systems with two-way working, e.g. subscriber sending a programme selection signal
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/60Network structure or processes for video distribution between server and client or between remote clients; Control signalling between clients, server and network components; Transmission of management data between server and client, e.g. sending from server to client commands for recording incoming content stream; Communication details between server and client 
    • H04N21/63Control signaling related to video distribution between client, server and network components; Network processes for video distribution between server and clients or between remote clients, e.g. transmitting basic layer and enhancement layers over different transmission paths, setting up a peer-to-peer communication via Internet between remote STB's; Communication protocols; Addressing
    • H04N21/632Control signaling related to video distribution between client, server and network components; Network processes for video distribution between server and clients or between remote clients, e.g. transmitting basic layer and enhancement layers over different transmission paths, setting up a peer-to-peer communication via Internet between remote STB's; Communication protocols; Addressing using a connection between clients on a wide area network, e.g. setting up a peer-to-peer communication via Internet for retrieving video segments from the hard-disk of other client devices
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00Network arrangements or protocols for supporting network services or applications
    • H04L67/01Protocols
    • H04L67/10Protocols in which an application is distributed across nodes in the network
    • H04L67/104Peer-to-peer [P2P] networks
    • H04L67/1074Peer-to-peer [P2P] networks for supporting data block transmission mechanisms
    • H04L67/1078Resource delivery mechanisms
    • H04L67/108Resource delivery mechanisms characterised by resources being split in blocks or fragments
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/25Management operations performed by the server for facilitating the content distribution or administrating data related to end-users or client devices, e.g. end-user or client device authentication, learning user preferences for recommending movies
    • H04N21/258Client or end-user data management, e.g. managing client capabilities, user preferences or demographics, processing of multiple end-users preferences to derive collaborative data
    • H04N21/25808Management of client data
    • H04N21/25841Management of client data involving the geographical location of the client

Definitions

  • Embodiments described herein relate to the transmission of media data from a streaming server to one or more clients.
  • The present application describes improved methods of content delivery from a server to clients, which seek to optimise bandwidth usage in delivering content to users.
  • Applications to which the embodiments described herein relate include over-the-top content (OTT) delivery.
  • The delivery of multimedia, especially video data, across open networks currently relies heavily on content delivery networks (CDNs) in order to maintain a high quality of service for users.
  • A CDN is a distributed network of servers deployed in multiple data centres, which serves content to an audience of users.
  • OTT content providers pay CDNs to serve such content to the content provider's audience.
  • CDNs typically utilise unicast protocols for the transmission of content to the content provider's audience. Unicast transmission is discussed in further detail below. This approach results in high data consumption, requiring CDNs to employ multiple servers, for example in server farms. For example, a single CDN server may serve around 5000 users, but the content provider may have an audience of one million users, thus requiring around 200 servers.
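  • The server-count arithmetic in the example above can be checked with a short sketch (a minimal illustration; the function name is ours, not the patent's):

```python
import math

def servers_needed(audience_size: int, users_per_server: int = 5000) -> int:
    """Number of CDN servers required if each server handles a fixed
    number of concurrent users (figures taken from the example above)."""
    return math.ceil(audience_size / users_per_server)

# An audience of one million users at 5000 users per server needs 200 servers.
assert servers_needed(1_000_000) == 200
```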
  • The cost of delivering video content to users may be a barrier to smaller-scale content providers. For example, a smaller provider based in London will be unable to deliver content to users in France without using a CDN with a server in France.
  • The cost of using a CDN may be prohibitive for a content provider with thousands, rather than hundreds of thousands, of users. Exacerbating this issue is the fact that the cost of delivery is set to increase with rising demand for higher-resolution video, such as HD 1080 and 4K video.
  • Although CDNs may employ alternative transmission methods, these methods also have disadvantages, as described further below.
  • Unicast transmission is the most common method used by CDNs for the transmission of content to clients.
  • Each user wanting to view content receives a dedicated video stream from the streaming server.
  • Unicast transmission uses IP delivery methods such as Transmission Control Protocol (TCP) and User Datagram Protocol (UDP), which are session-based protocols.
  • When a client connects to a streaming server, the client has a direct relationship with the server.
  • Dedicated client sessions are initiated regardless of proximity between users. For example, users in neighbouring apartments may want to stream the same content; a streaming server employing unicast transmission will result in both users receiving a dedicated video stream.
  • Under unicast transmission, bandwidth usage is directly proportional to the number of clients being served. For example, 20 clients playing video streams with a bit rate of 1.5 Mbps collectively use 30 Mbps of bandwidth from the server. This linear relationship results in poor scalability of unicast transmission, owing to the increased demand for bandwidth.
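  • The linear relationship above can be expressed as a one-line sketch (illustrative only; the function name is an assumption, not the patent's):

```python
def unicast_bandwidth_mbps(num_clients: int, bitrate_mbps: float) -> float:
    """Server-side bandwidth under unicast: one dedicated stream per client,
    so usage grows linearly with the audience size."""
    return num_clients * bitrate_mbps

# 20 clients at 1.5 Mbps each collectively draw 30 Mbps from the server.
assert unicast_bandwidth_mbps(20, 1.5) == 30.0
```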
  • Scalability issues may be particularly problematic during the 'peak hour' of transmission, for example during important events, and can impact on the speed and reliability of the delivery of content to users.
  • For example, a major sporting event may attract a large number of users to tune in to a specific channel for the event duration, resulting in high bandwidth demand.
  • In such cases, service blackouts resulting from system overloading are inevitable. Although such events and blackouts may only occur occasionally, the result is a negative and unpredictable user experience, which may be costly to the content provider.
  • An alternative method of transmitting content is using multicast transmission.
  • This method is a one-to-many method which relies on multicast-enabled routers in order to forward the packets to all client subnets that have clients listening. Content is therefore delivered to users regardless of whether they are watching the channel. Multiple users can 'tap in' to a single multicast transmission from a streaming server, thus providing multiple simultaneous views of the same content.
  • The provision of a single stream from the streaming server is advantageous over unicast transmission, as bandwidth does not increase proportionally with the number of users. Accordingly, multicast transmission is a suitable method for providing a live stream to a number of users.
  • Disadvantages of multicast transmission include users' lack of control over playback of the content, and a lack of video on demand (VOD) functionality.
  • Multicast transmission also typically requires the content provider to have control over the network between the streaming server and the clients, in order to ensure that routers and firewalls permit the transmission of packets destined to multicast groups.
  • Many routers include multicast filters in order to block multicast transmissions.
  • A further disadvantage of multicast transmission is that capacity must be reserved for each channel. Serving content over multiple channels therefore requires additional bandwidth, with the bandwidth requirement being proportional to the number of channels. Multicast transmission may therefore result in similar scalability issues to those described above with respect to unicast transmission.
  • A third alternative is peer-to-peer (P2P) distribution. This method has mostly been used for file sharing. Content is served to one user, who may then share the content with additional users who may, in turn, distribute the content to further users in a similar manner.
  • P2P networks have the potential to make any TV channel globally available through users relaying the channel over the P2P network. This would allow scalable distribution to a large audience at no additional cost to the source.
  • P2P distribution requires that the physical file is fully downloaded to a particular user's device before that user can share the file with other users. This results in significant delays, thus rendering P2P distribution unsuitable for live broadcasts.
  • P2P distribution also has associated digital rights management (DRM) and security concerns.
  • One aspect of a first embodiment comprises a method of streaming video data at a client, in which the client establishes a connection to a streaming server and receives video data from the streaming server, wherein the video data comprises a portion of a video stream.
  • The client provides a tracking server with information associated with the video stream.
  • The client receives, from the tracking server, information associated with a first user device, wherein the first user device is also receiving the video stream.
  • The client establishes a connection to the first user device, and receives video data from the first user device.
  • Another aspect of the first embodiment comprises a client for streaming video data according to the foregoing method.
  • A further aspect of the first embodiment comprises a method of managing video streaming.
  • The method is implemented at a tracking server, and comprises receiving information associated with a source video stream from a first user device.
  • Information associated with a target video stream is then received from a client, in response to a determination that the client meets a predefined criterion.
  • The client is receiving video data comprising a portion of the target video stream.
  • The method further comprises determining that the source video stream is the same as the target video stream, and sending information associated with the first user device to the client.
  • The information associated with the first user device is sufficient to enable the client to establish a connection to the first user device and receive video data from the first user device.
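  • The tracking-server matching step described above can be sketched minimally as follows (class and method names are assumptions for illustration, not the patent's implementation):

```python
class TrackingServer:
    """Minimal sketch of the tracking logic: user devices register the
    source stream they are receiving, and a client looks up devices
    receiving the same target stream so it can connect to them directly."""

    def __init__(self):
        # Maps a stream identifier to the devices currently receiving it.
        self._receivers = {}

    def register(self, stream_id, device_address):
        """A user device reports the source video stream it is receiving."""
        self._receivers.setdefault(stream_id, []).append(device_address)

    def lookup(self, target_stream_id):
        """A client reports its target stream; return information about
        devices whose source stream matches (enough to connect to them)."""
        return list(self._receivers.get(target_stream_id, []))
```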
  • Another aspect of the first embodiment comprises a tracking server for managing video streaming according to the foregoing method.
  • Another aspect of the first embodiment comprises a system for streaming video data, comprising a client for streaming video data and a tracking server for managing video streaming, both as described above.
  • The system further comprises a streaming server operable to send video data to the client, wherein the video data comprises a portion of a video stream.
  • Video data is streamed in accordance with an associated method, combining the steps of the client- and server-side methods described above.
  • One aspect of a second embodiment comprises a method of streaming video data at a client, in which the client establishes a connection to a streaming server and receives a first subset of a video file from the streaming server. The client stores the first subset of the video file in a data store.
  • In response to a determination that the client meets a predefined criterion, the client provides a tracking server with information associated with the video file. The client then receives, from the tracking server, information associated with at least one user device, wherein each user device stores at least part of the video file. The client establishes a connection to the at least one user device, and receives a second subset of the video file from the at least one user device. The first and second subsets together comprise the complete video file. The client also stores the second subset of the video file in the data store. Upon determining that the data store comprises the complete video file, the client terminates the connections to the streaming server and the at least one user device.
  • Another aspect of the second embodiment comprises a client for streaming video data according to the foregoing method.
  • A further aspect of the second embodiment comprises a method of managing video streaming.
  • The method is implemented at a tracking server, and comprises receiving, from each of a plurality of user devices, information associated with video data stored at that user device, wherein the video data comprises a portion of one of a plurality of video files.
  • Information associated with a target video file is then received from a client, in response to a determination that the client meets a predefined criterion.
  • The client is storing a first subset of the target video file in a data store.
  • The tracking server identifies at least one user device storing at least part of the target video file.
  • The tracking server sends information associated with the at least one user device to the client.
  • The information associated with the at least one user device is sufficient to enable the client to receive a second subset of the video file from the at least one user device, wherein the first and second subsets together comprise the complete video file.
  • Another aspect of the second embodiment comprises a tracking server for managing video streaming according to the foregoing method.
  • Another aspect of the second embodiment comprises a system for streaming video data comprising a client for streaming video data and a tracking server for managing video streaming, both as described above.
  • The system further comprises a streaming server operable to send a first subset of a video file to the client.
  • Video data is streamed in accordance with an associated method, combining the steps of the client- and server-side methods described above.
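  • The subset bookkeeping implied by the second embodiment can be sketched as follows (a minimal illustration; chunk indexing and function names are assumptions, not the patent's):

```python
def missing_chunks(total_chunks, stored):
    """Chunk indices the client still needs to fetch from peer devices
    before its data store holds the complete video file."""
    return sorted(set(range(total_chunks)) - set(stored))

def is_complete(total_chunks, stored):
    """Once True, the client may terminate its connections to the
    streaming server and the user devices."""
    return not missing_chunks(total_chunks, stored)
```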
  • Figure 1 shows a schematic diagram of a chain transmission architecture, according to one described embodiment.
  • Figure 2A shows a schematic diagram of a first video transmission module, according to one described embodiment.
  • Figure 2B shows a schematic diagram of connections within a transmission chain, according to one described embodiment.
  • Figure 3 shows a schematic diagram of a chain transmission architecture comprising a third party server, according to one described embodiment.
  • Figure 4 shows a flowchart of the procedure of a client requesting video content, according to one described embodiment.
  • Figure 5 shows a flowchart of the procedure for maintaining the transmission of video data, according to one described embodiment.
  • Figure 6 shows a flowchart of the procedure for identifying throughput issues in a transmission chain, according to one described embodiment.
  • Figure 7 shows a schematic diagram of a second video transmission module, according to one described embodiment.
  • Figure 8 shows a schematic diagram of video on demand (VOD) streaming, according to one described embodiment.
  • Figures 9A to 9D show schematic diagrams of different stages of a combined streaming and downloading process, according to one described embodiment.
  • Figure 10 shows a flowchart of a VOD streaming procedure, according to one described embodiment.
  • Figure 11 shows a schematic diagram of a client suitable for participating in the methods described herein, according to one described embodiment.
  • Figure 12 shows a schematic diagram of a tracking server suitable for participating in the methods described herein, according to one described embodiment.
  • Figure 13 shows a schematic diagram of a system operable to employ the methods described herein, according to one described embodiment.
  • Embodiments described herein define a new approach to providing content to clients, whilst addressing issues of scalability and bandwidth demand, as discussed above, and maintaining a high quality of service for users.
  • The use of such embodiments can eliminate the dependency on CDNs and substantially reduce the cost of video content delivery.
  • Embodiments described herein are compatible with systems employing HTTP Live Streaming (HLS), an HTTP-based streaming protocol which breaks the overall data stream into a sequence of small HTTP-based file downloads. Each download loads a short portion (or "chunk") of the video stream.
  • The transmission of data over HTTP ensures that HLS is capable of traversing any firewall or proxy server that lets through standard HTTP traffic.
  • Embodiments described herein are therefore not limited to implementation using the HLS protocol.
  • The described embodiments are transparent to DRM encryption and can be integrated with existing content delivery deployments.
  • Embodiments described herein may be implemented as an independent module. Alternatively, the embodiments may be implemented in conjunction with the data control method and system described in WO2015/150812, herein incorporated by reference.
  • The aforementioned data flow control method provides compatibility with HLS and takes into consideration network conditions and client conditions, such as the data buffer of the client, and applies one or more data flow control modes to improve the speed and quality of media data transmission.
  • Embodiments described herein may also be implemented in conjunction with the redirection apparatus and method described in WO2011/094844, herein incorporated by reference.
  • This document discloses methods for redirecting a client to a selected streaming server in response to receiving a request for content.
  • The streaming server to which the client is redirected can be selected based on one or more criteria, such as streaming server loads (e.g. connection loads and processing loads) and content storage location, using a data structure stored at the redirection server.
  • Embodiments described herein focus on harvesting and recycling video data outside of the server environment. This video data would conventionally be discarded following display at the client, as described above. Therefore, the need for clients to repeatedly acquire data from streaming servers is reduced. Implementation of the described embodiments frees up resources at the server side so that additional video sessions may be supported without increasing capital expenditure.
  • Embodiments described herein are applicable for live television, VOD, and near video on demand (NVOD). The embodiments are primarily a modification of unicast distribution methods, but incorporate characteristics of multicast distribution and P2P distribution. Server bandwidth requirements may be significantly reduced whilst maintaining quality of service and quality of experience. Specific embodiments will now be described, by way of example only and with reference to the accompanying drawings having the figure numbers as listed above.
  • Section 1 Live video streaming
  • FIG. 1 shows an exemplary overview of the transmission architecture 100 according to embodiments described herein.
  • The transmission architecture 100 comprises streaming servers 110a and 110b, leader client 120 and N follower clients 130[1] through to 130[N].
  • Clients participating in the transmission architecture 100 form "chains" of users, wherein the "leaders" of the chain receive a unicast transmission from one or more streaming servers, and the "followers" in the chain receive data uploaded from a preceding user in the chain.
  • The leader client 120 is connected to streaming server 110a in a conventional manner for unicast transmission from streaming server 110a.
  • Video data received from streaming server 110a is stored as HLS chunks in the buffer of the leader client 120.
  • The leader client 120 then relays the chunks stored in the buffer to follower client 130[1], which in turn relays the chunks to follower client 130[2], and so on until the chunks are received by follower client 130[N].
  • Information associated with the chunks in the clients' buffers may be stored in a playlist file (for example, an M3U8 file for HLS).
  • In order to ensure that video data can be relayed down the 'chain', clients must satisfy a predefined criterion (for example, having a sufficient buffer level) and exceed an upload speed threshold. These criteria are explained in further detail below. Further, to reduce latency, a client must satisfy a proximity criterion with respect to the preceding client in the chain, as the efficiency of the transmission architecture 100 is maximised if clients in the chain are in proximity to one another, thus reducing transmission delays between clients.
  • Clients participating in the transmission architecture 100 (that is, those devices which have satisfied the predefined criterion, upload speed threshold and proximity criterion) will hereinafter be defined as operating in "chain transmission mode", for ease of reference.
  • In a preferred embodiment, the leader client 120 establishes a backup connection to streaming server 110b, so that video data can be streamed from streaming server 110b in the event of an interruption in the transmission from streaming server 110a.
  • Each follower client is given two different sources when joining a chain. For example, follower client 130[3] connects to follower client 130[2] and follower client 130[1]. As video data is relayed down the chain, follower client 130[3] receives data from follower client 130[2]. However, in the event that follower client 130[2] goes offline, follower client 130[1] provides the video data to follower client 130[3].
  • Follower client 130[1] is able to provide video data to follower client 130[3] because information about follower client 130[1] was passed to follower client 130[3] by follower client 130[2] upon connection of follower client 130[3] with follower client 130[2].
  • Follower client 130[1] connects to the leader client 120 and establishes a backup connection to one of the streaming servers 110a, 110b, so that video data can be received in the event that the leader client 120 goes offline. If follower clients 130[1] and 130[2] both go offline, follower client 130[3] will make contact with a redirection server (not shown), which can direct the follower client 130 to stream video data from a suitable streaming server.
  • Follower client 130[3] may then become the leader in a new chain.
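  • The two-source failover behaviour described above can be sketched as follows (illustrative names only; the real system works with network connections, not identifiers):

```python
def pick_video_source(primary, backup, online):
    """Choose where a follower pulls video from: the immediately preceding
    client in the chain, falling back to that client's own predecessor if
    the primary goes offline. Returning None stands in for contacting the
    redirection server and joining or starting a new chain."""
    if primary in online:
        return primary
    if backup in online:
        return backup
    return None  # both sources offline: contact the redirection server
```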
  • Follower clients 130 may maintain a backup connection to one of the streaming servers 110a, 110b for a predetermined time period following connection to a chain, so that follower clients 130 may stream video data from one of the streaming servers 110a, 110b if they experience network issues upon joining the chain.
  • Unicast sessions may allow a client to adapt the rate at which it is receiving data from the streaming server.
  • This technique, known as adaptive streaming, enables a lower bit rate to be used in the event of network congestion.
  • In the transmission architecture 100, however, the content of the buffer of the leader client 120 is relayed to the follower clients 130[1] to 130[N]. It therefore follows that the transmission architecture 100 is not suited to adaptive streaming techniques.
  • The same video quality is relayed down the chain, so the video quality received by the leader client 120 is the video quality received at the follower clients 130[1] to 130[N].
  • The delay experienced by a client in the chain comprises two components: the physical network delay and a processing delay associated with a particular chunk.
  • The physical network delay is the delay associated with relaying content from one client to another.
  • The processing delay occurs as a client must wait until a particular chunk has downloaded before sharing the chunk with another user.
  • To limit the total delay, the chain length may be capped at a predefined length. For example, the chain length may be capped at 100 users. If, on average, the physical network delay between two clients is 50 ms, a chain length of 100 users will result in a total physical network delay of five seconds for the user at the end of the chain. If each chunk is one second in length, each client in the chain must wait for one second before sharing that chunk with the next client in the chain. Thus the processing delay experienced at each client is one second. The 100th user in the chain may therefore experience a total processing delay of 100 seconds. Thus the 100th user in the chain may experience a total delay of 105 seconds.
  • Alternatively, a chain may grow dynamically until it reaches either a maximum number of users or a maximum permissible total delay. For example, if a maximum total delay is set at 10 seconds, and the total delay between users is, on average, 3 seconds, the maximum chain length will be 3 users. Thus the chain length may be determined based on the delay tolerance. Limiting the chain length using the methods outlined above ensures that the 'live' nature of the video stream is maintained within an acceptable threshold, such as 60 seconds. Once the chain length cap is reached, a new chain is started.
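  • The worked delay figures above can be reproduced with a short sketch (function names are ours; a uniform hop delay and chunk length are assumed, as in the example):

```python
def total_delay_s(position, hop_delay_s, chunk_len_s):
    """Total delay for the client at `position` in the chain: one physical
    network delay plus one per-client processing delay (one chunk length,
    as in the example) accumulated per preceding hop."""
    return position * hop_delay_s + position * chunk_len_s

def max_chain_length(max_total_delay_s, avg_delay_per_user_s):
    """Cap the chain so the last client stays within the delay tolerance."""
    return int(max_total_delay_s // avg_delay_per_user_s)

# 100th user, 50 ms hops, 1 s chunks: 5 s network + 100 s processing = 105 s.
assert total_delay_s(100, 0.05, 1.0) == 105.0
# 10 s tolerance at 3 s average delay per user caps the chain at 3 users.
assert max_chain_length(10, 3) == 3
```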
  • In some embodiments, the chain length may be optimised to 75 users.
  • The chain length can also adapt to changing network conditions. As stated above, the chain length may be capped dynamically based on the predefined delay threshold. Therefore, if network conditions are poor, the client at the end of the chain will not experience an unacceptable level of delay, as that client will have been directed to leave that chain and either join or form another chain.
  • Chain length may also be influenced by HLS chunk size. Larger chunks have a longer associated processing delay as each client must wait for a longer period of time for the chunk to be downloaded before relaying the chunk to the next client. Shorter chunks are accordingly delivered more effectively down the chain, thus reducing the delay between users.
  • A further method of minimising delay in the above-described chain transmission architecture is for clients to download portions of chunks instead of waiting for a particular chunk to be fully downloaded. This is achieved using byte-range download.
  • Byte-range download is a feature of HLS which enables a chunk to be redefined using a particular byte-range. As long as a client has downloaded the specified byte-range, the client can share the byte-range of the chunk with other users in the chain. For example, a five-second chunk may be redefined as five one-second byte-ranges. Accordingly, the processing delay associated with waiting for a chunk to be downloaded may be reduced.
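  • Splitting one chunk into equal byte-ranges, as in the five-second example above, can be sketched as follows (illustrative only; a constant bit rate is assumed so that equal byte-ranges correspond to equal durations):

```python
def byte_ranges(chunk_size_bytes, n_parts):
    """Redefine one chunk as `n_parts` (offset, length) byte-ranges. The
    final range absorbs any remainder so the ranges cover the whole chunk."""
    base = chunk_size_bytes // n_parts
    ranges, offset = [], 0
    for i in range(n_parts):
        length = base if i < n_parts - 1 else chunk_size_bytes - offset
        ranges.append((offset, length))
        offset += length
    return ranges
```

Once a client has downloaded a given (offset, length) range, it can relay that range onward without waiting for the rest of the chunk.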
  • Byte-range download is described in further detail in section 1.2.1 below.
  • Chunk size also influences the start-up delay for streaming video data. This is because the client buffer may be required to contain a minimum of three chunks prior to playback. Larger chunks may be advantageous as the combined playback time of chunks in the buffer reduces the likelihood of buffering following playback of a particular chunk; however, the time required to download a large chunk may introduce delays into the transmission scheme. On the other hand, smaller chunks result in reduced transmission delay and reduced start-up delay, but the reduced combined playback time of chunks in a client's buffer increases the risk of buffering following playback of a chunk.
  • One method of minimising the start-up delay associated with chunk size is for the first client in the chain to re-segment the chunk.
  • Re-segmentation involves the client redefining the length of a particular chunk. For example, a client may re-segment a 30-second chunk into three 10-second chunks. This method is carried out by the first client in the chain as the entire chunk must be available in order for the client to re-segment the chunk.
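  • The re-segmentation example above can be sketched in terms of chunk durations (illustrative only; real re-segmentation operates on the media data itself):

```python
def resegment(chunk_duration_s, target_duration_s):
    """Split a chunk's duration into target-length pieces; the final piece
    may be shorter if the duration does not divide evenly."""
    pieces = []
    remaining = chunk_duration_s
    while remaining > 0:
        pieces.append(min(target_duration_s, remaining))
        remaining -= target_duration_s
    return pieces

# A 30-second chunk re-segmented into three 10-second chunks.
assert resegment(30, 10) == [10, 10, 10]
```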
  • A client may receive a direction to re-segment any chunks which are identified as exceeding a predefined length.
  • A dedicated transmission protocol may be used for transmission from a data source (streaming servers and clients) to a data receiver (clients).
  • The dedicated transmission protocol may specify that data sources 'push' the video data to the next client in the chain, as opposed to requiring the next client to download the video data. This ensures that a client in the chain is not required to request data from the preceding client in the chain and that data is sent to that client whenever it becomes available at the preceding client.
  • The dedicated transmission protocol may further specify that it is not necessary for a data source to wait for acknowledgement from a data receiver (similar to UDP).
  • The dedicated transmission protocol may comprise a packet loss recovery mechanism for resending lost packets. The packet loss recovery mechanism differs from similar mechanisms in protocols such as TCP in that the transmission rate is not slowed down when attempting to resend a lost packet.
  • FIG. 2A illustrates an exemplary embodiment of a video transmission module 200, comprising a streaming server 210, a speed test server 220, and a chunk tracking server 230.
  • the video transmission module 200 may further comprise a Session Traversal Utilities for NAT (STUN) server 240, a redirection server 250, and a DRM server 260.
  • the redirection server 250 may be able to implement the methods disclosed in WO2011/094844, as discussed above.
  • a client 270 engages with the video transmission module 200 in order to receive a video stream from a particular content provider.
  • the client 270 initially connects to the streaming server 210 in a conventional manner for unicast transmission from the streaming server 210.
  • the client 270 also establishes a connection to another streaming server (not shown) for redundancy, but only streams video data from one server at a time, unless certain conditions are met.
  • One example condition may be that the rate of transmission from the first streaming server is not sufficient, owing to network congestion. In this case, the client 270 will attempt to stream video data from both streaming servers in order to maintain a healthy buffer.
  • Video data is downloaded from the streaming server until the client 270 meets a predefined criterion.
  • the predefined criterion may require the buffer of the client 270 to be filled to a predefined level, for example 80%.
  • a predefined level is required in order to ensure that the buffer contains enough video data to overcome bandwidth fluctuations.
  • the predefined buffer level also helps to compensate for time spent resolving network issues, which may occur after the client 270 switches to receiving content from a user device participating in chain transmission mode.
  • Alternative criteria may be used, as will be appreciated by persons skilled in the art.
  • the predefined criterion may be based on the combined playback time of the chunks in the client buffer.
  • Criteria employed may be adjusted based on the type of video data being transmitted (for example, whether the data is standard definition (SD) or high definition (HD)).
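The predefined criterion described above can be sketched as a check against either a buffer-fill level or a combined playback time, adjusted by video type. The 80% figure comes from the text; the HD values and the playback-time thresholds are illustrative assumptions.

```python
# Per-type thresholds; the SD fill level (80%) follows the example in the
# text, the remaining values are placeholders for illustration only.
THRESHOLDS = {"SD": {"fill": 0.8, "playback_s": 30},
              "HD": {"fill": 0.9, "playback_s": 45}}

def meets_criterion(fill_level, buffered_chunk_secs, video_type="SD"):
    """Return True when the client buffer satisfies the predefined criterion:
    either the fill level or the combined playback time of buffered chunks
    reaches the threshold for the given video type."""
    t = THRESHOLDS[video_type]
    return (fill_level >= t["fill"]
            or sum(buffered_chunk_secs) >= t["playback_s"])
```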
  • the client 270 makes contact with the speed test server 220.
  • the client 270 uploads content to the speed test server 220 so that the speed test server 220 can determine the upload speed of the client 270.
  • In order to participate in chain transmission mode, the speed test server 220 must determine that the upload speed of the client 270 meets a predefined qualification threshold. For example, the minimum upload speed may need to be a factor of 1.1 greater than the video bit rate. Thus, if the video bit rate is 3.0 Mbps, the minimum required upload speed of the client is 3.3 Mbps.
  • the upload speed of clients participating in chain transmission mode may be capped at a predefined maximum level. For example, the upload speed may be capped at between 1 and 1.5 times the video bitrate. Capping the upload speed of a user device may prevent participation in chain transmission mode from disrupting other Internet activity.
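The qualification test and upload cap can be expressed directly. The 1.1 factor follows the worked example in the text; the cap factor is a tunable within the stated 1.0 to 1.5 range, and the function names are assumptions.

```python
QUALIFY_FACTOR = 1.1  # minimum upload speed as a multiple of video bitrate

def qualifies(upload_mbps, video_bitrate_mbps):
    """Speed-test qualification: upload speed must exceed the video
    bitrate by the predefined factor (e.g. 3.3 Mbps for a 3.0 Mbps stream)."""
    return upload_mbps >= QUALIFY_FACTOR * video_bitrate_mbps

def capped_upload(upload_mbps, video_bitrate_mbps, cap_factor=1.5):
    """Cap the upload rate used for chain transmission so that sharing
    does not disrupt the user's other Internet activity."""
    assert 1.0 <= cap_factor <= 1.5
    return min(upload_mbps, cap_factor * video_bitrate_mbps)
```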
  • clients participating in chain transmission mode are either "leaders" of the chain or "followers" in the chain. Therefore, following qualification of the client 270, there are two possibilities: either the client 270 will form a chain as "leader" of that chain, or the client 270 will join an existing chain as a "follower".
  • After meeting the qualification threshold, the client 270 makes contact with the chunk tracking server 230.
  • the client 270 may inform the chunk tracking server 230 that it has met the qualification threshold; however, alternative methods of determining that the client 270 is qualified to participate in chain transmission mode may be envisioned and are discussed in further detail in section 1.2.4 below.
  • the client 270 informs the chunk tracking server 230 of the content being viewed on the client 270 and the chunks of the video stream currently in the buffer of the client 270.
  • Clients in chain transmission mode may be streaming video data from another client in the chain. Therefore, it is necessary for a particular client to know the IP address of the preceding client in order to establish a connection with that client and thus receive video data.
  • some clients may be connected to a local network behind a router or firewall providing Network Address Translation (NAT).
  • Such clients will not have an external IP address suitable for sending and receiving content via the Internet, as the router or firewall modifies the IP packet headers of any outgoing or incoming packets. It is clear, therefore, that NAT presents an issue for chained transmission of video data between devices which may be connected to separate local networks.
  • the client 270 queries the STUN server 240 in order to determine its external IP address.
  • the STUN server 240 provides the client 270 with the IP address from which the query originated, thus providing the client 270 with its external IP address.
  • the client 270 then provides this external IP address to the chunk tracking server 230.
  • the chunk tracking server 230 maintains a database of each client's IP address, thus enabling the chunk tracking server 230 to provide the IP address of a client to a requesting client. It will be understood that a client will not be able to participate in chain transmission mode if its external IP address cannot be discovered.
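The STUN exchange above can be illustrated by the attribute decoding a client performs on the server's response. The sketch below decodes the XOR-MAPPED-ADDRESS attribute defined in RFC 5389 for IPv4; a complete client would also build the Binding Request and run the exchange over UDP, which is omitted here.

```python
import struct

MAGIC_COOKIE = 0x2112A442  # fixed value from RFC 5389

def parse_xor_mapped_address(attr_value):
    """Decode a STUN XOR-MAPPED-ADDRESS attribute value (IPv4 only),
    returning the client's external (ip, port) as seen by the STUN server."""
    family = attr_value[1]
    assert family == 0x01, "this sketch handles IPv4 only"
    # Port and address are XOR-ed with the magic cookie per RFC 5389.
    xport = struct.unpack("!H", attr_value[2:4])[0]
    port = xport ^ (MAGIC_COOKIE >> 16)
    xaddr = struct.unpack("!I", attr_value[4:8])[0]
    addr = xaddr ^ MAGIC_COOKIE
    ip = ".".join(str((addr >> s) & 0xFF) for s in (24, 16, 8, 0))
    return ip, port
```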
  • the chunk tracking server 230 may maintain a database of all clients in the network. Further, the chunk tracking server 230 may maintain a database of the chunks in the buffers of the last three user devices of each chain. Information associated with the last three user devices is required so that a client may join the end of a chain. When a chain breaks, the last three user devices in the truncated chain update the chunk tracking server 230 with details of the chunks in their respective buffers. The last three user devices in a chain may also send periodic notifications to the chunk tracking server 230 to indicate that the devices are still online. Further considerations of the chunk tracking server 230 are discussed in further detail in section 1.2.4 below.
  • the length of the delay between clients is dependent on the proximity of those clients and the physical network delay between the clients. Therefore, it is necessary for the chunk tracking server 230 to retain information associated with the location of the clients. This information may include, but is not limited to, the location (latitude and longitude), elevation, MAC address, IP subnet, and geofence identifier associated with the clients.
  • the location information required by the chunk tracking server 230 may comprise connectivity information. For example, a client may be required to inform the chunk tracking server 230 of whether it is connected to a local area network (LAN), WiFi, or an alternative wireless network. Accordingly, the client 270 provides the necessary location information to the chunk tracking server 230. Alternatively, the chunk tracking server 230 may obtain this information from the client 270. Location information may be provided to or obtained by the chunk tracking server 230 at predefined intervals or upon occurrence of a triggering event.
  • the client 270 queries the chunk tracking server 230 in order to establish whether there are any other user devices receiving the same content as the client 270 which are in proximity to the client 270.
  • the chunk tracking server 230 may automatically provide information on other user devices to the client 270, upon receipt of the content and/or location information from the client 270.
  • the chunk tracking server 230 may identify a number of user devices which can provide the client 270 with the video stream. Identified user devices may be required to satisfy a proximity criterion with respect to the client 270, as discussed further in section 1.2.2 below.
  • Each user device identified by the chunk tracking server 230 is the last user device in a particular chain; the client 270 cannot connect to a user device in the middle of a chain, as this may cause the uploading capability of a user device to be exceeded. Therefore, there may be a number of chains to which the client 270 may connect.
  • the chunk tracking server 230 may generate a list of candidate chains from which the client 270 may receive the video stream. For example, the chunk tracking server 230 may return a list of five candidate chains (i.e. five candidate user devices) and their IP addresses. In order to generate the list of candidate chains, the chunk tracking server 230 may calculate a score associated with each chain. The score may be calculated based on the number of user devices in the chain and may account for the geolocation of the client 270 with respect to the user device at the end of each chain and the connectivity of the user device at the end of each chain.
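The candidate-chain scoring might be sketched as follows. The source states only that the score accounts for chain length, geolocation, and connectivity; the weights, the connectivity bonus table, and the tuple layout below are assumptions for illustration.

```python
# Illustrative connectivity weighting; values are assumptions.
CONNECTIVITY_BONUS = {"LAN": 2.0, "WiFi": 1.0}

def chain_score(n_devices, distance_km, connectivity):
    # Shorter chains, nearer end devices, and better links score higher.
    return (CONNECTIVITY_BONUS.get(connectivity, 0.0)
            - 0.1 * n_devices
            - 0.01 * distance_km)

def candidate_list(chains, top_n=5):
    """`chains`: iterable of (ip, n_devices, distance_km, connectivity).
    Return the IP addresses of the `top_n` best-scoring chain endpoints,
    mirroring the list of five candidate chains in the text."""
    ranked = sorted(chains, key=lambda c: chain_score(*c[1:]), reverse=True)
    return [ip for ip, *_ in ranked[:top_n]]
```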
  • the client 270 may choose the two candidate user devices with the lowest network latency and attempt to connect to one of the two chosen user devices. If the first chosen candidate device is not available, the client 270 may attempt to connect to the second chosen candidate device. The chosen candidate device is indicated as the "primary connection" 272 in Figure 2. In order to determine the candidate source with the lowest network latency, the client 270 may initiate a ping test to determine the round-trip time (RTT) for each candidate source, based on the IP addresses provided by the chunk tracking server 230. The client 270 may then connect to the IP address of the source having the lowest RTT of the candidate chains returned by the chunk tracking server 230.
  • Ping tests typically use the Internet Control Message Protocol (ICMP); the client 270 may send an ICMP echo request packet to the candidate source and wait for an ICMP echo reply. The RTT from transmission to reception will be returned to the client 270. If the client 270 is unable to determine the RTT of a candidate source from a ping test utilising ICMP (for example, if the ICMP echo request message is blocked by the candidate source's firewall), the client 270 may be able to utilise an alternative protocol to ping the candidate sources and wait for a response, thus determining the RTT associated with the candidate sources. Suitable alternative protocols may include, for example, HTTP.
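The candidate selection by RTT can be sketched as below. The measurement function is injected because the transport varies (ICMP echo where permitted, an HTTP round trip as fallback); the function names are assumptions, and the default of two candidates follows the text.

```python
def choose_candidates(candidate_ips, measure_rtt, k=2):
    """Return the `k` candidate sources with the lowest round-trip time.
    `measure_rtt(ip)` wraps an ICMP echo or HTTP round trip and may raise
    OSError for unreachable hosts (e.g. ICMP blocked with no fallback)."""
    rtts = {}
    for ip in candidate_ips:
        try:
            rtts[ip] = measure_rtt(ip)
        except OSError:
            continue  # skip candidates that do not answer at all
    return sorted(rtts, key=rtts.get)[:k]
```

The first entry of the returned list corresponds to the primary connection attempt; the second is tried if the first is unavailable.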
  • the client 270 may detect that another user in the same network is watching the same content as the client 270, or that a user in close proximity is watching the same content as the client 270. In both of these scenarios, explained in further detail in section 1.2.7 below, the client 270 may not be required to request candidate sources from the chunk tracking server 230 prior to receiving content from another user device.
  • the client 270 may also establish a secondary connection 274 to the user device preceding the primary connection 272 in the chain, as exemplified in Figure 2B.
  • the primary connection 272 provides the client 270 with information associated with the secondary connection 274 (specifically, the IP address of the secondary connection 274). Therefore, if the primary connection 272 goes offline, the client 270 may receive video data from the secondary connection 274.
  • the primary connection 272 acts as a streaming server and initiates a unicast session with the client 270.
  • the primary connection 272 serves the chunks stored in its buffer to the client 270, which is then able to stream the video data without requiring a connection to the streaming server 210.
  • the client 270 may continue to update the chunk tracking server 230 with the content of its buffer and any changes in location information.
  • the location of the client 270 will determine its capability to receive video data from its primary 272 and secondary 274 connections and its capability to serve video data to other users.
  • the client 270 also sends periodic notifications to the chunk tracking server 230 to indicate that the client 270 is not offline. Once three additional user devices join the chain, the client 270 may no longer be required to provide updates to the chunk tracking server 230.
  • the client 270 may maintain the session with the streaming server 210 for a predetermined period of time, while significantly reducing the amount of video data being streamed from the streaming server 210 to a predefined minimum level. For example, the client 270 may periodically stream one byte-range of a chunk for the duration of the predetermined period, thus substantially reducing bandwidth consumption. In the event that the primary 272 and secondary 274 connections both go offline during the predetermined period, the client 270 may immediately fall back on the streaming server 210 without having to re-establish the connection to the streaming server 210.
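The keep-alive technique above, streaming a single small byte range periodically to hold the unicast session open, can be sketched as the construction of the minimal request. The header follows standard HTTP/1.1 Range semantics; the function name, URL, and the one-byte default are assumptions.

```python
def keepalive_request(chunk_url, offset, nbytes=1):
    """Build a minimal byte-range request used only to keep the unicast
    session with the streaming server warm while nearly all video data
    is received from the chain."""
    end = offset + nbytes - 1  # HTTP byte ranges are inclusive
    return {"url": chunk_url,
            "headers": {"Range": f"bytes={offset}-{end}"}}
```

Issued at a low predefined interval, such requests consume negligible bandwidth yet let the client fall back to the streaming server immediately if both chain connections go offline.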
  • the client 270 may no longer stream video data from the streaming server 210. If both the primary 272 and secondary 274 connections go offline, the client 270 may make contact with the redirection server 250 in order to receive a direction to stream video data from a particular streaming server. The client 270 will then stream video data from that server until it meets the predefined criterion and is provided with a list of candidate chains from which to stream the video data.
  • the client 270 may make contact with the chunk tracking server 230 and request an alternative source from which to stream video data.
  • the chunk tracking server 230 may provide a list of candidate sources and the client 270 may receive video data from an identified candidate source, as described above.
  • the client 270 may continue to stream video data from its original source whilst streaming from the newly identified source, in order to compare the throughput from both sources and determine the chain with the best throughput.
  • the client 270 monitors throughput from both sources over a predefined time interval. The predefined time interval must be sufficient to enable the client 270 to identify its preferred source.
  • While the client 270 is streaming from both sources, no additional users can connect to either chain; therefore, the predefined time interval must not be so long as to prevent transmission of video data to requesting users. After the client 270 has identified its preferred source, it may terminate the connection with the other source. Further throughput issue resolution considerations are discussed in section 1.6 below.
  • the client 270 exits the current chain and requests the new channel from the redirection server 250.
  • the redirection server 250 identifies a server from which the client 270 may stream the desired channel and directs the client 270 to stream the channel from the identified server.
  • the client 270 receives video data from the identified server until the client 270 meets the predefined criterion, as described above.
  • the client 270 may then provide content and location information to the chunk tracking server 230 and join a chain as described above. If the user pauses or rewinds the live broadcast, the client 270 exits the current chain and acquires the replay video data from the streaming server 210 via a standard unicast session.
  • Replay video data may alternatively be stored at a separate server, to which the client 270 may connect.
  • the client 270 requests video data from the streaming server 210 and reconnects to a chain if it remains qualified to participate in chain transmission mode.
  • the chunk tracking server 230 provides the requesting client 276 with information associated with the client 270, so that the client 270 can serve the video data to the requesting client 276.
  • the client 270 provides information about its primary connection 272 to the requesting client 276 so that the requesting client 276 can establish its own secondary connection to the primary connection 272 of the client 270, as shown in Figure 2B.
  • the client 270 could pose a threat to the reliable transmission of video data if its buffer level falls below a minimum threshold.
  • the client 270 must exit the chain and reconnect with the streaming server 210 until its buffer level exceeds the threshold level for participation in chain transmission mode.
  • the client 270 may then coordinate with the chunk tracking server 230 to reconnect to a chain.
  • the minimum threshold, which determines whether it is necessary for the client 270 to exit the chain, is different from the threshold for participation in chain transmission mode.
  • the difference in threshold values accounts for fluctuations in network conditions causing associated fluctuations in the client buffer level.
  • the minimum threshold may be determined dynamically based on the network conditions. For example, the minimum threshold may be a variable limit between 30% and 50% of buffer capacity.
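The dynamically determined minimum threshold can be sketched as a mapping from network conditions onto the stated 30% to 50% range. How congestion is quantified is not specified in the source, so the linear mapping and the `congestion` parameter below are assumptions.

```python
def min_threshold(congestion, low=0.30, high=0.50):
    """Dynamic minimum buffer threshold: `congestion` in [0, 1], where
    worse network conditions demand a larger safety margin before the
    client may remain in the chain."""
    congestion = max(0.0, min(1.0, congestion))
    return low + congestion * (high - low)

def must_exit_chain(buffer_fill, congestion):
    """True when the client's buffer falls below the dynamic minimum
    and it must reconnect to the streaming server."""
    return buffer_fill < min_threshold(congestion)
```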
  • If the chunk tracking server 230 is unable to identify any candidate user devices from which the client 270 can receive the video data, the client 270 continues to stream video data from the streaming server 210. As noted above, the chunk tracking server 230 maintains content and location information associated with the client 270. Therefore, the client 270 may become the "leader" in a chain in the event that a user device is identified as being in proximity to and receiving the same content as the client 270. In this scenario, the user device establishes a connection to the client 270 in the same way as the requesting client 276, as discussed above.
  • a byte-range may be specified such that a particular chunk is broken up into portions. This method enables a client to upload the data specified by the byte-range information, without necessarily having received the entire chunk. This reduces delays between users participating in chain transmission mode and avoids a client becoming locked into a long TCP session in which a large chunk is being transferred to the next client.
  • the first client in the chain determines how a chunk should be portioned by defining byte-ranges of that chunk. As the whole chunk is available for download from the streaming server 210, the first client can choose how to split the chunk. It follows that all following clients are also capable of splitting the chunk into portions if the entire chunk is available for download from the preceding device in the chain. However, in practice, a client may download a byte-range and relay the byte-range to the next client before the remainder of the chunk has been downloaded. For larger chunks, byte-ranges specified by the first client in the chain are likely to be relayed down the entire chain.
  • a client may dynamically determine the byte-range to be downloaded from the preceding user in the chain. This dynamic byte-range determination may be based on the client's download speed. For example, a client may specify a byte-range of a chunk as 25% of that chunk. Based on the amount of time required to download 25% of that particular chunk, the client may adjust the byte-range accordingly. The client may determine that the specified byte-range has been downloaded sufficiently quickly for a different byte-range to be specified; therefore, the client may decide to download the next 40% of that chunk. Thus the time required to download previous byte-ranges may be used by the client in specifying future byte-ranges.
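The dynamic byte-range determination above can be sketched as an adjustment rule driven by the previous download time. The 25% to 40% step matches the example in the text; the speed thresholds, bounds, and function name are assumptions.

```python
def next_fraction(prev_fraction, download_s, playback_s):
    """Choose the next byte-range size (as a fraction of the chunk) from
    how quickly the previous byte-range downloaded.

    `download_s`: time taken to fetch the previous byte-range;
    `playback_s`: playback duration that byte-range represents."""
    if download_s < 0.5 * playback_s:   # fast link: request a larger range
        return min(1.0, prev_fraction * 1.6)
    if download_s > playback_s:         # slow link: request a smaller range
        return max(0.05, prev_fraction * 0.5)
    return prev_fraction                # keep the current range size
```

For example, if a 25% byte-range representing 10 seconds of playback downloads in 1 second, the client grows the next request to 40% of the chunk, mirroring the 25% to 40% example above.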
  • the chunk tracking server 230 identifies user devices which are receiving the same content as the client 270 and are in proximity to the client 270. For example, user devices may be identified as being connected to the same network as the client 270. Alternatively, user devices may be identified as belonging to the same IP subnet as the client 270. As a further alternative, user devices may be identified as being in proximity to the client 270 using geofencing technology, which is discussed in further detail below. Therefore, user devices may be required to satisfy a proximity criterion, which may specify that identified user devices must be determined as being in proximity to the client 270.
  • an identified user device may be required to satisfy a proximity criterion specifying that the user device is either connected to the same network as the client 270, identified as belonging to the same IP subnet as the client 270, or located within the same geofence as the client 270.
  • the connectivity of a user device may be taken into account in determining whether that user device satisfies a proximity criterion with respect to the client 270.
  • the chunk tracking server 230 may prioritise any user devices identified as being in proximity to the client 270. For example, user devices identified as belonging to the same network may be prioritised over those identified using geofencing, which may in turn be prioritised over those identified as belonging to the same IP subnet. User device connectivity may also be factored into the prioritisation scheme used by the chunk tracking server 230. The prioritisation of user devices may be used by the chunk tracking server 230 in generating the list of candidate user devices (candidate chains), as discussed above. The client 270 may then ping the candidate sources in order to establish which candidate source has the lowest RTT (or use any alternative method to determine each source's RTT), as discussed above.
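The proximity criterion and the prioritisation order above can be combined into one ranking function. The priority ordering (same network, then same geofence, then same IP subnet) follows the example in the text; the dictionary field names are assumptions.

```python
import ipaddress

def proximity_rank(client, device):
    """Return a sortable rank for a candidate device: lower is higher
    priority; None means the proximity criterion is not satisfied."""
    if device["network_id"] == client["network_id"]:
        return 0  # same local network: highest priority
    if device["geofence_id"] == client["geofence_id"]:
        return 1  # same geofence
    same_subnet = (ipaddress.ip_network(device["subnet"])
                   == ipaddress.ip_network(client["subnet"]))
    return 2 if same_subnet else None  # same IP subnet, else not proximate
```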
  • the clients may provide the chunk tracking server 230 with information associated with their location.
  • the chunk tracking server 230 may attempt to detect the location of clients. Firstly, the chunk tracking server 230 may query a client to check for its network MAC address. Secondly, the chunk tracking server 230 may query a client to establish whether the user is situated within a particular geofence, as explained further below. If these two options are not available, the chunk tracking server 230 may query a geolocation service provider using the user's IP address in order to establish the location of the client to a reasonable degree of accuracy.
  • Geofencing is a feature in a software program that uses the global positioning system (GPS) or radio frequency identification (RFID) to define geographical boundaries.
  • a geofence acts as a virtual barrier.
  • An administrator can be alerted when a user device enters (or exits) the defined boundaries of the geofence.
  • the boundaries may be defined using latitude and longitude data or through the use of a satellite view of a geographical area, provided by an application such as Google Earth.
  • the chunk tracking server 230 may be able to establish whether the client 270 is within the same geofence as a user device participating in chain transmission mode. As described above, location information may be provided to or obtained by the chunk tracking server 230 upon occurrence of a triggering event. If the user is on the move, the chunk tracking server 230 may be able to receive a notification when the client 270 enters or exits the same geofence as a user device participating in chain transmission mode. The chunk tracking server 230 may then direct the client 270 to establish primary and secondary connections as described above. For this purpose, the chunk tracking server 230 may maintain a database of defined geofences.
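A minimal point-in-geofence test for a circular geofence defined by a centre and radius is sketched below using the haversine formula; real deployments may instead define polygon boundaries from latitude/longitude data or a satellite view, as noted above. The field names are assumptions.

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in kilometres between two lat/lon points."""
    r = 6371.0  # mean Earth radius, km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = (math.sin(dp / 2) ** 2
         + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2)
    return 2 * r * math.asin(math.sqrt(a))

def in_geofence(lat, lon, fence):
    """True when the point lies inside a circular geofence given as
    {"lat": ..., "lon": ..., "radius_km": ...}."""
    return haversine_km(lat, lon, fence["lat"], fence["lon"]) <= fence["radius_km"]
```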
  • the client 270 must meet the upload speed threshold in order to qualify for participation in chain transmission mode. Whether the client 270 meets the qualification threshold depends not only on the throughput of the device but also on whether the user allows the client 270 to upload video data for another user. A user may be able to opt out of sharing data with other users, in which case the client 270 will stream video data from the streaming server 210 in the same way as if the qualification threshold were not met.

1.2.4. Chunk tracking server considerations
  • the chunk tracking server 230 may maintain a database of users in the network and their location. Further, the chunk tracking server 230 may maintain a database of the chunks in the buffers of the last three user devices in each chain. This ensures accurate matching of clients requesting a video stream with candidate user devices participating in chain transmission mode. By maintaining information on the chunks in the buffers of the last three clients in each chain, the chunk tracking server 230 is able to direct a requesting client to receive video data from the last client in the chain. This information also enables the chunk tracking server 230 to monitor the primary and secondary sources of the last client in the chain and consequently establish whether there are any connection issues affecting the transmission of video data to the last client in each chain.
  • the chunk tracking server 230 may periodically probe the last client in the chain in order to determine whether that client is still online. Further, the chunk tracking server 230 may probe the last client in the chain in order to verify that the content of that client's buffer matches the chunk information stored in the chunk database. This helps the chunk tracking server 230 to identify whether a user has, for example, paused the live stream. If so, the chunk tracking server 230 will not include that user device in a list of candidate sources for a requesting client. The chunk tracking server 230 may further be operable to determine, for example from the playlist file of the last client in the chain, whether the content being viewed by that client is DRM encrypted. Information on whether content is encrypted may affect the score calculated by the chunk tracking server 230 in generating the list of candidate sources for a client.
  • the chunk tracking server 230 is operable to query the chunk database to determine whether to redirect a client to stream video data from another client.
  • the chunk tracking server 230 may in fact comprise multiple servers across multiple locations.
  • Certain channels monitored by the chunk tracking server 230 may be location-specific, such as the BBC in the UK. Accordingly, the number of requests received at an individual server may be limited by programming that server to monitor only the channels accessed by users in a particular area. This controls the number of users accessing each individual server.
  • the database maintained by the chunk tracking server 230 may further comprise information associated with the number of users in the chain, in order to permit the chunk tracking server 230 to impose a cap on the number of users in each chain and thus reduce the overall delay between the live stream and the stream received at the last client in the chain.
  • the chunk tracking server 230 may maintain a blacklist of IP subnets associated with regions having limited upload capability. User devices within a blacklisted IP subnet are disqualified from participating in chain transmission mode.
  • the chunk tracking server may become aware that a client is qualified to participate in chain transmission mode upon receipt of an indication from the client that it has met the qualification threshold. It can be appreciated that alternative methods may be employed by the chunk tracking server 230 in order to determine whether a client has become qualified to participate in chain transmission mode.
  • the chunk tracking server 230 may maintain a separate database of all clients receiving content from the streaming server 210, and may periodically poll these clients in order to determine whether they meet the predefined criterion. This method requires a communications link between the streaming server 210 and the chunk tracking server 230, but obviates the requirement for clients to make contact with the chunk tracking server 230.
  • the chunk tracking server 230 may receive a notification from the speed test server 220 that a client meets the upload speed criterion.
  • a client may inform the speed test server 220 that it has met the predefined criterion for transmission reliability (for example, the buffer fill level).
  • the speed test server 220 may alternatively monitor clients served by the streaming server 210, as described above with respect to the chunk tracking server 230.
  • a client may inform the streaming server 210 to reduce the transmission rate once the predefined criterion has been met.
  • the streaming server 210 may inform the speed test server 220 and/or the chunk tracking server 230, each of which may initiate contact with the client.

1.2.5. DRM considerations
  • Video data streamed from the streaming server 210 may be encrypted for digital rights management (DRM) purposes.
  • the video data stream is encrypted based on a random time window. For example, one key may be required to decrypt a video stream between 1:00 pm and 1:15 pm, but a different key may be required to decrypt the video stream between 1:15 pm and 2:50 pm.
  • Keys are issued by the DRM server 260, which also determines when to change the key. Each chunk of video data therefore has an associated key required for playback of that chunk.
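The time-windowed key scheme can be sketched as a lookup from a chunk's timestamp to the key whose window contains it. Each window runs from its start until the next window's start, reflecting the random window boundaries chosen by the DRM server; the timestamp encoding (plain integers here) and function name are assumptions.

```python
import bisect

def key_for(ts, windows):
    """Return the decryption key covering timestamp `ts`.

    `windows` is a list of (start_ts, key) pairs sorted by start time;
    each key applies from its start until the next window's start."""
    starts = [w[0] for w in windows]
    i = bisect.bisect_right(starts, ts) - 1  # last window starting at or before ts
    if i < 0:
        raise KeyError("no key window covers this timestamp")
    return windows[i][1]
```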
  • the leader client in a chain is required to request the relevant key for decryption of a specific number of chunks, prior to playback.
  • the transmission of video data by clients participating in chain transmission mode maintains transparency with the DRM server 260.
  • When the leader client relays chunks to a following client in the chain, the leader client also shares the decryption key with that following client. Accordingly, the load on the DRM server 260 is reduced. As following clients are not required to obtain keys from the DRM server 260, transmission delays are consequently reduced.
  • If the DRM server 260 does not permit sharing of decryption keys (for example, if a client is billed for receipt of each decryption key), an alternative approach is for each client in the chain to obtain decryption keys directly from the DRM server 260.
  • a client may be operable to determine whether it is necessary to obtain the key for decryption of a particular chunk from the DRM server 260. For example, a client may examine its playlist file and determine that a decryption key received from the previous client in the chain may not be suitable for playback of a particular chunk. The decryption key may be unsuitable for playback of the chunk as the key may have expired or the key may be restricted to a particular geographical area. If the client determines that the key is unsuitable for playback of a chunk, the client may obtain the relevant decryption key from the DRM server 260.

1.2.6. Upload speed considerations
  • the chain transmission architecture has been described above as comprising chains of clients, with each client having a primary connection from which video data is served and a secondary, backup connection. Each client may be the primary connection for a following client, to which video data may be served, and a secondary, backup connection to a further following client. It has further been identified above that the chain transmission architecture does not support adaptive bitrate streaming. The limitation of serving video data to a single client at a single bitrate is imposed by the uploading capability of clients in the chain. Future developments in Internet connectivity are likely to result in clients having increased upload speeds, thus relaxing the constraints on the number of users served by a client and the video data bitrate. Accordingly, the chain transmission architecture described above may be modified for chains of clients with higher upload speeds.
  • a client may be able to stream video data from two separate clients, each of which serves the video data at a different bitrate.
  • the client receiving the two streams at different bitrates may then serve two additional primary connections by serving the video stream at the first bitrate to a first following client and the video stream at the second bitrate to a second following client. If uploading capability allows, further bitrates could be supported.
  • Clients may also be able to distribute content at a particular bitrate to a plurality of following clients. For example, a client may download video data from its primary connection and serve that video data to two following clients, each of which may be able to serve the video data to two further following clients. The number of following clients that each client can serve is thus limited only by the upload speed of that client.
  • the principle of serving multiple following clients may be combined with the principle of serving video data at multiple bitrates, so that each client may serve video data to a plurality of following clients at each of a plurality of bitrates.
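The combined fan-out can be budgeted in a simple way. The sketch below assumes an even split of the upload budget across bitrates and borrows the 70% upload share that the description later applies to collaborative VOD distribution mode; both choices are illustrative, not part of the described method:

```python
def followers_per_bitrate(upload_kbps, bitrates_kbps, share=0.7):
    """Divide the client's upload budget evenly across the supported
    bitrates and report how many following clients can be served at
    each bitrate without exceeding that budget."""
    budget_per_rate = (upload_kbps * share) / len(bitrates_kbps)
    return {rate: int(budget_per_rate // rate) for rate in bitrates_kbps}
```

For example, a client with a 10 Mbit/s upload link could serve three followers at 1000 kbit/s and one at 2000 kbit/s under this budgeting.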
  • a client may not necessarily be required to request candidate sources from the chunk tracking server 230 prior to receiving video data.
  • a client may be operable to detect, through use of a processor working at the application level, that another user in the same network is watching the same content as that client. In this case, the client may establish a connection to the other user's device. Streaming video data from the user device in the same local network may be more reliable than streaming video data from a streaming server.
  • a client may not be required to meet a predefined criterion before participating in chain transmission mode. Instead, the client is served video data by the user device in its network and informs the chunk tracking server 230 of the chunks in its buffer and its location, so that a requesting client may receive video data from that client.
  • user devices in close proximity may be operable to establish a connection with one another.
  • Direct communication between user devices in close proximity may enable information associated with chunks in a client buffer to be shared with neighbouring devices. Therefore, prior to streaming content from a streaming server, a client may check for other user devices in close proximity which may be streaming the same content.
  • each user device may be required to consent to their IP address and buffer content being discoverable to other users within a predefined radius.
  • any platform implementing the present method may also be required to implement a protocol in order to enable discoverability of clients using the platform. Therefore, in order to receive content through the platform, each client may be required to consent to use of the implemented protocol. That is, each client may be required to consent to being discoverable to any neighbouring devices within a certain radius and relaying video data to these neighbouring devices. All neighbouring devices will also be using the platform in order to receive video data; therefore, all devices using the present method will be utilising the same protocol. Participation in the present method is, accordingly, conditional on each client using a particular platform implementing this protocol.
  • the client may establish a connection to a primary source and a secondary source within the predefined radius, as described above in section 1.2.
  • the chain transmission architecture may still be employed for transmission of video data between users in close proximity, so that the uploading capability of each client is not overloaded.
  • the requirement for clients to send requests to the various servers identified above may be significantly diminished or even eliminated by employing the present method.
  • This 'off-grid' scenario works most efficiently if there is a dense crowd of users, as the likelihood of identifying a user device which is streaming the desired content within the predefined radius will be higher.
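The proximity discovery described above can be sketched as a filter over discoverable devices. The device records and planar (x, y) positions are illustrative assumptions; the description does not fix a distance metric:

```python
import math

def nearby_sources(devices, my_pos, content_id, radius_m):
    """Return discoverable devices within the predefined radius that are
    streaming the requested content, nearest first. Positions are (x, y)
    coordinates in metres."""
    def dist(d):
        return math.hypot(d["pos"][0] - my_pos[0], d["pos"][1] - my_pos[1])
    hits = [d for d in devices if d["content"] == content_id and dist(d) <= radius_m]
    return sorted(hits, key=dist)
```

A client would try the nearest returned device first, falling back to the streaming server if the list is empty.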
  • a live video stream may have additional files associated with each chunk.
  • each chunk may have an associated audio description file or a series of audio files supporting playback in a number of different languages.
  • each chunk may have an associated closed caption (subtitle) file; again, closed captions may be displayed in a number of different languages.
  • Information on the different audio and text files associated with chunks can be obtained from the playlist file.
  • a client downloads any necessary closed caption files and additional audio files from the streaming server 210. Closed captioning is an optional feature which may not be relayed to following clients; instead, these clients may obtain the closed caption files from the streaming server 210.
  • a video stream may support a number of different languages and details of supported languages are included in the playlist file. If a client receives a video stream in one language (e.g. French), but the user wishes to associate the content with a different supported language (e.g. Chinese), the client may download the desired language file from the streaming server 210.
  • FIG. 3 shows an exemplary embodiment of a transmission architecture 300 in which a third party server 310 delivers content to a number of clients 320.
  • a third party may install the third party server 310 in a location in which a number of independent users may be present, for example, a restaurant. In this scenario, the third party may be the restaurant owner.
  • the third party server 310 enables the clients 320 to establish a wireless connection with the third party server.
  • the third party server 310 may comprise a wireless hotspot.
  • Clients 320 may connect to the third party server 310 when they enter a geofence 330 surrounding the third party server 310.
  • the geofence 330 may be defined as the perimeter of the restaurant.
  • the third party server 310 can then serve content to the wirelessly connected clients 320.
  • the content may be relayed between clients 320 using the chain transmission architecture described above. That is, the clients 320 are connected as a chain in order to distribute the content served by the third party server 310.
  • This concept can also be applied to locations in which individual users do not have an Internet connection. If a leader client is the only user with an Internet connection, all following clients can connect to the leader client, for example using Wi-Fi Direct, and relay content being viewed at the leader client using the chain transmission architecture described above.
  • the third party server 310 may receive video data from a streaming server.
  • the first client may connect wirelessly (for example using Wi-Fi Direct) to the third party server 310 and receive the content.
  • Another client may also connect wirelessly to the first client, thus forming a chain, and further clients may connect in turn.
  • the video data may then be transmitted down the chain of clients 320.
  • a caching server 312 may be deployed at the same location as the third party server 310.
  • the caching server 312 may be able to store a portion of a live video stream in addition to VOD content. In the event that the Internet connection to the third party server 310 is interrupted, clients 320 will still be able to stream video data from the caching server 312.
  • a cell tower may be overloaded if demand from a large number of users exceeds the bandwidth capacity of the cell tower.
  • This scenario can be envisioned if there is a crowd of users in a given area, all trying to access a live video stream.
  • use of the caching server 312 and third party server 310, in conjunction with the method described above, would enable chained transmission of the video data without overloading the bandwidth capacity of the cell tower.
  • Figure 4 illustrates an exemplary process of a client requesting video content.
  • the client connects to the streaming server.
  • the client receives video data from the streaming server until the predefined criterion is met (for example, until the client buffer level exceeds 80%), in step 402.
  • the client then performs a self-upload test with the speed test server, in step 404.
  • the speed test server determines whether the client meets the qualification threshold. If the client does not meet the qualification threshold or the user has chosen to opt out of relaying video data to other users, the client maintains the unicast session with the streaming server, in step 408. Alternatively, if the client does meet the qualification threshold, the client provides the chunk tracking server with a list of the HLS chunks within the client buffer, in step 410. The client may also provide the chunk tracking server with information associated with the location of the client.
  • the chunk tracking server determines, in step 412, whether there are any other user devices located in proximity to and receiving the same content as the client. If no other user devices are identified, the client continues to receive content from the streaming server, in step 414. If any other user devices are identified, the chunk tracking server provides the client with a list of candidate sources from which the video data may be obtained, in step 416. In either case, the client continues to update the chunk tracking server with the content of the client buffer. In step 418, the client identifies the candidate source with the lowest network latency from the received list of candidate sources.
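Step 418 reduces to a minimum over probed latencies. This sketch assumes latencies have already been measured (the description does not specify the probing mechanism):

```python
def select_primary_source(candidates, latency_ms):
    """Step 418: pick the candidate source with the lowest measured
    network latency; latency_ms maps each candidate to its probed
    latency in milliseconds."""
    if not candidates:
        return None  # step 414: continue streaming from the streaming server
    return min(candidates, key=lambda c: latency_ms[c])
```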
  • the client then establishes, in step 420, a primary connection to the identified source and, in step 422, a secondary connection to the entity (user device or streaming server) providing video data to the identified source.
  • the identified source streams video data to the client.
  • the client may then reduce the rate of streaming video data from the streaming server to a predefined minimum level, in step 426.
  • Figure 5 shows an exemplary procedure for maintaining the transmission of video data in the event that a user or users go offline. If, in step 500, the primary source of a client goes offline, the client determines whether its secondary source is online, in step 502. The client receives video data from the secondary source, if the secondary source is online, in step 504.
  • the client may determine whether it still has a connection to the original HLS streaming server, in step 506.
  • the connection with the original streaming server may be maintained for a predetermined period of time, as discussed above. If the connection with the original streaming server has not been terminated (that is, the predetermined time period has not expired), the client may stream video data from the original streaming server, in step 508. If, however, the predetermined period has expired, the client makes contact with the redirection server, in step 510.
  • the redirection server directs the client to stream video data from a particular server, in step 512.
  • the client then streams video data from the streaming server until the predefined criterion is met, in step 514.
  • the client participates in chain transmission mode if the qualification threshold is met, as described above in section 1.4.
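The failover order of Figure 5 can be sketched as a small decision function; the return labels are illustrative:

```python
def choose_fallback(secondary_online, grace_period_open):
    """Figure 5 failover order: secondary source first, then the
    original streaming server while the predetermined grace period
    lasts, then the redirection server (steps 504, 508 and 510)."""
    if secondary_online:
        return "secondary_source"            # step 504
    if grace_period_open:
        return "original_streaming_server"   # step 508
    return "redirection_server"              # step 510
```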
  • FIG. 6 shows an exemplary procedure for identifying throughput issues in the transmission chain.
  • a client identifies, in step 600, that throughput from its primary source is insufficient if its buffer level starts to fall.
  • the client requests an alternative source from the chunk tracking server. If, in step 604, an alternative source is available, the client streams video data from a source identified by the chunk tracking server, in step 606. Whilst streaming from the alternative source, the client continues to stream video data from its original source. The client then determines whether the throughput issue persists, in step 608. If so, the throughput issue is identified as being caused by internal congestion at the client, in step 610. The client therefore breaks from the chain and makes contact with the redirection server, in step 612.
  • In step 614, the client streams video data from the streaming server to which it is directed by the redirection server, in a conventional unicast session. If, in step 604, no alternative source is available, the client makes contact with the redirection server, in step 616. In step 618, the client streams video data from the streaming server to which it is directed by the redirection server. The client then determines whether the throughput issue persists, in step 620. If so, the throughput issue is again determined as being caused by internal congestion at the client, in step 622. In step 624, the client therefore breaks from the chain and continues to stream video data from the streaming server, in a conventional unicast session.
  • the throughput issue is determined, in step 626, as being caused by congestion in the chain.
  • the client determines, in step 628, whether throughput from the alternative source or streaming server is greater than throughput from its original source. In determining which source has greater throughput, the client may stream from both sources for a predefined time interval, so that the client can make a comparison between the sources. If throughput from the original source is sufficient, the client continues, in step 630, to stream video data from its original source. The client therefore terminates the connection with the alternative source or streaming server, in step 632.
  • the client breaks from the original chain and streams from the alternative source or streaming server, in step 634.
  • the client also instructs all following clients to break from the original chain and request alternative sources, in step 636.
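The core diagnostic step in Figure 6 is a single test: whether the problem survives a change of source. A hedged sketch, with illustrative return labels:

```python
def diagnose(issue_persists_with_new_source):
    """Figure 6 decision: if switching to an alternative source (or the
    streaming server) does not clear the problem, congestion is internal
    to the client, which must break from the chain and stream in a
    conventional unicast session (steps 610-614 and 622-624); otherwise
    the original chain is congested (step 626) and throughputs of the
    two sources are compared (step 628)."""
    if issue_persists_with_new_source:
        return ("internal_congestion", "break_chain_and_unicast")
    return ("chain_congestion", "compare_throughput_of_sources")
```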
  • Section 2 Video on demand (VOD)
  • In the chain transmission architecture described above, a user wishing to switch from live streaming to VOD will exit the chain.
  • VOD involves a client accessing a video file from a content server.
  • the content server serves a streaming server with the video file for edge caching at the streaming server.
  • the client streams the video file from the streaming server and temporarily stores downloaded content in the client buffer prior to rendering the content for display.
  • Video data in the client buffer is then erased to make space for new video data served by the streaming server. If the user wishes to re-watch the VOD content, the video data is again served to the client by the streaming server.
  • video data is served to the client each time the user wishes to view the VOD content.
  • the process is identical for viewing a film.
  • VOD streaming can result in scalability and quality of service issues similar to those associated with live streaming. These issues may occur, for example, if there is significant demand for viewing a new film release.
  • the video transmission module 200 shown in Figure 2A can also be used to address the above-mentioned issues associated with VOD streaming.
  • Figure 7 shows the interaction of a client 770 with the video transmission module 200, according to an exemplary embodiment.
  • Clients interacting with the video transmission module 200 in order to reduce the load on the streaming server 210 will be referred to as participating in "collaborative VOD distribution mode", for ease of reference.
  • Collaborative VOD distribution mode is compatible with the data flow control methods disclosed in WO2015/150812, as described above with respect to live streaming.
  • In order to participate in collaborative VOD distribution mode, a user must consent to local storage of the streamed data on the client 770.
  • Clients typically contain sufficient storage capability in order to store video data received from the streaming server 210.
  • It may only be necessary to remove stored data from a client's local storage if additional storage is required. Therefore, as data is received from the streaming server, it is moved from the client buffer to the local storage of the client 770. When the local storage of the client 770 becomes full, the client 770 may overwrite the oldest data with new video data in a circular loop. By storing video data from the streaming server 210 in local storage at the client 770, a user wishing to re-watch VOD content will be able to reuse the video data cached at the client 770.
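The circular overwrite can be sketched as a bounded store; counting capacity in whole chunks is a simplifying assumption:

```python
class CircularStore:
    """Local storage that, once full, overwrites the oldest video data
    with new data in a circular loop (capacity counted in chunks)."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.chunks = []  # oldest first

    def add(self, chunk):
        if len(self.chunks) >= self.capacity:
            self.chunks.pop(0)  # discard the oldest chunk to make space
        self.chunks.append(chunk)
```

A re-watching user is then served from `chunks` whenever the wanted data has not yet been overwritten.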
  • a further condition of participation in collaborative VOD distribution mode is that a user may be required to consent to sharing the cached video data with other users.
  • the client 770 provides the chunk tracking server 230 with details of the chunks stored in its local storage.
  • the client 770 may be required to meet an upload speed threshold. Therefore, the client 770 uploads content to the speed test server 220, which determines whether the client 770 can participate in collaborative VOD distribution mode.
  • the client 770 may be able to serve video data to multiple users if its upload speed is sufficient.
  • In collaborative VOD distribution mode, the uploading bandwidth consumption of the client 770 is limited to 70% of the user's total available upload bandwidth. This ratio allows the user's typical Internet usage to remain smooth while the client serves content to other users.
  • the client 770 may serve video data to other user devices located in proximity to the client 770. Thus the proximity considerations discussed above in section 1.2.2 are also applicable to VOD streaming. Therefore, the client 770 provides the chunk tracking server 230 with information associated with the location of the client 770, as described with respect to live streaming. Alternatively, the chunk tracking server 230 may obtain this information from the client 770, again as described above.
  • the VOD content served to the user may have a time restriction, beyond which the user is unable to view the VOD content.
  • the viewing time restriction does not affect the ability of the client 770 to share the VOD content with other users.
  • the client 770 may be required to obtain decryption keys from the DRM server 260, request its external IP address from the STUN server 240, and provide the chunk tracking server 230 with the returned IP address. It will further be appreciated that the client 770 may provide user devices located in proximity to the client 770 with decryption keys in addition to chunks of the video file.
  • the dedicated transmission protocol described above in section 1.1 may also be used to provide the video file to the client 770. That is, sources identified as storing a portion of the video file may push chunks of the video file to the client 770.
  • the chunk tracking server 230 which facilitates clients' participation in collaborative VOD distribution mode stores information associated with the chunks in local storage at managed clients (managed clients being all clients which have provided information associated with the content of their local storage to the chunk tracking server 230).
  • the chunk tracking server 230 does not update the chunk database with chunk information received from only a limited number of clients. Instead, the chunk tracking server 230 is required to maintain a chunk database of all chunks stored by managed clients.
  • the chunk database may be updated every time the content of a client's local storage changes.
  • the chunk tracking server 230 may be required to provide information on neighbouring user devices to the client 770.
  • the chunk tracking server 230 is operable to determine whether user devices satisfy a proximity criterion with respect to the client 770 and generate a score associated with neighbouring user devices in order to prioritise user devices identified as being in proximity to the client 770.
  • a user device may provide connectivity information to the chunk tracking server 230, which may use the connectivity information in determining whether the user device satisfies the proximity criterion and in prioritising neighbouring devices.
  • the chunk tracking server 230 may prioritise user devices storing more chunks of the video file.
  • the proximity criterion evaluated by the chunk tracking server 230 may use geofencing information, as described in section 1.2.1; accordingly, the chunk tracking server 230 may maintain a database of defined geofences. Considerations described in section 1.2.5 in relation to sharing of decryption keys are also applicable to collaborative VOD distribution mode. Clients may download decryption keys from neighbouring devices or obtain decryption keys from the DRM server 260. The client 770 may be operable to determine whether a decryption key downloaded from a neighbouring user device is suitable for playback of a chunk. The chunk tracking server 230 may also be able to store, for each chunk, an indication whether that chunk is DRM encrypted.
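The proximity criterion and prioritisation described above can be sketched as a filter-then-score step. Geofence membership as the criterion and chunk count as the score are both choices named in the description; the field names and the cap of sixteen results (taken from the example in section 2.1) are illustrative:

```python
def rank_candidates(devices, client_geofence, max_results=16):
    """Apply a proximity criterion (geofence membership) and prioritise
    the surviving devices so that those storing more chunks of the
    video file come first."""
    eligible = [d for d in devices if d["geofence"] == client_geofence]
    eligible.sort(key=lambda d: d["chunks_stored"], reverse=True)
    return eligible[:max_results]
```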
  • Clients may be operable to detect whether users connected to the same local or wireless network store a portion of the video file.
  • clients may download chunks of the video file from locally-connected user devices or user devices located in close proximity to the client, as described in section 1.2.7 above.
  • Clients may also be able to download closed caption files and additional audio files from the streaming server 210, as described above in section 1.2.8.
  • An exemplary embodiment of entities operating in collaborative VOD distribution mode is illustrated in Figure 8.
  • a client 830 with a buffer 832 and local storage 834 streams VOD content from a streaming server 810, which establishes a unicast session with the client 830.
  • Video data received from the streaming server 810 is stored in the client buffer 832 prior to being rendered for display at the client 830. After being displayed, the video data chunks are stored in the local storage 834 of the client 830.
  • a first subset of a video file is streamed from the streaming server 810 until the client 830 meets a predefined criterion (for example, until the buffer fill level exceeds a threshold such as 80%), as described with respect to live streaming. Once the predefined criterion is met, the client 830 makes contact with the chunk tracking server 820, in order to request chunks from users in proximity to the client 830.
  • In addition to requesting from the chunk tracking server 820 a list of suitable sources in proximity to the client 830 from which chunks may be downloaded, the client 830 also informs the chunk tracking server 820 of the chunks in its local storage 834. This enables the chunk tracking server 820 to include the client 830 in future lists of candidate user devices from which chunks of the video file can be downloaded.
  • the chunk tracking server 820 may maintain a chunk database containing information associated with the chunks stored locally at all clients participating in collaborative VOD distribution mode. Therefore, clients participating in collaborative VOD distribution mode are required to update the chunk tracking server 820 with any changes to the chunks stored in their local storage. These clients may also be required to inform the chunk tracking server 820 of the bitrate of each chunk; the chunk tracking server 820 will attempt to provide requesting clients with candidate devices having chunks of the video file at the highest bitrate.
  • the chunk tracking server 820 queries its chunk database and provides the client 830 with a list of candidate user devices from which chunks can be downloaded. For example, the chunk tracking server 820 may provide the client 830 with a list of sixteen neighbouring user devices having chunks of the VOD content in their local storage and sufficient uploading capability. Each candidate user device identified by the chunk tracking server 820 stores a part of the video file which has not yet been downloaded to the local storage 834 of the client 830. As stated above, candidate user devices identified by the chunk tracking server 820 may be required to satisfy a proximity criterion based on the location of the client 830.
  • the client 830 may then establish connections with a number of the neighbouring user devices returned by the chunk tracking server 820 in the list of candidate user devices.
  • the dynamic multi-links mode disclosed in WO2015/150812 may be employed in conjunction with collaborative VOD distribution mode.
  • the client 830 may initially simultaneously download chunks from two sources.
  • the client 830 may then determine that it can download chunks from two additional sources, and simultaneously download from four sources.
  • the number of download sources may be incrementally increased by the client 830 responsive to a determination that additional TCP links can be supported. That is, the client 830 determines whether chunks can be downloaded from additional sources without adversely affecting streaming or other Internet activity.
  • the number of devices from which the client 830 is able to download chunks may be capped at a predefined level.
  • the client 830 may be able to simultaneously download chunks from up to sixteen neighbouring user devices (the "download sources" 840).
  • the number of download sources is capped at a particular level to avoid excessive TCP overheads.
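The incremental ramp-up of download sources (two, then four, and so on, up to the cap) can be sketched as follows; the halving back-off when additional TCP links cannot be supported is an illustrative assumption, not part of the description:

```python
def next_source_count(current, links_healthy, cap=16):
    """Double the number of simultaneous download sources while
    additional TCP links can be supported without adversely affecting
    streaming, never exceeding the cap; halve (down to one source)
    when links are struggling."""
    if not links_healthy:
        return max(1, current // 2)
    return min(cap, current * 2)
```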
  • the client 830 continues to stream the first subset of the video file from the streaming server 810.
  • these user devices may not collectively store the remainder of the video file (i.e. the portion not stored in the local storage 834 of the client 830).
  • the chunk tracking server 820 may continually update the list of candidate user devices from which the client 830 may download the next portion of the video file.
  • the client 830 may connect, download that portion of the video file, and then disconnect from that user device, thus enabling the client 830 to establish a connection to a separate user device. In the event that no candidate user devices store a particular portion of the video file, the client 830 may download this portion of the video file from the streaming server 810.
  • the client 830 downloads chunks from the download sources 840, starting with the last chunk of the VOD content and downloading chunks in reverse playback order.
  • the client 830 continues to stream video data for the start of the VOD content (i.e. the first subset of the video file) from the streaming server 810 while downloading video data for the end of the VOD content from the download sources 840.
  • the client 830 also continues to update the chunk tracking server 820 with the chunks stored in its local storage 834, so that the chunk tracking server 820 can maintain an accurate record of chunks stored by clients participating in collaborative VOD distribution mode.
  • the client 830 detects that the remainder of the video file has been downloaded to the local storage 834. When this occurs, there is no longer any requirement for the client 830 to continue to stream video data from the streaming server 810 and the client 830 may terminate its connection with the streaming server 810. Further, the client 830 may stop downloading video data from the download sources 840.
  • the download sources 840 identified by the chunk tracking server 820 over the duration of the video file should collectively store a second subset of the video file, that is, the portion of the video file not streamed from the streaming server 810.
  • the first and second subsets of the video file together make up the complete video file.
  • the chunk tracking server 820 may be required to identify candidate user devices storing video data beyond a particular point in the video file (that is, from the end of the first subset onwards). It will be understood by persons skilled in the art that the duration of the first and the second subsets of the video file is not determined until the client 830 has detected that the remainder of the video file is present in its local storage 834.
  • the first subset thus represents the total duration of the video file that is streamed from the streaming server 810 whereas the second subset represents the total duration of the video file that is downloaded from the download sources 840 (and, if necessary, from the streaming server 810 if none of the download sources 840 store a particular portion of the video file).
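A toy simulation illustrates how the two subsets arise. Rates are in chunks per tick and are purely illustrative; the point is that the subset sizes are only fixed once the streamed and downloaded fronts meet:

```python
def simulate_collaborative_vod(total_chunks, stream_rate, download_rate):
    """Stream the file forwards from the streaming server while
    downloading it backwards from the download sources; both stop once
    the fronts meet and the whole file is in local storage. Returns the
    sizes (in chunks) of the first (streamed) and second (downloaded)
    subsets."""
    front, back = 0, total_chunks  # [0, front) streamed; [back, total) downloaded
    while front < back:
        front = min(back, front + stream_rate)
        if front < back:
            back = max(front, back - download_rate)
    return front, total_chunks - front
```

With downloading four times faster than streaming, only a fifth of the file is streamed, which is the sense in which demand on the streaming server 810 is reduced.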
  • the downloading process may be relatively quick, meaning that the majority of the VOD content may be obtained by downloading, rather than streaming. Therefore, demand on the streaming server 810 may be significantly reduced in comparison to conventional VOD streaming.
  • Streamed and downloaded data may be stored in the local storage 834 of the client 830, enabling the client 830 to function as a download source for any user devices wanting to view the same VOD content.
  • the content of the client's local storage 834 will be stored in the chunk database of the chunk tracking server 820. If, at any point, the client buffer level falls below a minimum threshold, playback of the video file may be adversely affected. Therefore, if the buffer level falls below a minimum threshold, the client 830 stops downloading chunks from download sources 840 in order to yield bandwidth for streaming from the streaming server 810. The client 830 then streams the video data from the streaming server 810, in a conventional unicast session, until the predefined criterion is met, as discussed above.
  • the VOD content may be divided into multiple segments, each of which may be given a sequential identification number.
  • Data is streamed (from the streaming server 810) from the beginning of the first segment whilst being downloaded (from the download sources 840) from the end of the first segment.
  • the client 830 may terminate the connection with the streaming server 810 once the remainder of the first segment has been downloaded to its local storage 834. Whilst playing the first segment, the client 830 may download video data from the start of the second segment, from the download sources 840. Chunks of the first segment may be discarded after playback to allow for chunks of the second segment to be downloaded. This process may repeat for further segments.
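The segmentation of section 2.2.1 can be sketched as a planning step; segment size in chunks and the tuple layout are illustrative assumptions:

```python
def plan_segments(total_chunks, segment_size):
    """Divide the VOD content into sequentially numbered segments, each
    of which is then streamed from its start and downloaded from its end
    in turn. Returns (segment_number, start, end) tuples, end exclusive."""
    return [(i // segment_size + 1, i, min(i + segment_size, total_chunks))
            for i in range(0, total_chunks, segment_size)]
```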
  • the client 830 may download as much of the video file as can be stored in its local storage 834. Once the local storage is filled to capacity, the client may stream the remainder of the video file from the streaming server 810.

2.2.2. Chunk portioning
  • Portions of chunks may be downloaded from different download sources 840. This is achieved by specifying a byte-range (for example, 25%) of a particular chunk, as described above in section 1.2.1. Thus a first portion of a chunk, defined by its associated byte-range information, may be downloaded from one source while the remaining portions of the chunk may be downloaded from other sources. Each chunk can then be reconstructed from the byte-ranges downloaded from different sources. Byte-range download may be employed if any HLS chunks exceed a predefined length, for example three seconds. Portioning chunks in this way avoids the download sources 840 becoming locked into a long TCP session, such as for the transfer of a ten second HLS chunk, which can result in issues if the network throughput becomes unstable.
  • the client 830 may be operable to dynamically determine the byte-range to be downloaded from each download source 840, as described above in section 1.2.1.
  • Chunks are downloaded and reconstructed in reverse sequential order (that is, from the end of the VOD content or first segment thereof), so that a chunk is not downloaded until all portions of the "previous" chunk have been downloaded (the "previous" chunk being the sequentially preceding chunk in the download sequence, but the temporally subsequent chunk in the playback sequence). It should be noted that chunks of segmented video files may be downloaded and reconstructed in temporally sequential order, for the second and subsequent segments (as discussed in section 2.2.1).
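The byte-range portioning and reconstruction described above can be sketched as follows; splitting a chunk evenly across sources (e.g. 25% each for four sources, as in the example of section 1.2.1) is one simple policy:

```python
def byte_ranges(chunk_size, n_sources):
    """Split one chunk into contiguous byte-ranges, one per download
    source, returned as (start, end) pairs with end exclusive."""
    step = -(-chunk_size // n_sources)  # ceiling division
    return [(i, min(i + step, chunk_size)) for i in range(0, chunk_size, step)]

def reconstruct(parts):
    """Reassemble a chunk from (start_offset, data) pairs downloaded
    from different sources, regardless of arrival order."""
    return b"".join(data for _, data in sorted(parts))
```

Reconstructing from offsets means no source is tied up in a long TCP session for a whole ten-second chunk.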
  • Figure 9A shows the start of an exemplary collaborative VOD distribution process, at which point the client is streaming video data from the streaming server 810.
  • After the client meets the predefined criterion (for example, a buffer level exceeding the 80% threshold), the client starts to download chunks from the download sources 840, as shown in Figure 9B (for clarity, only two download sources are shown).
  • the exemplary process continues, as shown in Figure 9C, until the client detects that the remainder of the VOD content is present in its local storage (Figure 9D), whereupon the client may terminate the connection with the streaming server 810 and the download sources 840.
  • Figure 10 shows an exemplary VOD streaming procedure from the perspective of the client.
  • the client streams a first subset of a video file from the streaming server, until the client meets the predefined criterion, as described above, in step 1002.
  • the client requests a list of candidate download sources from the chunk tracking server.
  • the chunk tracking server identifies download sources collectively storing a second subset of the video file, wherein the first and second subsets of the video file together make up the complete video file.
  • the client connects to neighbouring download sources in step 1006, and downloads chunks of VOD content in reverse order from the neighbouring devices, in step 1008.
  • the client continues to stream video data from the streaming server while downloading video data from the download sources.
  • in step 1010, the client determines whether the remainder of the VOD content is stored in its local storage. If not, the client continues to download chunks from neighbouring devices, as in step 1008. However, if the remainder of the VOD content is stored in its local storage, the client may terminate the connection with the streaming server and the neighbouring devices, in step 1012.
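The procedure of Figure 10 can be simulated end-to-end with the following sketch. Modelling the buffer criterion of step 1002 as a fixed chunk count, representing neighbouring devices as dictionaries, and all function names are illustrative assumptions rather than the patent's implementation.

```python
def collaborative_vod_download(video, buffer_threshold, sources):
    """Simulate the Figure 10 procedure over `video`, a list of chunk bytes.

    `buffer_threshold` models the predefined criterion of step 1002 as a
    number of chunks to stream before peer download begins; `sources` is a
    list of dicts mapping chunk index to bytes, standing in for the
    neighbouring devices identified by the chunk tracking server.
    """
    total = len(video)
    storage = {}     # local storage, keyed by chunk index
    stream_pos = 0   # next chunk to stream from the streaming server

    # Step 1002: stream the first subset until the predefined criterion is met.
    while stream_pos < buffer_threshold and stream_pos < total:
        storage[stream_pos] = video[stream_pos]
        stream_pos += 1

    # Steps 1004-1008: download the remaining chunks in reverse order from
    # the neighbouring devices while continuing to stream from the server.
    download_pos = total - 1
    while len(storage) < total:                 # step 1010: remainder stored?
        if download_pos not in storage:         # reverse-order peer download
            peer = sources[download_pos % len(sources)]
            storage[download_pos] = peer[download_pos]
        download_pos -= 1
        if stream_pos < total:                  # streaming continues in parallel
            if stream_pos not in storage:
                storage[stream_pos] = video[stream_pos]
            stream_pos += 1

    # Step 1012: the client may now terminate its connections and assemble
    # the complete video file from local storage.
    return b"".join(storage[i] for i in range(total))
```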
3.1. Exemplary client

  • Figure 11 shows an exemplary embodiment of a client 1100 operable to participate in chain transmission mode for live video streaming or collaborative VOD distribution mode.
  • the client 1100 may be operable to establish connections to a streaming server, a speed test server, a chunk tracking server, a STUN server, a redirection server, a DRM server, and a number of user devices, as described in the above embodiments.
  • the client 1100 may comprise a transceiver 1110 operable to receive data from a streaming server, and/or from other user devices.
  • the transceiver 1110 may further be operable to provide the received video data for receipt at a requesting user device.
  • the transceiver 1110 may be operable to upload content to a speed test server so that a determination can be made as to whether the client 1100 satisfies an upload speed criterion.
  • the client 1100 may further comprise a buffer 1120, which may, for example, be implemented as RAM.
  • a processor 1130 may be operable to determine the level of the buffer 1120, or monitor any other parameter of the client 1100 necessary for determining whether the client 1100 meets the predefined criterion for participation in chain transmission and/or collaborative VOD distribution modes.
  • the processor 1130 may further be operable to determine the location of the client 1100, and the client 1100 may provide location information to the chunk tracking server.
  • the processor 1130 may be operable to identify, from a list of candidate sources returned by the chunk tracking server, the candidate source with the lowest network latency.
  • the processor 1130 may be operable to measure the RTT associated with each candidate source.
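A minimal sketch of lowest-RTT candidate selection follows, assuming a caller-supplied `probe` function that measures one round trip in seconds. The probing scheme and the use of a median over three probes are assumptions, not the patent's stated method.

```python
import statistics

def lowest_latency_source(candidates, probe, n_probes=3):
    """Return the candidate source whose median probed RTT is smallest.

    `probe(candidate)` is assumed to issue a ping or lightweight handshake
    and return the measured round-trip time in seconds.
    """
    def median_rtt(candidate):
        return statistics.median(probe(candidate) for _ in range(n_probes))
    return min(candidates, key=median_rtt)
```

Taking a median over several probes makes the selection less sensitive to a single outlying measurement than a one-shot RTT comparison would be.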
  • the processor 1130 may also be operable to determine whether an HLS chunk downloaded from the streaming server exceeds a predefined length. If so, the processor 1130 may divide the HLS chunk into a number of smaller chunks such that none of the resulting chunks exceeds the predefined length.
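Dividing an over-long HLS chunk can be sketched as follows; durations are modelled as plain floats, the three-second default mirrors the example given earlier, and the function name is an assumption for illustration.

```python
import math

def split_hls_chunk(duration, max_len=3.0):
    """Return sub-chunk durations summing to `duration`, none above max_len."""
    n = max(1, math.ceil(duration / max_len))   # fewest equal pieces that fit
    return [duration / n] * n
```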
  • the processor 1130 may further be operable to identify throughput issues in chain transmission mode, so that the client 1100 can request an alternative source from the chunk tracking server. In the event that throughput issues are detected, the processor 1130 may be operable to compare throughput from two different chains and determine whether to continue to receive video data from the client's original chain or from a newly-identified chain included in a list of alternative sources received from the chunk tracking server. In both chain transmission mode and collaborative VOD distribution mode, the processor 1130 may be operable to dynamically determine the byte-range of a chunk to download; this dynamic determination may be based on the time taken to download the previous byte-range of a chunk. Further, the processor 1130 may be operable to determine whether a decryption key received from a user device is suitable for playback of a particular chunk or whether the client 1100 should obtain a new decryption key from the DRM server.
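The dynamic byte-range determination, based on the time taken to download the previous byte-range, might look like the following sketch. The one-second fetch target and the clamping bounds are illustrative assumptions.

```python
def next_byte_range_size(prev_size, prev_secs, target_secs=1.0,
                         lo=16_384, hi=1_048_576):
    """Scale the next byte-range by the observed throughput of the previous
    one, aiming for `target_secs` per fetch, clamped to [lo, hi] bytes."""
    throughput = prev_size / max(prev_secs, 1e-6)   # bytes per second
    return max(lo, min(hi, int(throughput * target_secs)))
```

Under this scheme a slow previous fetch shrinks the next request so that an unstable source is never locked into a long transfer, which is the motivation given for portioning in section 2.2.2.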
  • the client 1100 may further comprise local storage 1140.
  • the local storage 1140 may comprise internal storage, external storage such as optical or magnetic storage media, a smartcard, flash memory, or other suitable means, or cloud-based storage.
  • the processor 1130 may be operable to direct the client 1100 to store the content of the buffer 1120 in the local storage 1140, for participation in collaborative VOD distribution mode. Further, in collaborative VOD distribution mode, the processor 1130 may be operable to determine whether the local storage 1140 contains the complete video file and direct the client 1100 to terminate any connections to the streaming server and any user devices from which VOD content is being downloaded. In live streaming mode, the processor 1130 may be operable to monitor the level of the buffer 1120 in order to determine whether throughput is sufficient, and direct the client 1100 to either establish a connection to a secondary, backup source or to break from the chain.
  • the client 1100 may further comprise a video playback means 1150 for playing the streamed or downloaded live video or VOD content at the client 1100.
  • Figure 12 shows an exemplary embodiment of a chunk tracking server 1200 operable to manage clients participating in chain transmission mode or collaborative VOD distribution mode.
  • the chunk tracking server 1200 may comprise a transceiver 1210 operable to receive content and location information from clients, and provide clients requesting video data with a list of candidate user devices from which live video content may be streamed or VOD content may be downloaded. Further, the transceiver 1210 may be operable to provide other user devices with information associated with the requesting client.
  • the chunk tracking server 1200 may comprise a processor 1220 operable to determine whether two clients are watching the same content and whether the two clients are in proximity to one another.
  • the processor 1220 may be operable to determine whether a chain of clients has reached a predefined maximum chain length, and direct further requesting clients to form a new chain.
  • the processor 1220 may be operable to adjust the maximum chain length depending on network conditions and impose any adjusted maximum chain lengths.
  • the processor 1220 may further be operable to query a content database 1222 and a location database 1224 in order to determine whether any user devices may provide a client with the requested content. On the basis of this determination, the processor 1220 may be operable to generate a list of candidate user devices for sending to the requesting client.
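One plausible way the processor 1220 might combine the content database 1222 and location database 1224 when generating a candidate list is sketched below. The flat dictionary schema and the Euclidean proximity test are assumptions made purely for illustration; the patent does not specify the query mechanism.

```python
def candidate_user_devices(content_id, client_loc, content_db, location_db,
                           max_distance=50.0):
    """Return ids of devices that hold `content_id` and lie within
    `max_distance` (arbitrary units) of the requesting client."""
    # Content query: which devices hold the requested content?
    holders = {dev for dev, held in content_db.items() if content_id in held}

    # Location query: which of those devices are in proximity to the client?
    def near(dev):
        x, y = location_db[dev]
        cx, cy = client_loc
        return ((x - cx) ** 2 + (y - cy) ** 2) ** 0.5 <= max_distance

    return sorted(dev for dev in holders if dev in location_db and near(dev))
```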
  • the transceiver 1210 may be operable to receive, from a client participating in chain transmission mode, an indication that its throughput is insufficient, together with a request for an alternative source. In response to this request, the processor 1220 may be operable to query the content database 1222 and determine whether the client may receive the video data from any alternative sources.
  • the content database 1222 may be updated with the chunks stored in the buffers of the last three user devices in each chain.
  • the location database 1224 may be updated with the location of connected clients and may contain information pertaining to the connectivity of connected clients.
  • the content database 1222 may be updated with the chunks stored in the local storage of each connected client.
  • the content database 1222 and location database 1224 may be linked and may be implemented as a single database, or as multiple databases which may be maintained at multiple servers, each of which may, for example, cover a particular geographical area.
  • location information may be volunteered by clients or may be determined by the chunk tracking server 1200.
  • the chunk tracking server 1200 may comprise a location determining means 1230.
  • the location determining means may be operable to query the location database 1224 in order to determine whether information provided by the client comprises an indication of the location of the client.
  • the location determining means 1230 may query the client to check for its network MAC address. Further, the location determining means 1230 may extract the IP address of the client and query a geolocation service provider using the extracted IP address, in order to establish the location of the client to a reasonable degree of accuracy.
  • the location determining means 1230 may store information defining a geofence in the location database 1224. This information may be stored as coordinate data (for example, latitude and longitude data) or as a satellite view or boundary defined in a suitable mapping application.

3.3. Exemplary system
  • Figure 13 shows an exemplary embodiment of a system operable to employ chain transmission mode or collaborative VOD distribution mode, as described above.
  • the system 1300 may comprise a streaming server 1310 operable to serve content to a client 1370 and a speed test server 1320 operable to determine whether the client 1370 meets a predefined upload speed criterion.
  • the system 1300 may further comprise a STUN server 1340 operable to provide the client 1370 with its external IP address, a redirection server 1350 operable to direct the client 1370 to receive video data from an identified streaming server, and a DRM server 1360 operable to supply the client 1370 with decryption keys for playback of chunks of video data.
  • the system 1300 may further comprise a chunk tracking server 1330 operable to receive content and location information from the client 1370 and maintain a database of the received information.
  • the chunk tracking server 1330 may be operable to provide the client 1370 with information associated with a plurality of user devices 1380.
  • the chunk tracking server 1330 may be implemented as the server described in section 3.2 with respect to Figure 12.
  • the system 1300 may further comprise a plurality of user devices 1380, each of which may be streaming live video or may store VOD content. At least some of the plurality of user devices 1380 may be receiving the same content as the client 1370 and may be operable to serve the client 1370 with the received content. The client 1370 and the user devices 1380 may be operable to update the chunk tracking server 1330 with any changes in content and location information. The client 1370 and the user devices 1380 may each have the same functionality as the client described in section 3.1 with respect to Figure 11.
EP17719696.1A 2016-04-22 2017-04-20 Mediendatenstreamingverfahren und vorrichtung Withdrawn EP3446477A1 (de)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
GB1607067.4A GB2549536B (en) 2016-04-22 2016-04-22 Media data streaming method and apparatus
PCT/GB2017/051107 WO2017182815A1 (en) 2016-04-22 2017-04-20 Media data streaming method and apparatus

Publications (1)

Publication Number Publication Date
EP3446477A1 true EP3446477A1 (de) 2019-02-27

Family

ID=58633042

Family Applications (1)

Application Number Title Priority Date Filing Date
EP17719696.1A Withdrawn EP3446477A1 (de) 2016-04-22 2017-04-20 Mediendatenstreamingverfahren und vorrichtung

Country Status (3)

Country Link
EP (1) EP3446477A1 (de)
GB (1) GB2549536B (de)
WO (1) WO2017182815A1 (de)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CA3087335A1 (en) * 2018-01-05 2019-07-11 Xirgo Technologies, Llc Scrub and playback of video buffer over wireless
CN108366277B (zh) * 2018-03-30 2021-06-15 Wuhan Douyu Network Technology Co., Ltd. Bullet-screen comment server connection method, client and readable storage medium
CN112312057A (zh) 2020-02-24 2021-02-02 Beijing ByteDance Network Technology Co., Ltd. Multimedia conference data processing method, apparatus and electronic device
CN112637669A (zh) * 2020-12-18 2021-04-09 Beijing Inspur Data Technology Co., Ltd. Streaming media data processing method and related apparatus
CN113285947B (zh) * 2021-05-21 2022-04-26 Fiberhome Telecommunication Technologies Co., Ltd. Method and apparatus for handover between HLS live streaming and multicast live streaming

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8707375B2 (en) * 2006-04-05 2014-04-22 At&T Intellectual Property I, L.P. Peer-to-peer video on demand techniques
US8838823B2 (en) * 2006-06-27 2014-09-16 Thomson Licensing Performance aware peer-to-peer content-on-demand
US8159949B2 (en) * 2007-05-03 2012-04-17 Abroadcasting Company Linked-list hybrid peer-to-peer system and method for optimizing throughput speed and preventing data starvation
CN101378494B (zh) * 2008-10-07 2011-04-20 ZTE Corporation System and method for implementing internet television media interaction
US8447875B2 (en) * 2010-03-10 2013-05-21 Thomson Licensing Unified cache and peer-to-peer method and apparatus for streaming media in wireless mesh networks
US20120297405A1 (en) * 2011-05-17 2012-11-22 Splendorstream, Llc Efficiently distributing video content using a combination of a peer-to-peer network and a content distribution network
US10346595B2 (en) * 2014-05-27 2019-07-09 Arris Enterprises, Inc. System and apparatus for fault-tolerant configuration and orchestration among multiple DRM systems

Also Published As

Publication number Publication date
GB2549536B (en) 2020-12-02
GB2549536A (en) 2017-10-25
WO2017182815A1 (en) 2017-10-26

Similar Documents

Publication Publication Date Title
EP3595268B1 (de) Method for distributing streaming media resources, system, edge node and central dispatch system
US10516717B2 (en) Network-initiated content streaming control
CN112369038B (zh) Method for distributing media in a live uplink streaming service
US20200336535A1 (en) Method and apparatus for signaling of buffer content in a peer-to-peer streaming network
US8826349B2 (en) Multicast adaptive stream switching for delivery of over the top video content
EP3446477A1 (de) Mediendatenstreamingverfahren und vorrichtung
US9332051B2 (en) Media manifest file generation for adaptive streaming cost management
EP3962092B1 (de) Method and apparatus for receiving multicast video using a playlist
RU2647654C2 (ru) System and method for delivering audiovisual content to a client device
US20130114597A1 (en) Proxy server, relay method, communication system, relay control program, and recording medium
US8176192B2 (en) Networked transmission system and method for stream data
US20120124179A1 (en) Traffic management in adaptive streaming protocols
JP2018507660A (ja) Method and system for adaptive virtual broadcasting of digital content
KR20100043190A (ko) Auxiliary peer-to-peer media streaming method and auxiliary peer-to-peer network connection method
WO2012071998A1 (zh) Media file download method and client in a content distribution network
JP2010027053A (ja) Data distribution system and method
WO2012074777A1 (en) Method and apparatus for distributing video
EP2815557B1 (de) Support of P2P streaming
Bouten et al. A multicast-enabled delivery framework for QoE assurance of over-the-top services in multimedia access networks
CN109510868B (zh) Method, apparatus, terminal device and storage medium for establishing a P2P network
Alomari A novel adaptive caching mechanism for video on demand system over wireless mobile network
WO2020104300A1 (en) Adaptative bit rate data casting
Noh et al. Time-shifted streaming in a peer-to-peer video multicast system
O’Neill Peer Assisted Multicast Streaming for On-Demand Applications

Legal Events

Date Code Title Description
STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: UNKNOWN

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: THE INTERNATIONAL PUBLICATION HAS BEEN MADE

PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: REQUEST FOR EXAMINATION WAS MADE

17P Request for examination filed

Effective date: 20181122

AK Designated contracting states

Kind code of ref document: A1

Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR

AX Request for extension of the european patent

Extension state: BA ME

DAV Request for validation of the european patent (deleted)
DAX Request for extension of the european patent (deleted)
RAP1 Party data changed (applicant data changed or rights of an application transferred)

Owner name: ALFASAGE LIMITED

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: EXAMINATION IS IN PROGRESS

17Q First examination report despatched

Effective date: 20200826

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: EXAMINATION IS IN PROGRESS

GRAP Despatch of communication of intention to grant a patent

Free format text: ORIGINAL CODE: EPIDOSNIGR1

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: GRANT OF PATENT IS INTENDED

INTG Intention to grant announced

Effective date: 20210803

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: THE APPLICATION IS DEEMED TO BE WITHDRAWN

18D Application deemed to be withdrawn

Effective date: 20211103