WO2016155800A1 - Accessing content

Accessing content

Info

Publication number
WO2016155800A1
Authority
WO
WIPO (PCT)
Prior art keywords
content
metadata
chunk
player
references
Prior art date
Application number
PCT/EP2015/057059
Other languages
French (fr)
Inventor
Greg McKESEY
Martin Soukup
Original Assignee
Irdeto B.V.
Priority date
Filing date
Publication date
Application filed by Irdeto B.V. filed Critical Irdeto B.V.
Priority to PCT/EP2015/057059
Publication of WO2016155800A1

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L65/00 Network arrangements, protocols or services for supporting real-time applications in data packet communication
    • H04L65/60 Network streaming of media packets
    • H04L65/65 Network streaming protocols, e.g. real-time transport protocol [RTP] or real-time control protocol [RTCP]
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L65/00 Network arrangements, protocols or services for supporting real-time applications in data packet communication
    • H04L65/60 Network streaming of media packets
    • H04L65/61 Network streaming of media packets for supporting one-way streaming services, e.g. Internet radio
    • H04L65/612 Network streaming of media packets for supporting one-way streaming services, e.g. Internet radio for unicast

Definitions

  • Over-the-top content delivery models, in which content is provided/delivered by a content provider to a user via a multi/general-purpose network infrastructure (such as the Internet) that is not under the control of the content provider, are becoming increasingly common.
  • An example of this is Internet Protocol streaming of audio content, video content and other media content.
  • the network conditions experienced by users or groups of users can vary wildly. For example, a user may choose to consume an item of content at home via a device connected to a high-speed, low-latency broadband network whilst another user may choose to consume the same item of content via a mobile device connected to a low-speed, high-latency mobile network. Equally, the content processing capabilities of different user devices can vary significantly.
  • a powerful home computer with hardware video rendering capability may be capable of rendering a high bit-rate video content stream smoothly, whereas a smart phone may be limited to rendering some types of video streams using software only, thus requiring a different encoding type, or a lower bit-rate, for the video stream.
  • HTTP Dynamic Streaming (see, for example, http://www.adobe.com/uk/products/hds-dynamic-streaming/faq.html, the entire contents of which are incorporated herein by reference)
  • Microsoft Smooth Streaming (see, for example, http://www.iis.net/downloads/microsoft/smooth-streaming, the entire contents of which are incorporated herein by reference)
  • Dynamic Adaptive Streaming over HTTP or DASH see, for example,
  • such adaptive streaming technology often requires a user device to first download a description file (or manifest or playlist) that, among other things, lists or identifies locations of content segments or chunks for a given variant.
  • such systems can typically introduce a delay of 2-3 seconds as a best case, and more than 10 seconds as a worst case, when the user changes between different items of content or between different variants. This is because the user device first needs to download or obtain a new description file for the new item of content or for the new variant of the current item of content.
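  • As a purely illustrative sketch (not part of the patent disclosure), the following Python fragment shows the two network round trips implied by such adaptive streaming: the player must first fetch and parse a description file (here an HLS-style media playlist; the URL handling and tag format are assumptions) before it can request any content chunk. Switching to a new item of content or a new variant repeats the first round trip, which is the source of the delay described above.

```python
# Illustrative sketch: a simplified HLS-style client that must first download
# and parse a media playlist before it can request any content chunk.
# The playlist URL and tag handling are assumptions for illustration only.
from urllib.parse import urljoin
from urllib.request import urlopen

def fetch_text(url: str) -> str:
    with urlopen(url) as resp:
        return resp.read().decode("utf-8")

def chunk_urls_from_playlist(playlist_url: str) -> list[str]:
    """Return absolute chunk URLs listed in a media playlist."""
    lines = fetch_text(playlist_url).splitlines()          # round trip 1: the playlist
    return [urljoin(playlist_url, ln.strip())
            for ln in lines
            if ln.strip() and not ln.startswith("#")]       # non-tag lines are chunk URIs

def first_chunk(playlist_url: str) -> bytes:
    urls = chunk_urls_from_playlist(playlist_url)
    return urlopen(urls[0]).read()                          # round trip 2: the first chunk
```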
  • a method for enabling a content player to access an item of content from a content provider, wherein the item of content comprises a plurality of content chunks, the method comprising the content player: generating, in the content player, content metadata based at least in part on content selection data, wherein the content metadata comprises one or more references, each reference being either (a) a content chunk reference or (b) a content item reference that references one or more respective content chunk references, wherein each content chunk reference corresponds to a respective content chunk of the plurality of content chunks; and using at least one of the content chunk references to obtain at least one respective content chunk.
  • the method comprises: obtaining, from the content provider, further content metadata, wherein the further content metadata comprises one or more further content chunk references, each further content chunk reference corresponding to a further respective content chunk of the plurality of content chunks; and using at least one of the one or more further content chunk references to obtain at least one or more further respective content chunks.
  • a method for enabling a local content player to access an item of content from a remote content provider, wherein the item of content comprises a plurality of content chunks, the method comprising a metadata generator: receiving a request from the local content player; in response to the request, generating, in the metadata generator, content metadata based at least in part on content selection data, wherein the content metadata comprises one or more references, each reference being either (a) a content chunk reference or (b) a content item reference that references one or more respective content chunk references, wherein each content chunk reference corresponds to a respective content chunk of the plurality of content chunks; and providing the content metadata to the local content player to thereby enable the local content player to use at least one of the one or more content chunk references to obtain at least one or more respective content chunks.
  • the method comprises: receiving a further request from the local content player; in response to the further request, obtaining, from the remote content provider, further content metadata, wherein the further content metadata comprises one or more further content chunk references, each further content chunk reference corresponding to a further respective content chunk of the plurality of content chunks; providing the further content metadata to the local content player to thereby enable the local content player to use at least one of the one or more further content chunk references to obtain at least one or more respective further content chunks.
  • the method comprises: receiving a request from the local content player for the at least one or more respective content chunks; obtaining the at least one or more respective content chunks from the remote content provider; and providing the at least one or more respective content chunks to the local content player.
  • a method for enabling a content player to access an item of content from a content provider, wherein the item of content comprises a plurality of content chunks, the method comprising the content provider: generating content selection data; and providing the content selection data; wherein the content selection data is configured to enable a metadata generation module of the content player, or local to the content player, to generate content metadata, wherein the content metadata comprises one or more references, each reference being either (a) a content chunk reference or (b) a content item reference that references one or more respective content chunk references, wherein each content chunk reference corresponds to a respective content chunk of the plurality of content chunks.
  • the content selection data may comprise electronic program guide data.
  • the at least one of the content chunk references may be a default content chunk reference corresponding to a respective default content chunk.
  • the content metadata may be a manifest or a playlist corresponding to the item of content.
  • the one or more references may be generated based at least in part on a naming convention. In any of the first, second or third aspects (or embodiments thereof), the one or more references may be generated based at least in part on a network time synchronization signal.
  • an apparatus arranged to carry out a method according to any of the first, second, third or fourth aspects (or embodiments thereof).
  • a computer program which, when executed by one or more processors, causes the one or more processors to carry out a method according to any of the first, second, third or fourth aspects (or embodiments thereof).
  • the computer program may be stored on a computer-readable medium.
  • Figure 2a schematically illustrates an example system for enabling a content player to access an item of content
  • Figure 2b schematically illustrates an example of content metadata
  • Figure 2c schematically illustrates a further example of content metadata
  • Figure 3 is a sequence diagram schematically illustrating an example method for enabling a content player to access an item of content;
  • Figure 4 schematically illustrates an exemplary system according to an embodiment of the invention
  • Figure 5c is a sequence diagram schematically illustrating an example method of using the system of figure 4.
  • Figure 6 schematically illustrates an exemplary system according to an embodiment of the invention
  • Figure 7 is a sequence diagram schematically illustrating an example method of using the system of figure 6.
  • FIG. 1 schematically illustrates an example of a computer system 100.
  • the system 100 comprises a computer 102.
  • the computer 102 comprises: a storage medium 104, a memory 106, a processor 108, an interface 110, a user output interface 112, a user input interface 114 and a network interface 116.
  • the storage medium 104 may be any form of non-volatile data storage device such as one or more of a hard disk drive, a magnetic disc, an optical disc, a ROM, etc.
  • the storage medium 104 may store an operating system for the processor 108 to execute in order for the computer 102 to function.
  • the storage medium 104 may also store one or more computer programs (or software or instructions or code).
  • the memory 106 may be any random access memory (storage unit or volatile storage medium) suitable for storing data and/or computer programs (or software or instructions or code).
  • the processor 108 may be any data processing unit suitable for executing one or more computer programs (such as those stored on the storage medium 104 and/or in the memory 106), some of which may be computer programs according to embodiments of the invention or computer programs that, when executed by the processor 108, cause the processor 108 to carry out a method according to an embodiment of the invention and configure the system 100 to be a system according to an embodiment of the invention.
  • the processor 108 may comprise a single data processing unit or multiple data processing units operating in parallel or in cooperation with each other.
  • the processor 108 in carrying out data processing operations for embodiments of the invention, may store data to and/or read data from the storage medium 104 and/or the memory 106.
  • the interface 110 may be any unit for providing an interface to a device 122 external to, or removable from, the computer 102.
  • the device 122 may be a data storage device, for example, one or more of an optical disc, a magnetic disc, a solid-state storage device, etc.
  • the device 122 may have processing capabilities - for example, the device may be a smart card.
  • the interface 110 may therefore access data from, or provide data to, or interface with, the device 122 in accordance with one or more commands that it receives from the processor 108.
  • the user input interface 114 is arranged to receive input from a user, or operator, of the system 100.
  • the user may provide this input via one or more input devices of the system 100, such as a mouse (or other pointing device) 126 and/or a keyboard 124, that are connected to, or in communication with, the user input interface 114.
  • the user may provide input to the computer 102 via one or more additional or alternative input devices (such as a touch screen).
  • the computer 102 may store the input received from the input devices via the user input interface 114 in the memory 106 for the processor 108 to subsequently access and process, or may pass it straight to the processor 108, so that the processor 108 can respond to the user input accordingly.
  • the user output interface 112 is arranged to provide a graphical/visual and/or audio output to a user, or operator, of the system 100.
  • the processor 108 may be arranged to instruct the user output interface 112 to form an image/video signal representing a desired graphical output, and to provide this signal to a monitor (or screen or display unit) 120 of the system 100 that is connected to the user output interface 112.
  • the processor 108 may be arranged to instruct the user output interface 112 to form an audio signal representing a desired audio output, and to provide this signal to one or more speakers 121 of the system 100 that are connected to the user output interface 112.
  • the network interface 116 provides functionality for the computer 102 to download data from and/or upload data to one or more data communication networks.
  • the network 230 may be any kind of data communication network suitable for communicating or transferring data between the device 210 and the content provider 220.
  • the network 230 may comprise one or more of: a wide area network, a metropolitan area network, the Internet, a wireless communication network, a wired or cable communication network, a satellite communications network, a telephone network, etc.
  • the device 210 and the content provider 220 may be arranged to communicate with each other via the network 230 via any suitable data communication protocol.
  • the data communication protocol may be TCP/IP, UDP, SCTP, etc.
  • the content provider 220 may be arranged to provide a plurality of items of content 224 that each relate to (or encode or represent) the same content but that are variants of each other with different characteristics. Such items of content related this way shall be referred to herein as "variants" of each other.
  • variants of an item of content 224 may differ by one or more of: encoding quality for the content, bit rate, regionalization, audio track, encoding format, etc.
  • each variant of an item of content 224 may have a respective set of encoding characteristics reflecting a different quality level when rendered by a content player.
  • the item of content 224 comprises one or more content chunks 225.
  • a content chunk 225 is a portion (or part or segment or section) of the item of content 224.
  • a content chunk 225 may correspond to a portion or element of the content represented by the item of content 224.
  • a plurality of content chunks 225 may, or may not, together form a contiguous part of the item of content 224.
  • a content chunk 225 may, or may not, itself form a contiguous part of the item of content 224.
  • the content chunks 225 may be non-overlapping, overlapping, partially overlapping, etc.
  • the length of the content chunks 225 may be fixed/predetermined, variable, dependent on one or more other parameters, etc.
  • the content chunks 225 may be the result of any full or partial partitioning of the item of content 224.
  • the content chunks 225 may correspond to (or represent or encode) one or more respective frames or groups of frames of that video content; if the item of content 224 comprises audio content, the content chunks 225 may correspond to (or represent or encode) one or more respective time periods of audio content (e.g. a number of seconds of audio).
  • the content chunks 225 may comprise one or more parts (or elements) of an MPEG-2 transport stream or an MPEG-4 transport stream or H.264 encoded content.
  • the content chunks 225 of an item of content 224 may be arranged sequentially in a rendering/output order for the item of content 224.
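  • The relationship between items of content 224, their variants and their content chunks 225 can be pictured with a small data model. The following sketch is illustrative only; the class and field names are hypothetical and are not defined by the patent.

```python
# Illustrative data model only; the class and field names are hypothetical and
# are not taken from the patent.
from dataclasses import dataclass, field

@dataclass
class ContentChunk:                  # corresponds to a content chunk 225
    index: int                       # position in the rendering/output order
    url: str                         # where the chunk can be fetched from

@dataclass
class ContentItem:                   # corresponds to an item of content 224
    title: str
    bitrate_kbps: int                # one characteristic that distinguishes variants
    chunks: list[ContentChunk] = field(default_factory=list)

# Two variants of the same underlying content, differing only in bit rate.
variants = [ContentItem("movie", 800), ContentItem("movie", 3000)]
```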
  • the content provider 220 may be arranged to provide content selection data 222.
  • the content selection data 222 may be generated by the content provider 220.
  • the content selection data 222 can be any data that identifies one or more amounts of content that the content provider 220 is arranged to provide as respective items of content 224.
  • the content provider 220 may be arranged to provide content metadata 223.
  • the content metadata 223 will be described shortly in more detail below.
  • the content provider 220 may be a computer system, such as the exemplary computer system 100 shown in figure 1.
  • the content provider 220 may be a single server.
  • the content provider 220 may comprise a plurality of such computer systems 100 (such as a plurality of servers). If the content provider 220 comprises a plurality of computer systems 100, these computer systems may communicate (or be managed or be otherwise coordinated) via one or more networks. These one or more networks may be any kind of data communication network suitable for communicating or transferring data between two or more computer systems 100 of the plurality of computer systems 100.
  • the one or more networks may comprise any of: a local area network, a wide area network, a metropolitan area network, the Internet, a wireless communication network, a wired or cable communication network, a satellite communications network, a telephone network, etc.
  • One or more computer systems 100 of the plurality of computer systems 100 may be located in a geographical area different to that of one or more other computer systems 100 of the plurality of computer systems.
  • One or more computer systems 100 of the plurality of computer systems 100 may have, or perform, corresponding particular roles/tasks of the content provider 220, such as one or more of: providing particular items of content 224, providing particular content chunks 225, generating and/or providing content selection data 222 (or portions thereof), generating and/or providing content metadata 223 (or portions thereof), coordinating (or administering or otherwise managing) one or more other computer systems 100 of the plurality of computer systems, acting as a gateway (or router or load-balancer) for one or more other computer systems 100 of the plurality of computer systems.
  • the content provider 220 may comprise what is usually termed a Content Delivery Network. Content Delivery Networks are well-known and are therefore not described in further detail herein.
  • the content player 212 may be arranged to render an item of content 224 for a user. For example: if at least part of the item of content 224 represents/encodes video content then the content player 212 may display the video content to the user 201 (e.g. via the display 120); if at least part of the item of content 224 represents/encodes audio content then the content player 212 may play the audio content to the user 201 (e.g. via one or more speakers 121 ), etc.
  • the content player 212 may be arranged to display or otherwise use the content selection data 222. For example, if the content selection data 222 comprises EPG data then the content player 212 may display a corresponding EPG.
  • the content metadata 223 can comprise any data that identifies (specifies or otherwise addresses, references or provides a link to) at least part of one or more items of content 224 that the content provider 220 is arranged to provide.
  • the content metadata 223 may comprise one or more references to enable the content player 212 to access any of: one or more items of content 224; part of one or more items of content 224; one or more content chunks 225; etc.
  • Figure 2b schematically illustrates a particular example of the content metadata 223.
  • Figure 2b shows content metadata 223 and one or more items of content 224-1, 224-2 as described previously.
  • Each item of content 224-1, 224-2 comprises one or more respective content chunks 225-1, 225-2 as described previously.
  • Whilst figure 2b illustrates two items of content 224 (namely the items of content labeled 224-1 and 224-2), it will be appreciated that this is merely for illustrative purposes and that different numbers of items of content 224 can be used.
  • Whilst figure 2b illustrates each item of content 224 having three respective content chunks 225, it will be appreciated that this is merely for illustrative purposes and that different items of content 224 can have different respective numbers of content chunks 225.
  • the content metadata 223 comprises one or more content chunk references 2100.
  • a content chunk reference 2100 may correspond to (or specify, identify or otherwise address or provide a link to) a respective content chunk 225-1, 225-2 (as illustrated by the dashed lines in figure 2b). If the content metadata 223 comprises a plurality of content chunk references 2100 then one or more of these content chunk references 2100 may correspond to respective content chunks 225-1 of a first item of content 224-1 and one or more of these content chunk references 2100 may correspond to respective content chunks 225-2 of a second item of content 224-2.
  • the second item of content 224-2 may be a variant of the first item of content 224-1.
  • the content chunk references 2100 may correspond to respective content chunks 225 from different items of content 224.
  • the content metadata 223 may correspond to a particular amount of content (e.g. a particular movie), in that the content metadata 223 may comprise content chunk references 2100 corresponding to each content chunk 225 of each variant of an item of content 224 representing/encoding that amount of content.
  • all of the content chunk references 2100 of the content metadata 223 may correspond to respective content chunks 225-1, 225-2 of the same item of content 224-1, 224-2.
  • Each content chunk reference 2100 may comprise one or more of any of, or one or more of any part of any of: a Uniform Resource Identifier (URI), a Uniform Resource Locator (URL), a hyperlink, a file system reference, an offset for an array (or a file or a data stream or a data sequence), a file name, etc.
  • the content metadata 223 may comprise a manifest (or variant playlist).
  • the manifest (or variant playlist) may be any of an HTTP Live Streaming (HLS) variant playlist, a Dynamic Adaptive Streaming over HTTP (DASH) manifest, HTTP Dynamic Streaming (HDS) variant playlist, etc.
  • a content chunk reference 2100 may be formed by (or may follow or may comprise elements which follow) a content chunk naming convention used by the content provider 220.
  • the content chunk naming convention may enable a content chunk reference 2100 (or a part of a content chunk reference 2100) to be formed based on one or more characteristics of the corresponding item of content 224. Such characteristics may comprise any of: encoding quality for the content, bit rate, regionalization, audio track, encoding format, title, author, publisher, director, timestamp, etc.
  • a content chunk reference 2100 may comprise a concatenation of the title of the corresponding item of content and the bit rate of the corresponding item of content, separated by a predetermined character (an illustrative sketch of such a convention is given below).
  • the content chunk naming convention may be specific to any combination of any of: the content provider 220; the content player 212; the content device 210; and the network 230.
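  • The following is a minimal sketch of such a naming convention, assuming the reference is formed from the title, the bit rate and a chunk index joined by a predetermined separator character; the base URL, separator and field order are assumptions for illustration only.

```python
# Sketch of a content chunk naming convention. The separator, field order and
# base URL are assumptions for illustration; the patent leaves these unspecified.
BASE_URL = "https://content.example.com/"    # hypothetical content provider URL
SEPARATOR = "_"                              # the "predetermined character"

def chunk_reference(title: str, bitrate_kbps: int, chunk_index: int) -> str:
    """Form a content chunk reference 2100 from characteristics of the item of content."""
    name = SEPARATOR.join([title, str(bitrate_kbps), f"{chunk_index:05d}"])
    return f"{BASE_URL}{name}.ts"

# e.g. chunk_reference("movie", 3000, 7) -> "https://content.example.com/movie_3000_00007.ts"
```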
  • Figure 2c schematically illustrates a further example of content metadata 223.
  • Figure 2c shows content metadata 223-a, one or more content metadata 223-b, and one or more items of content 224-1, 224-2 as described above with reference to figure 2b.
  • Each item of content 224-1, 224-2 comprises one or more respective content chunks 225-1, 225-2 as described above with reference to figure 2b.
  • the one or more content metadata 223-b are content metadata 223 as described above with reference to figure 2b, and shall be referred to below as one or more "variant content metadata" 223-b.
  • the content metadata 223-a shall be referred to as "master content metadata" 223-a.
  • the master content metadata 223-a may comprise content metadata 223 as described above with reference to figure 2b (and, therefore, may comprise one or more content chunk references 2100). However, the master content metadata 223-a comprises one or more content item references 2200.
  • a content item reference 2200 may correspond to (or specify, identify or otherwise address or provide a link to) one or more content chunk references 2100.
  • a content item reference 2200 may correspond to (or specify, identify or otherwise address or provide a link to) one or more of the variant content metadata 223-b.
  • a content item reference 2200 may correspond to a single item of content 224-1, 224-2 (e.g. if the content item reference 2200 identifies variant content metadata 223-b which itself references a single item of content 224-1, 224-2).
  • a content item reference 2200 may correspond to two or more items of content 224-1, 224-2 (e.g. if the content item reference 2200 identifies variant content metadata 223-b which itself references content chunks 225 from two or more items of content 224-1, 224-2).
  • a content item reference 2200 may comprise one or more of any of, or one or more of any part of any of: a Uniform Resource Identifier (URI), a Uniform Resource Locator (URL), a hyperlink, a file system reference, an offset for an array (or a file or a data stream or a data sequence), a file name, etc.
  • the master content metadata 223-a may comprise a playlist (or master playlist or master manifest).
  • the playlist may be an HTTP Live Streaming (HLS) master playlist, HTTP Dynamic Streaming (HDS) master playlist, etc.
  • Whilst figure 2c illustrates the master content metadata 223-a referencing two variant content metadata 223-b via respective content item references 2200, it will be appreciated that this is merely for illustrative purposes and that the master content metadata 223-a may reference any number of variant content metadata 223-b via one or more content item references 2200.
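  • The distinction between master content metadata 223-a (holding content item references 2200) and variant content metadata 223-b (holding content chunk references 2100) can be sketched as follows; the structure, field names and URLs are illustrative assumptions rather than a format prescribed by the patent.

```python
# Structure-only sketch of master vs. variant content metadata; field names and
# URLs are illustrative assumptions, not a format defined by the patent.
master_content_metadata = {                    # cf. master content metadata 223-a
    "content_item_references": [               # cf. content item references 2200
        {"bitrate_kbps": 800,  "variant_metadata_url": "https://content.example.com/movie_800.m3u8"},
        {"bitrate_kbps": 3000, "variant_metadata_url": "https://content.example.com/movie_3000.m3u8"},
    ],
}

variant_content_metadata = {                   # cf. variant content metadata 223-b
    "content_chunk_references": [              # cf. content chunk references 2100
        "https://content.example.com/movie_3000_00000.ts",
        "https://content.example.com/movie_3000_00001.ts",
    ],
}
```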
  • Figure 3 is a sequence diagram schematically illustrating an example method 300 for using the system 200 of figure 2a.
  • the content player 212 receives or obtains the content selection data 222 from the content provider 220.
  • the content selection data 222 may be received via the network 230.
  • the content selection data 222 may be received in response to the request of the step 302 (i.e. in response to the content provider 220 receiving the request at the step 302).
  • the content selection data 222 may be received by the content player 212 without the need for any request like that in the step 302 - for example, the content provider 220 may initiate the transmission of the content selection data 222 to the content player 212.
  • the steps 302 and 304 are optional, because the content player 212 may already be storing, or may already have access to, the content selection data 222.
  • the content player 212 may have previously been provided with, or have previously obtained, the content selection data 222.
  • the content player 212 may have previously carried out either or both of the steps 302 and 304 and may, then, have stored the content selection data 222 for subsequent use. In this case, the steps 302 and 304 need not be repeated and the content player 212 can simply access or obtain the stored content selection data 222.
  • content is selected (or identified or otherwise chosen). This may be performed in any of a number of available ways, examples of which are set out below.
  • the step 306 may comprise a user 201 selecting content using the content player 212.
  • the step 306 may comprise a user 201 selecting content using an additional device or system communicatively interfaced with the content player 212 (such as a remote control device, e.g. a traditional infra-red remote control device; a mobile computing device - e.g. a tablet computer or a mobile phone - running remote control software; a browser based EPG application running on the client device 210 or on a mobile device connected to the client device - e.g. via a wireless network connection or a Bluetooth connection; etc.).
  • the user 201 may select the content based on the content selection data 222.
  • the content player 212 may have previously displayed (or processed or otherwise rendered) the content selection data 222 so as to enable the user 201 to select the content. If the content selection data 222 comprises EPG data, the content player 212 (or the additional device) may display a corresponding EPG (based on that EPG data) to the user 201. The user 201 may select the content by interacting with the EPG. EPGs (and methods of displaying and interacting with EPGs) are well-known and shall not, therefore, be described in more detail herein.
  • the content player 212 may select the content itself.
  • the content player 212 may have stored (or have access to or otherwise be able to generate) rules (or methods or parameters) to be processed by the content player 212 to enable the content player 212 to automatically select content. These rules may be based in whole or in part on one or more of: previous user content selections, user preferences, user information, previous user actions, content selection data 222, etc.
  • the content player 212 may automatically select content related to previously selected content.
  • the content player 212 may select content which comprises an episode of a television serial for which the user 201 has previously selected content comprising a previous episode of that television serial.
  • the content player 212 requests master content metadata 223-a (described above with reference to figure 2c) based on, or corresponding to, the selected content.
  • the content player 212 requests the master content metadata 223-a from the content provider 220.
  • the content player 212 may use the content selection data 222 to request the master content metadata 223-a.
  • the content selection data 222 may comprise data that identifies (or which the content player 212 can use to identify) a content provider 220 associated with the selected content and to which the request for the master content metadata 223-a should be issued or communicated, and the content player 212 may then issue or communicate the request for the master content metadata 223-a to the identified content provider 220, where this request identifies the selected content (so that the content provider 220 knows which master content metadata 223-a to provide back to the content player 212, namely master content metadata 223-a corresponding to the selected content).
  • the content player 212 may send the request to a predetermined content provider 220 (e.g. a content provider 220 that corresponds to the content player 212).
  • the identification, in the request, of the selected content may be based on data in the content selection data 222 (e.g. a content identifier).
  • the steps 308 and 310 are optional because, as described above, there are numerous configurations for content metadata 223 (such as those described with reference to figures 2b and 2c).
  • the steps 308 and 310 may be carried out when master content metadata 223-a is being used (as in the example shown in figure 2c), but may be omitted when master content metadata 223-a is not being used (as in the example shown in figure 2b).
  • the master content metadata 223-a may, for example, comprise one or more content item references 2200 that correspond to the selected content (e.g. each content item reference 2200 corresponds to a variant of an item of content encoding the selected content in a different way or with different characteristics), and the content player 212 may select one of those one or more content item references 2200 (e.g. one that matches, or corresponds to, a desired rendering quality or bit rate) and then request variant content metadata 223-b identified by that selected content item reference 2200.
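  • A minimal sketch of such a selection step is given below, assuming the master content metadata is represented as in the earlier structure-only sketch and that the selection criterion is the highest bit rate not exceeding a desired bit rate; a real player might also weigh measured throughput, codec support or display size.

```python
# Sketch of choosing a content item reference 2200 whose bit rate best matches a
# desired bit rate (the selection policy and dictionary layout are assumptions).
def select_content_item_reference(master_metadata, desired_bitrate_kbps):
    refs = master_metadata["content_item_references"]
    # Prefer the highest bit rate that does not exceed the desired bit rate,
    # otherwise fall back to the lowest-bit-rate variant available.
    eligible = [r for r in refs if r["bitrate_kbps"] <= desired_bitrate_kbps]
    if eligible:
        return max(eligible, key=lambda r: r["bitrate_kbps"])
    return min(refs, key=lambda r: r["bitrate_kbps"])
```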
  • the content player 212 may request content metadata 223 from the content provider 220.
  • This content metadata 223 is content metadata 223 as has been described previously with reference to figure 2b.
  • the content player 212 requests the content metadata 223 based on the selected content.
  • the content player 212 may use the content selection data 222 to request the content metadata 223.
  • the content selection data 222 may comprise data that identifies (or which the content player 212 can use to identify) a content provider 220 associated with the selected content and to which the request for the content metadata 223 should be issued or communicated, and the content player 212 may then issue or communicate the request for the content metadata 223 to the identified content provider 220, where this request identifies the selected content (so that the content provider 220 knows which content metadata 223 to provide back to the content player 212, namely content metadata 223 corresponding to the selected content).
  • the content player 212 may send the request to a predetermined content provider 220 (e.g. a content provider 220 that corresponds to the content player 212).
  • the identification, in the request, of the selected content may be based on data in the content selection data 222 (e.g. a content identifier).
  • the request may identify or specify a desired rendering quality or bit rate (or other characteristic for the selected content), so that the content provider 220 may select content metadata 223 corresponding to that desired rendering quality or bit rate (or other characteristic for the selected content).
  • the content player 212 receives the content metadata 223 from the content provider 220, i.e. the content provider 220 receives the request issued at the step 312 and provides corresponding content metadata 223 to the content player 212.
  • the content player 212 requests a content chunk 225 using at least one content chunk reference 2100 of the content metadata 223 received at the step 314.
  • a content chunk reference 2100 identifies, or addresses or links to, a corresponding content chunk 225.
  • the content player 212 may access, or request/obtain, a content chunk 225 identified, or addressed or linked to, by a content chunk reference 2100 in the received content metadata 223.
  • the step 316 may comprise the content player 212 requesting the content chunk 225 from the content provider 220.
  • the content player 212 receives the requested content chunk 225.
  • the step 318 may comprise the content player 212 receiving the content chunk 225 from the content provider 220.
  • the content player 212 requests a further content chunk 225, using a further content chunk reference 2100 of the content metadata 223.
  • the content player 212 requests the further content chunk 225 from the content provider 220.
  • the step 326 may be initiated (or carried out) during (or concurrent with or partially concurrent with) the step 322, so that this further content chunk 225 is received and ready for rendering before rendering of the current content chunk 225 has finished (thereby providing seamless content output).
  • the content player 212 receives the further content chunk 225 from the content provider 220.
  • the content player 212 renders, or outputs, a further section of content 330.
  • the sequence of steps 312, 314, 316, 318 and 322 may be repeated one or more times. This may occur for a number of reasons, such as: (a) new content may be selected, e.g. by the user 201 (i.e. the step 306 may be performed again), which may require new content metadata 223 referencing the newly selected content; (b) the conditions under which the device 210 and/or the content player 212 and/or the network 230 is/are operating may change, making the current item of content 224 inappropriate so that it would be more appropriate to obtain content chunks 225 from a variant of the current item of content 224 (e.g. a variant with a bit rate better suited to the changed conditions).
  • the request may be sent to (or received at) one or more computer systems 100 of the plurality of computer systems 100.
  • the one or more computer systems 100 may be chosen based upon one or more criteria, such as: geographical location, latency, access permissions, the content selection data 222 itself, etc.
  • the request may comprise any type of request, such as an HTTP GET request, an FTP GET request, etc.
  • the request may comprise a conditional request, for example an HTTP conditional GET request.
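  • The sketch below illustrates how a conditional GET might be used to refresh content metadata only when it has changed, assuming the content provider's server supports ETag/304 semantics; this is an illustration rather than a required behaviour, and the URL handling is hypothetical.

```python
# Sketch of an HTTP conditional GET for refreshing content metadata (assumes the
# server supports ETag / 304 Not Modified responses; the usage is illustrative).
import urllib.error
import urllib.request

def conditional_get(url, etag=None):
    """Return (body, etag); body is None if the metadata has not changed."""
    req = urllib.request.Request(url)
    if etag:
        req.add_header("If-None-Match", etag)         # only transfer the body if it changed
    try:
        with urllib.request.urlopen(req) as resp:
            return resp.read(), resp.headers.get("ETag")
    except urllib.error.HTTPError as err:
        if err.code == 304:                            # not modified: keep using cached metadata
            return None, etag
        raise
```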
  • Where the content player 212 receives data (such as content metadata 223, master content metadata 223-a, variant content metadata 223-b, content selection data 222, content chunks 225, etc.) from the content provider 220, the following will be appreciated.
  • If the content provider 220 comprises a plurality of computer systems 100, the data may be received from (or sent by) one or more particular computer systems 100 of the plurality of computer systems 100.
  • the one or more computer systems 100 may be chosen based upon one or more criteria, such as: geographical location, latency, access permissions, etc.
  • If the data is received in response to a request, the data may be received from one or more of the entities (or content providers 220 or servers or computer systems 100) to which the request was sent. Alternatively, the data may be received from one or more entities (or content providers 220 or servers or computer systems 100) other than the entities to which the request was sent.
  • a problem with the system 200 and the method 300 is that, when the user 201 selects new content (or a new variant of an item of content) to be rendered, then the steps 312 and 314 (and potentially the steps 308 and 310, if master content metadata 223-a is being used) need to be carried out again before content chunks 225 of the newly selected content can be obtained and rendered, thereby introducing the delay mentioned above.
  • Embodiments of the invention therefore aim to reduce the above-mentioned delay that occurs when the user 201 selects new content (or a new variant of an item of content) by arranging for the system 200 to be able to operate without having to re-perform the steps 312 and 314 (and potentially the steps 308 and 310 if master content metadata 223-a is being used) and/or by making the re-performance of those steps more efficient or faster. Ways of achieving this are set out below.
  • Figure 4 schematically illustrates an exemplary system 400 according to one embodiment of the invention.
  • the system 400 is the same as the system 200 of figure 2, except as described below. Therefore, features in common to the system 400 and the system 200 have the same reference numeral and shall not be described again.
  • the content player 212 comprises, or is arranged to execute, a metadata generation module 412.
  • the metadata generation module 412 is arranged to generate (or create or form) content metadata 223-1 based at least in part on content selection data 422.
  • the content selection data 422 may comprise the original/initial content selection data 222 together with additional data that enables content metadata to be generated locally. Examples of such additional data (referred to below as additional data types (a)-(g)) include: (a) a compressed version of part or all of content metadata; (b) a content metadata template; (c) data associated with the content metadata (e.g. for populating such a template); (d) an application or software for generating content metadata; (e) a timestamp (or timecode) associated with the content; and the default references (f) and (g) set out below.
  • One or more content item references 2200 (as described previously with reference to figure 2c) corresponding to one or more respective predetermined or default (or generic) items of content 224. Such content item references 2200 will be referred to hereafter as "default content item references”.
  • One or more content chunk references 2100 (as described previously with reference to figure 2b) corresponding to one or more respective content chunks 225 of one or more respective predetermined or default (or generic) items of content 224. Such content chunk references 2100 will be referred to hereafter as "default content chunk references”.
  • the content selection data 222 which the content provider 220 of the system 200 would have provided may remain unchanged in the system 400, and the content player 212 may receive additional data (such as one or more of the additional data (a)-(g) mentioned above) along with, but potentially separate from, the "original" content selection data 222 at the time that the content player 212 obtains the "original” content selection data 222 (for example, as part of the same operation for obtaining the "original” content selection data 222 from the content provider 220).
  • This additional data, together with the "original” content selection data 222 may be viewed, in the system 400, as forming new content selection data 422 that the metadata generation module 412 uses to generate the content metadata 223-1 .
  • the content provider 220 may be arranged to generate and provide/output the content selection data 422 (such as any of the above-mentioned content selection data 422).
  • a predetermined or default item of content 224 may comprise any type of predetermined content such as: advertising content; trailers (or previews) of further items of content 224; idents (such as animated channel/content provider logos); etc.
  • a default item of content 224 may comprise a portion of the selected content (for example from the start of the selected content) encoded such that the default item of content 224 has predetermined characteristics.
  • a default item of content may comprise an initial portion of the selected content encoded at a predetermined bitrate.
  • the use of such default items of content 224 enables, for example, the content player 212 to render a default item of content 224 shortly after an item of content 224 has been selected and whilst the content player 212 is still obtaining content chunks 225 of the said selected item of content 224 or metadata 223 for the selected item of content 224.
  • the content player 212 can request/obtain content chunks 225 more quickly. This thereby reduces the problematic delay described previously.
  • the metadata generation module 412 may be arranged to generate content metadata 223-1 based at least in part on some or all of the additional data (a)-(g).
  • the content metadata 223-1 generated by the content metadata generator 412 may comprise any of: content metadata 223 as described previously with reference to figure 2b; master content metadata 223-a as described previously with reference to figure 2c; or variant content metadata 223-b as described previously with reference to figure 2c.
  • the content metadata 223-1 may be identical (or equivalent or correspond) to the content metadata 223 (or the master content metadata 223-a, or the variant content metadata 223-b) that the content provider 220 of the system 200 would have provided at the step 310 or 314 (or at least a part thereof).
  • the content metadata 223-1 may comprise one or more of: one or more content chunk references 2100 that are equivalent or correspond to, or are the same as, one or more content chunk references 2100 that the content provider 220 of the system 200 would have provided at the step 314; or one or more default content chunk references.
  • Figure 5a is a sequence diagram schematically illustrating an example method 500-a for using the system 400 of figure 4.
  • the method 500-a is the same as the method 300 of figure 3, except as described below. Therefore, steps in common to the method 500-a and the method 300 have the same reference numeral and shall not be described again.
  • the method 500-a differs from the method 300 first in that the step 304 is replaced by a step 504.
  • the step 504 is the same as the step 304, except that instead of the content player 212 receiving the content selection data 222, the content player 212 receives the above-mentioned content selection data 422.
  • the content provider 220 may, therefore, be arranged to generate the content selection data 422.
  • the method 500-a also differs from the method 300 in that the steps 308 and 310 in the method 300 are replaced by a step 508.
  • the metadata generation module 412 generates the content metadata 223-1 based, at least in part, on the content selection data 422.
  • the content metadata 223-1 is master content metadata 223-a (as described above with reference to figure 2c).
  • the step 508 may comprise the metadata generation module 412 populating (or otherwise filling) a content metadata template using data associated with the content metadata 223-1 .
  • the content metadata template and/or the data associated with the content metadata 223-1 may have been provided as part of the content selection data 422 in the step 504 (i.e. additional data types (b) and (c) mentioned above) and may, therefore, have been generated by the content provider 220.
  • the content metadata template and/or the data associated with the content metadata 223-1 may already be stored in (or be otherwise locally accessible to) the content player 212 or the device 210.
  • the data may be some or all of the data that would have formed the content metadata 223-a provided at the step 310 of the method 300.
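  • As an illustration of such template population, the following sketch fills a hypothetical playlist-style template with fields carried in the content selection data 422; the template text and field names are assumptions, since the patent does not prescribe a metadata format.

```python
# Sketch of populating a content metadata template with data carried in the
# content selection data 422; template text and placeholder names are assumptions.
CONTENT_METADATA_TEMPLATE = (
    "#EXTM3U\n"
    "#EXT-X-STREAM-INF:BANDWIDTH={low_bps}\n{low_url}\n"
    "#EXT-X-STREAM-INF:BANDWIDTH={high_bps}\n{high_url}\n"
)

def generate_master_metadata(selection_data):
    """Fill the template using fields provided as part of the content selection data."""
    return CONTENT_METADATA_TEMPLATE.format(**selection_data["template_fields"])

# Example (hypothetical values):
# generate_master_metadata({"template_fields": {
#     "low_bps": 800000, "low_url": "movie_800.m3u8",
#     "high_bps": 3000000, "high_url": "movie_3000.m3u8"}})
```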
  • the step 508 may comprise the metadata generation module 412 decompressing (or otherwise extracting at least part of the content metadata 223-1 from) a compressed version of part or all of the content metadata 223-1 .
  • This compressed version may therefore have been generated by the content provider 220 by compressing content metadata 223 stored at, or generated by, the content provider 220.
  • the compressed version of part or all of the content metadata 223-1 may have been provided as part of the content selection data 422 in the step 504 (i.e. additional data type (a) mentioned above).
  • the compressed data may be a compressed version of some or all of the data that would have formed the content metadata 223-a provided at the step 310 of the method 300.
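  • A minimal sketch of this extraction step is shown below, assuming zlib compression purely for illustration; any compression scheme agreed between the content provider 220 and the content player 212 would serve equally well.

```python
# Sketch of extracting content metadata from a compressed version carried in the
# content selection data 422 (additional data type (a)); zlib is an assumption.
import zlib

def extract_content_metadata(selection_data):
    compressed = selection_data["compressed_metadata"]   # bytes provided by the content provider
    return zlib.decompress(compressed).decode("utf-8")
```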
  • the step 508 may comprise the metadata generation module 412 executing an application or software (e.g. bytecode or a script or an otherwise executable set of instructions) - this means that a legacy content player 212 may be configured to carry out the step 508 by provision of this new application or software (e.g. as a plug-in module).
  • the application or software may have been provided as part of the content selection data 422 in the step 504 (i.e. additional data type (d) mentioned above).
  • the application or software may be configured to generate a content item reference 2200 using the predetermined format or naming convention used by the content provider 220, based on the selected content.
  • the generation of the content metadata 223 may be based at least in part on a network time synchronization signal.
  • the step 508 may comprise the metadata generation module 412 using: a time based algorithm; and a time value substantially synchronized with the content provider 220 using the network time synchronization signal.
  • the metadata generation module 412 may make use of a timestamp associated with the selected content in order to generate the content metadata 223-1 . This may be used, for example, where the format of a content item reference 2200 makes use of a timestamp (or timecode, i.e. additional data type (e) mentioned above) for content to which that content item reference 2200 corresponds (as described above) - thus, the metadata generation module 412 may use the timestamp associated with content to help create a content item reference 2200 for that content.
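  • The sketch below illustrates such time-based generation for live content, assuming a fixed chunk duration, a known stream start time and a player clock synchronized with the content provider 220 via a network time signal; all constants and the reference format are hypothetical.

```python
# Sketch of a time-based algorithm for generating a live content chunk reference.
# Chunk duration, stream start time and reference format are assumptions; the
# player's clock is presumed synchronized with the provider via a network time signal.
import time

CHUNK_DURATION_S = 10          # assumed fixed chunk length in seconds
STREAM_EPOCH = 1_700_000_000   # hypothetical start time of the live stream (Unix seconds)

def current_chunk_reference(title, bitrate_kbps, now=None):
    now = time.time() if now is None else now            # synchronized network time
    chunk_index = int((now - STREAM_EPOCH) // CHUNK_DURATION_S)
    return f"https://content.example.com/{title}_{bitrate_kbps}_{chunk_index:08d}.ts"
```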
  • the step 508 may comprise the metadata generation module 412 extracting (or reading or obtaining) default content item references (i.e. additional data type (f) mentioned above) from the content selection data 422, and using these default content item references to form the metadata 223-1.
  • Figure 5b is a sequence diagram schematically illustrating an example method 500-b for using the system 400 of figure 4.
  • the method 500-b is the same as the method 300 of figure 3, except as described below. Therefore, steps in common to the method 500-b and the method 300 have the same reference numeral and shall not be described again.
  • the method 500-b also differs from the method 300 in that the steps 312 and 314 in the method 300 are replaced by a step 512.
  • the metadata generation module 412 generates content metadata 223-1 .
  • This step 512 is performed in substantially the same way as the step 508 described above (using any of the above-mentioned additional data (a)-(e) of the content selection data 422), except that the metadata 223-1 generated at the step 512 is either (i) variant content metadata 223-b (as described above with reference to figure 2c) if the steps 308 and 310 are performed, or (ii) content metadata 223 (as described above with reference to figure 2b) if the steps 308 and 310 are not performed.
  • the step 512 may comprise the metadata generation module 412 extracting (or reading or obtaining) default content chunk references (i.e. additional data type (g) mentioned above) from the content selection data 422, and using these default content chunk references to form the metadata 223-1.
  • Figure 5c is a sequence diagram schematically illustrating an example method 500-c for using the system 400 of figure 4.
  • the method 500-c is the same as the method 300 of figure 3, except as described below. Therefore, steps in common to the method 500-c and the method 300 have the same reference numeral and shall not be described again.
  • the method 500-c differs from the method 300 first in that the step 304 is replaced by a step 504.
  • the step 504 is the same as the step 304, except that instead of the content player 212 receiving the content selection data 222, the content player 212 receives the above-mentioned content selection data 422.
  • the content provider 220 may, therefore, be arranged to generate the content selection data 422.
  • the method 500-c also differs from the method 300 in that the steps 308 and 310 in the method 300 are replaced by the step 508 as described above with reference to figure 5a.
  • the method 500-c also differs from the method 300 in that the steps 312 and 314 in the method 300 are replaced by the step 512 as described above with reference to figure 5b.
  • the content metadata 223-1 generated at the step 512 is variant content metadata 223-b.
  • the metadata generation module 412 may generate the variant content metadata 223-b based at least in part on the master content metadata 223-a generated at the step 508.
  • the methods 500-a, 500-b and 500-c reduce the above-mentioned problematic delay by avoiding having to perform the steps 308 and 310 and/or the steps 312 and 314.
  • This is achieved via the metadata generator 412 itself generating some or all of the content metadata 223 instead of having to request and receive (or otherwise obtain) that content metadata 223 from the content provider 220 via the network 230.
  • the metadata generator 412 is enabled to do this using the content selection data 422.
  • the sequence of steps 312, 314 (or the step 512 if used in place of the steps 312 and 314), 316, 318 and 322 (possibly with the optional steps 308 and 310, or the step 508 if used in place of the steps 308 and 310, and/or the optional steps 326, 328 and 332) may be repeated one or more times. This may occur for a number of reasons, such as: (a) new content may be selected, e.g. by the user 201 (i.e. the step 306 may be performed again), which may require new content metadata 223 referencing the newly selected content; (b) the conditions under which the device 210 and/or the content player 212 and/or the network 230 is/are operating may change, making the current item of content 224 inappropriate so that it would be more appropriate to obtain content chunks 225 from a variant of the current item of content 224 (e.g. a variant with a bit rate better suited to the changed conditions).
  • the content metadata 223 obtained at the step 314 or 512 may contain content chunk references 2100 for only a subset of the possible content chunks 225 of the item of content 224, so that new content metadata 223 may be required in order to obtain and render further content chunks 225 of the item of content 224 (this may be particularly true for live content).
  • the repetition of the steps 312, 314 (or the step 512 if used in place of the steps 312 and 314), and possibly the optional steps 308 and 310 too (or the step 508 if used in place of the steps 308 and 310) may be performed whilst content is being rendered at the step 322 (and/or at the optional step 332), so that the time taken to perform these steps is in parallel with the content rendering that is being performed, thereby ensuring continuous/seamless content rendering (i.e. the further metadata 223 may be obtained in advance of when it is required).
  • the content player 212 having generated content metadata 223 (by virtue of the step 508 and/or the step 512) may obtain further content metadata 223 from the content provider 220 (i.e. without the content player 212 generating that further content metadata 223 itself). This may be achieved using the steps 312 and 314 (and possibly the steps 308 and 310 too), as discussed above with reference to figure 3.
  • Figure 6 schematically illustrates an exemplary system 600 according to one embodiment of the invention.
  • the system 600 is the same as the system 200 of figure 2, except as described below. Therefore, features in common to the system 600 and the system 200 have the same reference numeral and shall not be described again.
  • the system 600 further comprises a metadata generator 610 and, optionally, a local network 630.
  • the metadata generator 610 is a physically separate and/or logically separate unit/entity from the content player 212 (and possibly from the device 210).
  • the local network 630 may be any kind of data communication network suitable for communicating or transferring data: (a) between the device 210 and the metadata generator 610; and (b) between the metadata generator 610 and the network 230.
  • the network 630 may comprise one or more of: a local area network, a wired or cable communication network, a WiFi network, etc.
  • the device 210 and the metadata generator 610 may be arranged to communicate with each other via the local network 630 via any suitable data communication protocol.
  • the data communication protocol may be TCP/IP, UDP, SCTP, etc.
  • the metadata generator 610 and the content provider 220 may be arranged to communicate with each other via the local network 630 and the network 230 via any suitable data communication protocol or protocols.
  • the data communication protocol may be TCP/IP, UDP, SCTP, etc.
  • the device 210 may communicate with the content provider 220 via the local network 630 and the network 230 or, potentially, just via the network 230 (as illustrated by the dashed line in figure 6).
  • the local network 630 is optional as: the device 210 may be arranged to communicate directly with the metadata generator 610 via any suitable data communication link or connection; and
  • the metadata generator 610 may be connected directly to the network 230 and thus arranged to communicate with the content provider 220 in an analogous manner to the device 210 in figure 2 (so that the local network 630 may be viewed as a local part of the existing network 230).
  • the metadata generator 610 comprises, or is arranged to execute, a metadata generation module 612.
  • the metadata generation module 612 is arranged to generate (or create or form) content metadata 223-1 based at least in part on the content selection data 422. This may be performed using any of the techniques described above with reference to the metadata generation module 412 of figure 4, based on the above- described additional data in the content selection data 422.
  • the metadata generation module 612 may obtain, or be provided with, the content selection data 422 from the content provider 220.
  • the metadata generation module 612 (as a proxy for the content provider 220) may be configured to generate the content selection data 422 itself (in an analogous manner to the content provider 220).
  • the metadata generator 610 acts as a proxy (i.e. a local proxy) of the content provider 220, at least in respect of certain acts or tasks of the content provider 220. This shall be referred to herein as the metadata generator 610 "proxying" those acts or tasks. Such proxying may involve the metadata generator 610 receiving data/requests from the content player 212, where these data/requests are intended for the content provider 220. It will be appreciated that this may be transparent, i.e.:
  • the content player 212 may attempt/intend to send some data/requests to the content provider 220, with the metadata generator 610 intercepting the data/requests or with those data/requests being re-routed or re-directed to the metadata generator 610.
  • the content player 212 may attempt/intend to send some data/requests to the metadata generator 610 (as opposed to attempting/intending to send data/requests to the content provider 220).
  • proxying may involve the metadata generator 610 passing some data/requests to the content provider 220 (either in the originally received form or as re-formatted/re-generated data/requests).
  • proxying may involve the metadata generator 610 processing some data/requests on behalf of the content provider 220 instead of passing those data/requests on to the content provider 220. Similarly, such proxying may involve the metadata generator 610 receiving data from the content provider 220, where this data is intended for the content player 212. The metadata generator 610 may then pass this data to the content player 212 (either in the originally received form or as re-formatted/re-generated data).
  • Figure 7 is a sequence diagram schematically illustrating an example method 700 for using the system 600 of figure 6.
  • the method 700 is the same as the method 300 of figure 3, except as described below. Therefore, steps in common to the method 700 and the method 300 have the same reference numeral and shall not be described again, except where variations on those steps are possible in the system 600.
  • the content selection data 422 received by the content player 212 in the optional step 304 may be received from the metadata generator 610 - the metadata generator 610 will receive the content selection data 422 from the content provider 220 at a step 304a and pass the content selection data 422 on to the content player 212.
  • the metadata generator 610 may be able to generate and/or provide the content selection data 422 to the content player 212 without having to obtain the content selection data 422 from the content provider 220 - thus, the optional steps 302 and 304 may no longer involve the content provider 220 (so that the steps 302a and 304a are not performed), but involve the metadata generator 610 in the place of the content provider 220 (as illustrated by the dotted line 303 in figure 7).
  • the master content metadata 223-a received by the content player 212 in the optional step 310 may be received from the metadata generation module 612 - the metadata generation module 612 will receive the master content metadata 223-a from the content provider 220 at a step 310a and pass the master content metadata 223-a on to the content player 212.
  • the metadata generation module 612 may be able to generate and/or provide the master content metadata 223-a to the content player 212 without having to obtain the master content metadata 223-a from the content provider 220 - thus, the optional steps 308 and 310 may no longer involve the content provider 220 (so that the steps 308a and 310a are not performed), but involve the metadata generator 610 (or the metadata generation module 612) in the place of the content provider 220 (as illustrated by the dotted line 309 in figure 7).
  • the steps 312 and 314 of the method 300 may be proxied by the metadata generator 610.
  • the request in step 312 may be first sent from the content player 212 to the metadata generator 610 via the local network 630.
  • the metadata generator 610 may then act as a proxy for the content provider 220 as discussed above.
  • the metadata generation module 612 may then send the same request or an equivalent request or a newly-generated request to the content provider 220 via the network 230 at a step 312a.
  • the metadata generation module 612 may be able to generate and/or provide the content metadata 223 to the content player 212 without having to obtain the content metadata 223 from the content provider 220 - thus, the steps 312 and 314 may no longer involve the content provider 220 (so that the steps 312a and 314a are not performed), but involve the metadata generator 610 (or the metadata generation module 612) in the place of the content provider 220 (as illustrated by the dotted line 313 in figure 7).
  • as with the method 300 of figure 3, the sequence of steps described above may be repeated one or more times. This may occur for a number of reasons, such as: (a) new content may be selected, e.g. by the user 201 (i.e. the step 306 may be performed again), which may require new content metadata 223 referencing the newly selected content; (b) the conditions under which the device 210 and/or the content player 212 and/or the network 230 is/are operating may change, making the current item of content 224 inappropriate so that it would be more appropriate to obtain content chunks 225 from a variant of the current item of content 224, which may require new content metadata 223; or (c) the content metadata 223 obtained at the step 314 may contain content chunk references 2100 for only a subset of the possible content chunks 225 of the item of content 224, so that new content metadata 223 may be required in order to obtain and render further content chunks 225 of the item of content 224 (this may be particularly true for live content).
  • the metadata generator 610 having generated content metadata 223 (by virtue of the step 309 and/or the step 313) may obtain further content metadata 223 from the content provider 220 (i.e. without the metadata generator 610 generating that further content metadata 223 itself). This may be achieved using the steps 312a and 314a (and possibly the steps 308a and 310a too), as discussed above.
  • in the example method of figure 8 (a further example method for using the system 600 of figure 6), the metadata generator 610 requests the further content metadata 223 from the content provider 220 at a step 812a.
  • at a step 814a, the metadata generator 610 receives the further content metadata 223 from the content provider 220.
  • the steps 812a and 814a may be optional because in some embodiments the metadata generator 610 may additionally determine whether the further content metadata 223 should be generated by the metadata generation module 612. This determination may be based at least in part on any of: the age of the content selection data 222, criteria in the request of step 812, the age of any content selection data 222 available to (or present in, stored in, or otherwise accessible by) the content player 212, or one or more timestamps stored by (or available to, or present in) the content player 212 (one illustrative way of making such a determination is sketched at the end of this list). In these embodiments, if the metadata generator 610 determines that the further content metadata 223 should be generated by the metadata generation module 612, then the metadata generation module 612 generates the further content metadata 223 in a similar manner to the step 512 described previously. Otherwise, the steps 812a and 814a are performed as described above.
  • the content player 212 receives the content metadata 223 from the metadata generator 610 in a similar manner to the step 314 described previously.
  • this embodiment allows a degree of flexibility in how further content metadata 223 is provided to the content player 212. This means that whilst the problematic delay described previously is still reduced, the risk of last-minute changes to the items of content 224 available from the content provider 220 causing instability or degradation at the content player 212 (due to out-of-date further content metadata 223 being generated by the metadata generation module 612) is also reduced. Additionally, if sophisticated watermarking or content protection schemes are in use by the content provider 220, the risk that such schemes are partially compromised by the provision of default items of content 224 to the content player 212 may be greatly reduced.
  • the embodiments described above with reference to figures 4 and 5 involve a media player 212 that is in some way adapted to use a metadata generation module 412 (variously through any of: the incorporation of specific hardware in the media player 212 or the client device 210; the incorporation of specific software in the media player 212 or the client device 210; the modification of the firmware of the media player 212 or the client device 210; the provision of a plug-in application to the media player 212; etc.).
  • in contrast, the present embodiments, through the use of the metadata generator 610 as a proxy, can be used with any media player 212, including legacy media players 212 already in use. This is particularly advantageous where it is not possible or appropriate to: replace existing media players 212; or update the firmware/modify the hardware of existing media players 212; or where any such modifications may render a media player 212 non-compliant with a specific industry standard.

Modifications
  • it will be appreciated that the boundaries between logic blocks described herein are merely illustrative and that alternative embodiments may merge logic blocks or elements, or may impose an alternate decomposition of functionality upon various logic blocks or elements.
  • it will be appreciated that the above-mentioned functionality may be implemented as one or more corresponding modules as hardware and/or software.
  • for example, the above-mentioned functionality may be implemented as one or more software components for execution by one or more processors of the system.
  • the above-mentioned functionality may be implemented as hardware, such as on one or more field-programmable-gate-arrays (FPGAs), and/or one or more application-specific-integrated-circuits (ASICs), and/or one or more digital-signal-processors (DSPs), and/or other hardware arrangements.
  • the computer program may have one or more program instructions, or program code, which, when executed by a computer, carries out an embodiment of the invention.
  • the term "program", as used herein, may be a sequence of instructions designed for execution on a computer system, and may include a subroutine, a function, a procedure, a module, an object method, an object implementation, an executable application, an applet, a servlet, source code, object code, a shared library, a dynamic linked library, and/or other sequences of instructions designed for execution on a computer system.
  • the storage medium may be a magnetic disc (such as a hard drive or a floppy disc), an optical disc (such as a CD-ROM, a DVD-ROM or a BluRay disc), or a memory (such as a ROM, a RAM, EEPROM, EPROM, Flash memory or a portable/removable memory device), etc.
  • the transmission medium may be a communications signal, a data broadcast, a communications link between two or more computers, etc.
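As noted above in relation to the steps 812a and 814a, the metadata generator 610 may determine whether to generate further content metadata 223 itself or to obtain it from the content provider 220. Purely by way of illustration, a minimal Python sketch of one way such a determination might be made is given below; the freshness threshold, the URL layout and the chunk-reference naming convention are assumptions made for the example and are not features required by the embodiments described above.

```python
import time
import urllib.request

MAX_SELECTION_DATA_AGE = 30.0  # seconds; illustrative freshness threshold only


def further_metadata(title, bitrate, first_chunk, selection_data,
                     selection_timestamp, provider_url):
    """Return further content metadata: generate it locally when the stored
    content selection data is recent enough, otherwise fetch it from the
    content provider (cf. steps 812a/814a). All names here are assumptions."""
    if time.time() - selection_timestamp < MAX_SELECTION_DATA_AGE:
        # Generate locally from the content selection data (cf. step 512);
        # chunk references follow an assumed "<title>_<bitrate>_<index>.ts" convention.
        base = selection_data["base_url"]
        return "\n".join(f"{base}/{title}_{bitrate}_{i}.ts"
                         for i in range(first_chunk, first_chunk + 5))
    # Otherwise proxy the request on to the content provider.
    url = f"{provider_url}/metadata?title={title}&bitrate={bitrate}"
    with urllib.request.urlopen(url) as resp:
        return resp.read().decode("utf-8")
```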

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Information Transfer Between Computers (AREA)

Abstract

A method for enabling a content player (212) to access an item of content (224) from a content provider (220), wherein the item of content (224) comprises a plurality of content chunks (225), the method comprising the content player (212): generating, in the content player (212), content metadata (223-1) based at least in part on content selection data (422), wherein the content metadata (223-1) comprises one or more references, each reference being either (a) a content chunk reference or (b) a content item reference that references one or more respective content chunk references, wherein each content chunk reference corresponds to a respective content chunk of the plurality of content chunks; and using at least one of the content chunk references to obtain at least one respective content chunk.

Description

ACCESSING CONTENT
Field of the invention
The present invention relates to methods for enabling a content player to access an item of content, methods for enabling a local content player to access an item of content from a remote content provider, and apparatus and computer programs for carrying out such methods.
Background of the invention
Over-the-top content delivery models, in which content is provided/delivered by a content provider to a user via a multi/general-purpose network infrastructure (such as the Internet) that is not under the control of the content provider, are becoming more and more common. An example of this is Internet Protocol streaming of audio content, video content and other media content. The network conditions experienced by users or groups of users can vary wildly. For example, a user may choose to consume an item of content at home via a device connected to a high-speed, low-latency broadband network whilst another user may choose to consume the same item of content via a mobile device connected to a low-speed, high-latency mobile network. Equally, the content processing capabilities of different user devices can vary significantly. For example, a powerful home computer with hardware video rendering capability may be capable of rendering a high bit-rate video content stream smoothly, whereas a smart phone may be limited to rendering some types of video streams using software only, thus requiring a different encoding type, or a lower bit-rate, for the video stream. As more and more diverse user devices connecting via more and more diverse network technologies and topologies become available, these variations will likely increase.
In order to try to compensate for such variations, adaptive streaming technology, which transmits items of content as sets of segments or chunks, can be used - see, for example, http://en.wikipedia.org/wiki/Adaptive_bitrate_streaming, the entire contents of which are incorporated herein by reference. Therefore, the same item of content can be provided in multiple different formats and bit-rates, thus allowing user devices to receive the most appropriate variant. Indeed, the selected variant can change in response to changes of the network characteristics - for example, if a device finds that the network connection is no longer fast enough to support a current particular bit rate for an item of content, then a variant of the item of content that is more suited to a lower bit rate may be used/delivered instead. Examples of adaptive streaming include: HTTP Live
Streaming and 3GPP Adaptive HTTP Streaming (see, for example,
http://en.wikipedia.org/wiki/HTTP_Live_Streaming, the entire contents of which are incorporated herein by reference); HTTP Dynamic Streaming (see, for example, http://www.adobe.com/uk/products/hds-dynamic-streaming/faq.html, the entire contents of which are incorporated herein by reference); Microsoft Smooth Streaming (see, for example, http://www.iis.net/downloads/microsoft/smooth-streaming, the entire contents of which are incorporated herein by reference); Dynamic Adaptive Streaming over HTTP or DASH (see, for example,
http://en.wikipedia.org/wiki/Dynamic_Adaptive_Streaming_over_HTTP, the entire contents of which are incorporated herein by reference); UDP-based solutions provided by Octoshape; etc.
Summary of the invention
Due to the number of variants typically available for a given item of content on a streaming platform, such adaptive streaming technology often requires a user device to first download a description file (or manifest or playlist) that, among other things, lists or identifies locations of content segments or chunks for a given variant. As a result, such systems can typically introduce a delay of 2-3 seconds as a best case, and more than 10 seconds as a worst case, when the user changes between different items of content or between different variants. This is because the user device first needs to download or obtain a new description file for the new item of content or for the new variant of the current item of content.
This delay is significantly longer than what users experience when selecting new content in related content delivery technologies, such as digital broadcast television and web browsing. It would, therefore, be desirable to be able to provide such over-the-top content delivery whilst reducing the delay incurred when the user changes between different items of content or variants. Embodiments of the invention seek to address these and other problems of the related prior art. According to a first aspect of the invention, there is provided a method for enabling a content player to access an item of content from a content provider, wherein the item of content comprises a plurality of content chunks, the method comprising the content player: generating, in the content player, content metadata based at least in part on content selection data, wherein the content metadata comprises one or more references, each reference being either (a) a content chunk reference or (b) a content item reference that references one or more respective content chunk references, wherein each content chunk reference corresponds to a respective content chunk of the plurality of content chunks; and using at least one of the content chunk references to obtain at least one respective content chunk.
In some embodiments, the method comprises: obtaining, from the content provider, further content metadata, wherein the further content metadata comprises one or more further content chunk references, each further content chunk reference corresponding to a further respective content chunk of the plurality of content chunks; and using at least one of the one or more further content chunk references to obtain at least one or more further respective content chunks.
According to a second aspect of the invention, there is provided a method for enabling a local content player to access an item of content from a remote content provider, wherein the item of content comprises a plurality of content chunks, the method comprising a metadata generator: receiving a request from the local content player; in response to the request, generating, in the metadata generator, content metadata based at least in part on content selection data, wherein the content metadata comprises one or more references, each reference being either (a) a content chunk reference or (b) a content item reference that references one or more respective content chunk references, wherein each content chunk reference corresponds to a respective content chunk of the plurality of content chunks; and providing the content metadata to the local content player to thereby enable the local content player to use at least one of the one or more content chunk references to obtain at least one or more respective content chunks.
In some embodiments, the method comprises: receiving a further request from the local content player; in response to the further request, obtaining, from the remote content provider, further content metadata, wherein the further content metadata comprises one or more further content chunk references, each further content chunk reference corresponding to a further respective content chunk of the plurality of content chunks; providing the further content metadata to the local content player to thereby enable the local content player to use at least one of the one or more further content chunk references to obtain at least one or more respective further content chunks.
In some embodiments, the method comprises: receiving a request from the local content player for the at least one or more respective content chunks; obtaining the at least one or more respective content chunks from the remote content provider; and providing the at least one or more respective content chunks to the local content player.
According to a third aspect of the invention, there is provided a method for enabling a content player to access an item of content from a content provider, wherein the item of content comprises a plurality of content chunks, the method comprising the content provider: generating content selection data; and providing the content selection data; wherein the content selection data is configured to enable a metadata generation module of the content player or local to the content player to generate content metadata, wherein the content metadata comprises one or more references, each reference being either (a) a content chunk reference or (b) a content item reference that references one or more respective content chunk references, wherein each content chunk reference corresponds to a respective content chunk of the plurality of content chunks.
In any of the first, second or third aspects (or embodiments thereof), the content selection data may comprise the one or more references.
In any of the first, second or third aspects (or embodiments thereof), the content selection data may comprise executable program code to be executed to perform at least part of said generation of the content metadata.
In any of the first, second or third aspects (or embodiments thereof), the content selection data may comprise electronic program guide data.
In any of the first, second or third aspects (or embodiments thereof), the at least one of the content chunk references may be a default content chunk reference corresponding to a respective default content chunk.
In any of the first, second or third aspects (or embodiments thereof), the content metadata may be a manifest or a playlist corresponding to the item of content.
In any of the first, second or third aspects (or embodiments thereof), the one or more references may be generated based at least in part on a naming convention. In any of the first, second or third aspects (or embodiments thereof), the one or more references may be generated based at least in part on a network time
synchronization signal.
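By way of illustration only, the following Python sketch shows how a content chunk reference might be generated from a naming convention together with a network time synchronization signal (for example, for live content); the URL layout, chunk duration and function name are assumptions made for the example rather than features of any aspect.

```python
import math

def live_chunk_reference(base_url, channel, bitrate, synced_time, chunk_duration=4.0):
    """Form a content chunk reference for live content from a synchronized clock:
    chunks are assumed to be numbered by dividing the network-synchronized time
    by a fixed chunk duration (illustrative convention only)."""
    chunk_index = math.floor(synced_time / chunk_duration)
    return f"{base_url}/{channel}/{bitrate}/chunk_{chunk_index}.ts"

# With a clock synchronized to the content provider's, a player (or a local
# metadata generator) can name the current chunk without first requesting a
# playlist from the provider.
print(live_chunk_reference("http://cdn.example.com", "news", "1500k", synced_time=1000000.0))
```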
According to a fourth aspect of the invention, there is provided a method for enabling a local content player to access an item of content on a remote content provider, wherein the item of content comprises a plurality of content chunks, the method comprising local generation of content metadata based at least in part on content selection data, wherein the content metadata comprises one or more content chunk references each corresponding to a respective content chunk of the plurality of content chunks, thereby enabling the local content player to request, based at least in part on at least one of the one or more content chunk references, at least one or more respective content chunks without having to first obtain the one or more content chunk references from the remote content provider.
According to a fifth aspect of the invention, there is provided an apparatus arranged to carry out a method according to any of the first, second, third or fourth aspects (or embodiments thereof).
According to a sixth aspect of the invention, there is provided a computer program which, when executed by one or more processors, causes the one or more processors to carry out a method according to any of the first, second, third or fourth aspects (or embodiments thereof). The computer program may be stored on a computer-readable medium.
Brief description of the drawings
Embodiments of the invention will now be described, by way of example only, with reference to the accompanying drawings, in which:
Figure 1 schematically illustrates an example of a computer system;
Figure 2a schematically illustrates an example system for enabling a content player to access an item of content;
Figure 2b schematically illustrates an example of content metadata;
Figure 2c schematically illustrates a further example of content metadata;
Figure 3 is a sequence diagram schematically illustrating an example method for enabling a content player to access an item of content;
Figure 4 schematically illustrates an exemplary system according to an embodiment of the invention;
Figure 5a is a sequence diagram schematically illustrating an example method of using the system of figure 4;
Figure 5b is a sequence diagram schematically illustrating an example method of using the system of figure 4;
Figure 5c is a sequence diagram schematically illustrating an example method of using the system of figure 4;
Figure 6 schematically illustrates an exemplary system according to an embodiment of the invention;
Figure 7 is a sequence diagram schematically illustrating an example method of using the system of figure 6; and
Figure 8 is a sequence diagram schematically illustrating an example method of using the system of figure 6.
Detailed description of embodiments of the invention
In the description that follows and in the figures, certain embodiments of the invention are described. However, it will be appreciated that the invention is not limited to the embodiments that are described and that some embodiments may not include all of the features that are described below. It will be evident, however, that various modifications and changes may be made herein without departing from the broader spirit and scope of the invention as set forth in the appended claims.
Figure 1 schematically illustrates an example of a computer system 100. The system 100 comprises a computer 102. The computer 102 comprises: a storage medium
104, a memory 106, a processor 108, an interface 110, a user output interface 112, a user input interface 114 and a network interface 116, which are all linked together over one or more communication buses 118.
The storage medium 104 may be any form of non-volatile data storage device such as one or more of a hard disk drive, a magnetic disc, an optical disc, a ROM, etc.
The storage medium 104 may store an operating system for the processor 108 to execute in order for the computer 102 to function. The storage medium 104 may also store one or more computer programs (or software or instructions or code). The memory 106 may be any random access memory (storage unit or volatile storage medium) suitable for storing data and/or computer programs (or software or instructions or code).
The processor 108 may be any data processing unit suitable for executing one or more computer programs (such as those stored on the storage medium 104 and/or in the memory 106), some of which may be computer programs according to embodiments of the invention or computer programs that, when executed by the processor 108, cause the processor 108 to carry out a method according to an embodiment of the invention and configure the system 100 to be a system according to an embodiment of the invention. The processor 108 may comprise a single data processing unit or multiple data processing units operating in parallel or in cooperation with each other. The processor 108, in carrying out data processing operations for embodiments of the invention, may store data to and/or read data from the storage medium 104 and/or the memory 106.
The interface 110 may be any unit for providing an interface to a device 122 external to, or removable from, the computer 102. The device 122 may be a data storage device, for example, one or more of an optical disc, a magnetic disc, a solid-state-storage device, etc. The device 122 may have processing capabilities - for example, the device may be a smart card. The interface 110 may therefore access data from, or provide data to, or interface with, the device 122 in accordance with one or more commands that it receives from the processor 108.
The user input interface 114 is arranged to receive input from a user, or operator, of the system 100. The user may provide this input via one or more input devices of the system 100, such as a mouse (or other pointing device) 126 and/or a keyboard 124, that are connected to, or in communication with, the user input interface 114. However, it will be appreciated that the user may provide input to the computer 102 via one or more additional or alternative input devices (such as a touch screen). The computer 102 may store the input received from the input devices via the user input interface 114 in the memory 106 for the processor 108 to subsequently access and process, or may pass it straight to the processor 108, so that the processor 108 can respond to the user input accordingly.
The user output interface 112 is arranged to provide a graphical/visual and/or audio output to a user, or operator, of the system 100. As such, the processor 108 may be arranged to instruct the user output interface 112 to form an image/video signal representing a desired graphical output, and to provide this signal to a monitor (or screen or display unit) 120 of the system 100 that is connected to the user output interface 112. Additionally or alternatively, the processor 108 may be arranged to instruct the user output interface 112 to form an audio signal representing a desired audio output, and to provide this signal to one or more speakers 121 of the system 100 that is connected to the user output interface 112.
Finally, the network interface 116 provides functionality for the computer 102 to download data from and/or upload data to one or more data communication networks.
It will be appreciated that the architecture of the system 100 illustrated in figure 1 and described above is merely exemplary and that other computer systems 100 with different architectures (for example with fewer components than shown in figure 1 or with additional and/or alternative components than shown in figure 1 ) may be used in embodiments of the invention. As examples, the computer system 100 could comprise one or more of: a personal computer; a server computer; a mobile telephone; a tablet; a laptop; a television set; a set top box; a games console; other mobile devices or consumer electronics devices; etc.
Figure 2a schematically illustrates an example system 200. The system 200 comprises a device 210, a content provider 220, and a network 230.
The network 230 may be any kind of data communication network suitable for communicating or transferring data between the device 210 and the content provider 220. Thus, the network 230 may comprise one or more of: a wide area network, a metropolitan area network, the Internet, a wireless communication network, a wired or cable communication network, a satellite communications network, a telephone network, etc. The device 210 and the content provider 220 may be arranged to communicate with each other via the network 230 via any suitable data communication protocol. For example, when the network 230 comprises the Internet, the data communication protocol may be TCP/IP, UDP, SCTP, etc.
The content provider 220 may be arranged to provide one or more items of content 224, one of which is shown in figure 2. Herein, the term "content" relates to any type of media (e.g. audio, video, text, Flash presentation, image sequence, etc.). An item of content 224 may comprise (or encode or represent) one or more of such media. An item of content 224 (or the content it represents) can be displayed, presented, or otherwise rendered by an appropriate content player. For example, the item of content 224 may comprise an MPEG-4 stream (or at least a part of an MPEG-4 stream).
The content provider 220 may be arranged to provide a plurality of items of content 224 that each relate to (or encode or represent) the same content but that are variants of each other with different characteristics. Such items of content related this way shall be referred to herein as "variants" of each other. For example, variants of an item of content 224 may differ by one or more of: encoding quality for the content, bit rate, regionalization, audio track, encoding format, etc. For example, each variant of an item of content 224 may have a respective set of encoding characteristics reflecting a different quality level when rendered by a content player.
The item of content 224 comprises one or more content chunks 225. A content chunk 225 is a portion (or part or segment or section) of the item of content 224. A content chunk 225 may correspond to a portion or element of the content represented by the item of content 224. A plurality of content chunks 225 may, or may not, together form a contiguous part of the item of content 224. A content chunk 225 may, or may not, itself form a contiguous part of the item of content 224. The content chunks 225 may be non-overlapping, overlapping, partially overlapping, etc. The length of the content chunks 225 may be fixed/predetermined, variable, dependent on one or more other parameters, etc. Herein, by "length", it is meant one of: duration; or bit-length; or size. The content chunks 225 may be the result of any full or partial partitioning of the item of content 224. For example, if the item of content 224 comprises video content, the content chunks 225 may correspond to (or represent or encode) one or more respective frames or groups of frames of that video content; if the item of content 224 comprises audio content, the content chunks 225 may correspond to (or represent or encode) one or more respective time periods of audio content (e.g. a number of seconds of audio). For example, the content chunks 225 may comprise one or more parts (or elements) of an MPEG-2 transport stream or an MPEG-4 transport stream or H.264 encoded content.
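Purely by way of illustration, the following Python sketch shows one simplistic way in which an item of content could be partitioned into fixed-length, non-overlapping content chunks; real packagers would typically split on frame, group-of-pictures or transport-stream packet boundaries as described above, so this is an assumption-laden example rather than a description of any particular encoder.

```python
def split_into_chunks(content_bytes, chunk_size):
    """Partition an item of content (here, a byte string) into non-overlapping,
    fixed-length content chunks; the final chunk may be shorter."""
    return [content_bytes[i:i + chunk_size]
            for i in range(0, len(content_bytes), chunk_size)]

chunks = split_into_chunks(b"\x00" * 10000, chunk_size=4096)  # dummy payload
print(len(chunks), [len(c) for c in chunks])  # 3 chunks: 4096, 4096, 1808 bytes
```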
The content chunks 225 of an item of content 224 may be arranged sequentially in a rendering/output order for the item of content 224. Thus, if an item of content 224 comprises N content chunks 2251, 2252, ..., 225N, then the content chunk 225i is to be output or rendered immediately after outputting or rendering the content chunk 225i-1 (i=2, ..., N). The content provider 220 may be arranged to provide content selection data 222. The content selection data 222 may be generated by the content provider 220. In general, the content selection data 222 can be any data that identifies one or more amounts of content that the content provider 220 is arranged to provide as
corresponding items of content 224 (currently and/or potentially at some point in the future). For example, the content selection data 222 may comprise Electronic Program Guide (EPG) data or data forming (or representing or otherwise capable of being processed to produce) an EPG which lists content available from, or provided by, the content provider 220, including content represented by the item of content 224.
The content provider 220 may be arranged to provide content metadata 223. The content metadata 223 will be described shortly in more detail below.
The content provider 220 may be a computer system, such as the exemplary computer system 100 shown in figure 1 . In this case the content provider 220 may be a single server. Alternatively, the content provider 220 may comprise a plurality of such computer systems 100 (such as a plurality of servers). If the content provider 220 comprises a plurality of computer systems 100 - these computer systems may communicate (or be managed or be otherwise coordinated) via one or more networks. These one or more networks may be any kind of data communication network suitable for communicating or transferring data between two or more computer systems 100 of the plurality of computer systems 100. Thus, the one or more networks may comprise any of: a local area network, a wide area network, a metropolitan area network, the Internet, a wireless communication network, a wired or cable communication network, a satellite communications network, a telephone network, etc. One or more computer systems 100 of the plurality of computer systems 100 may be located in a geographical area different to that of one or more other computer systems 100 of the plurality of computer systems. One or more computer systems 100 of the plurality of computer systems 100 may have, or perform, corresponding particular roles/tasks of the content provider 220, such as one or more of: providing particular items of content 224, providing particular content chunks 225, generating and/or providing content selection data 222 (or portions thereof), generating and/or providing content metadata 223 (or portions thereof), coordinating (or administering or otherwise managing) one or more other computer systems 100 of the plurality of computer systems, acting as a gateway (or router or load- balancer) for one or more other computer systems 100 of the plurality of computer systems. For example, the content provider 220 may comprise what is usually termed a Content Delivery Network. Content Delivery Networks are well-known and are therefore not further described in detail herein - (see, for example,
http://en.wikipedia.org/wiki/Content_delivery_network, the entire contents of which is incorporated herein by reference).
The device 210 may be operable by a user 201 . The device 210 may be a computer system, such as the exemplary computer system 100 shown in figure 1 . For example, the device 210 may be a personal computer, a laptop, a tablet computer, a mobile telephone, a smart watch, a smart television, etc. The device 210 comprises a content player 212 (or is arranged to execute a content player 212, for example on a processor of the device 210). The content player 212 may be implemented using one or more of hardware, software, firmware, etc.
The content player 212 may be arranged to render an item of content 224 for a user. For example: if at least part of the item of content 224 represents/encodes video content then the content player 212 may display the video content to the user 201 (e.g. via the display 120); if at least part of the item of content 224 represents/encodes audio content then the content player 212 may play the audio content to the user 201 (e.g. via one or more speakers 121 ), etc. The content player 212 may be arranged to display or otherwise use the content selection data 222. For example, if the content selection data 222 comprises EPG data then the content player 212 may display a corresponding EPG.
In general, the content metadata 223 can comprise any data that identifies (specifies or otherwise addresses, references or provides a link to) at least part of one or more items of content 224 that the content provider 220 is arranged to provide. The content metadata 223 may comprise one or more references to enable the content player 212 to access any of: one or more items of content 224; part of one or more items of content 224; one or more content chunks 225; etc.
Figure 2b schematically illustrates a particular example of the content metadata 223. Figure 2b shows content metadata 223 and one or more items of content 224-1 , 224-2 as described previously. Each item of content 224-1 , 224-2 comprises one or more respective content chunks 225-1 , 225-2 as described previously. Whilst figure 2b illustrates two items of content 224 (namely the items of content labeled 224-1 and 224- 2), it will be appreciated that this is merely for illustrative purposes and that different numbers of items of content 224 can be used. Similarly, whilst figure 2b illustrates each item of content 224 having three respective content chunks, it will be appreciated that this is merely for illustrative purposes and that different items of content 224 can have different respective numbers of content chunks 225.
The content metadata 223 comprises one or more content chunk references 2100. A content chunk reference 2100 may correspond to (or specify, identify or otherwise address or provide a link to) a respective content chunk 225-1 , 225-2 (as illustrated by the dashed lines in figure 2b). If the content metadata 223 comprises a plurality of content chunk references 2100 then one or more of these content chunk references 2100 may correspond to respective content chunks 225-1 of a first item of content 224-1 and one or more of these content chunk references 2100 may correspond to respective content chunks 225-2 of a second item of content 224-2. The second item of content 224-2 may be a variant of the first item of content 224-1 . Thus, in general, the content chunk references 2100 may correspond to respective content chunks 225 from different items of content 224. Indeed, the content metadata 223 may correspond to a particular amount of content (e.g. a particular movie), in that the content metadata 223 may comprise content chunk references 2100 corresponding to each content chunk 225 of each variant of an item of content 224 representing/encoding that amount of content. Alternatively, all of the content chunk references 2100 of the content metadata 223 may correspond to respective content chunks 225-1 , 225-2 of the same item of content 224-1 , 224-2. Each content chunk reference 2100 may comprise one or more of any of, or one or more of any part of any of: a Uniform Resource Indicator, Uniform Resource Locator, a hyperlink, a file system reference, an offset for an array (or a file or a data stream or a data sequence), a file name etc. For example the content metadata 223 may comprise a manifest (or variant playlist). The manifest (or variant playlist) may be any of an HTTP Live Streaming (HLS) variant playlist, a Dynamic Adaptive Streaming over HTTP (DASH) manifest, HTTP Dynamic Streaming (HDS) variant playlist, etc. A content chunk reference 2100 (or part thereof) may be formed by (or may follow or may comprise elements which follow) a content chunk naming convention used by the content provider 220. The content chunk naming convention may enable a content chunk reference 2100 (or a part of a content chunk reference 2100) to be formed based on one or more characteristics of the corresponding item of content 224. Such characteristics may comprise any of: encoding quality for the content, bit rate, regionalization, audio track, encoding format, title, author, publisher, director, timestamp, etc. For example, a content chunk reference 2100 may comprise a concatenation of the title of the corresponding item of content and the bit rate of the corresponding item of content separated by a predetermined character. The content chunk naming convention may be specific to any combination of any of: the content provider 220; the content player 212; the content device 210; and the network 230.
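As an illustration of such content metadata 223 and content chunk references 2100, the following Python sketch forms chunk references by concatenating a title, a bit rate and a chunk index with a predetermined separator, and assembles them into a simplified HLS-style media playlist; the separator, file extension, tag selection and URL layout are assumptions made for the example only.

```python
SEPARATOR = "_"  # the "predetermined character"; an assumption for illustration

def chunk_reference(title, bitrate, index, extension="ts"):
    # e.g. "myMovie_2500k_0007.ts"
    return f"{title}{SEPARATOR}{bitrate}{SEPARATOR}{index:04d}.{extension}"

def variant_metadata(base_url, title, bitrate, first_index, count, chunk_duration=10):
    """Build simplified HLS-style content metadata (a media playlist) whose
    chunk references follow the naming convention above."""
    lines = ["#EXTM3U",
             "#EXT-X-VERSION:3",
             f"#EXT-X-TARGETDURATION:{chunk_duration}",
             f"#EXT-X-MEDIA-SEQUENCE:{first_index}"]
    for i in range(first_index, first_index + count):
        lines.append(f"#EXTINF:{chunk_duration:.1f},")
        lines.append(f"{base_url}/{chunk_reference(title, bitrate, i)}")
    return "\n".join(lines)

print(variant_metadata("http://cdn.example.com/vod", "myMovie", "2500k", 0, 3))
```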
Figure 2c schematically illustrates a further example of content metadata 223. Figure 2c shows content metadata 223-a, one or more content metadata 223-b, and one or more items of content 224-1 , 224-2 as described above with reference to figure 2b. Each item of content 224-1 , 224-2 comprises one or more respective content chunks 225-1 , 225-2 as described above with reference to figure 2b.
The one or more content metadata 223-b are content metadata 223 as described above with reference to figure 2b, and shall be referred to below as one or more "variant content metadata" 223-b. The content metadata 223-a shall be referred to as "master content metadata" 223-a. The master content metadata 223-a may comprise content metadata 223 as described above with reference to figure 2b (and, therefore, may comprise one or more content chunk references 2100). However, the master content metadata 223-a comprises one or more content item references 2200.
A content item reference 2200 may correspond to (or specify, identify or otherwise address or provide a link to) one or more content chunk references 2100. For example, a content item reference 2200 may correspond to (or specify, identify or otherwise address or provide a link to) one or more of the variant content metadata 223- b. A content item reference 2200 may correspond to a single item of content 224-1 , 224- 2 (e.g. if the content item reference 2200 identifies variant content metadata 223-b which itself references a single item of content 224-1 , 224-2). A content item reference 2200 may correspond to two or more items of content 224-1 , 224-2 (e.g. if the content item reference 2200 identifies variant content metadata 223-b which itself references multiple items of content 224-1 , 224-2). A content item reference 2200 may comprise one or more of any of, or one or more of any part of any of: a Uniform Resource Indicator, Uniform Resource Locator, a hyperlink, a file system reference, an offset for an array (or a file or a data stream or a data sequence), a file name etc. For example, the master content metadata 223-a may comprise a playlist (or master playlist or master manifest). The playlist may be an HTTP Live Streaming (HLS) master playlist, HTTP Dynamic Streaming (HDS) master playlist, etc. A content item reference 2200 (or part thereof) may be formed by (or may follow or may comprise elements which follow) a content item naming convention used by the content provider 220. The content item naming convention may enable a content item reference 2200 (or a part of a content item reference 2200) to be formed based on one or more characteristics of the corresponding item of content 224. Such characteristics may comprise any of: encoding quality for the content, bit rate, regionalization, audio track, encoding format, title, author, publisher, director, timestamp, etc. For example, a content item reference 2200 may comprise a concatenation of the title of the corresponding item of content and the bit rate of the corresponding item of content separated by a predetermined character. The content chunk naming convention may be specific to any combination of any of: the content provider 220; the content player 212; the content device 210; and the network 230.
Whilst figure 2c illustrates the master content metadata 223-a referencing two variant content metadata 223-b via respective content item references 2200, it will be appreciated that this is merely for illustrative purposes and that the master content metadata 223-a may reference any number of variant content metadata 223-b via one or more content item references 2200.
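Similarly, a minimal sketch of master content metadata 223-a, whose content item references 2200 each point at variant content metadata 223-b for one bit rate, might look as follows (again using simplified HLS-style tags; the bandwidths, resolutions and file names are illustrative assumptions).

```python
def master_metadata(variants):
    """Build simplified HLS-style master content metadata: each entry is a
    content item reference to variant content metadata for one bit rate."""
    lines = ["#EXTM3U"]
    for bandwidth, resolution, uri in variants:
        lines.append(f"#EXT-X-STREAM-INF:BANDWIDTH={bandwidth},RESOLUTION={resolution}")
        lines.append(uri)  # content item reference, e.g. "myMovie_2500k.m3u8"
    return "\n".join(lines)

print(master_metadata([
    (800000, "640x360", "myMovie_800k.m3u8"),
    (2500000, "1280x720", "myMovie_2500k.m3u8"),
]))
```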
Figure 3 is a sequence diagram schematically illustrating an example method 300 for using the system 200 of figure 2a.
At an optional step 302, the content player 212 requests the content selection data 222. This request is sent to the content provider 220 via the network 230.
At an optional step 304, the content player 212 receives or obtains the content selection data 222 from the content provider 220. The content selection data 222 may be received via the network 230. The content selection data 222 may be received in response to the request of the step 302 (i.e. in response to the content provider 220 receiving the request at the step 302). Alternatively, the content selection data 222 may be received by the content player 212 without the need for any request like that in the step 302 - for example, the content provider 220 may initiate the transmission of the content selection data 222 to the content player 212.
As mentioned, the steps 302 and 304 are optional, because the content player 212 may already be storing, or may already have access to, the content selection data
222. For example, the content player 212 may have previously been provided with, or have previously obtained, the content selection data 222. Alternatively, the content player 212 may have previously carried out either or both of the steps 302 and 304 and may, then, have stored the content selection data 222 for subsequent use. In this case, the steps 302 and 304 need not be repeated and the content player 212 can simply access or obtain the stored content selection data 222.
At a step 306, content is selected (or identified or otherwise chosen). This may be performed in any of a number of available ways, examples of which are set out below.
The step 306 may comprise a user 201 selecting content using the content player 212. The step 306 may comprise a user 201 selecting content using an additional device or system communicatively interfaced with the content player 212 (such as a remote control device, e.g. a traditional infra-red remote control device; a mobile computing device - e.g. a tablet computer or a mobile phone - running remote control software; a browser based EPG application running on the client device 210 or on a mobile device connected to the client device - e.g. via a wireless network connection or a Bluetooth connection; etc.). The user 201 may select the content based on the content selection data 222. The content player 212 (or the additional device) may have previously displayed (or processed or otherwise rendered) the content selection data 222 so as to enable the user 201 to select the content. If the content selection data 222 comprises EPG data, the content player 212 (or the additional device) may display a corresponding EPG (based on that EPG data) to the user 201 . The user 201 may select the content by interacting with the EPG. EPGs (and methods of displaying and interacting with EPGs) are well-known and shall not, therefore, be described in more detail herein.
Alternatively, the content player 212 may select the content itself. The content player 212 may have stored (or have access to or otherwise be able to generate) rules (or methods or parameters) to be processed by the content player 212 to enable the content player 212 to automatically select content. These rules may be based in whole or in part on one or more of: previous user content selections, user preferences, user information, previous user actions, content selection data 222, etc. For example, the content player 212 may automatically select content related to previously selected content. For example, the content player 212 may select content which comprises an episode of a television serial for which the user 201 has previously selected content comprising a previous episode of that television serial.
At an optional step 308, the content player 212 requests master content metadata 223-a (described above with reference to figure 2c) based on, or
corresponding to, the selected content. The content player 212 requests the master content metadata 223-a from the content provider 220. The content player 212 may use the content selection data 222 to request the master content metadata 223-a. For example, the content selection data 222 may comprise data that identifies (or which the content player 212 can use to identify) a content provider 220 associated with the selected content and to which the request for the master content metadata 223-a should be issued or communicated, and the content player 212 may then issue or communicate the request for the master content metadata 223-a to the identified content provider 220, where this request identifies the selected content (so that the content provider 220 knows which master content metadata 223-a to provide back to the content player 212, namely master content metadata 223-a corresponding to the selected content).
Alternatively, the content player 212 may send the request to a predetermined content provider 220 (e.g. a content provider 220 that corresponds to the content player 212). The identification, in the request, of the selected content may be based on data in the content selection data 222 (e.g. a content identifier).
At an optional step 310 the content player 212 receives the master content metadata 223-a from the content provider 220, i.e. the content provider 220 receives the request issued at the step 308 and provides corresponding master content metadata 223-a to the content player 212. If the master content metadata 223-a corresponds to an item of content 224, said item of content 224 corresponds to the selected content.
The steps 308 and 310 are optional because, as described above, there are numerous configurations for content metadata 223 (such as those described with reference to figures 2b and 2c). The steps 308 and 310 may be carried out when master content metadata 223-a is being used (as in the example shown in figure 2c), but may be omitted when master content metadata 223-a is not being used (as in the example shown in figure 2b).
If the steps 308 and 310 are carried out, so that the content player 212 has received the master content metadata 223-a from the content provider 220 at the step 310 then, at a step 312, the content player 212 requests content metadata 223 from the content provider 220. The content metadata 223 is variant content metadata 223-b which has been described previously with reference to figure 2c. The content player 212 uses the master content metadata 223-a to identify variant content metadata 223-b to request from the content provider 220. In particular, the content player 212 may use one or more content item references 2200 of the master content metadata 223-a to identify the variant content metadata 223-b. The master content metadata 223-a may, for example, comprise one or more content item references 2200 that correspond to the selected content (e.g. each content item reference 2200 corresponds to a variant of an item of content encoding the selected content in a different way or with different characteristics), and the content player 212 may select one of those one or more content item references 2200 (e.g. one that matches, or corresponds to, a desired rendering quality or bit rate) and then request variant content metadata 223-b identified by that selected content item reference 2200.
Alternatively, if the steps 308 and 310 are not carried out, then, at the step 312, the content player 212 may request content metadata 223 from the content provider 220. This content metadata 223 is content metadata 223 as has been described previously with reference to figure 2b. The content player 212 requests the content metadata 223 based on the selected content. The content player 212 may use the content selection data 222 to request the content metadata 223. For example, the content selection data 222 may comprise data that identifies (or which the content player 212 can use to identify) a content provider 220 associated with the selected content and to which the request for the content metadata 223 should be issued or communicated, and the content player 212 may then issue or communicate the request for the content metadata 223 to the identified content provider 220, where this request identifies the selected content (so that the content provider 220 knows which content metadata 223 to provide back to the content player 212, namely content metadata 223 corresponding to the selected content). Alternatively, the content player 212 may send the request to a predetermined content provider 220 (e.g. a content provider 220 that corresponds to the content player 212). The identification, in the request, of the selected content may be based on data in the content selection data 222 (e.g. a content identifier). The request may identify or specify a desired rendering quality or bit rate (or other characteristic for the selected content), so that the content provider 220 may select content metadata 223 corresponding to that desired rendering quality or bit rate (or other characteristic for the selected content).
At a step 314, the content player 212 receives the content metadata 223 from the content provider 220, i.e. the content provider 220 receives the request issued at the step 312 and provides corresponding content metadata 223 to the content player 212. At a step 316, the content player 212 requests a content chunk 225 using at least one content chunk reference 2100 of the content metadata 223 received at the step 314. As discussed above, a content chunk reference 2100 identifies, or addresses or links to, a corresponding content chunk 225. Thus, the content player 212 may access, or request/obtain, a content chunk 225 identified, or addressed or linked to, by a content chunk reference 2100 in the received content metadata 223. The step 316 may comprise the content player 212 requesting the content chunk 225 from the content provider 220.
At a step 318, the content player 212 receives the requested content chunk 225. The step 318 may comprise the content player 212 receiving the content chunk 225 from the content provider 220.
At a step 322, the content player 212 renders, or outputs, a section of content 320. The section of content 320 comprises part of the item of content 224 for the selected content. The section of content 320 comprises some or all of the content of the content chunk 225 received at the step 318.
At an optional step 326 the content player 212 requests a further content chunk
225 (e.g. a content chunk 225 sequentially next (in a rendering order for the selected content) after the content chunk 225 received at the step 318) using one or more content chunk references 2100 of the content metadata 223 received at the step 314. The content player 212 requests the further content chunk 225 from the content provider 220. The step 326 may be initiated (or carried out) during (or concurrent with or partially concurrent with) the step 322, so that this further content chunk 225 is received and ready for rendering before rendering of the current content chunk 225 has finished (thereby providing seamless content output). At an optional step 328 the content player 212 receives the further content chunk 225 from the content provider 220. At an optional step 332, the content player 212 renders, or outputs, a further section of content 330.
This further section of content 330 comprises part of the item of content 224 for the selected content. This further section of content 330 comprises some or all of the content of the content chunk 225 received at the step 328. Thus, the steps 326, 328 and 332 are repeats of the steps 316, 318 and 322 respectively, in respect of further, or subsequent, content chunks identified (or referenced) by the content metadata 223 received at the step 314. The steps 326, 328 and 332 may be repeated in respect of yet further, or subsequent, content chunks identified (or referenced) by the content metadata 223 received at the step 314. The sequence of steps 312, 314, 316, 318 and 322 (possibly with the optional steps 308 and 310 and/or the optional steps 326, 328 and 332) may be repeated one or more times. This may occur for a number of reasons, such as: (a) new content may be selected, e.g. by the user 201 (i.e. the step 306 may be performed again), which may require new content metadata 223 referencing the newly selected content; (b) the conditions under which the device 210 and/or the content player 212 and/or the network 230 is/are operating may change, making the current item of content 224 inappropriate so that it would be more appropriate to obtain content chunks 225 from a variant of the current item of content 224 (e.g. if network conditions change so that the download speed to the device 210 decreases or increases, then a variant of the current item of content 224 encoded at a lower or higher bit-rate may be more appropriate), which may require new content metadata 223; (c) the content metadata 223 obtained at the step 314 may contain content chunk references 2100 for only a subset of the possible content chunks 225 of the item of content 224, so that new content metadata 223 may be required in order to obtain and render further content chunks 225 of the item of content 224 (this may be particularly true for live content). The repetition of the steps 312, 314 (and possibly the optional steps 308 and 310 too) may be performed whilst content is being rendered at the step 322 (and/or at the optional step 332), so that the time taken to perform these steps is in parallel with the content rendering that is being performed, thereby ensuring continuous/seamless content rendering (i.e. the further metadata 223 may be obtained in advance of when it is required).
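As a rough sketch of the interleaving of the steps 322, 326 and 328 described above, the following fetches the next content chunk in the background while the current one is rendered; fetch() and render_chunk() are hypothetical stand-ins for the player's actual download and rendering routines.

```python
# Sketch of overlapping rendering (step 322/332) with fetching of the next chunk
# (steps 326/328); fetch() and render_chunk() are hypothetical stand-ins.
from concurrent.futures import ThreadPoolExecutor

def play(chunk_refs, fetch, render_chunk):
    with ThreadPoolExecutor(max_workers=1) as pool:
        pending = pool.submit(fetch, chunk_refs[0])        # steps 316/318
        for next_ref in list(chunk_refs[1:]) + [None]:
            chunk = pending.result()                       # current chunk is ready
            if next_ref is not None:
                pending = pool.submit(fetch, next_ref)     # step 326 in the background
            render_chunk(chunk)                            # step 322 (or 332)
```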
In any of the steps above where the content player 212 requests data from the content provider 220 the following will be appreciated. If the content provider 220 comprises a plurality of computer systems 100 the request may be sent to (or received at) one or more computer systems 100 of the plurality of computer systems 100. The one or more computer systems 100 may be chosen based upon one or more criteria, such as: geographical location, latency, access permissions, the content selection data 222 itself, etc. The request may comprise any type of request, such as an HTTP GET request, an FTP GET request, etc. The request may comprise a conditional request, for example an HTTP conditional GET request.
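For instance, an HTTP conditional GET (one of the request types mentioned above) could be issued roughly as follows; the URL and the ETag value are hypothetical.

```python
# Sketch of an HTTP conditional GET for content metadata; URL and ETag are hypothetical.
import urllib.request, urllib.error

req = urllib.request.Request(
    "https://content-provider.example.com/content/movie-42/variant-high.m3u8",
    headers={"If-None-Match": '"abc123"'},   # validator from a previous response
)
try:
    with urllib.request.urlopen(req) as resp:
        metadata = resp.read()               # 200: the metadata has changed
except urllib.error.HTTPError as err:
    if err.code == 304:
        metadata = None                      # 304 Not Modified: reuse the cached copy
    else:
        raise
```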
In any of the steps above where the content player 212 receives data (such as content metadata 223, master content metadata 223-a, variant content metadata 223-b, content selection data 222, content chunks 225, etc.) from the content provider 220 the following will be appreciated. If the content provider 220 comprises a plurality of computer systems 100 the data may be received from (or sent by) one or more particular computer systems 100 of the plurality of computer systems 100. The one or more computer systems 100 may be chosen based upon one or more criteria, such as:
geographical location, latency, access permissions, the content selection data 222 itself, etc. If the data is received in response to a request, the data may be received from one or more of the entities (or content providers 220 or servers or computer systems 100) to which the request was sent. If the data is received in response to a request, the data may be received from one or more entities (or content providers 220 or servers or computer systems 100) other than the entities to which the request was sent.
A request for data (for example, content selection data 222 at the step 302;
content metadata 223 at the steps 308 and 312; a content chunk 225 at the steps 316, 326; etc.) and a corresponding subsequent receiving of the requested data (such as described in the steps 304, 310, 314, 318 and 328) may be performed by the content player 212 generating and sending a request to the content provider 220, and the content provider 220, in response to receiving that request, sending a corresponding response back to the content player 212. Alternatively, one or more of these steps may be carried out by the content player 212 itself fetching (or obtaining and accessing) the data from the content provider 220 without the content provider 220 itself actually receiving and processing that request - this may be referred to herein as a single "fetch" operation by the content player 212. It will be appreciated that the same applies analogously to other request/receipt operations described herein below.
A problem with the system 200 and the method 300 is that, when the user 201 selects new content (or a new variant of an item of content) to be rendered, then the steps 312 and 314 (and potentially the steps 308 and 310 if master content metadata
223-a is being used) need to be repeated. This inevitably causes a delay between the time that the user 201 makes the new selection and the time at which content rendering can be performed at the step 322 for the newly selected content (or newly selected variant). This delay is often too long for a sufficiently seamless user experience and therefore causes inconvenience and delay to a user (for example, if the user 201 is
"channel hopping" and therefore wishes to change "channel" a significant number of times in order to preview content on different channels, then this kind of delay imposes a significant delay and inefficiency to that channel hopping process). Embodiments of the invention therefore aim to reduce the above-mentioned delay that occurs when the user 201 selects new content (or a new variant of an item of content) by arranging for the system 200 to be able to operate without having to re- perform the steps 312 and 314 (and potentially the steps 308 and 310 if master content metadata 223-a is being used) and/or by making the re-performance of those steps more efficient or faster. Ways of achieving this are set out below.
Figure 4 schematically illustrates an exemplary system 400 according to one embodiment of the invention. The system 400 is the same as the system 200 of figure 2, except as described below. Therefore, features in common to the system 400 and the system 200 have the same reference numeral and shall not be described again.
In the system 400, the content player 212 comprises, or is arranged to execute, a metadata generation module 412. The metadata generation module 412 is arranged to generate (or create or form) content metadata 223-1 based at least in part on content selection data 422.
The content selection data 422 may comprise the original/initial content selection data 222 of the system 200 along with additional data, such as one or more of:
(a) A compressed version of some or all of the content metadata 223-1 .
(b) Data associated with the content metadata 223-1 .
(c) A content metadata template.
(d) An application or software (such as bytecode, a script, or some otherwise executable set of instructions).
(e) A timestamp.
(f) One or more content item references 2200 (as described previously with reference to figure 2c) corresponding to one or more respective predetermined or default (or generic) items of content 224. Such content item references 2200 will be referred to hereafter as "default content item references".
(g) One or more content chunk references 2100 (as described previously with reference to figure 2b) corresponding to one or more respective content chunks 225 of one or more respective predetermined or default (or generic) items of content 224. Such content chunk references 2100 will be referred to hereafter as "default content chunk references".
Alternatively, the content selection data 222 which the content provider 220 of the system 200 would have provided may remain unchanged in the system 400, and the content player 212 may receive additional data (such as one or more of the additional data (a)-(g) mentioned above) along with, but potentially separate from, the "original" content selection data 222 at the time that the content player 212 obtains the "original" content selection data 222 (for example, as part of the same operation for obtaining the "original" content selection data 222 from the content provider 220). This additional data, together with the "original" content selection data 222, may be viewed, in the system 400, as forming new content selection data 422 that the metadata generation module 412 uses to generate the content metadata 223-1 .
Thus, the content provider 220 may be arranged to generate and provide/output the content selection data 422 (such as any of the above-mentioned content selection data 422).
A predetermined or default item of content 224 (corresponding to a default content item reference or to a default content chunk reference) may comprise any type of predetermined content such as: advertising content; trailers (or previews) of further items of content 224; idents (such as animated channel/content provider logos); etc. A default item of content 224 may comprise a portion of the selected content (for example from the start of the selected content) encoded such that the default item of content 224 has predetermined characteristics. For example, a default item of content may comprise an initial portion of the selected content encoded at a predetermined bitrate. As can be seen below, the use of such default items of content 224 enables, for example, the content player 212 to render a default item of content 224 shortly after an item of content 224 has been selected and whilst the content player 212 is still obtaining content chunks 225 of the said selected item of content 224 or metadata 223 for the selected item of content 224. By having the default content item references or the default content chunk references already available via the content selection data 422, the content player 212 can request/obtain content chunks 225 more quickly. This reduces the problematic delay described previously.
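A content selection data 422 structure carrying default content chunk references might, purely as an illustrative sketch, look like the following; every field name and URL is an assumption rather than a prescribed format.

```python
# Hypothetical content selection data 422 carrying default content chunk references
# (additional data type (g)); field names and URLs are assumptions.
content_selection_data_422 = {
    "content_id": "channel-7",
    "title": "Channel 7 Live",
    "default_chunk_references": [
        # e.g. an ident, or an initial portion of the selected content at a low bit-rate
        "https://content-provider.example.com/defaults/channel-7-000.ts",
        "https://content-provider.example.com/defaults/channel-7-001.ts",
    ],
}

# The content player 212 can start requesting these chunks at once (steps 316-322),
# while metadata 223 for the selected item of content is still being obtained.
initial_refs = content_selection_data_422["default_chunk_references"]
```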
The metadata generation module 412 may be arranged to generate content metadata 223-1 based at least in part on some or all of the additional data (a)-(g).
Examples of this shall be described in more detail below. It will, however, be
appreciated that embodiments of the invention may make use of other types of additional data instead, or in addition, in order to make the method 300 of figure 3 more efficient. The content metadata 223-1 generated by the metadata generation module 412 may comprise any of: content metadata 223 as described previously with reference to figure 2b; master content metadata 223-a as described previously with reference to figure 2c; or variant content metadata 223-b as described previously with reference to figure 2c. The content metadata 223-1 may be identical (or equivalent or correspond) to the content metadata 223 (or the master content metadata 223-a, or the variant content metadata 223-b) that the content provider 220 of the system 200 would have provided at the step 310 or 314 (or at least a part thereof).
If the content metadata 223-1 comprises content metadata 223 or variant content metadata 223-b, the content metadata 223-1 may comprise one or more of: one or more content chunk references 2100 that are equivalent or correspond to, or the same as, one or more content references 2100 that the content provider 220 of the system 200 would have provided at the step 314; or one or more default content chunk references.
If the content metadata 223-1 comprises master content metadata 223-a, the content metadata 223-1 may comprise one or more of: one or more content item references 2200 that are equivalent or correspond to, or the same as, one or more content item references 2200 that the content provider 220 of the system 200 would have provided at the step 310; or one or more default content item references.
Figure 5a is a sequence diagram schematically illustrating an example method 500-a for using the system 400 of figure 4. The method 500-a is the same as the method 300 of figure 3, except as described below. Therefore, steps in common to the method 500-a and the method 300 have the same reference numeral and shall not be described again.
As shown in figure 5a, the method 500-a differs from the method 300 first in that the step 304 is replaced by a step 504. The step 504 is the same as the step 304, except that instead of the content player 212 receiving the content selection data 222, the content player 212 receives the above-mentioned content selection data 422. The content provider 220 may, therefore, be arranged to generate the content selection data 422.
Additionally, as shown in figure 5a, the method 500-a also differs from the method 300 in that the steps 308 and 310 in the method 300 are replaced by a step 508. In the step 508 the metadata generation module 412 generates the content metadata 223-1 based, at least in part, on the content selection data 422. The content metadata 223-1 is master content metadata 223-a (as described above with reference to figure 2c).
The step 508 may comprise the metadata generation module 412 populating (or otherwise filling) a content metadata template using data associated with the content metadata 223-1 . The content metadata template and/or the data associated with the content metadata 223-1 may have been provided as part of the content selection data 422 in the step 504 (i.e. additional data types (b) and (c) mentioned above) and may, therefore, have been generated by the content provider 220. The content metadata template and/or the data associated with the content metadata 223-1 may already be stored in (or be otherwise locally accessible to) the content player 212 or the device 210. The data may be some or all of the data that would have formed the content metadata 223-a provided at the step 310 of the method 300.
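A very small sketch of populating a content metadata template follows, assuming an HLS-style playlist; the template text and the substitution values are hypothetical.

```python
# Sketch of the metadata generation module 412 filling a content metadata template
# (additional data types (b) and (c)); template text and values are hypothetical.
from string import Template

template = Template(
    "#EXTM3U\n"
    "#EXT-X-TARGETDURATION:$duration\n"
    "#EXTINF:$duration,\n"
    "$base/$content_id/chunk-000.ts\n"
)

content_metadata_223_1 = template.substitute(
    duration=10,
    base="https://content-provider.example.com/content",
    content_id="movie-42",
)
```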
The step 508 may comprise the metadata generation module 412 decompressing (or otherwise extracting at least part of the content metadata 223-1 from) a compressed version of part or all of the content metadata 223-1. This compressed version may therefore have been generated by the content provider 220 by compressing content metadata 223 stored at, or generated by, the content provider 220. The compressed version of part or all of the content metadata 223-1 may have been provided as part of the content selection data 422 in the step 504 (i.e. additional data type (a) mentioned above). The compressed data may be a compressed version of some or all of the data that would have formed the content metadata 223-a provided at the step 310 of the method 300.
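If the compressed copy were, say, gzip-compressed (an assumption; any scheme agreed with the content provider 220 would do), the extraction could look like this; the field name is hypothetical.

```python
# Sketch of extracting content metadata 223-1 from a compressed copy carried in the
# content selection data 422 (additional data type (a)); the field name is hypothetical.
import gzip

compressed = content_selection_data_422["compressed_metadata"]
content_metadata_223_1 = gzip.decompress(compressed).decode("utf-8")
```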
The step 508 may comprise the metadata generation module 412 executing an application or software (e.g. bytecode or a script or an otherwise executable set of instructions) - this means that a legacy content player 212 may be configured to carry out the step 508 by provision of this new application or software (e.g. as a plug-in module). The application or software may have been provided as part of the content selection data 422 in the step 504 (i.e. additional data type (d) mentioned above). For example, the application or software may be configured to generate a content item reference 2200 using the predetermined format or naming convention used by the content provider 220, based on the selected content.
The generation of the content metadata 223 may be based at least in part on a network time synchronization signal. In particular, the step 508 may comprise the metadata generation module 412 using: a time based algorithm; and a time value substantially synchronized with the content provider 220 using the network time synchronization signal. Thus, the metadata generation module 412 may make use of a timestamp associated with the selected content in order to generate the content metadata 223-1 . This may be used, for example, where the format of a content item reference 2200 makes use of a timestamp (or timecode, i.e. additional data type (e) mentioned above) for content to which that content item reference 2200 corresponds (as described above) - thus, the metadata generation module 412 may use the timestamp associated with content to help create a content item reference 2200 for that content.
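As an illustration of such time-based generation, the following sketch assumes (purely hypothetically) that the content provider names variant playlists by a per-minute UTC timecode and that the local clock is synchronized with the provider (e.g. via NTP); the URL format and timecode granularity are assumptions.

```python
# Sketch of generating a content item reference 2200 from a naming convention and a
# synchronized clock; the URL format and timecode granularity are assumptions.
import time

def make_item_reference(content_id, now=None):
    timecode = time.strftime("%Y%m%d%H%M",
                             time.gmtime(now if now is not None else time.time()))
    return (f"https://content-provider.example.com/live/{content_id}/"
            f"{timecode}/variant-high.m3u8")

reference_2200 = make_item_reference("channel-7")
```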
The step 508 may comprise the metadata generation module 412 extracting (or reading or obtaining) default content item references (i.e. additional data type (f) mentioned above) from the content selection data 422, and using these default content item references to form the metadata 223-1 .
Figure 5b is a sequence diagram schematically illustrating an example method 500-b for using the system 400 of figure 4. The method 500-b is the same as the method 300 of figure 3, except as described below. Therefore, steps in common to the method 500-b and the method 300 have the same reference numeral and shall not be described again.
As shown in figure 5b, the method 500-b differs from the method 300 first in that the step 304 is replaced by a step 504. The step 504 is the same as the step 304, except that instead of the content player 212 receiving the content selection data 222, the content player 212 receives the above-mentioned content selection data 422. The content provider 220 may, therefore, be arranged to generate the content selection data 422.
Additionally, as shown in figure 5b, the method 500-b also differs from the method 300 in that the steps 312 and 314 in the method 300 are replaced by a step 512. In the step 512 the metadata generation module 412 generates content metadata 223-1 . This step 512 is performed in substantially the same way as the step 508 described above (using any of the above-mentioned additional data (a)-(e) of the content selection data 422), except that the metadata 223-1 generated at the step 512 is either (i) variant content metadata 223-b (as described above with reference to figure 2c) if the steps 308 and 310 are performed, or (ii) content metadata 223 (as described above with reference to figure 2b) if the steps 308 and 310 are not performed. The step 512 may comprise the metadata generation module 412 extracting (or reading or obtaining) default content chunk references (i.e. additional data type (g) mentioned above) from the content selection data 422, and using these default content chunk references to form the metadata 223-1 .
Figure 5c is a sequence diagram schematically illustrating an example method
500-c for using the system 400 of figure 4. The method 500-c is the same as the method 300 of figure 3, except as described below. Therefore, steps in common to the method 500-c and the method 300 have the same reference numeral and shall not be described again.
As shown in figure 5c, the method 500-c differs from the method 300 first in that the step 304 is replaced by a step 504. The step 504 is the same as the step 304, except that instead of the content player 212 receiving the content selection data 222, the content player 212 receives the above-mentioned content selection data 422. The content provider 220 may, therefore, be arranged to generate the content selection data 422.
Additionally, as shown in figure 5c, the method 500-c also differs from the method 300 in that the steps 308 and 310 in the method 300 are replaced by the step 508 as described above with reference to figure 5a.
Additionally, as shown in figure 5c, the method 500-c also differs from the method 300 in that the steps 312 and 314 in the method 300 are replaced by the step 512 as described above with reference to figure 5b. The content metadata 223-1 generated at the step 512 is variant content metadata 223-b. The metadata generation module 412 may generate the variant content metadata 223-b based at least in part on the master content metadata 223-a generated at the step 508.
Thus, the methods 500-a, 500-b and 500-c reduce the above-mentioned problematic delay by avoiding having to perform the steps 308 and 310 and/or the steps 312 and 314. This is achieved via the metadata generation module 412 itself generating some or all of the content metadata 223 instead of having to request and receive (or otherwise obtain) that content metadata 223 from the content provider 220 via the network 230. The metadata generation module 412 is enabled to do this using the content selection data 422.
In an analogous manner as for figure 3, the sequence of steps 312, 314 (or the step 512 if used in place of the steps 312 and 314), 316, 318 and 322 (possibly with the optional steps 308 and 310, or the step 508 if used in place of the steps 308 and 310, and/or the optional steps 326, 328 and 332) may be repeated one or more times. This may occur for a number of reasons, such as: (a) new content may be selected, e.g. by the user 201 (i.e. the step 306 may be performed again), which may require new content metadata 223 referencing the newly selected content; (b) the conditions under which the device 210 and/or the content player 212 and/or the network 230 is/are operating may change, making the current item of content 224 inappropriate so that it would be more appropriate to obtain content chunks 225 from a variant of the current item of content 224 (e.g. if network conditions change so that the download speed to the device 210 decreases or increases, then a variant of the current item of content 224 encoded at a lower or higher bit-rate may be more appropriate), which may require new content metadata 223; (c) the content metadata 223 obtained at the step 314 or 512 may contain content chunk references 2100 for only a subset of the possible content chunks 225 of the item of content 224, so that new content metadata 223 may be required in order to obtain and render further content chunks 225 of the item of content 224 (this may be particularly true for live content). The repetition of the steps 312, 314 (or the step 512 if used in place of the steps 312 and 314), and possibly the optional steps 308 and 310 too (or the step 508 if used in place of the steps 308 and 310) may be performed whilst content is being rendered at the step 322 (and/or at the optional step 332), so that the time taken to perform these steps is in parallel with the content rendering that is being performed, thereby ensuring continuous/seamless content rendering (i.e. the further metadata 223 may be obtained in advance of when it is required).
In some embodiments, the content player 212, having generated content metadata 223 (by virtue of the step 508 and/or the step 512) may obtain further content metadata 223 from the content provider 220 (i.e. without the content player 212 generating that further content metadata 223 itself). This may be achieved using the steps 312 and 314 (and possibly the steps 308 and 310 too), as discussed above with reference to figure 3. Whilst these steps themselves take longer than having the content player 212 itself generate content metadata 223, it will be appreciated that (a) by initially having the content player 212 generate initial content metadata 223, the content player is able to start rendering content to the user 201 sooner whilst (b) the extra time incurred in having the content player 212 obtain further content metadata 223 from the content provider 220 is less important, as this further content metadata 223 can be obtained, for example, whilst content is being rendered (i.e. as a background process, or in parallel), so that this extra time that is incurred is not noticeable to the user 201 . Thus further content metadata 223 may then be used in the same way as described above with respect to figure 3.
Figure 6 schematically illustrates an exemplary system 600 according to one embodiment of the invention. The system 600 is the same as the system 200 of figure 2, except as described below. Therefore, features in common to the system 600 and the system 200 have the same reference numeral and shall not be described again. In particular, the system 600 further comprises a metadata generator 610 and, optionally, a local network 630. The metadata generator 610 is a physically separate and/or logically separate unit/entity from the content player 212 (and possibly from the device 210).
The local network 630 may be any kind of data communication network suitable for communicating or transferring data: (a) between the device 210 and the metadata generator 610; and (b) between the metadata generator 610 and the network 230. Thus, the network 630 may comprise one or more of: a local area network, a wired or cable communication network, a WiFi network, etc. The device 210 and the metadata generator 610 may be arranged to communicate with each other via the local network 630 via any suitable data communication protocol. For example, when the local network 630 comprises a local area network, the data communication protocol may be TCP/IP, UDP, SCTP, etc. The metadata generator 610 and the content provider 220 may be arranged to communicate with each other via the local network 630 and the network 230 via any suitable data communication protocol or protocols. For example, when the local network 630 comprises a local area network and the network 230 comprises the Internet, the data communication protocol may be TCP/IP, UDP, SCTP, etc. The device 210 may communicate with the content provider 220 via the local network 630 and the network 230 or, potentially, just via the network 230 (as illustrated by the dashed line in figure 6).
The local network 630 is optional as: the device 210 may be arranged to communicate directly with the metadata generator 610 via any suitable data
communication protocol; and the metadata generator 610 may be connected directly to the network 230 and thus arranged to communicate with the content provider 220 in an analogous manner to the device 210 in figure 2 (so that the local network 630 may be viewed as a local part of the existing network 230).
By using such a local network or direct connection it will be appreciated that one or more of the following advantages will be obtained: (a) the speed of communication between the content player 212 and the metadata generator 610 will be greater than the speed of communication between the content player 212 and the content provider 220; and/or (b) the latency of the communication between the content player 212 and the metadata generator 610 will be less than the latency of the communication between the content player 212 and the content provider 220. The metadata generator 610 may, therefore, be viewed as being "local" to the device 210, with the content provider 220 being "remote" from the device 210 and "remote" from the metadata generator 610, in that (a) the communication rate is higher between the device 210 and the metadata generator 610 than between the device 210 and the content provider 220; and/or (b) communication latency is lower between the device 210 and the metadata generator 610 than between the device 210 and the content provider 220. This may be due to the fact that the metadata generator 610 is geographically nearer to the device 210 than the content provider 220 is to the device 210.
The metadata generator 610 comprises, or is arranged to execute, a metadata generation module 612. The metadata generation module 612 is arranged to generate (or create or form) content metadata 223-1 based at least in part on the content selection data 422. This may be performed using any of the techniques described above with reference to the metadata generation module 412 of figure 4, based on the above-described additional data in the content selection data 422. The metadata generation module 612 may obtain, or be provided with, the content selection data 422 from the content provider 220. Alternatively, the metadata generation module 612 (as a proxy for the content provider 220) may be configured to generate the content selection data 422 itself (in an analogous manner to the content provider 220).
The metadata generator 610 may be a computer system, such as the exemplary computer system 100 shown in figure 1 .
As shall be described in more detail below, the metadata generator 610 acts as a proxy (i.e. a local proxy) of the content provider 220, at least in respect of certain acts or tasks of the content provider 220. This shall be referred to herein as the metadata generator 610 "proxying" those acts or tasks. Such proxying may involve the metadata generator 610 receiving data/requests from the content player 212, where these data/requests are intended for the content provider 220. It will be appreciated that this may be transparent, i.e. the content player 212 may attempt/intend to send some data/requests to the content provider 220, with the metadata generator 610 intercepting the data/requests or with those data/requests being re-routed or re-directed to the metadata generator 610. Alternatively, the content player 212 may attempt/intend to send some data/requests to the metadata generator 610 (as opposed to attempting/intending to send data/requests to the content provider 220). Such proxying may involve the metadata generator 610 passing some data/requests to the content provider 220 (either in the originally received form or as re-formatted/re-generated data/requests). Such proxying may involve the metadata generator 610 processing some data/requests on behalf of the content provider 220 instead of passing those data/requests on to the content provider 220. Similarly, such proxying may involve the metadata generator 610 receiving data from the content provider 220, where this data is intended for the content player 212. The metadata generator 610 may then pass this data to the content player 212 (either in the originally received form or as re-formatted/re-generated data).
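A bare-bones sketch of such a proxying metadata generator follows, assuming plain HTTP between the content player and the proxy; the port, the path convention and the generation placeholder are all hypothetical. A real deployment would also forward query strings, headers and error codes, but the division between locally generated metadata and passed-through requests is the point being illustrated.

```python
# Sketch of the metadata generator 610 proxying requests from the content player 212:
# metadata requests are answered locally, other requests are passed through to the
# content provider 220. Port, paths and generation logic are hypothetical.
from http.server import BaseHTTPRequestHandler, HTTPServer
import urllib.request

PROVIDER = "https://content-provider.example.com"

class MetadataProxy(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path.endswith(".m3u8"):
            body = self.generate_metadata(self.path)              # generate locally
        else:
            with urllib.request.urlopen(PROVIDER + self.path) as resp:
                body = resp.read()                                # pass through
        self.send_response(200)
        self.end_headers()
        self.wfile.write(body)

    def generate_metadata(self, path):
        # placeholder for the metadata generation module 612
        return b"#EXTM3U\n"

if __name__ == "__main__":
    HTTPServer(("", 8080), MetadataProxy).serve_forever()
```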
Figure 7 is a sequence diagram schematically illustrating an example method 700 for using the system 600 of figure 6. The method 700 is the same as the method 300 of figure 3, except as described below. Therefore, steps in common to the method 700 and the method 300 have the same reference numeral and shall not be described again, except where variations on those steps are possible in the system 600.
Firstly, the optional steps 302 and 304 of the method 300 may be proxied by the metadata generator 610. Thus, the request in optional step 302 may be first sent from the content player 212 to the metadata generator 610 via the local network 630. The metadata generator 610 may then act as a proxy for the content provider 220 as discussed above. In particular, the metadata generator 610 may then send the same request or an equivalent request or a newly-generated request to the content provider 220 via the network 230 at a step 302a. In this case, the content selection data 422 received by the content player 212 in the optional step 304 may be received from the metadata generator 610 - the metadata generator 610 will receive the content selection data 422 from the content provider 220 at a step 304a and pass the content selection data 422 on to the content player 212. Alternatively, the metadata generator 610 may be able to generate and/or provide the content selection data 422 to the content player 212 without having to obtain the content selection data 422 from the content provider 220 - thus, the optional steps 302 and 304 may no longer involve the content provider 220 (so that the steps 302a and 304a are not performed), but involve the metadata generator 610 in the place of the content provider 220 (as illustrated by the dotted line 303 in figure 7).
The optional steps 308 and 310 of the method 300 may be proxied by the metadata generator 610. Thus, the request in optional step 308 may be first sent from the content player 212 to the metadata generator 610 via the local network 630. The metadata generator 610 may then act as a proxy for the content provider 220 as discussed above. In particular, the metadata generation module 612 may then send the same request or an equivalent request or a newly-generated request to the content provider 220 via the network 230 at a step 308a. In this case, the master content metadata 223-a received by the content player 212 in the optional step 310 may be received from the metadata generation module 612 - the metadata generation module 612 will receive the master content metadata 223-a from the content provider 220 at a step 310a and pass the master content metadata 223-a on to the content player 212. Alternatively, the metadata generation module 612 may be able to generate and/or provide the master content metadata 223-a to the content player 212 without having to obtain the master content metadata 223-a from the content provider 220 - thus, the optional steps 308 and 310 may no longer involve the content provider 220 (so that the steps 308a and 310a are not performed), but involve the metadata generator 610 (or the metadata generation module 612) in the place of the content provider 220 (as illustrated by the dotted line 309 in figure 7).
The steps 312 and 314 of the method 300 may be proxied by the metadata generator 610. Thus, the request in step 312 may be first sent from the content player 212 to the metadata generator 610 via the local network 630. The metadata generator 610 may then act as a proxy for the content provider 220 as discussed above. In particular, the metadata generation module 612 may then send the same request or an equivalent request or a newly-generated request to the content provider 220 via the network 230 at a step 312a. In this case, the content metadata 223 received by the content player 212 in the step 314 may be received from the metadata generation module 612 - the metadata generation module 612 will receive the content metadata 223 from the content provider 220 at a step 314a and pass the content metadata 223 on to the content player 212. Alternatively, the metadata generation module 612 may be able to generate and/or provide the content metadata 223 to the content player 212 without having to obtain the content metadata 223 from the content provider 220 - thus, the steps 312 and 314 may no longer involve the content provider 220 (so that the steps 312a and 314a are not performed), but involve the metadata generator 610 (or the metadata generation module 612) in the place of the content provider 220 (as illustrated by the dotted line 313 in figure 7).
With the method 700 of figure 7, the metadata generation module 612 generates content metadata 223-1 by generating one or both of (a) master content metadata 223-a at the step 309 or (b) content metadata 223 (be that content metadata 223 of figure 2b or variant content metadata 223-b of figure 2c) at the step 313. In this way, the slower communication with the remote content provider 220 is avoided by virtue of the more local metadata generator 610 itself generating at least some of the content metadata 223-1 and providing this to the content player 212 (instead of the content provider 220 doing this). This reduces the above-mentioned problematic delay.
In an analogous manner as for figure 3, the sequence of steps 312, 312a, 314a, 314 (or the steps 312, 313 and 314 if used in place of the steps 312, 312a, 314a, 314), 316, 318 and 322 (possibly with the optional steps 308, 308a, 310a and 310, or the steps 308, 309 and 310 if used in place of the steps 308, 308a, 310a and 310, and/or the optional steps 326, 328 and 332) may be repeated one or more times. This may occur for a number of reasons, such as: (a) new content may be selected, e.g. by the user 201 (i.e. the step 306 may be performed again), which may require new content metadata 223 referencing the newly selected content; (b) the conditions under which the device 210 and/or the content player 212 and/or the network 230 is/are operating may change, making the current item of content 224 inappropriate so that it would be more
appropriate to obtain content chunks 225 from a variant of the current item of content 224 (e.g. if network conditions change so that the download speed to the device 210 decreases or increases, then a variant of the current item of content 224 encoded at a lower or higher bit-rate may be more appropriate), which may require new content metadata 223; (c) the content metadata 223 obtained at the step 314 may contain content chunk references 2100 for only a subset of the possible content chunks 225 of the item of content 224, so that new content metadata 223 may be required in order to obtain and render further content chunks 225 of the item of content 224 (this may be particularly true for live content). The repetition of the steps 312, 312a, 314a, 314 (or the steps 312, 313 and 314 if used in place of the steps 312, 312a, 314a, 314), and possibly the optional steps 308, 308a, 310a and 310 too (or the steps 308, 309 and 310 if used in place of the steps 308, 308a, 310a and 310) may be performed whilst content is being rendered at the step 322 (and/or at the optional step 332), so that the time taken to perform these steps is in parallel with the content rendering that is being performed, thereby ensuring continuous/seamless content rendering (i.e. the further metadata 223 may be obtained in advance of when it is required).
In some embodiments, the metadata generator 610, having generated content metadata 223 (by virtue of the step 309 and/or the step 313), may obtain further content metadata 223 from the content provider 220 (i.e. without the metadata generator 610 generating that further content metadata 223 itself). This may be achieved using the steps 312a and 314a (and possibly the steps 308a and 310a too), as discussed above. Whilst these steps themselves take longer than having the metadata generator 610 itself generate content metadata 223, it will be appreciated that (a) by initially having the metadata generator 610 generate initial content metadata 223, the content player 212 is able to start rendering content to the user 201 sooner whilst (b) the extra time incurred in having the metadata generator 610 obtain further content metadata 223 from the content provider 220 is less important, as this further content metadata 223 can be obtained, for example, whilst content is being rendered (i.e. as a background process, or in parallel), so that this extra time that is incurred is not noticeable to the user 201. This further content metadata 223 may then be used in the same way as described above with respect to figure 3.
In figure 7, the content player 212 is shown as obtaining the content chunks 225 (by virtue of the steps 316, 318, 326 and 328) from the content provider 220. In some embodiments, however, the content player 212 may request the content chunks 225 from the metadata generator 610. Thus, the metadata generator 610 (acting as a proxy for the content provider 220) may receive a request from the content player 212 for one or more content chunks 225, may obtain those one or more content chunks 225 from the content provider 220 (in the same way that the content player 212 would have obtained those content chunks 225), and provide those content chunks 225 to the content player 212.
Figure 8 is a sequence diagram schematically illustrating an example method 800 for using the system 600 of figure 6. The method 800 is the same as the method 700 of figure 7, except as described below. Therefore, steps in common to the method 800 and the method 700 have the same reference numeral and shall not be described again. After the step 318, the method 800 comprises a step 812 at which the content player 212 requests further content metadata 223 from the metadata generator 610 in a similar manner to the step 312 described previously.
At an optional step 812a the metadata generator 610 requests the further content metadata 223 from the content provider 220.
At an optional step 814a the metadata generator 610 receives the further content metadata 223 from the content provider 220.
The steps 812a and 814a may be optional because in some embodiments the metadata generator 610 may additionally determine if the further content metadata 223 should be generated by the metadata generation module 612. Said determining may be based at least in part on any of: the age of the content selection data 422, criteria in the request of step 812, the age of any content selection data 422 available to (or present or stored in, or otherwise accessible by) the content player 212, one or more timestamps stored by (or available to, or present in) the content player 212. In these embodiments, if the metadata generator 610 determines the further content metadata 223 should be generated by the metadata generation module 612, then the metadata generation module 612 generates the further content metadata 223 in a similar manner to the step 512 described previously. Otherwise, the steps 812a and 814a are performed as described above.
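The determination could be as simple as a freshness check, sketched below; the threshold and the callable names are assumptions.

```python
# Sketch of deciding whether the metadata generation module 612 should generate the
# further metadata locally or whether the steps 812a/814a should be performed;
# the freshness threshold and the callables are hypothetical.
import time

MAX_SELECTION_AGE_SECONDS = 300

def obtain_further_metadata(selection_timestamp, generate_locally, fetch_from_provider):
    if time.time() - selection_timestamp <= MAX_SELECTION_AGE_SECONDS:
        return generate_locally()        # metadata generation module 612
    return fetch_from_provider()         # steps 812a and 814a
```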
At the step 814, the content player 212 receives the further content metadata 223 from the metadata generator 610 in a similar manner to the step 314 described previously.
The embodiment described above with reference to figure 8 has additional benefits. In particular, this embodiment allows a degree of flexibility in how further content metadata 223 is provided to the content player 212. This means that whilst the problematic delay described previously is still reduced, the risk of last-minute changes to the items of content 224 available from the content provider 220 causing instability or degradation at the content player 212 due to out-of-date further content metadata 223 being generated by the metadata generation module 612 is also reduced. Additionally, if sophisticated watermarking or content protection schemes are in use by the content provider 220, the risk that such schemes are partially compromised by the provision of default items of content 224 to the content player 212 may be greatly reduced.
Additionally, the two embodiments described previously with reference to figure 7 and figure 8 respectively, both have clear advantages over the previous embodiments described with reference to figure 4. In particular, whilst the previous embodiments described with reference to figure 4 require a media player 212 that is in some way adapted to use a metadata generation module 412 (variously through any of: the incorporation of specific hardware in the media player 212 or the client device 210; the incorporation of specific software in the media player 212 or the client device 210; the modification of the firmware of the media player 212 or the client device 210; the provision of a plug-in application to the media player 212; etc.), the present embodiments, through the use of the metadata generator 610 as a proxy, can be used with any media player 212, including legacy media players 212 already in use. This is particularly advantageous where it is not possible or appropriate to: replace existing media players 212; or update the firmware/modify the hardware of existing media players 212; or where any such modifications may render a media player 212 non-compliant with a specific industry standard.

Modifications
It will be appreciated that the methods described have been shown as individual steps carried out in a specific order. However, the skilled person will appreciate that these steps may be combined or carried out in a different order whilst still achieving the desired result.
It will be appreciated that embodiments of the invention may be implemented using a variety of different information processing systems. In particular, although the figures and the discussion thereof provide an exemplary computing system and methods, these are presented merely to provide a useful reference in discussing various aspects of the invention. Embodiments of the invention may be carried out on any suitable data processing device, such as a personal computer, laptop, personal digital assistant, mobile telephone, set top box, television, server computer, etc. Of course, the description of the systems and methods has been simplified for purposes of discussion, and they are just one of many different types of system and method that may be used for embodiments of the invention. It will be appreciated that the boundaries between logic blocks are merely illustrative and that alternative embodiments may merge logic blocks or elements, or may impose an alternate decomposition of functionality upon various logic blocks or elements. It will be appreciated that the above-mentioned functionality may be implemented as one or more corresponding modules as hardware and/or software. For example, the above-mentioned functionality may be implemented as one or more software
components for execution by a processor of the system. Alternatively, the above-mentioned functionality may be implemented as hardware, such as on one or more field-programmable-gate-arrays (FPGAs), and/or one or more application-specific-integrated-circuits (ASICs), and/or one or more digital-signal-processors (DSPs), and/or other hardware arrangements. Method steps implemented in flowcharts contained herein, or as described above, may each be implemented by corresponding respective modules; multiple method steps implemented in flowcharts contained herein, or as described above, may be implemented together by a single module.
It will be appreciated that, insofar as embodiments of the invention are
implemented by a computer program, then a storage medium and a transmission medium carrying the computer program form aspects of the invention. The computer program may have one or more program instructions, or program code, which, when executed by a computer carries out an embodiment of the invention. The term "program" as used herein, may be a sequence of instructions designed for execution on a computer system, and may include a subroutine, a function, a procedure, a module, an object method, an object implementation, an executable application, an applet, a servlet, source code, object code, a shared library, a dynamic linked library, and/or other sequences of instructions designed for execution on a computer system. The storage medium may be a magnetic disc (such as a hard drive or a floppy disc), an optical disc (such as a CD-ROM, a DVD-ROM or a BluRay disc), or a memory (such as a ROM, a RAM, EEPROM, EPROM, Flash memory or a portable/removable memory device), etc. The transmission medium may be a communications signal, a data broadcast, a communications link between two or more computers, etc.

Claims

1. A method for enabling a content player to access an item of content from a content provider, wherein the item of content comprises a plurality of content chunks, the method comprising the content player:
generating, in the content player, content metadata based at least in part on content selection data, wherein the content metadata comprises one or more references, each reference being either (a) a content chunk reference or (b) a content item reference that references one or more respective content chunk references, wherein each content chunk reference corresponds to a respective content chunk of the plurality of content chunks; and
using at least one of the content chunk references to obtain at least one respective content chunk.
2. The method according to claim 1 further comprising:
obtaining, from the content provider, further content metadata, wherein the further content metadata comprises one or more further content chunk references, each further content chunk reference corresponding to a further respective content chunk of the plurality of content chunks; and
using at least one of the one or more further content chunk references to obtain at least one or more further respective content chunks.
3. A method for enabling a local content player to access an item of content from a remote content provider, wherein the item of content comprises a plurality of content chunks, the method comprising a metadata generator:
receiving a request from the local content player;
in response to the request, generating, in the metadata generator, content metadata based at least in part on content selection data, wherein the content metadata comprises one or more references, each reference being either (a) a content chunk reference or (b) a content item reference that references one or more respective content chunk references, wherein each content chunk reference corresponds to a respective content chunk of the plurality of content chunks; and providing the content metadata to the local content player to thereby enable the local content player to use at least one of the one or more content chunk references to obtain at least one or more respective content chunks.
4. The method according to claim 3 further comprising:
receiving a further request from the local content player;
in response to the further request, obtaining, from the remote content provider, further content metadata, wherein the further content metadata comprises one or more further content chunk references, each further content chunk reference corresponding to a further respective content chunk of the plurality of content chunks;
providing the further content metadata to the local content player to thereby enable the local content player to use at least one of the one or more further content chunk references to obtain at least one or more respective further content chunks.
5. The method according to any one of claims 3 or 4 further comprising:
receiving a request from the local content player for the at least one or more respective content chunks;
obtaining the at least one or more respective content chunks from the remote content provider; and
providing the at least one or more respective content chunks to the local content player.
6. A method for enabling a content player to access an item of content from a content provider, wherein the item of content comprises a plurality of content chunks, the method comprising the content provider:
generating content selection data; and
providing the content selection data;
wherein the content selection data is configured to enable a metadata generation module of the content player or local to the content player to generate content metadata, wherein the content metadata comprises one or more references, each reference being either (a) a content chunk reference or (b) a content item reference that references one or more respective content chunk references, wherein each content chunk reference corresponds to a respective content chunk of the plurality of content chunks.
7. The method of any one of the preceding claims wherein the content selection data comprises the one or more references.
8. The method of any one of the preceding claims wherein the content selection data comprises executable program code to be executed to perform at least part of said generation of the content metadata.
9. The method of any one of the preceding claims wherein the content selection data comprises electronic program guide data.
10. The method of any one of the preceding claims wherein the at least one of the content chunk references is a default content chunk reference corresponding to a respective default content chunk.
11. The method of any one of the preceding claims wherein the content metadata is a manifest.
12. The method of any one of claims 1 to 10 wherein the content metadata is a playlist corresponding to the item of content.
13. The method of any one of the preceding claims wherein the one or more references are generated based at least in part on a naming convention.
14. The method of any one of the preceding claims wherein the one or more references are generated based at least in part on a network time synchronization signal.
15. A method for enabling a local content player to access an item of content on a remote content provider, wherein the item of content comprises a plurality of content chunks, the method comprising local generation of content metadata based at least in part on content selection data, wherein the content metadata comprises one or more content chunk references each corresponding to a respective content chunk of the plurality of content chunks, thereby enabling the local content player to request, based at least in part on at least one of the one or more content chunk references, at least one or more respective content chunks without having to first obtain the one or more content chunk references from the remote content provider.
16. An apparatus arranged to carry out a method according to any one of claims 1 to 15.
17. A computer program which, when executed by one or more processors, causes the one or more processors to carry out a method according to any one of claims 1 to 15.
18. A computer-readable medium storing a computer program according to claim 17.

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/EP2015/057059 WO2016155800A1 (en) 2015-03-31 2015-03-31 Accessing content

Publications (1)

Publication Number Publication Date
WO2016155800A1 true WO2016155800A1 (en) 2016-10-06

Family

ID=52814974

Country Status (1)

Country Link
WO (1) WO2016155800A1 (en)

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2013033565A1 (en) * 2011-08-31 2013-03-07 Qualcomm Incorporated Switch signaling methods providing improved switching between representations for adaptive http streaming
US8495675B1 (en) * 2012-07-30 2013-07-23 Mdialog Corporation Method and system for dynamically inserting content into streaming media
US20140156865A1 (en) * 2012-11-30 2014-06-05 Futurewei Technologies, Inc. Generic Substitution Parameters in DASH

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
JIN YOUNG LEE ET AL: "DASH Evaluation Experiment #1: Compositions of Media Presentation (CMP) Proposal Comparison", 94. MPEG MEETING; 11-10-2010 - 15-10-2010; GUANGZHOU; (MOTION PICTURE EXPERT GROUP OR ISO/IEC JTC1/SC29/WG11),, no. M18009, 10 September 2010 (2010-09-10), XP030046599 *

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10827181B1 (en) 2019-05-03 2020-11-03 At&T Intellectual Property I, L.P. Differential adaptive bitrate streaming based on scene complexity

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application
Ref document number: 15714790
Country of ref document: EP
Kind code of ref document: A1
NENP Non-entry into the national phase
Ref country code: DE
122 Ep: pct application non-entry in european phase
Ref document number: 15714790
Country of ref document: EP
Kind code of ref document: A1