US20170264683A1 - Streaming Digital Content Synchronization - Google Patents
- Publication number
- US20170264683A1 (U.S. application Ser. No. 15/069,839)
- Authority
- US
- United States
- Prior art keywords
- time
- response
- header
- digital content
- last
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
- H04L67/1095—Replication or mirroring of data, e.g. scheduling or transport for data synchronisation between network nodes
- H04L65/4084
- H04L65/60—Network streaming of media packets
- H04L65/612—Network streaming of media packets for supporting one-way streaming services, e.g. Internet radio for unicast
- H04L67/02—Protocols based on web technology, e.g. hypertext transfer protocol [HTTP]
- H04N21/23439—Processing of video elementary streams involving reformatting operations of video signals for distribution or compliance with end-user requests or end-user device requirements for generating different versions
- H04N21/23605—Creation or processing of packetized elementary streams [PES]
- H04N21/242—Synchronization processes, e.g. processing of PCR [Program Clock References]
- H04N21/26241—Content or additional data distribution scheduling performed under constraints involving the time of distribution, e.g. the best time of the day for inserting an advertisement or airing a children program
- H04N21/26258—Content or additional data distribution scheduling for generating a list of items to be played back in a given order, e.g. playlist
- H04N21/4302—Content synchronisation processes, e.g. decoder synchronisation
- H04N21/4305—Synchronising client clock from received content stream, e.g. locking decoder clock with encoder clock, extraction of the PCR packets
- H04N21/43076—Synchronising the rendering of the same content streams on multiple devices, e.g. when family members are watching the same movie on different devices
- H04N21/4343—Extraction or processing of packetized elementary streams [PES]
- H04N21/436—Interfacing a local distribution network, e.g. communicating with another STB or one or more peripheral devices inside the home
- H04N21/6373—Control signals issued by the client directed to the server or network components for rate control, e.g. request to the server to modify its transmission rate
- H04N21/8456—Structuring of content by decomposing the content in the time domain, e.g. in time segments
- H04N21/8547—Content authoring involving timestamps for synchronizing content
- H04L69/28—Timers or timing mechanisms used in protocols
Definitions
- Synchronization of digital content rendering is a primary consideration not only for co-located client devices but also for remotely located client devices. For example, consider a sports bar having multiple televisions that are viewable at any one time. A person viewing these televisions simultaneously may quickly become lost as to “what is going on” when shown different parts of a sporting event, even if just a few seconds off. Accordingly, lack of synchronization between these televisions may become distracting to the point of removing the benefit of including the multiple televisions.
- live playback begins at a number of segments behind the most recently posted segment according to a manifest file.
- the number of segments “behind” depends on the timing of the acquisition of a manifest file and when a new revision and new segment is posted as well as time taken to select, obtain, and render the new segment. Accordingly, it has been observed that client devices may be out of sync by up to two segments based on differences in this timing, e.g., anywhere from six to twelve seconds.
- local “wall-clock” times are specified in a manifest file to indicate a time at which a segment is to be rendered.
- this approach requires the clocks on each of the client devices to be synchronized, one to another, which is not typically the case.
- Other proprietary techniques have also been developed to determine “what time it is” in order to render an appropriate segment of content. These proprietary techniques, however, typically require inclusion of additional software and hardware resources (e.g., network synchronization) which are not typically available on each client device.
- Streaming digital content synchronization techniques are described.
- rendering of content is synchronized by determining a time to render the content.
- a response is received to a request to stream the digital content.
- the response includes a time at which the digital content was last modified (e.g., a last-modified header) and a time at which the response was generated (e.g., a date header).
- An age is calculated by subtracting the time at which the digital content was last modified, e.g., the last-modified header, from the time at which the response was generated (e.g., the date header). An amount of time the response spent in one or more caches, if available (e.g., an age header), is added as part of this age.
- the time is determined by subtracting the age from a predefined setback time, and the stream of the digital content is rendered based at least in part on the determined time.
- times may be indicated using fractional seconds.
- the setback time may be based at least in part on an amount of time determined between revisions to the digital content.
- FIG. 1 is an illustration of an environment in an example implementation that is operable to employ streaming digital content synchronization techniques described herein.
- FIG. 2 depicts a system in an example implementation in which digital content is streamed by a content distribution service over a network to a client device of FIG. 1 .
- FIG. 3 depicts an example implementation of setting a last-modified header, a date header, and an age header of a response of FIG. 2 .
- FIG. 4 depicts an example implementation of use of a last-modified header, a date header, and an age header of a response of FIG. 3 to define when to render the digital content.
- FIG. 5 is a flow diagram depicting a procedure in an example implementation in which a time used as a basis to define when to render content is determined.
- FIG. 6 is a flow diagram depicting a procedure in an example implementation in which an amount of time between revisions to digital content is ascertained and used to reduce latency in rendering of the digital content.
- FIG. 7 illustrates an example system including various components of an example device that can be implemented as any type of computing device as described and/or utilized with reference to FIGS. 1-6 to implement embodiments of the techniques described herein.
- manifest and segment based streaming techniques such as streaming techniques that use a Hypertext Transfer Protocol (HTTP)
- a manifest file is used to map time periods to segments of digital content within a media presentation, the segments typically being a few seconds in duration. Playback of the digital content thus begins at a number of segments behind a most recently posted segment according to a manifest file.
- the number of segments “behind” depends on the timing of the acquisition of a manifest file, when a new revision and new segment is posted, and so on. Accordingly, this may vary from client device to client device, causing a lack of synchronization which can be disconcerting when rendered as previously described.
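The drift described above can be sketched as follows. This is an illustrative model, not code from the patent: the six-second segment duration and the three-segments-behind start heuristic are assumed defaults in the style of common HLS/DASH players.

```python
SEGMENT_DURATION = 6.0  # seconds per segment; an assumed illustrative value

def conventional_start_index(num_segments, segments_behind=3):
    """Conventional live start: begin playback a fixed number of
    segments behind the newest entry listed in the manifest."""
    return max(0, num_segments - 1 - segments_behind)

def drift_between_clients(start_a, start_b):
    """Two clients that computed different start indices render the
    same event offset by a whole number of segment durations."""
    return abs(start_a - start_b) * SEGMENT_DURATION
```

Two clients whose manifest fetches straddle a revision can land on start indices that differ by up to two segments, which at six-second segments is the six-to-twelve-second divergence noted above.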
- a content distribution service may form a response to a request to stream digital content, e.g., for a manifest file.
- the response specifies a time at which the requested resource (e.g., the manifest file) was last modified and a time at which the response was formed.
- This may be performed using existing hypertext transfer protocol (HTTP) headers, e.g., a last-modified header and a date header, and thus may be performed without using additional resources or requiring special configuration of a client device that is to receive the response.
- An age may also be specified for an amount of time the response has spent in a cache (e.g., an HTTP age header) during communication of the response from the content distribution service to the client device.
- This may include a cache of the content distribution service or caches of intermediaries used to communicate the response via a network between the content distribution service and the client device.
- the client device is able to determine a time at which to render the digital content that is synchronized with other client devices that are also to render the digital content. To do so, the client device first calculates an age by subtracting the time at which the digital content was last modified from the time at which the response was generated. In an HTTP example, this is performed by subtracting the last-modified header from the date header. Additionally, an age header may also be employed to add an amount of time that the response spent in a cache as part of this age, if available.
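A minimal sketch of this age calculation, assuming standard HTTP/1.1 date formats; the header values shown are hypothetical:

```python
from email.utils import parsedate_to_datetime

def compute_age(headers):
    """Age of the streamed resource as seen by the client: the gap
    between the Date and Last-Modified headers (both set from the same
    origin clock), plus any time the response spent in caches as
    reported by the optional Age header."""
    last_modified = parsedate_to_datetime(headers["Last-Modified"])
    date = parsedate_to_datetime(headers["Date"])
    age = (date - last_modified).total_seconds()
    age += float(headers.get("Age", 0))  # cache residence time, if available
    return age

headers = {
    "Last-Modified": "Mon, 14 Mar 2016 10:00:00 GMT",
    "Date": "Mon, 14 Mar 2016 10:00:02 GMT",
    "Age": "1",
}
print(compute_age(headers))  # 2 s since last modification + 1 s in caches = 3.0
```

Because both timestamps come from the server side, the result is independent of the client's local clock.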
- a time is then determined to define when to render the digital content.
- the time is determined by subtracting the age from a setback time.
- the setback time may include an amount of time for buffering to promote consistent playback.
- the setback time is set as the time behind the end of the most recently available segment. This time is then used as a basis to render segments of the digital content by the client device. Further, use of this technique by a plurality of client devices promotes synchronized rendering of the content between those devices without requiring synchronization of local clocks or proprietary communication techniques.
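Continuing the sketch, the render time can be determined by subtracting that age from the setback. The 12-second default below is an assumed value (roughly two segment durations), not one specified by the patent:

```python
from email.utils import parsedate_to_datetime

def render_offset(headers, setback_seconds=12.0):
    """Subtract the response's age (Date minus Last-Modified, plus any
    Age header) from a predefined setback to determine how far behind
    the most recently available segment rendering should begin."""
    age = (parsedate_to_datetime(headers["Date"])
           - parsedate_to_datetime(headers["Last-Modified"])).total_seconds()
    age += float(headers.get("Age", 0))
    return setback_seconds - age
```

Every client that applies the same setback to its own response computes a consistent rendering position, even though each response may have aged differently in transit.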
- the setback time may be set based on an ascertained time between revisions of segments of the digital content. For example, latency may be reduced by using this technique to predict how long until a next revision of the manifest file (and therefore a corresponding segment) will be published. A setback time is then set based on this time in order to provide sufficient buffering yet render the content as fast as feasible. Further reductions may be achieved through use of headers that specify fractional seconds and thus may further promote tighter synchronization between client devices. In this way, synchronized rendering of streaming content is promoted by client devices whether located locally or remotely from each other by using a common technique to define “when” this rendering is to occur. Further discussion of these and other examples is included in the following sections.
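One way to ascertain that revision interval is to average the gaps between successive Last-Modified times of the manifest; the two-second buffering margin below is an assumed tuning value, since the text only requires sufficient buffering while rendering as fast as feasible:

```python
def estimated_setback(last_modified_times, buffer_seconds=2.0):
    """Estimate a setback from the observed interval between manifest
    revisions (successive Last-Modified timestamps, in seconds), plus
    a small margin for buffering."""
    intervals = [later - earlier
                 for earlier, later in zip(last_modified_times, last_modified_times[1:])]
    mean_interval = sum(intervals) / len(intervals)
    return mean_interval + buffer_seconds
```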
- Example procedures are then described which may be performed in the example environment as well as other environments. Consequently, performance of the example procedures is not limited to the example environment and the example environment is not limited to performance of the example procedures.
- FIG. 1 is an illustration of an environment 100 in an example implementation that is operable to employ digital content streaming techniques described herein.
- the illustrated environment 100 includes a content distribution service 102 that is communicatively coupled to a plurality of client devices (examples of which are illustrated as first and second client devices 104 , 106 ) via a network 108 .
- the content distribution service 102 is configurable in a variety of ways, such as one or more computing devices to implement a website provider, service provider, web service, satellite provider, terrestrial cable provider, or any other distributor of content employing a network 108 .
- the network 108 is also configurable in a variety of ways, such as the internet or “World Wide Web,” a peer-to-peer network, and so forth.
- the first and second client devices 104 , 106 are also configurable using a variety of computing devices as further described in relation to FIG. 7 .
- a computing device may be configured as a desktop computer, a laptop computer, a mobile device (e.g., assuming a handheld configuration such as a tablet or mobile phone as illustrated), and so forth.
- the computing device may range from full-resource devices with substantial memory and processor resources (e.g., personal computers, game consoles) to low-resource devices with limited memory and/or processing resources (e.g., mobile devices) that are configured to communicate via the network 108 .
- the first and second client devices 104 , 106 may be implemented using a plurality of different devices, e.g., multiple servers.
- the content distribution service 102 includes a content distribution module 110 that is implemented at least partially in hardware to control streaming of digital content 112 via the network 108 , which is illustrated as stored in storage 114 .
- Digital content 112 may take a variety of forms, such as video, audio, and other media that is configured for digital storage, communication (e.g., streaming), and rendering.
- the first and second client devices 104 , 106 are illustrated as including respective communication modules 116 , 118 .
- the communication modules 116 , 118 are representative of functionality implemented at least partially in hardware to communicate via the network 108 , such as to communicate with the content distribution service 102 to stream the digital content 112 .
- An example of functionality employed by the communication modules 116 , 118 is represented by playback modules 120 , 122 implemented at least partially in hardware to control navigation and rendering of the digital content 112 .
- client devices such as television receivers are configured to receive the same broadcast signal simultaneously and immediately render the received signal for display. This causes the display of the traditional broadcast television to be inherently synchronized.
- For conventional live streaming techniques that involve use of a manifest file and segments (e.g., according to a hypertext transfer protocol), this is not so due to timing of acquisition of the manifest file, timing of when a new revision and corresponding segment is posted, time taken to request and receive a response to the request that includes the segment, and so forth. Consequently, this lack of synchronization of conventional live streaming techniques may run counter to user expectations.
- the playback modules 120 , 122 are configured to employ techniques to time rendering of a stream of the digital content 112 such that synchronized rendering is promoted between the first and second client devices 104 , 106 .
- synchronization as described herein may be achieved without use of a dedicated communication channel between the first and second client devices 104 , 106 , e.g., to synchronize local clocks of the client devices 104 , 106 , or use of proprietary techniques.
- An example of timing of the rendering of the digital content which achieves this synchronization is described in the following and shown in corresponding figures.
- FIG. 2 depicts a system 200 in an example implementation in which the digital content 112 is streamed by the content distribution service 102 over the network 108 to the first client device 104 .
- the content distribution service 102 in this example receives digital content 112 that is “live,” e.g., captured in real time, or “linear,” e.g., pre-recorded content that is presented in real time as though it were live.
- the content distribution module 110 then configures this digital content 112 for live streaming using a manifest and segment technique. To do so, the content distribution module 110 employs a manifest generation module 202 and a segment generation module 204 .
- the segment generation module 204 is implemented at least partially in hardware to form segments 206 in a media presentation 208 .
- the segments 206 may be formed by the segment generation module 204 into lengths of a few seconds each from packets collected from the digital content 112 .
- the manifest generation module 202 is implemented at least partially in hardware to form the manifest file 210 that maps respective time periods to corresponding ones of the segments 206 of the media presentation 208 .
- a request 212 and response 214 technique is then used to stream the digital content over the network 108 between the content distribution service 102 and the first client device 104 .
- the playback module 120 of the first client device 104 may form a request 212 for the manifest file 210 that corresponds to desired digital content 112 to be streamed.
- the content distribution module 110 receives this request 212 via the network 108 and forms a response 214 that includes the manifest file 210 .
- the playback module 120 may determine which segments 206 of the media presentation 208 map to corresponding time periods that are desired for rendering (e.g., a most recent) and use a similar request 212 and response 214 technique to request communication of, and receive, those segments 206 .
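The mapping from a desired time period to segments can be sketched as follows; the `(start, duration, url)` tuple layout is a hypothetical stand-in for the time-period-to-segment mapping carried by the manifest file 210:

```python
def segments_to_fetch(manifest, play_time):
    """Return URLs of segments whose mapped time period covers the
    desired playback position onward, in manifest order."""
    return [url for start, duration, url in manifest
            if start + duration > play_time]

# A hypothetical manifest of three six-second segments.
manifest = [(0.0, 6.0, "seg0.ts"), (6.0, 6.0, "seg1.ts"), (12.0, 6.0, "seg2.ts")]
```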
- the content distribution module 110 may leverage existing header fields and semantics found in manifest and segments based streaming techniques such as according to HTTP, and may do so without synchronizing local clocks of the client devices 104 , 106 .
- existing response header fields that are usable to define this “when” include a last-modified header 216 , a date header 218 , and an age header 220 .
- An example of setting values of these headers is further described in the following and shown in a corresponding figure.
- FIG. 3 depicts an example implementation 300 of setting the last-modified header 216 , the date header 218 , and the age header 220 of the response 214 of FIG. 2 .
- This implementation 300 is shown using first, second, and third stages 302 , 304 , 306 .
- a request 212 is formed and communicated by the communication module 116 for receipt by the content distribution module 110 of the content distribution service 102 via the network 108 .
- the request 212 may request communication of a manifest file 210 for use in specifying segments of content to be streamed.
- the content distribution module 110 employs a response generation module 308 implemented at least partially in hardware to form the response 214 .
- the response includes a last-modified header 216 , a date header 218 , optionally an age header 220 , and the manifest file 210 .
- the last-modified header 216 is set by the content distribution module 110 according to a time indicated by a clock 222 associated with the content distribution service 102 as to when the media presentation 208 of the digital content 112 was last modified, e.g., a segment 206 was added in a live streaming context.
- the date header 218 is set by the content distribution module 110 according to a time indicated by the clock 222 when the response 214 is formed by the response generation module 308 .
- the clock 222 used to set the last-modified header 216 is synchronized with and/or is the same clock 222 as used to set the date header 218 .
- the age header 220 is optionally used to indicate an amount of time the response 214 has spent in a cache since being formed by response generation module 308 , e.g., at the content distribution service 102 or stored in one or more intermediaries (e.g., intermediary servers, routers, firewalls, and so on) used to communicate the response 214 via the network 108 .
- the response 214 is communicated by the content distribution service 102 via the network 108 for receipt by the first client device 104 in this example.
- the first client device 104 may then use these headers to determine when to render segments of the digital content in a manner that is synchronized with other client devices, an example of which is described in the following.
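The header-setting behavior described above can be illustrated with a short sketch using standard HTTP date formatting from Python's library. This is an illustration only, not the described implementation; the function name and the `cached_for_seconds` parameter are assumptions for the example.

```python
from datetime import datetime, timezone
from email.utils import format_datetime

def build_response_headers(last_modified, cached_for_seconds=0):
    """Illustrative sketch: set the Last-Modified, Date, and optional Age
    response headers. `last_modified` is the time (per the service's
    clock) at which a segment was last added to the media presentation;
    the same clock supplies the Date header, as described above."""
    now = datetime.now(timezone.utc)  # same clock as used for Last-Modified
    headers = {
        "Last-Modified": format_datetime(last_modified, usegmt=True),
        "Date": format_datetime(now, usegmt=True),
    }
    if cached_for_seconds:
        # Optional: time the response has spent in a cache since formed
        headers["Age"] = str(int(cached_for_seconds))
    return headers
```

Because both header values come from one clock at the service, a client can later subtract them without consulting any local clock.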
- FIG. 4 depicts an example implementation 400 of use of the last-modified header 216 , the date header 218 , and the age header 220 of the response 214 of FIG. 3 to define when to render the digital content.
- This implementation 400 is shown using first and second stages 402 , 404 .
- the playback module 120 determines an age of the digital content, and more particularly an age of respective segments of the digital content to be rendered. This is performed by the playback module 120 by subtracting the last-modified header 216 from the date header 218 . In other words, this acts to subtract the time at which the digital content was last modified, e.g., a segment was added to the digital content, from the time at which the response 214 was generated.
- the age header 220 if available, is added to this result to address an amount of time the response spent in a cache when communicated, e.g., in a cache of an intermediate server of the network 108 , in a cache of the content distribution service 102 , and so forth. In this way, a “true age” of the response is determined without use of a clock on the first client device 104 .
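The age computation at this first stage can be expressed compactly. The following is a minimal sketch that takes HTTP header values as input; the function name is illustrative:

```python
from email.utils import parsedate_to_datetime

def true_age_seconds(headers):
    """Sketch of the first stage above: subtract the Last-Modified time
    from the Date time, then add the optional Age value, yielding a
    "true age" of the response without using any client-side clock."""
    date = parsedate_to_datetime(headers["Date"])
    last_modified = parsedate_to_datetime(headers["Last-Modified"])
    age = (date - last_modified).total_seconds()
    age += int(headers.get("Age", 0))  # time spent in caches, if reported
    return age
```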
- a time is calculated that is to be used as a basis to define when the content is rendered. This time is calculated by subtracting the age calculated at the first stage 402 from a setback time.
- the setback time includes an amount of time that is determined to include an amount of buffer time to promote consistent playback.
- the setback time may be predefined as a static amount of time that is consistent between the first and second client devices 104 , 106 .
- the setback time may also be defined dynamically to reduce latency as further described in relation to FIG. 6 .
- This time is then used as a basis to define “when” the segments of the digital content are to be rendered.
- synchronization between the first and second client devices 104 , 106 may be achieved.
- the first client device 104 may determine an age that is different than an age determined by a second client device 106 .
- the “when” of the rendering is synchronized by taking these differences into account.
- synchronization accuracy may be further improved through use of times that include higher precision, e.g., including fractional second portions.
- FIG. 5 depicts a procedure 500 in an example implementation in which a time used as a basis to define when to render content is determined.
- a request is communicated to stream content (block 502 ).
- the request 212 may request a manifest file 210 to be used to stream digital content 112 .
- a response is received to the request.
- the response includes a time at which the digital content was last modified and a time at which the response was generated (block 504 ).
- the last modified time may be specified as a last-modified header 216 (e.g., an HTTP “Last-Modified” response header) that specifies a time at which a segment 206 was added to a media presentation.
- the time at which the response is generated is specified using a date header 218 , e.g., as an HTTP “Date” response header.
- the response may also include an age indicative of an amount of time the response has spent in at least one cache through use of an age header 220 , e.g., as an HTTP “Age” header.
- Logical consistency of the response is checked (block 506 ).
- the playback module 120 may check to determine that the date header 218 specifies a time that is not before a time specified by the last-modified header 216 . If not logically consistent, the following processing is not performed, thereby protecting against errors and conserving computing resources.
- An age is calculated by subtracting the time at which the digital content was last modified from the time at which the response was generated (block 508 ).
- the playback module 120 may subtract the last-modified header 216 from the date header 218 .
- the age header 220 may also be added to this value, if available. In this way, the age describes an amount of time that has passed between availability of a newly added segment through the last-modified header 216 and generation of the response 214 , to which the amount of time spent in a cache is added, if specified.
- a time is determined by subtracting the age from a setback time, the time usable to define when rendering of the stream of digital content is to occur (block 510 ).
- the setback time may include a buffering time to ensure smooth playback of the digital content 112 .
- the setback time is set as a time behind an end of a most recently available segment.
- the setback time may be defined statically (e.g., a set amount of time) or dynamically to reduce latency as further described in relation to FIG. 6 .
- the stream of the digital content is rendered based at least in part on the determined time (block 512 ).
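Blocks 506 through 510 of procedure 500 can be summarized in a short sketch. The header names follow HTTP; the function name, return convention, and error handling are illustrative assumptions rather than part of the described procedure:

```python
from email.utils import parsedate_to_datetime

def render_offset(headers, setback_seconds):
    """Sketch of blocks 506-510: check logical consistency, compute the
    age (including optional cache time), and subtract it from the
    setback time to yield when rendering of the stream should occur."""
    date = parsedate_to_datetime(headers["Date"])
    last_modified = parsedate_to_datetime(headers["Last-Modified"])
    if date < last_modified:
        # Block 506: not logically consistent; skip further processing.
        raise ValueError("Date header precedes Last-Modified header")
    age = (date - last_modified).total_seconds()   # block 508
    age += int(headers.get("Age", 0))              # cache time, if specified
    return setback_seconds - age                   # block 510
```

Because each client derives this offset from the same server-supplied headers, clients arrive at a common rendering time without synchronized local clocks.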
- FIG. 6 depicts a procedure 600 in an example implementation in which an amount of time between revisions to digital content is ascertained and used to reduce latency in rendering of the digital content.
- An amount of time is ascertained between revisions of digital content (block 602 ).
- the manifest file can specify a maximum permitted segment duration of the media presentation 208 . Based on this, an interval may be determined that describes when “new” segments 206 of the digital content 112 are made available, which is usable to reduce latency in rendering of the content as described in the following.
- an age is calculated as a difference between a last-modified header and a date header in a response to a request to stream the digital content (block 604 ).
- a time is determined at which the rendering of the stream of digital content is to occur. The time is determined by subtracting the age from a setback time as before. However, in this instance the setback time is based at least in part on the ascertained amount of time between the revisions (block 606 ).
- the setback time for instance, may be calculated by adding a set buffering time (e.g., to ensure smooth playback) to the ascertained amount of time.
- rendering of the stream of digital content based at least in part on the determined time (block 608 ) is performed with minimal latency based on when the segments are made available for rendering.
- this may also be used to enforce synchronization between the devices as well as reduce latency.
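Procedure 600 differs from procedure 500 only in how the setback time is derived. The following is a minimal sketch, assuming the revision interval has already been read from the manifest (e.g., as a maximum permitted segment duration); the function and parameter names are illustrative:

```python
from email.utils import parsedate_to_datetime

def latency_reduced_offset(headers, revision_interval, buffer_seconds=1.0):
    """Sketch of procedure 600: the setback time is based on the
    ascertained time between revisions (block 602) plus a buffering
    allowance, and the age computed from the response headers
    (block 604) is subtracted from it as before (block 606)."""
    age = (parsedate_to_datetime(headers["Date"])
           - parsedate_to_datetime(headers["Last-Modified"])).total_seconds()
    age += int(headers.get("Age", 0))
    setback = revision_interval + buffer_seconds  # dynamically derived setback
    return setback - age
```

Tying the setback to the revision interval keeps the setback only as large as needed for the next segment to appear, which is how latency is reduced while buffering is preserved.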
- FIG. 7 illustrates an example system generally at 700 that includes an example computing device 702 that is representative of one or more computing systems and/or devices that may implement the various techniques described herein. This is illustrated through inclusion of the playback module 120 .
- the computing device 702 may be, for example, a server of a service provider, a device associated with a client (e.g., a client device), an on-chip system, and/or any other suitable computing device or computing system.
- the example computing device 702 as illustrated includes a processing system 704 , one or more computer-readable media 706 , and one or more I/O interfaces 708 that are communicatively coupled, one to another.
- the computing device 702 may further include a system bus or other data and command transfer system that couples the various components, one to another.
- a system bus can include any one or combination of different bus structures, such as a memory bus or memory controller, a peripheral bus, a universal serial bus, and/or a processor or local bus that utilizes any of a variety of bus architectures.
- a variety of other examples are also contemplated, such as control and data lines.
- the processing system 704 is representative of functionality to perform one or more operations using hardware. Accordingly, the processing system 704 is illustrated as including hardware element 710 that may be configured as processors, functional blocks, and so forth. This may include implementation in hardware as an application specific integrated circuit or other logic device formed using one or more semiconductors.
- the hardware elements 710 are not limited by the materials from which they are formed or the processing mechanisms employed therein.
- processors may be comprised of semiconductor(s) and/or transistors (e.g., electronic integrated circuits (ICs)).
- processor-executable instructions may be electronically-executable instructions.
- the computer-readable storage media 706 is illustrated as including memory/storage 712 .
- the memory/storage 712 represents memory/storage capacity associated with one or more computer-readable media.
- the memory/storage component 712 may include volatile media (such as random access memory (RAM)) and/or nonvolatile media (such as read only memory (ROM), Flash memory, optical disks, magnetic disks, and so forth).
- the memory/storage component 712 may include fixed media (e.g., RAM, ROM, a fixed hard drive, and so on) as well as removable media (e.g., Flash memory, a removable hard drive, an optical disc, and so forth).
- the computer-readable media 706 may be configured in a variety of other ways as further described below.
- Input/output interface(s) 708 are representative of functionality to allow a user to enter commands and information to computing device 702 , and also allow information to be presented to the user and/or other components or devices using various input/output devices.
- input devices include a keyboard, a cursor control device (e.g., a mouse), a microphone, a scanner, touch functionality (e.g., capacitive or other sensors that are configured to detect physical touch), a camera (e.g., which may employ visible or non-visible wavelengths such as infrared frequencies to recognize movement as gestures that do not involve touch), and so forth.
- Examples of output devices include a display device (e.g., a monitor or projector), speakers, a printer, a network card, tactile-response device, and so forth.
- the computing device 702 may be configured in a variety of ways as further described below to support user interaction.
- modules include routines, programs, objects, elements, components, data structures, and so forth that perform particular tasks or implement particular abstract data types.
- modules generally represent software, firmware, hardware, or a combination thereof.
- the features of the techniques described herein are platform-independent, meaning that the techniques may be implemented on a variety of commercial computing platforms having a variety of processors.
- Computer-readable media may include a variety of media that may be accessed by the computing device 702 .
- computer-readable media may include “computer-readable storage media” and “computer-readable signal media.”
- Computer-readable storage media may refer to media and/or devices that enable persistent and/or non-transitory storage of information in contrast to mere signal transmission, carrier waves, or signals per se. Thus, computer-readable storage media refers to non-signal bearing media.
- the computer-readable storage media includes hardware such as volatile and non-volatile, removable and non-removable media and/or storage devices implemented in a method or technology suitable for storage of information such as computer readable instructions, data structures, program modules, logic elements/circuits, or other data.
- Examples of computer-readable storage media may include, but are not limited to, RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, digital versatile disks (DVD) or other optical storage, hard disks, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or other storage device, tangible media, or article of manufacture suitable to store the desired information and which may be accessed by a computer.
- Computer-readable signal media may refer to a signal-bearing medium that is configured to transmit instructions to the hardware of the computing device 702 , such as via a network.
- Signal media typically may embody computer readable instructions, data structures, program modules, or other data in a modulated data signal, such as carrier waves, data signals, or other transport mechanism.
- Signal media also include any information delivery media.
- modulated data signal means a signal that has one or more of its characteristics set or changed in such a manner as to encode information in the signal.
- communication media include wired media such as a wired network or direct-wired connection, and wireless media such as acoustic, RF, infrared, and other wireless media.
- hardware elements 710 and computer-readable media 706 are representative of modules, programmable device logic and/or fixed device logic implemented in a hardware form that may be employed in some embodiments to implement at least some aspects of the techniques described herein, such as to perform one or more instructions.
- Hardware may include components of an integrated circuit or on-chip system, an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA), a complex programmable logic device (CPLD), and other implementations in silicon or other hardware.
- hardware may operate as a processing device that performs program tasks defined by instructions and/or logic embodied by the hardware, as well as hardware utilized to store instructions for execution, e.g., the computer-readable storage media described previously.
- software, hardware, or executable modules may be implemented as one or more instructions and/or logic embodied on some form of computer-readable storage media and/or by one or more hardware elements 710 .
- the computing device 702 may be configured to implement particular instructions and/or functions corresponding to the software and/or hardware modules. Accordingly, implementation of a module that is executable by the computing device 702 as software may be achieved at least partially in hardware, e.g., through use of computer-readable storage media and/or hardware elements 710 of the processing system 704 .
- the instructions and/or functions may be executable/operable by one or more articles of manufacture (for example, one or more computing devices 702 and/or processing systems 704 ) to implement techniques, modules, and examples described herein.
- the techniques described herein may be supported by various configurations of the computing device 702 and are not limited to the specific examples of the techniques described herein. This functionality may also be implemented all or in part through use of a distributed system, such as over a “cloud” 714 via a platform 716 as described below.
- the cloud 714 includes and/or is representative of a platform 716 for resources 718 .
- the platform 716 abstracts underlying functionality of hardware (e.g., servers) and software resources of the cloud 714 .
- the resources 718 may include applications and/or data that can be utilized while computer processing is executed on servers that are remote from the computing device 702 .
- Resources 718 can also include services provided over the Internet and/or through a subscriber network, such as a cellular or Wi-Fi network.
- the platform 716 may abstract resources and functions to connect the computing device 702 with other computing devices.
- the platform 716 may also serve to abstract scaling of resources to provide a corresponding level of scale to encountered demand for the resources 718 that are implemented via the platform 716 .
- implementation of functionality described herein may be distributed throughout the system 700 .
- the functionality may be implemented in part on the computing device 702 as well as via the platform 716 that abstracts the functionality of the cloud 714 .
Abstract
Description
- Synchronization of digital content rendering is a primary consideration not only for co-located client devices but also for remotely located client devices. For example, consider a sports bar having multiple televisions that are viewable at any one time. A person viewing these televisions simultaneously may quickly become lost as to “what is going on” when shown different parts of a sporting event, even if just a few seconds off. Accordingly, lack of synchronization between these televisions may become distracting to the point of removing the benefit of including the multiple televisions.
- This consideration even exists in situations in which the client devices are remotely located from each other. For example, social media and other communication techniques enable users to communicate and comment in real time with each other as events occur. In a home viewing example, if viewers are sharing reactions on social media, live digital content that is significantly out of sync may cause some viewers to spoil exciting plot developments, or otherwise lose assumed shared context for the program. Accordingly, a lack of synchronization in rendering by remote devices may cause a lack of synchronization in these communications, which may quickly become frustrating to these viewers.
- As viewers expect and have experienced close synchronization with conventional broadcast television, viewers also want a similar experience with Internet streaming media. In conventional broadcast television, multiple television receivers receive the same broadcast signal simultaneously and display the transmitted video immediately. Accordingly, the presentations are inherently in sync. However, conventional live HTTP streaming media techniques are typically out of sync by up to two or more segment durations, where segments are typically six to ten seconds in length.
- In one conventional HTTP streaming example, live playback begins at a number of segments behind the most recently posted segment according to a manifest file. The number of segments “behind” depends on the timing of the acquisition of a manifest file and when a new revision and new segment is posted as well as time taken to select, obtain, and render the new segment. Accordingly, it has been observed that client devices may be out of sync by up to two segments based on differences in this timing, e.g., anywhere from six to twelve seconds. In another example, local “wall-clock” times are specified in a manifest file to indicate a time at which a segment is to be rendered. However, this approach requires the clocks on each of the client devices to be synchronized, one to another, which is not typically the case. Other proprietary techniques have also been developed to determine “what time it is” in order to render an appropriate segment of content. These proprietary techniques, however, typically require inclusion of additional software and hardware resources (e.g., network synchronization) which are typically not available on each client device.
- Streaming digital content synchronization techniques are described. In a digital medium environment to stream digital content, rendering of content is synchronized by determining a time to render the content. To do so, a response is received to a request to stream the digital content. The response includes a time at which the digital content was last modified (e.g., a last-modified header) and a time at which the response was generated (e.g., a date header).
- An age is calculated by subtracting the time at which the digital content was last modified, e.g., the last-modified header, from the time at which the response was generated (e.g., the date header). An amount of time the response spent in one or more caches, if available (e.g., an age header), is added as part of this age.
- The time is determined by subtracting the age from a predefined setback time, and the stream of the digital content is rendered based at least in part on the determined time. In order to improve synchronization accuracy between client devices, times may be indicated using fractional seconds. In order to reduce latency, the setback time may be based at least in part on an amount of time determined between revisions to the digital content.
- This Summary introduces a selection of concepts in a simplified form that are further described below in the Detailed Description. As such, this Summary is not intended to identify essential features of the claimed subject matter, nor is it intended to be used as an aid in determining the scope of the claimed subject matter.
- The detailed description is described with reference to the accompanying figures. In the figures, the left-most digit(s) of a reference number identifies the figure in which the reference number first appears. The use of the same reference numbers in different instances in the description and the figures may indicate similar or identical items. Entities represented in the figures may be indicative of one or more entities and thus reference may be made interchangeably to single or plural forms of the entities in the discussion.
- FIG. 1 is an illustration of an environment in an example implementation that is operable to employ streaming digital content synchronization techniques described herein.
- FIG. 2 depicts a system in an example implementation in which digital content is streamed by a content distribution service over a network to a client device of FIG. 1 .
- FIG. 3 depicts an example implementation of setting a last-modified header, a date header, and an age header of a response of FIG. 2 .
- FIG. 4 depicts an example implementation of use of a last-modified header, a date header, and an age header of a response of FIG. 3 to define when to render the digital content.
- FIG. 5 is a flow diagram depicting a procedure in an example implementation in which a time used as a basis to define when to render content is determined.
- FIG. 6 is a flow diagram depicting a procedure in an example implementation in which an amount of time between revisions to digital content is ascertained and used to reduce latency in rendering of the digital content.
- FIG. 7 illustrates an example system including various components of an example device that can be implemented as any type of computing device as described and/or utilized with reference to FIGS. 1-6 to implement embodiments of the techniques described herein.
- Overview
- Conventional techniques to stream digital content that rely on segments and manifest files often fail to achieve synchronized playback between client devices. In manifest and segment based streaming techniques, such as streaming techniques that use a Hypertext Transfer Protocol (HTTP), a manifest file is used to map time periods to segments of digital content within a media presentation, the segments typically being a few seconds in duration. Playback of the digital content thus begins at a number of segments behind a most recently posted segment according to a manifest file. The number of segments “behind” depends on the timing of the acquisition of a manifest file, when a new revision and new segment is posted, and so on. Accordingly, this may vary from client device to client device, causing a lack of synchronization which can be disconcerting when rendered as previously described.
- Techniques and systems are described to stream digital content to support synchronized rendering of content by client devices. These techniques are usable in streaming techniques that rely on manifest files and segments within a media file, such as according to a hypertext transfer protocol (HTTP). For example, a content distribution service may form a response to a request to stream digital content, e.g., for a manifest file. The response specifies a time at which the requested resource (e.g., the manifest file) was last modified and a time at which the response was formed. This may be performed using existing hypertext transfer protocol (HTTP) headers, e.g., a last-modified header and a date header, and thus may be performed without using additional resources or requiring special configuration of a client device that is to receive the response. An age may also be specified for an amount of time the response has spent in a cache (e.g., an HTTP age header) during communication of the response from the content distribution service to the client device. This may include a cache of the content distribution service or caches of intermediaries used to communicate the response via a network between the content distribution service and the client device.
- From this information in the response, the client device is able to determine a time at which to render the digital content that is synchronized with other client devices that are also to render the digital content. To do so, the client device first calculates an age by subtracting the time at which the digital content was last modified from the time at which the response was generated. In an HTTP example, this is performed by subtracting the last-modified header from the date header. Additionally, an age header may also be employed to add an amount of time that the response spent in a cache as part of this age, if available.
- A time is then determined to define when to render the digital content. The time is determined by subtracting the age from a setback time. The setback time, for instance, may include an amount of time for buffering to promote consistent playback. In live streaming, the setback time is set as the time behind the end of the most recently available segment. This time is then used as a basis to render segments of the digital content by the client device. Further, use of this technique by a plurality of client devices promotes synchronized rendering of the content between those devices without requiring synchronization of local clocks or proprietary communication techniques.
- Additionally, in order to shorten latency in the rendering of the digital content, the setback time may be set based on an ascertained time between revisions of segments of the digital content. For example, latency may be reduced by using this technique to predict how long until a next revision of the manifest file (and therefore a corresponding segment) will be published. A setback time is then set based on this time in order to provide sufficient buffering yet render the content as fast as feasible. Further reductions may be achieved through use of headers that specify fractional seconds and thus may further promote tighter synchronization between client devices. In this way, synchronized rendering of streaming content is promoted by client devices whether located locally or remotely from each other by using a common technique to define “when” this rendering is to occur. Further discussion of these and other examples is included in the following sections.
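Standard HTTP-date header values carry only whole-second resolution, so the fractional-second headers contemplated above imply a finer-grained timestamp format. The ISO 8601 format used in the following sketch is an assumption for illustration, not a format specified by HTTP or by this description:

```python
from datetime import datetime

def parse_fractional(value):
    # Hypothetical header value carrying fractional seconds (ISO 8601);
    # standard Last-Modified/Date values resolve only to whole seconds.
    return datetime.fromisoformat(value)

# Sub-second differences can then enter the age calculation, allowing
# tighter synchronization between client devices:
delta = (parse_fractional("2016-03-14T12:00:04.250+00:00")
         - parse_fractional("2016-03-14T12:00:00.000+00:00"))
```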
- In the following discussion, an example environment is first described that may employ the techniques described herein. Example procedures are then described which may be performed in the example environment as well as other environments. Consequently, performance of the example procedures is not limited to the example environment and the example environment is not limited to performance of the example procedures.
- Example Environment
- FIG. 1 is an illustration of an environment 100 in an example implementation that is operable to employ digital content streaming techniques described herein. The illustrated environment 100 includes a content distribution service 102 that is communicatively coupled to a plurality of client devices (examples of which are illustrated as first and second client devices 104 , 106 ) via a network 108 . The content distribution service 102 is configurable in a variety of ways, such as one or more computing devices to implement a website provider, service provider, web service, satellite provider, terrestrial cable provider, or any other distributor of content employing a network 108 . Accordingly, the network 108 is also configurable in a variety of ways, such as the Internet or “World Wide Web,” a peer-to-peer network, and so forth. - The first and
second client devices 104 , 106 may be configured as any type of computing device, an example of which is further described in relation to FIG. 7 . A computing device, for instance, may be configured as a desktop computer, a laptop computer, a mobile device (e.g., assuming a handheld configuration such as a tablet or mobile phone as illustrated), and so forth. Thus, the computing device may range from full resource devices with substantial memory and processor resources (e.g., personal computers, game consoles) to a low-resource device with limited memory and/or processing resources (e.g., mobile devices) that is configured to communicate via the network 108 . - The
content distribution service 102 includes a content distribution module 110 that is implemented at least partially in hardware to control streaming of digital content 112 via the network 108 , which is illustrated as stored in storage 114 . Digital content 112 may take a variety of forms, such as video, audio, and other forms of media that are configured for digital storage, communication (e.g., streaming), and rendering. - The first and
second client devices 104 , 106 are illustrated as including respective communication modules 116 , 118 . The communication modules 116 , 118 are representative of functionality implemented at least partially in hardware to communicate via the network 108 , such as to communicate with the content distribution service 102 to stream the digital content 112 . This includes dedicated applications, plug-in modules, network enabled applications, browsers, and so forth. An example of functionality employed by the communication modules 116 , 118 is represented by playback modules that are usable to control rendering of the digital content 112 .
- Accordingly, the
playback modules of the first and second client devices 104, 106 are configured to control when rendering of the streamed digital content 112 occurs, such that synchronized rendering is promoted between the first and second client devices 104, 106. -
FIG. 2 depicts a system 200 in an example implementation in which the digital content 112 is streamed by the content distribution service 102 over the network 108 to the first client device 104. The content distribution service 102 in this example receives digital content 112 that is "live," e.g., that is captured in real time, or linear as pre-recorded content that is presented in real time as though it were live. The content distribution module 110 then configures this digital content 112 for live streaming using a manifest and segment technique. To do so, the content distribution module 110 employs a manifest generation module 202 and a segment generation module 204. - The
segment generation module 204 is implemented at least partially in hardware to form segments 206 in a media presentation 208. The segments 206, for instance, may be formed by the segment generation module 204 into lengths of a few seconds each from packets collected from the digital content 112. The manifest generation module 202 is implemented at least partially in hardware to form the manifest file 210 that maps respective time periods to corresponding ones of the segments 206 of the media presentation 208. - A
request 212 and response 214 technique is then used to stream the digital content over the network 108 between the content distribution service 102 and the first client device 104. For example, the playback module 120 of the first client device 104 may form a request 212 for the manifest file 210 that corresponds to desired digital content 112 to be streamed. The content distribution module 110 receives this request 212 via the network 108 and forms a response 214 that includes the manifest file 210. Using the manifest file 210, the playback module 120 may determine which segments 206 of the media presentation 208 map to corresponding time periods that are desired for rendering (e.g., a most recent) and use a similar request 212 and response 214 technique to request communication of, and receive, those segments 206. - In order to define when the
segments 206 are to be rendered, the content distribution module 110 may leverage existing header fields and semantics found in manifest-and-segment-based streaming techniques, such as according to HTTP, and may do so without synchronizing local clocks of the client devices 104, 106. Examples of such headers include a last-modified header 216, a date header 218, and an age header 220. An example of setting values of these headers is further described in the following and shown in a corresponding figure. -
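As a concrete illustration of these header semantics, the following Python sketch parses hypothetical values for the three headers; the example times and variable names are invented for illustration and are not taken from the description above:

```python
from email.utils import parsedate_to_datetime

# Hypothetical response header values (invented for illustration).
response_headers = {
    "Last-Modified": "Mon, 14 Mar 2016 10:00:00 GMT",  # when the media presentation last changed
    "Date": "Mon, 14 Mar 2016 10:00:04 GMT",           # when the response was formed
    "Age": "2",                                        # seconds the response spent in caches
}

# Both time values come from the service-side clock, so no client clock is
# needed to compare them.
last_modified = parsedate_to_datetime(response_headers["Last-Modified"])
date = parsedate_to_datetime(response_headers["Date"])
elapsed = (date - last_modified).total_seconds()
print(elapsed)  # 4.0
```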
FIG. 3 depicts an example implementation 300 of setting the last-modified header 216, the date header 218, and the age header 220 of the response 214 of FIG. 2. This implementation 300 is shown using first, second, and third stages. At the first stage 302, a request 212 is formed and communicated by the communication module 116 for receipt by the content distribution module 110 of the content distribution service 102 via the network 108. The request 212, for instance, may request communication of a manifest file 210 for use in specifying segments of content to be streamed. - At the
second stage 304, the content distribution module 110 employs a response generation module 308 implemented at least partially in hardware to form the response 214. The response includes a last-modified header 216, a date header 218, an age header 220 (optionally), and the manifest file 210. The last-modified header 216 is set by the content distribution module 110 according to a time, indicated by a clock 222 associated with the content distribution service 102, at which the media presentation 208 of the digital content 112 was last modified, e.g., when a segment 206 was added in a live streaming context. - The
date header 218 is set by the content distribution module 110 according to a time indicated by the clock 222 when the response 214 is formed by the response generation module 308. In one or more implementations, the clock 222 used to set the last-modified header 216 is synchronized with and/or is the same clock 222 as used to set the date header 218. - The
age header 220 is optionally used to indicate an amount of time the response 214 has spent in a cache since being formed by the response generation module 308, e.g., at the content distribution service 102 or in one or more intermediaries (e.g., intermediary servers, routers, firewalls, and so on) used to communicate the response 214 via the network 108. The response 214 is communicated by the content distribution service 102 via the network 108 for receipt by the first client device 104 in this example. The first client device 104 may then use these headers to determine when to render segments of the digital content in a manner that is synchronized with other client devices, an example of which is described in the following. -
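A server-side sketch of setting these headers might look as follows, in Python using the standard library's HTTP date formatting; the function name `build_response_headers` and its parameters are illustrative assumptions, not names from the description above:

```python
from datetime import datetime, timezone
from email.utils import format_datetime

def build_response_headers(last_modified_at, now=None, cached_seconds=0):
    """Set the last-modified, date, and (optional) age headers using a
    single service-side clock, as described for the second stage."""
    if now is None:
        now = datetime.now(timezone.utc)  # the same clock sets both time headers
    headers = {
        "Last-Modified": format_datetime(last_modified_at, usegmt=True),
        "Date": format_datetime(now, usegmt=True),
    }
    if cached_seconds:  # the age header is optional
        headers["Age"] = str(cached_seconds)
    return headers
```

Because both header values are read from the same service-side clock, a client can compare them meaningfully without its own clock being correct.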
FIG. 4 depicts an example implementation 400 of use of the last-modified header 216, the date header 218, and the age header 220 of the response 214 of FIG. 3 to define when to render the digital content. This implementation 400 is shown using first and second stages 402, 404. At the first stage 402, the playback module 120 determines an age of the digital content, and more particularly an age of respective segments of the digital content to be rendered. This is performed by the playback module 120 by subtracting the last-modified header 216 from the date header 218. In other words, this subtracts the time at which the digital content was last modified (e.g., a segment was added to the digital content) from the time at which the response 214 was generated. The age header 220, if available, is added to this result to account for an amount of time the response spent in a cache when communicated, e.g., in a cache of an intermediate server of the network 108, in a cache of the content distribution service 102, and so forth. In this way, a "true age" of the response is determined without use of a clock on the first client device 104. - At the
second stage 404, a time is calculated that is used as a basis to define when the content is rendered. This time is calculated by subtracting the age calculated at the first stage 402 from a setback time. The setback time includes an amount of buffer time determined to promote consistent playback. The setback time may be predefined as a static amount of time that is consistent between the first and second clients 104, 106, or determined dynamically as described in relation to FIG. 6. - This time is then used as a basis to define "when" the segments of the digital content are to be rendered. Through an ability to determine an "age" of
response 214 through use of the headers and setback times, synchronization between the first and second client devices 104, 106 is promoted. For example, the first client device 104 may determine an age that is different from an age determined by the second client device 106. Through use of the setback times and these different respective ages, the "when" of the rendering is synchronized by taking these differences into account. In one or more implementations, times that include higher precision (e.g., including fractional-second portions) are used in supplemental headers to further improve accuracy over conventional HTTP headers, which are limited to whole-second resolution. In this way, synchronization to within less than a second may be achieved. Additional examples are described in relation to the following procedures. - Example Procedures
- The following discussion describes streaming digital content synchronization techniques that may be implemented utilizing the previously described systems and devices. Aspects of each of the procedures may be implemented in hardware, firmware, or software, or a combination thereof. The procedures are shown as a set of blocks that specify operations performed by one or more devices and are not necessarily limited to the orders shown for performing the operations by the respective blocks. In portions of the following discussion, reference will be made to
FIGS. 1-4. -
FIG. 5 depicts a procedure 500 in an example implementation in which a time used as a basis to define when to render content is determined. A request is communicated to stream content (block 502). The request 212, for instance, may request a manifest file 210 to be used to stream digital content 112. - A response is received to the request. The response includes a time at which the digital content was last modified and a time at which the response was generated (block 504). The last-modified time, for instance, may be specified as a last-modified header 216 (e.g., an HTTP "Last-Modified" response header) that specifies a time at which a
segment 206 was added to a media presentation. The time at which the response is generated is specified using a date header 218, e.g., as an HTTP "Date" response header. The response may also include an age indicative of an amount of time the response has spent in at least one cache, through use of an age header 220, e.g., as an HTTP "Age" header. - Logical consistency of the response is checked (block 506). The
playback module 120, for instance, may check to determine that the date header 218 specifies a time that is not before a time specified by the last-modified header 216. If the response is not logically consistent, the following processing is not performed, thereby protecting against errors and conserving computing resources. - An age is calculated by subtracting the time at which the digital content was last modified from the time at which the response was generated (block 508). The
playback module 120, for instance, may subtract the last-modified header 216 from the date header 218. The age header 220, if available, may also be added to this value. In this way, the age describes an amount of time that has passed between availability of a newly added segment, as indicated by the last-modified header 216, and generation of the response 214, plus the amount of time spent in a cache, if specified. - A time is determined by subtracting the age from a setback time, the time usable to define when rendering of the stream of digital content is to occur (block 510). The setback time, for instance, may include a buffering time to ensure smooth playback of the
digital content 112. In live streaming, the setback time is set as a time behind an end of a most recently available segment. The setback time may be defined statically (e.g., a set amount of time) or dynamically to reduce latency, as further described in relation to FIG. 6. The stream of the digital content is rendered based at least in part on the determined time (block 512). By using this technique on both the first and second client devices 104, 106, synchronized rendering of the digital content 112 may be achieved. -
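The client-side steps of blocks 506-510 can be sketched as follows in Python; the function name and the static 30-second setback are illustrative assumptions rather than values from the description:

```python
from email.utils import parsedate_to_datetime

SETBACK_SECONDS = 30.0  # assumed static setback, shared by all clients

def render_time_offset(headers, setback=SETBACK_SECONDS):
    """Return how far behind the newest content rendering should occur,
    computed purely from server-supplied headers (no client clock)."""
    last_modified = parsedate_to_datetime(headers["Last-Modified"])
    date = parsedate_to_datetime(headers["Date"])
    if date < last_modified:
        # Block 506: logically inconsistent response; skip further processing.
        raise ValueError("Date precedes Last-Modified")
    # Block 508: age = (Date - Last-Modified) + Age, if an Age header is present.
    age = (date - last_modified).total_seconds() + float(headers.get("Age", 0))
    # Block 510: the time used to schedule rendering.
    return setback - age

offset = render_time_offset({
    "Last-Modified": "Mon, 14 Mar 2016 10:00:00 GMT",
    "Date": "Mon, 14 Mar 2016 10:00:04 GMT",
    "Age": "2",
})
print(offset)  # 24.0
```

Two clients receiving responses with different ages (e.g., one response spent longer in a cache) compute different offsets that nonetheless point at the same position in the stream, which is how the differences are taken into account.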
FIG. 6 depicts a procedure 600 in an example implementation in which an amount of time between revisions to digital content is ascertained and used to reduce latency in rendering of the digital content. An amount of time is ascertained between revisions of the digital content (block 602). For example, the manifest file can specify a maximum permitted segment duration of the media presentation 208. Based on this, an interval may be determined that describes when "new" segments 206 of the digital content 112 are made available, which is usable to reduce latency in rendering of the content as described in the following. - As before, an age is calculated as a difference between a last-modified header and a date header in a response to a request to stream the digital content (block 604). A time is determined at which the rendering of the stream of digital content is to occur. The time is determined by subtracting the age from a setback time, as before. However, in this instance the setback time is based at least in part on the ascertained amount of time between the revisions (block 606). The setback time, for instance, may be calculated by adding a set buffering time (e.g., to ensure smooth playback) to the ascertained amount of time. In this way, rendering of the stream of digital content based at least in part on the determined time (block 608) is performed with minimal latency relative to when the segments are made available for rendering. When performed by a plurality of client devices, this may also be used to enforce synchronization between the devices as well as reduce latency.
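The dynamic setback of block 606 can be sketched as follows; the function name and the 2-second default buffer are illustrative assumptions:

```python
def dynamic_setback(revision_interval_seconds, buffer_seconds=2.0):
    """Block 606: setback = time between revisions (e.g., the manifest's
    maximum segment duration) plus a small buffering allowance."""
    return revision_interval_seconds + buffer_seconds

# With 4-second segments, clients render roughly 6 seconds behind the
# newest content rather than a larger static setback, reducing latency.
print(dynamic_setback(4.0))  # 6.0
```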
- Example System and Device
-
FIG. 7 illustrates an example system generally at 700 that includes an example computing device 702 that is representative of one or more computing systems and/or devices that may implement the various techniques described herein. This is illustrated through inclusion of the playback module 120. The computing device 702 may be, for example, a server of a service provider, a device associated with a client (e.g., a client device), an on-chip system, and/or any other suitable computing device or computing system. - The
example computing device 702 as illustrated includes a processing system 704, one or more computer-readable media 706, and one or more I/O interfaces 708 that are communicatively coupled, one to another. Although not shown, the computing device 702 may further include a system bus or other data and command transfer system that couples the various components, one to another. A system bus can include any one or combination of different bus structures, such as a memory bus or memory controller, a peripheral bus, a universal serial bus, and/or a processor or local bus that utilizes any of a variety of bus architectures. A variety of other examples are also contemplated, such as control and data lines. - The
processing system 704 is representative of functionality to perform one or more operations using hardware. Accordingly, the processing system 704 is illustrated as including hardware elements 710 that may be configured as processors, functional blocks, and so forth. This may include implementation in hardware as an application-specific integrated circuit or other logic device formed using one or more semiconductors. The hardware elements 710 are not limited by the materials from which they are formed or the processing mechanisms employed therein. For example, processors may be comprised of semiconductor(s) and/or transistors (e.g., electronic integrated circuits (ICs)). In such a context, processor-executable instructions may be electronically-executable instructions. - The computer-readable storage media 706 is illustrated as including memory/storage 712. The memory/storage 712 represents memory/storage capacity associated with one or more computer-readable media. The memory/storage component 712 may include volatile media (such as random access memory (RAM)) and/or nonvolatile media (such as read-only memory (ROM), Flash memory, optical disks, magnetic disks, and so forth). The memory/storage component 712 may include fixed media (e.g., RAM, ROM, a fixed hard drive, and so on) as well as removable media (e.g., Flash memory, a removable hard drive, an optical disc, and so forth). The computer-readable media 706 may be configured in a variety of other ways as further described below. - Input/output interface(s) 708 are representative of functionality to allow a user to enter commands and information to
computing device 702, and also allow information to be presented to the user and/or other components or devices using various input/output devices. Examples of input devices include a keyboard, a cursor control device (e.g., a mouse), a microphone, a scanner, touch functionality (e.g., capacitive or other sensors that are configured to detect physical touch), a camera (e.g., which may employ visible or non-visible wavelengths such as infrared frequencies to recognize movement as gestures that do not involve touch), and so forth. Examples of output devices include a display device (e.g., a monitor or projector), speakers, a printer, a network card, a tactile-response device, and so forth. Thus, the computing device 702 may be configured in a variety of ways as further described below to support user interaction. - Various techniques may be described herein in the general context of software, hardware elements, or program modules. Generally, such modules include routines, programs, objects, elements, components, data structures, and so forth that perform particular tasks or implement particular abstract data types. The terms "module," "functionality," and "component" as used herein generally represent software, firmware, hardware, or a combination thereof. The features of the techniques described herein are platform-independent, meaning that the techniques may be implemented on a variety of commercial computing platforms having a variety of processors.
- An implementation of the described modules and techniques may be stored on or transmitted across some form of computer-readable media. The computer-readable media may include a variety of media that may be accessed by the
computing device 702. By way of example, and not limitation, computer-readable media may include “computer-readable storage media” and “computer-readable signal media.” - “Computer-readable storage media” may refer to media and/or devices that enable persistent and/or non-transitory storage of information in contrast to mere signal transmission, carrier waves, or signals per se. Thus, computer-readable storage media refers to non-signal bearing media. The computer-readable storage media includes hardware such as volatile and non-volatile, removable and non-removable media and/or storage devices implemented in a method or technology suitable for storage of information such as computer readable instructions, data structures, program modules, logic elements/circuits, or other data. Examples of computer-readable storage media may include, but are not limited to, RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, digital versatile disks (DVD) or other optical storage, hard disks, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or other storage device, tangible media, or article of manufacture suitable to store the desired information and which may be accessed by a computer.
- “Computer-readable signal media” may refer to a signal-bearing medium that is configured to transmit instructions to the hardware of the
computing device 702, such as via a network. Signal media typically may embody computer readable instructions, data structures, program modules, or other data in a modulated data signal, such as carrier waves, data signals, or other transport mechanism. Signal media also include any information delivery media. The term “modulated data signal” means a signal that has one or more of its characteristics set or changed in such a manner as to encode information in the signal. By way of example, and not limitation, communication media include wired media such as a wired network or direct-wired connection, and wireless media such as acoustic, RF, infrared, and other wireless media. - As previously described,
hardware elements 710 and computer-readable media 706 are representative of modules, programmable device logic, and/or fixed device logic implemented in a hardware form that may be employed in some embodiments to implement at least some aspects of the techniques described herein, such as to perform one or more instructions. Hardware may include components of an integrated circuit or on-chip system, an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA), a complex programmable logic device (CPLD), and other implementations in silicon or other hardware. In this context, hardware may operate as a processing device that performs program tasks defined by instructions and/or logic embodied by the hardware, as well as hardware utilized to store instructions for execution, e.g., the computer-readable storage media described previously. - Combinations of the foregoing may also be employed to implement various techniques described herein. Accordingly, software, hardware, or executable modules may be implemented as one or more instructions and/or logic embodied on some form of computer-readable storage media and/or by one or
more hardware elements 710. The computing device 702 may be configured to implement particular instructions and/or functions corresponding to the software and/or hardware modules. Accordingly, implementation of a module that is executable by the computing device 702 as software may be achieved at least partially in hardware, e.g., through use of computer-readable storage media and/or hardware elements 710 of the processing system 704. The instructions and/or functions may be executable/operable by one or more articles of manufacture (for example, one or more computing devices 702 and/or processing systems 704) to implement techniques, modules, and examples described herein. - The techniques described herein may be supported by various configurations of the
computing device 702 and are not limited to the specific examples of the techniques described herein. This functionality may also be implemented all or in part through use of a distributed system, such as over a "cloud" 714 via a platform 716 as described below. - The
cloud 714 includes and/or is representative of a platform 716 for resources 718. The platform 716 abstracts underlying functionality of hardware (e.g., servers) and software resources of the cloud 714. The resources 718 may include applications and/or data that can be utilized while computer processing is executed on servers that are remote from the computing device 702. Resources 718 can also include services provided over the Internet and/or through a subscriber network, such as a cellular or Wi-Fi network. - The
platform 716 may abstract resources and functions to connect the computing device 702 with other computing devices. The platform 716 may also serve to abstract scaling of resources to provide a corresponding level of scale to encountered demand for the resources 718 that are implemented via the platform 716. Accordingly, in an interconnected-device embodiment, implementation of functionality described herein may be distributed throughout the system 700. For example, the functionality may be implemented in part on the computing device 702 as well as via the platform 716 that abstracts the functionality of the cloud 714. - Although the invention has been described in language specific to structural features and/or methodological acts, it is to be understood that the invention defined in the appended claims is not necessarily limited to the specific features or acts described. Rather, the specific features and acts are disclosed as example forms of implementing the claimed invention.
Claims (20)
Priority Applications (5)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US15/069,839 US10079884B2 (en) | 2016-03-14 | 2016-03-14 | Streaming digital content synchronization |
CN201610952730.0A CN107197351B (en) | 2016-03-14 | 2016-11-02 | Method and system for synchronizing streaming digital content |
DE102016013109.8A DE102016013109A1 (en) | 2016-03-14 | 2016-11-03 | Sync while streaming digital content |
AU2016253673A AU2016253673B2 (en) | 2016-03-14 | 2016-11-04 | Streaming digital content synchronization |
GB1618588.6A GB2548440B (en) | 2016-03-14 | 2016-11-04 | Streaming digital content synchronization |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US15/069,839 US10079884B2 (en) | 2016-03-14 | 2016-03-14 | Streaming digital content synchronization |
Publications (2)
Publication Number | Publication Date |
---|---|
US20170264683A1 true US20170264683A1 (en) | 2017-09-14 |
US10079884B2 US10079884B2 (en) | 2018-09-18 |
Family
ID=59677474
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US15/069,839 Active 2037-04-14 US10079884B2 (en) | 2016-03-14 | 2016-03-14 | Streaming digital content synchronization |
Country Status (5)
Country | Link |
---|---|
US (1) | US10079884B2 (en) |
CN (1) | CN107197351B (en) |
AU (1) | AU2016253673B2 (en) |
DE (1) | DE102016013109A1 (en) |
GB (1) | GB2548440B (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2021011317A3 (en) * | 2019-07-12 | 2021-02-25 | Apple Inc. | Low latency streaming media |
Family Cites Families (15)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US8701010B2 (en) * | 2007-03-12 | 2014-04-15 | Citrix Systems, Inc. | Systems and methods of using the refresh button to determine freshness policy |
CN102055773B (en) * | 2009-11-09 | 2013-10-09 | 华为技术有限公司 | Method for realizing HTTP-based stream media service, system and network equipment |
BR112012001150B1 (en) * | 2009-11-09 | 2021-06-29 | Snaptrack, Inc | METHOD FOR IMPLEMENTING HTTP-BASED TRANSMISSION SERVICE |
CN102137130A (en) * | 2010-01-22 | 2011-07-27 | 华为技术有限公司 | Synchronized method and device based on hypertext transport protocol (HTTP) |
US10712771B2 (en) * | 2010-08-13 | 2020-07-14 | Netflix, Inc. | System and method for synchronized playback of streaming digital content |
WO2012046487A1 (en) | 2010-10-05 | 2012-04-12 | シャープ株式会社 | Content reproduction device, content delivery system, synchronization method for content reproduction device, control program, and recording medium |
US20120207088A1 (en) * | 2011-02-11 | 2012-08-16 | Interdigital Patent Holdings, Inc. | Method and apparatus for updating metadata |
WO2012124999A2 (en) * | 2011-03-17 | 2012-09-20 | 엘지전자 주식회사 | Method for providing resources by a terminal, and method for acquiring resources by a server |
CN104221390B (en) * | 2012-04-26 | 2018-10-02 | 高通股份有限公司 | Enhanced block for disposing low latency streaming asks approach system |
US20140010517A1 (en) * | 2012-07-09 | 2014-01-09 | Sensr.Net, Inc. | Reduced Latency Video Streaming |
CN102821108A (en) * | 2012-08-24 | 2012-12-12 | 北龙中网(北京)科技有限责任公司 | Method for precisely synchronizing time of client and time of server |
US9226011B2 (en) | 2012-09-11 | 2015-12-29 | Comcast Cable Communications, Llc | Synchronizing program presentation |
US9426196B2 (en) * | 2013-01-04 | 2016-08-23 | Qualcomm Incorporated | Live timing for dynamic adaptive streaming over HTTP (DASH) |
US9521179B2 (en) * | 2014-07-16 | 2016-12-13 | Verizon Patent And Licensing Inc. | Validation of live media stream based on predetermined standards |
US9954746B2 (en) * | 2015-07-09 | 2018-04-24 | Microsoft Technology Licensing, Llc | Automatically generating service documentation based on actual usage |
-
2016
- 2016-03-14 US US15/069,839 patent/US10079884B2/en active Active
- 2016-11-02 CN CN201610952730.0A patent/CN107197351B/en active Active
- 2016-11-03 DE DE102016013109.8A patent/DE102016013109A1/en active Pending
- 2016-11-04 AU AU2016253673A patent/AU2016253673B2/en active Active
- 2016-11-04 GB GB1618588.6A patent/GB2548440B/en active Active
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2021011317A3 (en) * | 2019-07-12 | 2021-02-25 | Apple Inc. | Low latency streaming media |
US11197052B2 (en) | 2019-07-12 | 2021-12-07 | Apple Inc. | Low latency streaming media |
CN114145024A (en) * | 2019-07-12 | 2022-03-04 | 苹果公司 | Low latency streaming media |
Also Published As
Publication number | Publication date |
---|---|
CN107197351B (en) | 2020-12-04 |
CN107197351A (en) | 2017-09-22 |
AU2016253673B2 (en) | 2021-07-15 |
GB2548440A (en) | 2017-09-20 |
US10079884B2 (en) | 2018-09-18 |
AU2016253673A1 (en) | 2018-05-24 |
GB2548440B (en) | 2019-07-10 |
DE102016013109A1 (en) | 2017-09-14 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US20200201490A1 (en) | Providing content via multiple display devices | |
US20150127284A1 (en) | Sensor Data Time Alignment | |
US20180121322A1 (en) | Methods and Systems for Testing Versions of Applications | |
TWI470983B (en) | Method and apparatus for updating http content descriptions | |
US20140337408A1 (en) | Systems, methods and media for minimizing data downloads | |
US10241982B2 (en) | Modifying web pages based upon importance ratings and bandwidth | |
US9626066B2 (en) | Video playback analytics collection | |
CN110809189A (en) | Video playing method and device, electronic equipment and computer readable medium | |
US20190121861A1 (en) | Change detection in a string repository for translated content | |
CN110619096A (en) | Method and apparatus for synchronizing data | |
CN111163336B (en) | Video resource pushing method and device, electronic equipment and computer readable medium | |
US20180249017A1 (en) | Data Usage Based Data Transfer Determination | |
US9648098B2 (en) | Predictive peer determination for peer-to-peer digital content download | |
GB2548440B (en) | Streaming digital content synchronization | |
US20130145258A1 (en) | Incremental Synchronization for Magazines | |
US9525818B2 (en) | Automatic tuning of images based on metadata | |
US9998536B2 (en) | Metered network synchronization | |
CN110996155B (en) | Video playing page display method and device, electronic equipment and computer readable medium | |
CN112333462A (en) | Live broadcast room page jumping method, returning device and electronic equipment | |
US20160127496A1 (en) | Method and system of content caching and transmission | |
US20210058742A1 (en) | Techniques for location-based alert of available applications | |
AU2016256802B2 (en) | Digital content streaming to loss intolerant streaming clients | |
US10171622B2 (en) | Dynamic content reordering for delivery to mobile devices | |
CN111291011B (en) | File synchronization method and device, electronic equipment and readable storage medium | |
US10168982B2 (en) | Display control of a portion of a document by primary and secondary display devices |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
AS | Assignment |
Owner name: ADOBE SYSTEMS INCORPORATED, CALIFORNIA Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:THORNBURGH, MICHAEL CHRISTOPHER;REEL/FRAME:037991/0034 Effective date: 20160311 |
|
STCF | Information on status: patent grant |
Free format text: PATENTED CASE |
|
AS | Assignment |
Owner name: ADOBE INC., CALIFORNIA Free format text: CHANGE OF NAME;ASSIGNOR:ADOBE SYSTEMS INCORPORATED;REEL/FRAME:048867/0882 Effective date: 20181008 |
|
MAFP | Maintenance fee payment |
Free format text: PAYMENT OF MAINTENANCE FEE, 4TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1551); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY Year of fee payment: 4 |