US20140119709A1 - Systems and methods to modify playout or playback - Google Patents
- Publication number
- US20140119709A1
- Authority
- US
- United States
- Prior art keywords
- content
- primary content
- entertainment
- receiving device
- advertisement
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Abandoned
Classifications
-
- G—PHYSICS
- G11—INFORMATION STORAGE
- G11B—INFORMATION STORAGE BASED ON RELATIVE MOVEMENT BETWEEN RECORD CARRIER AND TRANSDUCER
- G11B27/00—Editing; Indexing; Addressing; Timing or synchronising; Monitoring; Measuring tape travel
- G11B27/005—Reproducing at a different information rate from the information rate of recording
-
- G—PHYSICS
- G11—INFORMATION STORAGE
- G11B—INFORMATION STORAGE BASED ON RELATIVE MOVEMENT BETWEEN RECORD CARRIER AND TRANSDUCER
- G11B27/00—Editing; Indexing; Addressing; Timing or synchronising; Monitoring; Measuring tape travel
- G11B27/10—Indexing; Addressing; Timing or synchronising; Measuring tape travel
- G11B27/19—Indexing; Addressing; Timing or synchronising; Measuring tape travel by using information detectable on the record carrier
-
- G—PHYSICS
- G11—INFORMATION STORAGE
- G11B—INFORMATION STORAGE BASED ON RELATIVE MOVEMENT BETWEEN RECORD CARRIER AND TRANSDUCER
- G11B27/00—Editing; Indexing; Addressing; Timing or synchronising; Monitoring; Measuring tape travel
- G11B27/10—Indexing; Addressing; Timing or synchronising; Measuring tape travel
- G11B27/19—Indexing; Addressing; Timing or synchronising; Measuring tape travel by using information detectable on the record carrier
- G11B27/28—Indexing; Addressing; Timing or synchronising; Measuring tape travel by using information detectable on the record carrier by using information signals recorded by the same method as the main recording
- G11B27/30—Indexing; Addressing; Timing or synchronising; Measuring tape travel by using information detectable on the record carrier by using information signals recorded by the same method as the main recording on the same track as the main recording
- G11B27/3027—Indexing; Addressing; Timing or synchronising; Measuring tape travel by using information detectable on the record carrier by using information signals recorded by the same method as the main recording on the same track as the main recording used signal is digitally coded
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/20—Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
- H04N21/23—Processing of content or additional data; Elementary server operations; Server middleware
- H04N21/238—Interfacing the downstream path of the transmission network, e.g. adapting the transmission rate of a video stream to network bandwidth; Processing of multiplex streams
- H04N21/2387—Stream processing in response to a playback request from an end-user, e.g. for trick-play
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/43—Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
- H04N21/432—Content retrieval operation from a local storage medium, e.g. hard-disk
- H04N21/4325—Content retrieval operation from a local storage medium, e.g. hard-disk by playing back content from the storage medium
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/47—End-user applications
- H04N21/472—End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content
- H04N21/47217—End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content for controlling playback functions for recorded or on-demand content, e.g. using progress bars, mode or play-point indicators or bookmarks
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/60—Network structure or processes for video distribution between server and client or between remote clients; Control signalling between clients, server and network components; Transmission of management data between server and client, e.g. sending from server to client commands for recording incoming content stream; Communication details between server and client
- H04N21/65—Transmission of management data between client and server
- H04N21/658—Transmission by the client directed to the server
- H04N21/6587—Control parameters, e.g. trick play commands, viewpoint selection
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N9/00—Details of colour television systems
- H04N9/79—Processing of colour television signals in connection with recording
- H04N9/87—Regeneration of colour television signals
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Databases & Information Systems (AREA)
- Human Computer Interaction (AREA)
- Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)
- Television Signal Processing For Recording (AREA)
Abstract
Systems and methods to modify playout or playback respond to a trick mode request (e.g., fast forward, rewind). First, the system generates a transmission that includes primary content and a secondary information identifier. Next, the system communicates the transmission to a receiving device, which stores the transmission in a local storage device. Next, the receiving device retrieves the transmission from the local storage device and utilizes the secondary information identifier to associate the primary content with secondary content. Finally, the receiving device renders the secondary content, instead of the primary content, to an output device at the receiving device, responsive to receipt of a request to render the primary content to the output device at an accelerated speed of the primary content.
Description
- Embodiments relate generally to the technical field of communications and more specifically to systems and methods to modify playout or playback of primary content.
- Many receiving devices, such as personal video recorders (PVRs) or digital video recorders (DVRs), may provide support for trick mode requests that enable a user to fast forward or rewind content (e.g., primary content). For example, a user who has recorded a movie on a PVR may fast forward through a scene while playing the movie. In response to the request, the PVR may render the movie to a display device at an accelerated speed. Two disadvantages may be identified in processing the user's request to fast forward. First, the content played out in response to the fast forward request is the same content, merely played at an accelerated speed. Second, the content played out in response to the fast forward request may appear jerky and reproduce poorly, making identification of scenes difficult.
- Embodiments are illustrated by way of example and not limitation in the figures of the accompanying drawings, in which like references indicate similar elements and in which:
-
FIG. 1 is a block diagram illustrating a system, according to an example embodiment, to modify playout of primary content; -
FIG. 2 is a block diagram illustrating a database, according to an example embodiment; -
FIG. 3 is a block diagram illustrating example embodiments of entertainment secondary information, according to an example embodiment; -
FIG. 4 is a block diagram illustrating example embodiments of advertisement secondary information; -
FIG. 5 is a block diagram illustrating frames and packets, according to an example embodiment; -
FIG. 6 is a flowchart illustrating a method, according to an example embodiment; -
FIG. 7 is a flowchart illustrating a method, according to an example embodiment, to identify secondary information based on a trick mode request; -
FIG. 8 is a flowchart illustrating a method, according to an example embodiment; -
FIG. 9 is a block diagram illustrating a system, according to an example embodiment, to modify simulated primary content at a receiving device; -
FIG. 10 is a block diagram illustrating a database, according to an example embodiment; -
FIG. 11 is a flow chart illustrating a method, according to an example embodiment, to modify simulated primary content at a receiving device; -
FIG. 12 is a block diagram illustrating a system, according to an example embodiment; -
FIG. 13 is a block diagram illustrating a database, according to an example embodiment; -
FIG. 14 is a block diagram illustrating a database, according to an example embodiment; -
FIG. 15 is a block diagram illustrating a receiving device, according to an example embodiment; -
FIG. 16A is a block diagram illustrating a component transmission, according to an example embodiment; -
FIG. 16B is a block diagram illustrating a component transmission, according to an example embodiment; -
FIG. 16C is a block diagram illustrating a component transmission, according to an example embodiment; -
FIG. 16D is a block diagram illustrating a transmission, according to an example embodiment; -
FIG. 17 is a block diagram illustrating streams associated with a channel, according to an example embodiment; -
FIG. 18 is a block diagram illustrating the packet, according to an example embodiment; -
FIG. 19 is a block diagram illustrating a secondary information table, according to an example embodiment; -
FIG. 20 is a block diagram illustrating primary content and secondary information communicated in the video stream and the audio stream of a single channel, according to an example embodiment; -
FIG. 21 is a block diagram illustrating primary content communicated in a first channel and secondary information communicated in a second channel, according to an example embodiment; -
FIG. 22 is a block diagram illustrating the primary content communicated in a video stream and an audio stream of a channel and the secondary information communicated in the metadata stream of the same channel, according to an example embodiment; -
FIG. 23 is a block diagram illustrating end of primary content markers, according to an example embodiment; -
FIG. 24 is flowchart illustrating a method, according to an example embodiment, to modify playback of primary content at a receiving device; -
FIG. 25 is a flow chart illustrating a method, according to an example embodiment, to communicate a transmission that facilitates modification of playback of primary content at a receiving device; -
FIG. 26 is a diagram illustrating a user interface, according to an example embodiment; -
FIG. 27 is a block diagram of a machine, according to an example embodiment, including instructions to perform any one or more of the methodologies described herein. - In the following description, for purposes of explanation, numerous specific details are set forth in order to provide a thorough understanding of example embodiments of the present invention. It will be evident, however, to one skilled in the art that the present invention may be practiced without these specific details.
- Embodiments described below use one of two approaches to respond to a trick mode request (e.g., fast forward, rewind, skip request). First, a trick mode request may be responded to by associating primary content to secondary content and playing out the secondary content on a receiving device, the secondary content not being derived from the primary content. For example, a user viewing a movie (e.g., primary content) may select a fast forward button that causes fast forwarding of the movie; however, instead of viewing the movie at an accelerated speed, the user may view and/or hear secondary content. Taking this approach, the author of the secondary content is empowered with complete editorial control over the secondary content. Accordingly, the author may create secondary content of the same subject matter as the primary content or create secondary content of a different subject matter altogether. Further, the author may create secondary content of the same medium (e.g., audio and/or video) and presentation (e.g., full motion and/or slide show) as the primary content, or create secondary content of a different medium and presentation. In addition, the author of the primary content need not be the author of the secondary content or be legally or otherwise related to the author of the secondary content.
- Second, a trick mode request may be responded to by associating primary content to secondary content and playing out the secondary content on a receiving device, the secondary content being derived from the primary content but played at a normal speed for the secondary content. Taking this approach, the author of the secondary content is empowered with limited editorial control over the secondary content because the secondary content is derived from the primary content. For example, the derivative secondary content may include selected samples (e.g., audio and/or visual; motion and/or slide show) from the associated primary content. Further, the secondary content may be played at a normal speed for the secondary content thereby eliminating the jerkiness and poor reproduction normally associated with rendering primary content that is fast forwarded or rewound.
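- The two approaches above can be sketched as a single dispatch at the receiving device. The Python sketch below is an illustrative assumption; the names (`respond_to_trick_mode`, `derive_fn`, `secondary_table`) do not appear in the disclosure.

```python
# Hypothetical sketch of the two trick-mode responses described above.
# Approach 1 plays authored (non-derivative) secondary content associated
# with the primary content; approach 2 falls back to deriving secondary
# content (e.g., sampled scenes) from the primary content itself.

def respond_to_trick_mode(primary_id, secondary_table, derive_fn=None):
    """Return the content to render in place of accelerated primary content."""
    secondary = secondary_table.get(primary_id)
    if secondary is not None:
        return secondary              # approach 1: non-derivative secondary content
    if derive_fn is not None:
        return derive_fn(primary_id)  # approach 2: derivative secondary content
    return None                       # no substitute available


print(respond_to_trick_mode("movie-1", {"movie-1": "movie-1-preview"}))
# → movie-1-preview
```

In either case the substitute is played at its own normal speed, so the jerkiness of conventional trick play is avoided.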
- Primary Content in this document is intended to include content that may be played on a receiving device or interacted with on a receiving device. Primary content may include but is not limited to entertainment content and advertisement content. Further, primary content may include video content and/or audio content and/or associated metadata.
- Secondary Content in this document is intended to include content that may be substituted for primary content responsive to receipt of a trick mode request (e.g., fast forward, rewind, reverse, etc.). The secondary content may be played or interacted with on a receiving device. Further, secondary content may include video content and/or audio content and/or associated metadata.
- Secondary Information in this document may include secondary content, information to generate secondary content or information to access secondary content.
- Derivative Secondary Content in this document is intended to include secondary content that is generated from the associated primary content. For example, derivative secondary content may include samples (e.g., audio and/or visual) from the associated primary content.
- Non-Derivative Secondary Content in this document is intended to include secondary content that is not generated from the associated primary content. For example, non-derivative secondary content does not include samples (e.g., audio and/or visual) from the associated primary content.
- Normal Speed in this document is intended to include an instantaneous speed to render a discrete unit of content (e.g., primary content or secondary content) to an output device, the normal speed being the speed necessary to completely render the discrete unit of content from beginning to end in a predetermined play time that is associated with the content. For example, an episode of Gilligan's Island may be rendered at a receiving device at a normal speed such that the episode completes in a predetermined running time (e.g., play time) of twenty-five minutes. Play times may be published with the primary and secondary content. For example, movies may be stored on media and labeled with the play time of the movie. A normal speed may be applicable to advancing the discrete unit of content in forward or reverse directions.
- Accelerated Speed in this document is intended to include an instantaneous speed to render a discrete unit of content to an output device, the accelerated speed being any speed greater than the normal speed associated with the discrete unit of content. An accelerated speed may be applicable to advancing the discrete unit of content in forward or reverse directions.
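- The normal/accelerated distinction above can be expressed numerically. This Python sketch is an illustrative assumption (the function names are not from the disclosure): normal speed completes a discrete unit in its published play time, and any multiplier whose magnitude exceeds 1x, forward or reverse, is accelerated.

```python
def normal_speed(content_seconds, play_time_seconds):
    """Normal speed renders the whole unit in its published play time;
    the ratio is 1.0 when the render completes exactly on time."""
    return content_seconds / play_time_seconds

def is_accelerated(multiplier):
    """A speed is accelerated when its magnitude exceeds normal (1x);
    negative multipliers advance the content in the reverse direction."""
    return abs(multiplier) > 1.0

# A 25-minute episode rendered in 25 minutes plays at normal speed (1.0);
# 2x fast forward and 4x rewind are both accelerated.
print(normal_speed(25 * 60, 25 * 60), is_accelerated(2), is_accelerated(-4))
# → 1.0 True True
```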
- This section describes aspects of the present disclosure that may be embodied using point to point communications. For example, point to point communications may be embodied as a receiving device that requests a video on demand asset from a video on demand server.
- According to a first example aspect of the present disclosure a request for primary content may be received at a system. In response, the system may communicate the primary content to a receiving device that may render the primary content to an output device at a normal speed of the primary content. Also, in response, the system may associate primary content to secondary information that is communicated to a receiving device. Next, the receiving device may receive a request to render the primary content at the receiving device at an accelerated speed of the primary content (e.g., fast forward, rewind). In response, the receiving device may use the secondary information to render secondary non-derivative content to the output device instead of the primary content.
- According to a second example aspect of the present disclosure, processing is substantially similar to the first example aspect except that the secondary information may be used to render secondary derivative content instead of secondary non-derivative content. Further, the receiving device may render the secondary derivative content at a normal speed for the secondary derivative content. For example, the secondary derivative content may include a full motion recording of selected scenes from the primary content.
- Other embodiments of the first and second aspects may include the primary content being stored to a storage device at the receiving device before rendering to the output device, the secondary content being already generated at the time of the trick mode request, or the secondary content being generated at the time of the trick mode request.
- According to a third example aspect of the present disclosure a system receives a request for primary content. In response to the request, the system may communicate the primary content to a receiving device that renders the primary content to an output device at a normal speed of the primary content. Next, the system may receive a request from the receiving device to communicate the primary content for rendering at the output device at the receiving device at an accelerated speed of the primary content (e.g., fast forward, rewind). In response, the system may associate the primary content to secondary non-derivative content and communicate the secondary non-derivative content to the receiving device. Next, the receiving device may render the secondary non-derivative content to the output device.
- According to a fourth example aspect of the present disclosure, processing is substantially similar to the third example aspect except that secondary derivative content may be utilized instead of secondary non-derivative content. Further, the receiving device may render the secondary derivative content at a normal speed for the secondary derivative content.
- Other embodiments of the third and fourth aspects may include the primary content being stored to a storage device at the receiving device before rendering to the output device, the secondary content being already generated at the time of the trick mode request, or the secondary content being generated at the time of the trick mode request.
- According to a fourth example aspect of the present disclosure, a receiving device may receive a request for primary content. In response, the receiving device may render the primary content to an output device at the receiving device at a normal speed for the primary content. Next, the receiving device may receive a request to render the primary content to the output device at an accelerated speed for the primary content (e.g., fast forward, rewind). Next, the receiving device may receive simulated primary content for rendering to the output device so as to simulate rendering of the primary content at an accelerated speed (e.g., fast forward, rewind). Next, the receiving device may generate secondary derivative content based on the simulated primary content. Finally, the receiving device may render the secondary derivative content to the output device instead of the simulated primary content. Further, the receiving device may render the secondary derivative content at a normal speed for the secondary derivative content.
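- Deriving secondary content from primary (or simulated primary) content, as in this aspect, amounts to keeping selected samples and playing them back at their own normal rate. A minimal Python sketch, assuming frames are available as a sequence (the names and the sampling interval are illustrative, not from the disclosure):

```python
def derive_secondary(frames, sample_every=30):
    """Keep every Nth frame of the (simulated) primary content; the
    resulting derivative secondary content is then rendered at its own
    normal speed rather than at the trick-mode rate."""
    return frames[::sample_every]

# 120 primary frames sampled every 30 frames yield 4 secondary frames.
print(derive_secondary(list(range(120)), sample_every=30))
# → [0, 30, 60, 90]
```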
- This section describes aspects of the present disclosure that may be embodied using point to multi-point communications. For example, point to multi-point communications may be embodied using an insertion system that transmits Internet Protocol (IP) transport streams in the Moving Picture Experts Group-2 (MPEG-2) compression format to multiple receiving devices (e.g., set-top boxes).
- According to a fifth example aspect of the present disclosure, a receiving device receives a transmission that includes primary content and a secondary information identifier. The receiving device stores the transmission on a local storage device (e.g., pause). Next, the receiving device may retrieve the transmission from the local storage device to render the primary content to an output device at the receiving device at a normal speed for the primary content (e.g., play). Next, the receiving device may receive a request to render the primary content to the output device at an accelerated speed of the primary content (e.g., fast forward, rewind). Next, the receiving device may associate the primary content to secondary non-derivative content based on the secondary information identifier. Finally, the receiving device may render the secondary non-derivative content to the output device at the receiving device.
- According to a sixth example aspect of the present disclosure, processing is substantially similar to the fifth example aspect except that secondary derivative content may be utilized instead of secondary non-derivative content. Further, the receiving device may render the secondary derivative content at a normal speed for the secondary derivative content.
- Other embodiments of the fifth and sixth aspects may include the secondary content being already generated at the time of the trick mode request, the secondary content being generated responsive to the trick mode request, and the secondary content being retrieved from remote storage rather than local storage.
- According to a seventh example aspect of the present disclosure, a system generates a transmission that includes primary content and a secondary information identifier. Next, the system communicates the transmission to a receiving device that may process the transmission according to the fifth aspect described above.
- According to an eighth example aspect of the present disclosure, a system generates a transmission that includes primary content and a secondary information identifier. Next, the system communicates the transmission to a receiving device that may process the transmission according to the sixth aspect described above.
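- In the point to multi-point aspects above, the transmission bundles the primary content with a secondary information identifier that the receiving device later resolves against a secondary information table. A sketch of that association, with an assumed dictionary layout (none of these names appear in the disclosure):

```python
def make_transmission(primary, secondary_id):
    """Bundle primary content with a secondary information identifier,
    as the generating system does before communicating the transmission."""
    return {"primary": primary, "secondary_id": secondary_id}

def on_trick_request(transmission, secondary_table):
    """At the receiving device, resolve the bundled identifier and return
    the associated secondary content to render instead of the primary."""
    return secondary_table.get(transmission["secondary_id"])

tx = make_transmission("episode-7", "promo-7")
print(on_trick_request(tx, {"promo-7": "promo-7-recording"}))
# → promo-7-recording
```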
-
FIG. 1 is a block diagram illustrating a system 10, according to an example embodiment. The system 10 is shown to include a receiving device 12, a video on demand system 14, and a network 16. The receiving device 12 may, for example, include a set top box (STB), a personal computer, an iPod, a personal video recorder (PVR) (e.g., analog or digital input), a personal digital recorder (PDR) (e.g., analog or digital input), a mobile phone, a portable media player, a game console or any other device capable of playing video and/or audio content. The receiving device 12 is shown to be coupled to an output device 18 and a database 22. In an example embodiment, the receiving device 12 may be operated or controlled with control buttons 19 or a remote control 20. The output device 18 may include a sound device 24 and a display device 26; however, it will be appreciated by those skilled in the art that the output device 18 may also include a machine device to communicate machine interface information (e.g., SGML) to a machine (e.g., client, server, peer to peer). The network 16 may be any network capable of communicating video and/or audio and may include the Internet, closed IP networks such as DSL or FTTH, digital broadcast satellite, cable, digital, terrestrial, analog and digital (satellite) radio, etc., and/or hybrid solutions combining one or more networking technologies.
- The video on demand system 14 is shown to include a streaming server 28, a live feed 29, and a database 30. The database 30 may be a source of prerecorded primary content 32 and secondary information 34, and the live feed 29 may be a source of live primary content 32 and live secondary information 34. The primary content 32 may be played on the output device 18 at the receiving device 12. The secondary information 34 may include entertainment secondary information and advertisement secondary information. The secondary information 34 may further include secondary content 35 that also may be played on the output device 18 at the receiving device 12. Other embodiments may include secondary information 34 that may be used to generate secondary content 35, as described further below.
- The streaming server 28 includes a request module 36 and a communication module 38. The request module 36 may receive requests from the receiving device 12. For example, the request module 36 may receive a request to play primary content 32, a request to fast forward primary content 32, a request to rewind primary content 32, and a request to pause primary content 32. In one example embodiment, the streaming server 28 and the receiving device 12 may utilize the real time streaming protocol (RTSP) to communicate. In another example embodiment, the streaming server 28 and the receiving device 12 may utilize the digital storage media command and control protocol (DSM-CC) to communicate.
- The communication module 38 may respond to requests received by the request module 36. For example, the communication module 38 may respond by communicating primary content 32 to the receiving device 12, communicating a secondary information identifier to the receiving device 12, or communicating secondary content 35 to the receiving device 12.
- While the system 10 shown in FIG. 1 employs a client-server architecture, the present disclosure is of course not limited to such an architecture, and could equally well find application in a distributed, or peer-to-peer, architecture system. The request module 36 and communication module 38 may also be implemented as standalone software programs, which do not necessarily have networking capabilities. -
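The division of labor between the request module 36 and the communication module 38 can be sketched as a small dispatcher. RTSP and DSM-CC are real protocols, but the routing below is an illustrative assumption, not the disclosed implementation:

```python
def route_request(request_type):
    """Decide what the communication module sends back for each request
    the request module receives: trick-mode requests are answered with
    secondary content (or a secondary information identifier), normal
    play with the primary content itself."""
    if request_type in ("fast_forward", "rewind"):
        return "secondary_content"
    if request_type == "play":
        return "primary_content"
    if request_type == "pause":
        return "none"
    raise ValueError(f"unknown request: {request_type}")

print(route_request("fast_forward"), route_request("play"))
# → secondary_content primary_content
```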
FIG. 2 is a block diagram illustrating adatabase 30, according to an example embodiment. Thedatabase 30 is shown to include an entertainment asset table 40, and advertisement asset table 42, an entertainment secondary information table 48, and an advertisement secondary information table 50. The entertainment asset table 40 includesprimary content 32 in the form of entertainment assets 44 (e.g., video on demand assets). Theentertainment asset 44 may be embodied as an audio/video asset such as a movie, television program such as a documentary, a biography, a cartoon, a program, music, or music video or an audio asset such as music track, audio interview or news program or any other form of entertainment that may be requested from the receivingdevice 12. Aparticular entertainment asset 44 may be accessed in the entertainment asset table 40 with an entertainment asset identifier. - The advertisement asset table 42 includes
primary content 32 in the form of advertisement assets 46 (e.g., video on demand assets). For example, theadvertisement asset 46 may be embodied as a commercial, a public service announcement, an infomercial or any other form of advertisement. Aparticular advertisement asset 46 may be accessed in the advertisement asset table 42 with an advertisement asset identifier. - The entertainment secondary information table 48 includes
secondary information 34 that includessecondary content 35 that may be embodied as anentertainment recording 52. For example, theentertainment recording 52 may include key scenes from a movie that may be presented in full motion with sound thereby enabling the user to easily identify where the user wishes to resume play. The entertainment secondary information table 48 may includemultiple entertainment recordings 52 that respectively correspond toentertainment assets 44 in the entertainment asset table 40. Accordingly, aspecific entertainment asset 44 may be associated to a corresponding secondary information 34 (e.g., entertainment recording 52) in the entertainment secondary information table 48. - The advertisement secondary information table 50 includes
secondary information 34 in the form ofsecondary content 35 the may be embodied as anadvertisement recording 54. For example, the advertisement recording 54 may include an abbreviated form of the fulllength advertisement asset 46. The advertisement secondary information table 50 may includemultiple advertisement recordings 54 that respectively correspond toadvertisement assets 46 in the advertisement asset table 42. Accordingly, aspecific advertisement asset 46 may be associated to a corresponding secondary information 34 (e.g., advertisement recording 54) in the advertisement secondary information table 50. - The
entertainment recordings 52 and theadvertisement recordings 54 are respectively shown to include six versions that correspond to types of trick mode requests to fast forward or reverse (e.g., rewind)primary content 32. Further the trick mode may specify an accelerated speed to fast forward or rewind theprimary content 32. For example, the request to fast forward or rewind may be twice-times (e.g., 2×), four-times (e.g., 4×) and six-times (e.g., 6×) of the normal speed at which theprimary content 32 is rendered to theoutput device 18. Other example embodiments may include additional or fewer versions. - The various versions may correspond to
secondary content 35 that has play times of different duration. For example,secondary content 35 corresponding to twice-times (e.g., 2×), a four-times (e.g., 4×), and six-times (e.g., 6×) may have play times of 10, 5, and 2 seconds, respectively. Further, it will be appreciated by a person having ordinary skill in the art that the above describedsecondary content 35 may be designed to be played at normal speed or at any speed within a range of speeds around the normal speed (e.g., accelerated speeds) to achieve a high quality play out. - In some embodiments, the
primary content 32 and secondary content 35 may be accompanied with an interactive application that may result in a presentation to an end user that enables interaction with the user. For example, an entertainment asset 44 in the form of an episode of "American Idol" may include an interactive application that may cause a pop-up that enables an end user to cast a vote. The episode of "American Idol" may further be interleaved with advertisement assets 46 that may enable the voting to continue while the advertisement asset 46 is playing. Further, the entertainment asset 44 and the advertisement asset 46 may be respectively associated with secondary content 35 (e.g., an entertainment recording 52 and an advertisement recording 54) that may also include interactive applications that may also result in a presentation to an end user that has an interactive quality. For example, an entertainment recording 52 associated with the episode of "American Idol" may include an interactive application that causes a pop-up that presents a current tally of the previously described vote. -
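The speed-to-version association described above can be pictured as a small lookup from the requested trick mode to a secondary content version and its example play time (2× → 10 seconds, 4× → 5 seconds, 6× → 2 seconds). This is a minimal illustrative sketch; the function and dictionary names are assumptions, not part of the disclosure.

```python
# Illustrative mapping of trick mode speed multipliers to the example
# play times (in seconds) given above. Names are assumptions.
PLAY_TIMES = {2: 10, 4: 5, 6: 2}

def select_version(direction: str, speed: int) -> dict:
    """Pick the secondary content version for a trick mode request."""
    if speed not in PLAY_TIMES:
        raise ValueError(f"unsupported trick mode speed: {speed}x")
    label = "FF" if direction == "forward" else "REW"
    return {"version": f"{speed}x {label} VERSION",
            "play_time_seconds": PLAY_TIMES[speed]}

print(select_version("forward", 4))
# {'version': '4x FF VERSION', 'play_time_seconds': 5}
```

Note that each version is intended to be rendered at normal speed, so the shorter play times themselves convey the impression of faster traversal.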
FIG. 3 is a block diagram illustrating example embodiments of entertainment secondary information 37. The entertainment secondary information 37 may include secondary content 35, secondary metadata 58, or a secondary application 60. - The secondary content 35 may be immediately rendered by the receiving
device 12 to the output device 18 and may be embodied as the previously described entertainment recording 52 or an entertainment slide show 62. The entertainment slide show 62 may include one or more still images and sound that may be rendered to the output device 18 at the receiving device 12. The still images may have video effects applied to them, including but not limited to fade-ins, fade-outs, dissolves, splits, wipes, etc. - The
secondary content 35 may include derivative secondary content and non-derivative secondary content. For example, the derivative secondary content may include samples (e.g., audio and/or visual) from the associated primary content. In contrast, the non-derivative secondary content does not include samples (e.g., audio and/or visual) from the associated primary content. - The
secondary metadata 58 may be utilized to generate secondary content 35 (e.g., an entertainment recording 52 or an entertainment slide show 62). The secondary metadata 58 may be embodied as entertainment recording metadata 64 and entertainment slide show metadata 66. The entertainment recording metadata 64 may be utilized by the communication module 38 or the receiving device 12 to generate the entertainment recording 52. In addition, the entertainment slide show metadata 66 may be utilized by the communication module 38 or the receiving device 12 to generate the entertainment slide show 62. For example, the communication module 38 or the receiving device 12 may utilize the metadata 64, 66 to identify and collect samples (e.g., audio, visual) from the associated primary content 32. - The
secondary application 60 may be an application that may be executed by the communication module 38 or the receiving device 12 to generate secondary content 35. For example, the secondary application 60 may include an entertainment application 68 that may be executed by the communication module 38 or the receiving device 12 to generate an entertainment recording 52 or an entertainment slide show 62. - The
secondary content 35, secondary metadata 58, and the secondary application 60 may be prerecorded and stored on the database 30. Further, the secondary content 35 may be live (e.g., sporting events, election results, etc.) and communicated to the streaming server 28 from the live feed 29. Accordingly, the secondary information 34 received from the live feed 29 may include an entertainment recording 52 (e.g., live content), an entertainment slide show 62 (e.g., live content), an advertisement recording 54 (e.g., live content), and an advertisement slide show (e.g., live content). -
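One way to picture the metadata-driven path described above: the entertainment recording metadata 64 could name sample points in the associated primary content 32 from which the communication module 38 or the receiving device 12 assembles the entertainment recording 52. A hedged sketch; the metadata format (a list of start/end sample points) and all names are assumptions for illustration.

```python
# Hypothetical metadata format: (start_second, end_second) sample points
# into the primary content. The generator collects those samples, in
# order, to form a derivative entertainment recording.

def generate_recording(primary_frames: list, metadata: list) -> list:
    """Collect the frame ranges named by the metadata, in order."""
    recording = []
    for start, end in metadata:
        recording.extend(primary_frames[start:end])
    return recording

# One frame per second of a toy 10-second asset; two key scenes.
frames = [f"frame-{s}" for s in range(10)]
key_scenes = [(1, 3), (7, 9)]
print(generate_recording(frames, key_scenes))
# ['frame-1', 'frame-2', 'frame-7', 'frame-8']
```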
FIG. 4 is a block diagram illustrating example embodiments of advertisement secondary information 39. The advertisement secondary information 39 may include secondary content 35, secondary metadata 58, or a secondary application 60. - The secondary content 35 may be immediately rendered by the receiving
device 12 to the output device 18. The secondary content 35 may be embodied as the previously described advertisement recording 54 or an advertisement slide show 70. The advertisement slide show 70 may include one or more still images and sound that may be rendered to the output device 18 at the receiving device 12. The still images may have video effects applied to them, including but not limited to fade-ins, fade-outs, dissolves, splits, wipes, etc. - The
secondary content 35 may include derivative secondary content and non-derivative secondary content. For example, derivative secondary content may include samples (e.g., audio and/or visual) from the associated primary content. In contrast, non-derivative secondary content does not include samples (e.g., audio and/or visual) from the associated primary content 32. - The
secondary metadata 58 may be utilized to generate secondary content 35 (e.g., an advertisement recording 54 or an advertisement slide show 70). The secondary metadata 58 may be embodied as advertisement recording metadata 72 and advertisement slide show metadata 74. The advertisement recording metadata 72 may be utilized by the communication module 38 or the receiving device 12 to generate secondary content 35 in the form of the advertisement recording 54. In addition, the advertisement slide show metadata 74 may be utilized by the communication module 38 or the receiving device 12 to generate secondary content 35 in the form of the advertisement slide show 70. For example, the communication module 38 or the receiving device 12 may utilize the metadata 72, 74 to identify and collect samples (e.g., audio, visual) from the associated primary content 32. - The
secondary application 60 may be executed by the communication module 38 or the receiving device 12 to generate secondary content 35. For example, the secondary application 60 may include an advertisement application 76 that may be executed by the communication module 38 or the receiving device 12 to generate an advertisement recording 54 or an advertisement slide show 70. -
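The secondary application path above can be pictured as executable code delivered alongside the asset that, when run, produces renderable secondary content. The following sketch assumes the application exposes a single entry point returning stills plus an audio track; the entry point, field names, and file names are all illustrative, since the disclosure leaves the application's internals open.

```python
# Hypothetical advertisement application: a callable that produces an
# abbreviated advertisement as stills plus an audio track. All names
# are assumptions for illustration.

def advertisement_application() -> dict:
    """Generate secondary content for an advertisement slide show."""
    return {
        "stills": ["logo.png", "product.png", "tagline.png"],
        "audio": "jingle.wav",
    }

content = advertisement_application()
for still in content["stills"]:
    print("render:", still)  # each still may get fades, wipes, etc.
```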
FIG. 5 is a block diagram illustrating frames 80 and packets 82 according to an example embodiment. In an example embodiment, the primary content 32 and the secondary information 34 may be stored as frames 80 on the database 30. In another example embodiment, the primary content 32 and the secondary information 34 may be stored as packets 82 on the database 30. - Moving from left to right, analog image data and analog sound data may be encoded by an encoder to produce the
frames 80. The frames 80 include reference frames 86, reference frame changes 84, and a metadata frame 87. The reference frame 86 may contain reference frame data that is sufficient to completely render an image on the display device 26. In contrast, the reference frame change 84 may contain reference frame change data representing the differences between two successive frames 80. The reference frame change 84 thereby enables bandwidth savings proportional to the similarity between the successive frames 80 (e.g., redundant information is not communicated). The metadata frame 87 contains metadata frame data that may be used to synchronize the corresponding image and sound data. - The reference frames 86, reference frame changes 84, and metadata frames 87 may further be packetized by a multiplexer into
packets 82. The packets 82 are shown to include video information, audio information, and metadata. -
FIG. 6 is a flowchart illustrating a method 100, according to an example embodiment. Illustrated on the right are operations performed on the receiving device 12 and illustrated on the left are operations performed on the streaming server 28. The method 100 commences at the receiving device 12, at operation 102, with the user requesting an entertainment asset 44. For example, the user may use a remote control 20 to select a video on demand asset from a menu that is displayed on the display device 26. In response to the user's request, the receiving device 12 may communicate the request over the network 16 to the streaming server 28. In an example embodiment, the receiving device 12 and the streaming server 28 may utilize the real time streaming protocol (RTSP). - At
operation 104, at the streaming server 28, the request module 36 receives the request to play the video on demand asset. For example, the request may include a primary content identifier that may be used to access the appropriate entry in the entertainment asset table 40. At operation 106, the communication module 38 communicates (e.g., streams, playout) the entertainment asset 44 over the network 16 to the receiving device 12. - At
operation 108, the receiving device 12 receives and renders the entertainment asset 44 to the display device 26 at the normal speed for the entertainment asset 44 until a scheduled advertisement. - At
operation 110, at the streaming server 28, the communication module 38 communicates primary content 32 embodied as an advertisement asset 46. - At
operation 112, the receiving device 12 receives and renders the advertisement asset 46 at normal speed on the display device 26 and the sound device 24. At operation 114, the user may decide not to watch the advertisement and select the fast forward button on the remote control 20 to accelerate the forward speed of the advertisement. Responsive to the request, the receiving device 12 may communicate the fast forward trick mode request to the streaming server 28. For example, the user may request fast forwarding at twice the normal speed (e.g., 2× FF) of the advertisement asset 46 by pressing a fast forward button on the remote control 20 once. - At
operation 116, at the streaming server 28, the request module 36 receives the trick mode request from the receiving device 12. For example, the trick mode request may include a primary content identifier, a direction identifier (e.g., forward or reverse), and a speed identifier (e.g., 2×, 4×, 6×, etc.). - At
operation 118, the communication module 38 associates primary content 32 in the form of the advertisement asset 46 to the corresponding secondary content 35 in the form of an advertisement recording 54, responsive to the request. For example, the communication module 38 may associate the advertisement asset 46 to a version of the advertisement recording 54 that corresponds to twice the normal speed (e.g., 2× FF). In addition, the communication module 38 may initiate fast forwarding of the advertisement asset 46 at twice the normal speed without streaming the advertisement asset 46 to the receiving device 12. At operation 120, the communication module 38 may communicate (e.g., playout, stream, etc.) secondary content 35 embodied as the advertisement recording 54 to the receiving device 12. - At
operation 122, the receiving device 12 may receive and render the advertisement recording 54 (e.g., derivative secondary content) at normal speed to the output device 18 until the advertisement recording 54 ends at operation 124. At operation 126, the user requests the play mode by pressing the play button on the remote control 20 and the receiving device 12 communicates the request to the streaming server 28. - At
operation 128, at the streaming server 28, the request module 36 receives the request, and at operation 130 the communication module 38 communicates the entertainment asset 44 to the receiving device 12. - At
operation 132, the receiving device 12 receives and renders the entertainment asset 44 to the display device 26 and the sound device 24 at a normal speed for the entertainment asset 44. - Other Examples—Offsets into Primary and Secondary Content
- The user in the above example entered a fast forward trick mode request at the beginning of a discrete unit of primary content 32 (e.g., advertisement asset 46) and the
communication module 38 responded by causing the rendering of a discrete unit of secondary content 35 (e.g., advertisement recording 54) from the beginning of the discrete unit of secondary content 35 (e.g., advertisement recording 54). It will be appreciated by one skilled in the art that other examples may include the user entering a fast forward trick mode request at some offset into the primary content 32 and the communication module 38 responding by advancing to a corresponding offset from the beginning of the secondary content 35 (e.g., associated advertisement recording 54) and commencing the rendering of the secondary content 35 (e.g., advertisement recording 54) from the identified offset. For example, a user that enters a fast forward trick mode request in the middle of an advertisement asset 46 may cause the communication module 38 to begin rendering the associated advertisement recording 54 in the middle of the advertisement recording 54. In general, the author of the secondary content 35 may exercise complete editorial control over selection of the offset into the secondary content 35 from which rendering is to begin based on the offset into the primary content 32 that may be detected responsive to the trick mode request. It will further be appreciated that the author of secondary metadata 58 and a secondary application 60 may exercise the same editorial control. - A user that continues to fast forward after the secondary content 35 (e.g., advertisement) has ended may, in one embodiment, view
primary content 32 that may be rendered at an accelerated speed. - In response to the trick mode request, the
communication module 38, in the above described example embodiment, communicated advertisement secondary information 39 in the form of the advertisement recording 54. It will be appreciated by one skilled in the art that other example embodiments may utilize different advertisement secondary information 39. For example, other types of advertisement secondary information 39 may include secondary metadata 58, secondary applications 60, or secondary content 35 in the form of an advertisement slide show 70. - In response to the trick mode request, the
communication module 38 may utilize advertisement recording metadata 72 or the advertisement slide show metadata 74, according to one embodiment. For example, the advertisement recording metadata 72 may be processed by the communication module 38 to generate an advertisement recording 54, and the advertisement slide show metadata 74 may be processed by the communication module 38 to generate an advertisement slide show 70. In both examples, the communication module 38 may utilize the respective metadata 72, 74 to identify a subset of reference frames 86 and reference frame changes 84 in the associated advertisement asset 46 to respectively generate the advertisement recording 54 and the advertisement slide show 70. - In response to the trick mode request, the
communication module 38 may utilize a secondary application 60, according to one embodiment. For example, the secondary application 60 may be embodied as the advertisement application 76. The advertisement application 76 may be executed by the communication module 38 to generate secondary content 35 in the form of the advertisement recording 54 or the advertisement slide show 70. - Other examples may include
primary content 32 and secondary content 35 that may be embodied in one or more mediums (e.g., visual, audio, kinetic, etc.), the visual medium presented as motion or still. It will be appreciated by one skilled in the art that the medium and presentation of primary content 32 does not necessarily determine the medium and presentation of secondary content 35 and that any combination of the medium and presentation of the primary content 32 may be associated to secondary content 35 in any combination of medium and presentation. For example, primary content 32 embodied solely in audio may be associated with secondary content 35 embodied as audio and visual (e.g., motion or still). In another embodiment, secondary content 35 may include non-derivative secondary content 35 and derivative secondary content 35. For example, secondary content 35 may include video that may be derived from the corresponding primary content 32 and audio that may not be derived from the corresponding primary content 32. - It will be appreciated by one skilled in the art that
primary content 32 may also be embodied in the form of entertainment assets 44. Accordingly, the entertainment asset 44 may be associated to corresponding entertainment secondary information 37 (e.g., entertainment recording 52, entertainment slide show 62, entertainment recording metadata 64, entertainment slide show metadata 66, entertainment application 68). - Other Example—Primary Content Played from Local Storage Device
- Further, it will be appreciated by one skilled in the art that the
primary content 32 may not be immediately played on the output device 18 but rather stored to a local storage device (e.g., memory, database 22) for later or delayed playback. - Other examples may include
primary content 32 and secondary content 35 that may be embodied in one or more mediums (e.g., visual, audio, kinetic, etc.), the visual medium presented as motion or still. It will be appreciated by one skilled in the art that the medium and presentation of primary content 32 does not necessarily determine the medium and presentation of secondary content 35 and that any combination of the medium and presentation of the primary content 32 may be associated to secondary content 35 in any combination of medium and presentation. For example, primary content 32 embodied solely in audio may be associated with secondary content 35 embodied as audio and visual (e.g., motion or still). In another embodiment, secondary content 35 may include non-derivative secondary content 35 and derivative secondary content 35. For example, secondary content 35 may include video that may be derived from the corresponding primary content 32 and audio that may not be derived from the corresponding primary content 32. - In response to the trick mode request, the
communication module 38, in the above described example embodiment, communicated derivative secondary content (e.g., advertisement recording 54) for rendering to an output device 18 at a normal speed for the derivative secondary content. In another example, the communication module 38 may have communicated non-derivative secondary content (e.g., advertisement recording 54). -
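The offset behavior described under "Other Examples—Offsets into Primary and Secondary Content" above admits many editorial mappings; one natural choice is a linear proportion between the two play times. This is a minimal sketch under that assumption; the function name and a fixed linear mapping are illustrative, since the author of the secondary content may choose any mapping.

```python
def secondary_offset(primary_offset: float, primary_duration: float,
                     secondary_duration: float) -> float:
    """Map an offset into the primary content to the proportionally
    corresponding offset into the secondary content."""
    return (primary_offset / primary_duration) * secondary_duration

# A trick mode request arriving in the middle of a 30-second
# advertisement asset begins the 10-second advertisement recording
# at its midpoint.
print(secondary_offset(15.0, 30.0, 10.0))  # 5.0
```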
FIG. 7 is a flowchart illustrating a method 160, according to an example embodiment, to identify secondary information 34 based on a trick mode request. The method 160 commences at decision operation 162 with the communication module 38 determining the direction of the trick mode request. If the communication module 38 determines that the trick mode request is a fast forward request, then a branch is made to decision operation 164. Otherwise, the communication module 38 determines the trick mode request is a rewind or reverse request and branches to decision operation 172. - At
decision operation 164, the communication module 38 determines the speed of the trick mode request. If the communication module 38 determines the trick mode request is two-times normal speed, then a branch is made to operation 166. If the communication module 38 determines the trick mode request is four-times normal speed, then a branch is made to operation 168. If the communication module 38 determines the speed of the trick mode request is eight-times the normal speed, then a branch is made to operation 170. At operations 166, 168, and 170, the communication module 38 identifies the two-times, four-times, and eight-times normal fast forward versions, respectively. - At
decision operation 172, the communication module 38 determines the speed of the rewind or reverse trick mode request. If the speed of the rewind trick mode request is two-times, four-times, or six-times the normal speed, then a branch is made to the corresponding operation. -
FIG. 8 is a flowchart illustrating a method 180, according to an example embodiment. Illustrated on the right are operations performed on the receiving device 12 and illustrated on the left are operations performed on the streaming server 28. The method 180 commences at the receiving device 12, at operation 181, with the user requesting an entertainment asset 44. For example, the user may use a remote control 20 to select a video on demand asset from a menu that is displayed on the display device 26. In response to the user's request, the receiving device 12 may communicate the request over the network 16 to the streaming server 28. In an example embodiment, the receiving device 12 and the streaming server 28 may utilize the real time streaming protocol (RTSP). - At
operation 182, at the streaming server 28, the request module 36 receives the request to play the video on demand asset. For example, the request may include a primary content identifier that may be used to access the appropriate entry in the entertainment asset table 40. At operation 183, the communication module 38 communicates (e.g., streams, playout) the entertainment asset 44 over the network 16 to the receiving device 12. - At
operation 184, the receiving device 12 receives and renders the entertainment asset 44 to the display device 26 at the normal speed for the entertainment asset 44. - At
operation 185, at the streaming server 28, the communication module 38 associates the primary content 32 to secondary information 34. For example, the communication module 38 may utilize the primary content identifier to identify corresponding secondary information 34 in the entertainment secondary information table 48 (e.g., entertainment application). - At
operation 186, at the streaming server 28, the communication module 38 may communicate the entertainment application 68 to the receiving device 12. For example, the communication module 38 may communicate all versions of the entertainment application 68 (e.g., 2× FF VERSION, 4× FF VERSION, 6× FF VERSION, 2× REW VERSION, 4× REW VERSION, 6× REW VERSION) to the receiving device 12. - At
operation 187, the receiving device 12 receives and stores all versions of the entertainment application 68 on the database 22. - At
operation 188, the user may select the fast forward button on the remote control 20 to accelerate the forward speed of the entertainment asset 44. Responsive to the request, the receiving device 12 may communicate the fast forward trick mode request to the streaming server 28. For example, the user may request fast forwarding at twice the normal speed (e.g., 2× FF) of the entertainment asset 44 by pressing a fast forward button on the remote control 20 once. - At
operation 189, at the streaming server 28, the request module 36 receives the trick mode request from the receiving device 12. For example, the trick mode request may include a primary content identifier, a direction identifier (e.g., forward or reverse), and a speed identifier (e.g., 2×, 4×, 6×, etc.). - At
operation 190, at the streaming server 28, the communication module 38 stops streaming or communicating the entertainment asset 44 to the receiving device 12. At operation 191, the communication module 38 fast forwards the entertainment asset 44. - At
operation 192, the receiving device 12 executes the appropriate version of the entertainment application 68 (e.g., 2× FF VERSION) to generate non-derivative secondary content in the form of an entertainment slide show 62. At operation 193, the receiving device 12 renders the entertainment slide show 62 to the output device 18. - At
operation 194, the user requests the play mode by pressing the play button on the remote control 20. In response, at operation 195, the receiving device 12 stops rendering the entertainment slide show 62 to the output device 18 and communicates a play request to the streaming server 28. - At
operation 196, at the streaming server 28, the request module 36 receives the request to play the entertainment asset 44. At operation 196, the communication module 38 stops fast forwarding the entertainment asset 44 and communicates (e.g., streams, playout) the entertainment asset 44 over the network 16 to the receiving device 12. - At
operation 198, the receiving device 12 receives and renders the entertainment asset 44 to the output device 18 at a normal speed for the entertainment asset 44. - In response to the trick mode request, the receiving
device 12, in the above described example embodiment, utilized entertainment secondary information 37 in the form of an entertainment application 68 to generate an entertainment slide show 62. It will be appreciated by one skilled in the art that other example embodiments may utilize different entertainment secondary information 37. For example, other types of entertainment secondary information 37 may include secondary content 35 and a secondary application 60 that may generate an entertainment recording 52. - In response to the trick mode request, the
communication module 38, in other example embodiments, may render secondary content 35. For example, the secondary content 35 may include an entertainment recording 52 or an entertainment slide show 62. - Further, it will be appreciated by one skilled in the art that
primary content 32 may also include an advertisement asset 46. Accordingly, the advertisement asset 46 may be associated to corresponding advertisement secondary information 39 (e.g., advertisement recording 54, advertisement slide show 70, advertisement application 76). - Other Examples—Offsets into Primary and Secondary Content
- As previously described, in like manner, the author of
secondary content 35 may exercise complete editorial control over selection of the offset into the secondary content 35 from which rendering is to begin based on the offset into the primary content 32 that may be detected responsive to the trick mode request. It will further be appreciated that the author of secondary metadata 58 and a secondary application 60 may exercise the same editorial control. - Other Example Embodiments—Primary Content Played from Local Storage Device
- Further, it will be appreciated by one skilled in the art that the
primary content 32 may not be immediately played on the output device 18 but rather stored to a local storage device (e.g., memory, database 22) for later or delayed playback. - Other examples may include
primary content 32 and secondary content 35 that may be embodied in one or more mediums (e.g., visual, audio, kinetic, etc.), the visual medium presented as motion or still. It will be appreciated by one skilled in the art that the medium and presentation of primary content 32 does not necessarily determine the medium and presentation of secondary content 35 and that any combination of the medium and presentation of the primary content 32 may be associated to secondary content 35 in any combination of medium and presentation. For example, primary content 32 embodied solely in audio may be associated with secondary content 35 embodied as audio and visual (e.g., motion or still). In another embodiment, secondary content 35 may include non-derivative secondary content 35 and derivative secondary content 35. For example, secondary content 35 may include video that may be derived from the corresponding primary content 32 and audio that may not be derived from the corresponding primary content 32. - In response to the trick mode request, in the above described example embodiment, the receiving device used the
entertainment application 68 to generate non-derivative secondary content (e.g., entertainment slide show 62) for rendering to an output device 18. In another example, the receiving device 12 may have used the entertainment application 68 to generate derivative secondary content (e.g., entertainment slide show 62) for rendering to the output device 18 at a normal speed for the derivative secondary content. - A user that continues to fast forward after the secondary content 35 (e.g., advertisement) has ended may, in one embodiment, cause the receiving device 12 to render corresponding primary content 32 at an accelerated speed. For example, the receiving device 12 may request the streaming server 28 to communicate primary content 32 that may be rendered at an accelerated speed. -
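The receiving-device side of method 180 can be pictured as two steps: store every version of the entertainment application (operation 187), then execute the version matching the trick mode request to produce slide show stills (operation 192). A hedged sketch in which each application version is stubbed as a callable; the version labels mirror those in the text, and everything else is an illustrative assumption.

```python
# Stub applications: one callable per version, each producing two
# toy stills. The labels mirror the versions named in the text.
stored_versions = {
    f"{speed}x {label} VERSION": (lambda s=speed, l=label:
        [f"{l.lower()}-{s}x-still-{n}" for n in range(2)])
    for speed in (2, 4, 6) for label in ("FF", "REW")
}

def handle_trick_mode(direction: str, speed: int) -> list:
    """Execute the stored application version matching the request."""
    label = "FF" if direction == "forward" else "REW"
    return stored_versions[f"{speed}x {label} VERSION"]()

print(handle_trick_mode("forward", 2))
# ['ff-2x-still-0', 'ff-2x-still-1']
```

The default-argument lambdas simply freeze each version's speed and direction at definition time, standing in for the prebuilt application versions stored on the database 22.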
FIG. 9 is a block diagram illustrating a system 200, according to an example embodiment, to modify simulated primary content 238 at a receiving device 12. The system 200 is shown to include a receiving device 12, a network 16, and a video on demand system 206. - The receiving
device 12 has previously been described. Further description is provided below for previously unmentioned components. The receiving device 12 may include a decoder system 208, a processor 210, a memory 212, a content communication module 216, a demultiplexer 217, an audio module 219, a video module 221, a descrambler 225, a receiving module 218, control buttons 19, an interface 222, an interface 223, and a local storage device 309. - The
processor 210 may execute instructions and move data to and from the memory 212 and the memory 226. The content communication module 216 may receive primary content 32 and/or simulated primary content 238 from the network 16 via the interface 223 and communicate the primary content 32 and simulated primary content 238 to the demultiplexer 217. Further, the content communication module 216 may utilize the simulated primary content 238 to generate the secondary content 35 in the form of a programmatically generated entertainment slide show, a programmatically generated entertainment recording, a programmatically generated advertisement slide show, or a programmatically generated advertisement recording. The receiving module 218 may receive a request from the control buttons 19 or the remote control 20. For example, the receiving module 218 may receive a request to fast forward or reverse (e.g., rewind) primary content at an accelerated speed that may be 2×, 4×, or 6× normal speed. The demultiplexer 217 may demultiplex the primary content 32 and the simulated primary content 238 into audio, video, and metadata streams that may be respectively communicated to the audio module 219, the video module 221, and the descrambler 225. The metadata stream may include descrambling information that includes conditional access decryption keys that may be used by the descrambler 225 to descramble or decrypt the audio and video streams. Other embodiments may not include the descrambler 225. The audio module 219 may process the audio and communicate the audio to the memory 226. Similarly, the video module 221 may process the video and communicate the video to the memory 226. - The
decoder system 208 is shown to include a processor 224, a memory 226, a decoder 230, and a render module 234. The processor 224 may be used for executing instructions and moving data. For example, the processor 224 may be used to move the primary content 32, the simulated primary content 238, or other data from the memory 226 to the decoder 230. The decoder 230 may decode the packets/frames into image and sound data. The render module 234 may render the sound data to the sound device 24 and render the image data to the display device 26. - The
local storage device 309 may include a circular buffer that includes both the memory 226 and the database 22. The circular buffer may be utilized by the receiving device 12 to store the primary content 32 and/or simulated primary content 238. For example, a user may be watching a movie and select a pause button on the remote control 20 to answer a telephone call. Responsive to selection of the pause button, the movie may be stored in the circular buffer. Subsequent to completing the telephone call, the user may select the play button on the remote control 20 to prompt the receiving device 12 to resume rendering of the movie to the output device 18 by retrieving the movie from the circular buffer. In addition, the local storage device 309 may include a file structure for storing and retrieving the primary content 32 and/or simulated primary content 238. - The video on
demand system 206 is shown to include a streaming server 28 and a database 235. The streaming server 28 responds to requests for primary content 32 by reading primary content 32 from the database 235 and communicating the primary content 32 over the network 16 to the receiving device 12. Further, the streaming server 28 may respond to a trick mode request by associating the primary content 32 to simulated primary content 238 and communicating (e.g., stream, playout) the simulated primary content 238 over the network 16 to the receiving device 12. - Generally speaking, a user may operate the
control buttons 19 or the remote control 20 to fast forward or rewind (e.g., reverse) the primary content 32 that is presently rendered on the output device 18. In response to receiving the trick mode request, the receiving device 12 may communicate the trick mode request over the network 204 to the streaming server 28. The streaming server 28 may receive the trick mode request and associate the primary content 32 to simulated primary content 238. Next, the streaming server 28 may communicate the simulated primary content 238 to the receiving device 12. At the receiving device 12, the content communication module 216 may receive the simulated primary content 238 and utilize the simulated primary content 238 to generate derivative secondary content. For example, the generated derivative secondary content may be embodied as a programmatically generated entertainment slide show. Finally, the programmatically generated entertainment slide show may be rendered to the output device 18 at a normal speed. - While the
system 10 shown in FIG. 9 employs a client-server architecture, the present disclosure is of course not limited to such an architecture, and could equally well find application in a distributed, or peer-to-peer, architecture system. The content communication module 216 and the receiving module 218 may also be implemented as standalone software programs, which do not necessarily have networking capabilities. -
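The trick mode exchange described above, in which the streaming server 28 associates primary content with simulated primary content and the receiving device derives a slide show rendered at normal speed, can be sketched roughly as follows. All names, the data shapes, and the every-other-frame selection policy are hypothetical illustrations; the disclosure does not define this API.

```python
# Hypothetical sketch of the FIG. 9 trick mode exchange; every name
# and data shape here is illustrative, not taken from the disclosure.

SIMULATED_PRIMARY = {
    # Streaming server side: (primary content, trick mode request) ->
    # frames of a prerecorded accelerated-speed version.
    ("movie-32", "FF_2X"): ["s0", "s1", "s2", "s3"],
}

def streaming_server(primary_id, trick_mode):
    """Associate primary content 32 with simulated primary content 238
    and return it to the requesting device."""
    return SIMULATED_PRIMARY[(primary_id, trick_mode)]

def receiving_device(primary_id, trick_mode):
    """Communicate the trick mode request, then derive secondary
    content from the simulated primary content (here reduced to a
    slide show of every other frame) for rendering at normal speed."""
    simulated = streaming_server(primary_id, trick_mode)
    slide_show = simulated[::2]  # derivative secondary content
    return slide_show

slides = receiving_device("movie-32", "FF_2X")
```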
FIG. 10 is a block diagram illustrating adatabase 235, according to an example embodiment. Thedatabase 235 includes an entertainment asset table 40 as previously described, an advertisement asset table 42 as previously described, an entertainment simulated primary content table 236, and an advertisement simulated primary content table 241. - The entertainment simulated primary content table 236 contains simulated
primary content 238 in the form of accelerated speed entertainment assets 240. Each accelerated speed entertainment asset 240 may be associated with a corresponding entertainment asset 44. For example, the streaming server 28 may associate the entertainment asset 44 to the appropriate accelerated speed entertainment asset 240 responsive to receiving a trick mode request. The accelerated speed entertainment asset 240 may be a prerecorded version of the entertainment asset 44 played at an accelerated speed. In an example embodiment, the accelerated speed entertainment asset 240 may be prerecorded at different speeds and directions (e.g., 2×, 4×, or 6× fast forward, or 2×, 4×, or 6× rewind). - The advertisement simulated primary content table 241 contains simulated
primary content 238 in the form of accelerated speed advertisement assets 242. Each accelerated speed advertisement asset 242 may be associated with an advertisement asset 46. For example, the streaming server 28 may associate the advertisement asset 46 to the corresponding accelerated speed advertisement asset 242 responsive to receiving a trick mode request. The accelerated speed advertisement asset 242 may be a prerecorded version of the advertisement asset 46 played at an accelerated speed. In an example embodiment, the accelerated speed advertisement asset 242 may be prerecorded at different speeds and directions (e.g., 2×, 4×, or 6× fast forward, or 2×, 4×, or 6× rewind). -
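The two simulated primary content tables above amount to a lookup from an asset, a direction, and a speed multiplier to a prerecorded accelerated-speed version. A minimal sketch, with hypothetical identifiers:

```python
# Hypothetical contents of the simulated primary content tables 236/241:
# (asset id, direction, speed multiplier) -> accelerated-speed asset id.
accelerated_assets = {
    ("entertainment-44", "FF", 2): "accelerated-240-ff-2x",
    ("entertainment-44", "FF", 4): "accelerated-240-ff-4x",
    ("advertisement-46", "REW", 2): "accelerated-242-rew-2x",
}

def associate(asset_id, direction, speed):
    """Return the prerecorded accelerated-speed asset for a trick mode
    request, or None when no such version was prerecorded."""
    return accelerated_assets.get((asset_id, direction, speed))

chosen = associate("entertainment-44", "FF", 4)
```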
FIG. 11 is a flow chart illustrating a method 250, according to an example embodiment, to modify simulated primary content 238 at a receiving device 12. Operations performed by the receiving device 12 are illustrated on the right and operations performed by the streaming server 28 are illustrated on the left. The method 250 commences at the receiving device 12, at operation 252, where the user enters a request for an entertainment asset 44 that may be communicated to the streaming server 28. - At
operation 254, the streaming server 28 receives the request to play the entertainment asset 44 and retrieves the requested entertainment asset 44 from the database 235. For example, the request to play the entertainment asset 44 may include an entertainment asset identifier that may be used to access the requested entertainment asset 44 in the entertainment asset table 40. At operation 256, the streaming server 28 communicates the entertainment asset 44 to the receiving device 12. - At
operation 258, at the receiving device 12, the content communication module 216 receives the entertainment asset 44 and communicates the entertainment asset 44 to the demultiplexer 217 that demultiplexes the entertainment asset 44 into audio, video, and metadata streams that are respectively communicated to the audio module 219, the video module 221, and the descrambler 225. The audio module 219, the video module 221, and the descrambler 225 process the respective streams and communicate the results to the memory 226. For example, the descrambler 225 may utilize conditional access decryption keys in the metadata to interact with the audio module 219 and the video module 221 to decrypt or descramble the video and/or the audio. - At
operation 260, the decoder 230, in the decoder system 208, decodes the entertainment asset 44 and communicates the entertainment asset 44 to the render module 234. At operation 260, the render module 234 renders the entertainment asset 44 to the output device 18, including the display device 26 and the sound device 24, at normal speed. - At
operation 262, at the receiving device 12, the user enters a trick mode request (e.g., fast forward at 2× normal speed) via the remote control 20 that is received by the receiving module 218 at the receiving device 12. The receiving module 218 may communicate the trick mode request over the network 204 to the streaming server 28. In an example embodiment, the trick mode request may be communicated utilizing the Real Time Streaming Protocol (RTSP). - At
operation 264, the streaming server 28 receives the trick mode request from the receiving device 12. At operation 265, the streaming server 28 associates the entertainment asset 44 that is currently being communicated (e.g., streamed) to the receiving device 12 to the corresponding accelerated speed entertainment asset 240, and at operation 266 the streaming server 28 communicates the accelerated speed entertainment asset 240 to the receiving device 12. - At
operation 268, at the receiving device 12, the content communication module 216 receives the accelerated speed entertainment asset 240 and communicates the accelerated speed entertainment asset 240 to the demultiplexer 217 that demultiplexes the accelerated speed entertainment asset 240 into audio, video, and metadata streams that are respectively communicated to the audio module 219, the video module 221, and the descrambler 225. The audio module 219, the video module 221, and the descrambler 225 process the respective streams and communicate the results to the memory 226. For example, the descrambler 225 may utilize conditional access decryption keys in the metadata to interact with the audio module 219 and the video module 221 to decrypt or descramble the video and/or the audio. - At
operation 270, the content communication module 216 generates secondary derivative content (e.g., a programmatically generated entertainment slide show) from the accelerated speed entertainment asset 240. For example, the programmatically generated entertainment slide show may include reference frames 86 selected by the content communication module 216 from the accelerated speed entertainment asset 240 stored in the memory 226. In an example embodiment, the content communication module 216 may select reference frames by identifying different scenes in the accelerated speed entertainment asset 240. Further, the content communication module 216 may add fade-ins and fade-outs. Next, the content communication module 216 communicates the programmatically generated entertainment slide show to the decoder 230 that decodes the programmatically generated entertainment slide show and communicates the programmatically generated entertainment slide show to the render module 234. - At
operation 272, the rendermodule 234 renders a programmatically generated entertainment slide show to theoutput device 18 at normal speed. - At
operation 274, the receivingmodule 218 receives a play request that may be entered by the user via theremote control 20 orcontrol buttons 19 and communicates the play request to the streamingserver 28. - At
operation 276, the streamingserver 28 receives the request from the receivingdevice 12. Atoperation 278, the streamingserver 28 may identify a location in theentertainment asset 44 based on the elapsed time from receipt of the fast forward request to receipt of the play request and may resume communicating (e.g., streaming) theentertainment asset 44 to the receivingdevice 12. - At
operation 280, at the receivingdevice 12, the rendermodule 234 renders theentertainment asset 44 to theoutput device 18 at normal speed. - The
content communication module 216 in the above example embodiment generated a programmatically generated entertainment slide show; however, it will be appreciated that other example embodiments may generate a programmatically generated entertainment recording, a programmatically generated advertisement slide show, or a programmatically generated advertisement recording. - Other Examples—Offsets into Primary and Secondary Content
- As previously described, in like manner, the author of the
content communication module 216 may exercise complete editorial control, via the content communication module 216, over the selection of the offset into the simulated primary content 238 from which rendering is to begin, based on the offset into the primary content 32 that may be detected responsive to the trick mode request. -
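One plausible offset policy (an assumption for illustration; the disclosure leaves the mapping to the module's author) is that a position t seconds into the primary content corresponds to t divided by the speed multiplier into a version prerecorded at that multiplier:

```python
def simulated_offset(primary_offset_s, speed):
    """Hypothetical policy: map an offset into the primary content 32
    to the matching offset in simulated primary content 238 that was
    prerecorded at `speed` times normal, so rendering can begin at the
    corresponding scene."""
    return primary_offset_s / speed

# Trick mode entered 240 s into the movie, using the 4x version.
start_at = simulated_offset(240, 4)
```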
FIG. 12 is a block diagram illustrating asystem 290, according to an example embodiment. Thesystem 290 may be utilized to communicate a transmission that facilitates modification of playback ofprimary content 32 at a receivingdevice 12. - The
system 290 includes a receivingdevice 12, abroadcast system 292 and a video ondemand system 294. Thebroadcast system 292 includes anentertainment server 296 and aninsertion system 298 that includes anadvertisement server 304, alive feed 302 and aninsertion server 308. - Broadly speaking, the
insertion server 308 may receive a component transmission 291 (e.g., Internet Protocol (IP)) that includes a stream that is formatted in an MPEG-2 compression format from a live feed 302, a component transmission 293 that includes a stream that is formatted in an MPEG-2 compression format from the entertainment server 296, and a component transmission 295 that includes a stream that is formatted in an MPEG-2 compression format from the advertisement server 304. The component transmission 291 that is received from the live feed 302 may include primary content 32 and secondary information 34 that is live (e.g., sporting events, election results, etc.). Accordingly, the primary content 32 received from the live feed 302 may include an entertainment asset 44 (e.g., live content) and an advertisement asset 46 (e.g., live content). Likewise, the secondary information 34 received from the live feed 302 may include an entertainment recording 52 (e.g., live content), an entertainment slide show 62 (e.g., live content), an advertisement recording 54 (e.g., live content), and an advertisement slide show (e.g., live content). - Each of the
component transmissions 291, 293, and 295 may carry primary content 32 and secondary information 34. The component transmission 295 from the advertisement server 304 may carry primary content 32 in the form of advertisement assets 46 and secondary information 34 relating to advertisements. The transmission from the entertainment server 296 may carry primary content 32 in the form of entertainment assets 44 and secondary information 34 relating to entertainment. Next, the insertion server 308 may utilize the component transmissions 291, 293, and 295 to generate a transmission 297 that is communicated over the network 16 to the receiving device 12. Other example embodiments may include the transmission 297 embodied in other compression formats (e.g., MPEG-4, VC1) or other transport formats (e.g., Internet Protocol (IP)). The secondary information 34 may include a secondary information identifier that may be used by the receiving device 12 to associate the primary content 32 to secondary content 35 that may be played out at the output device 18 at the receiving device 12 responsive to receiving a trick mode request. - The
entertainment server 296 is coupled to adatabase 300 that may includeprimary content 32 andsecondary entertainment information 37 as previously described. - The
advertisement server 304 is shown to be coupled to adatabase 306 that may includeprimary content 32 and advertisementsecondary information 39 as previously described. Theinsertion server 308 is shown to include atransport module 310 and atransmission module 312. Thetransport module 310 may receive thecomponent transmission 291 from thelive feed 302 and thecomponent transmission 293 from theentertainment server 296 and thecomponent transmission 295 from theadvertisement server 304. Further, thetransport module 310 may generate thetransmission 297 based on thecomponent transmission 291 from thelive feed 302 and thecomponent transmission 293 received from theentertainment server 296 and thecomponent transmission 295 received from theadvertisement server 304. Thetransmission module 312 may communicate thetransmission 297 to the receivingdevice 12. - The video on
demand system 294 includes the streamingserver 28 that is shown to be coupled to aremote storage device 316 that may include adatabase 317 that may includesecondary information 34. The receivingdevice 12 may utilize thesecondary information 34 received in thetransmission 297 to request additionalsecondary information 34 that is stored on theremote storage device 316. - While the
system 290 shown inFIG. 12 employs a client-server architecture between the receivingdevice 12 and the video ondemand server 28, the present disclosure is of course not limited to such an architecture, and could equally well find application in a distributed, or peer-to-peer, architecture system. -
FIG. 13 is a block diagram illustrating adatabase 300, according to an example embodiment. Thedatabase 300 is coupled to theentertainment server 296 and is shown to include the entertainment asset table 40 and the entertainment secondary information table 48 as previously described. The entertainment secondary information table 48 is shown to include multiple entries ofentertainment recordings 52; however, it will be appreciated by a person having ordinary skill in the art that other example embodiments of the entertainment secondary information table 48 may include other forms ofsecondary information 34 including theentertainment slide show 62, theentertainment recording metadata 64, the entertainmentslide show metadata 66, and theentertainment application 68 all as previously described. -
FIG. 14 is a block diagram illustrating adatabase 306, according to an example embodiment. Thedatabase 306 is coupled to theadvertisement server 304 and is shown to include the advertisement asset table 42 and the advertisement secondary information table 50 as previously described. The advertisement secondary information table 50 is shown to include multiple entries ofadvertisement recordings 54; however, it will be appreciated by a person having ordinary skill in the art that other example embodiments of the advertisement secondary information table 50 may include other forms ofsecondary information 34 including theadvertisement slide show 70, the advertisement recording metadata 72, the advertisementslide show metadata 74, and theadvertisement application 76 all as previously described. -
FIG. 15 is a block diagram illustrating the receivingdevice 12, according to an example embodiment. The receivingdevice 12 has previously been described. Further description is provided below for previously unmentioned components or functions. - The receiving
device 12 includes a demultiplexer 217, a local storage device 309, and a processing module 322. The demultiplexer 217 may receive a transmission 297 from the insertion system 298, demultiplex the transmission 297 according to channels, and store the demultiplexed transmission 297 in the local storage device 309. For example, in one embodiment, the demultiplexer 217 may utilize the audio module 219, the video module 221, and the descrambler 225 to process and store the transmission 297 in the local storage device 309. In addition, the demultiplexer 217 may identify secondary information 34 in the form of secondary content 35, secondary metadata 58, and a secondary application 60 in the demultiplexed transmission 297 and store the secondary content 35, secondary metadata 58, and secondary application 60 as addressable files on the local storage device 309. - The
local storage device 309 may include a circular buffer that includes both the memory 226 and the database 22. The circular buffer may be utilized by the receiving device 12 to store the transmission 297. For example, a user may be watching a baseball game that is broadcast live and select a pause button on the remote control 20 to answer a telephone call. Responsive to selection of the pause button, the transmission 297 may be stored in the circular buffer. Subsequent to completing the telephone call, the user may select the play button on the remote control 20 to prompt the receiving device 12 to resume rendering of the baseball game to the output device 18 by retrieving the transmission 297 from the circular buffer and processing the transmission 297. In addition, the local storage device 309 may include a file structure for storing and retrieving the secondary information 34 including the secondary content 35, the secondary metadata 58, and the secondary applications 60. Accordingly, in an example embodiment, the local storage device 309 may be utilized to store secondary information 34 in the form of an addressable file (e.g., accessed with a URL) or in the form of a transmission 297. - The
processing module 322 may receive and process requests. For example, the processing module 322 may process a request to render primary content 32 to the output device 18 at an accelerated speed of the primary content. The processing module 322 may receive the request from the remote control 20 or the control buttons 19. Responsive to receiving the request, the processing module 322 may associate the primary content 32 to secondary content 35 based on secondary information 34 in the form of a secondary information identifier that is included in the transmission 297 received by the demultiplexer 217. -
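The pause/resume behavior of the circular buffer in the local storage device 309 can be sketched with a bounded double-ended queue; the capacity and packet names below are hypothetical.

```python
from collections import deque

class CircularBuffer:
    """Minimal sketch of the pause buffer: packets arriving while
    playback is paused are retained up to a fixed capacity, with the
    oldest evicted first, and drained in order on resume."""

    def __init__(self, capacity):
        self._buf = deque(maxlen=capacity)

    def store(self, packet):
        self._buf.append(packet)  # wraps: oldest entry is dropped

    def drain(self):
        while self._buf:
            yield self._buf.popleft()

# Pause: five packets arrive but only four fit, so "p1" is evicted.
buffer = CircularBuffer(capacity=4)
for packet in ["p1", "p2", "p3", "p4", "p5"]:
    buffer.store(packet)

# Resume: playback restarts from the oldest packet still held.
resumed = list(buffer.drain())
```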
FIG. 16A is a block diagram illustrating acomponent transmission 291, according to an example embodiment. Thecomponent transmission 291 may be communicated by thelive feed 302 and received by theinsertion server 308. Thecomponent transmission 291 may includemultiple channels 323 that may carryentertainment assets 44,advertisement assets 46 and associatedsecondary information 34 as described further below. -
FIG. 16B is a block diagram illustrating a component transmission 293, according to an example embodiment. The component transmission 293 may be communicated by the entertainment server 296 and received by the insertion server 308. The component transmission 293 may include multiple channels 323 that may carry entertainment assets 44 and associated secondary information 34 as described further below. -
FIG. 16C is a block diagram illustrating acomponent transmission 295, according to an example embodiment. Thecomponent transmission 295 may be communicated by theadvertisement server 304 and received by theinsertion server 308. Thecomponent transmission 295 may includemultiple channels 323 that may carryadvertisement assets 46 and associatedsecondary information 34 as described further below. -
FIG. 16D is a block diagram illustrating a transmission 297, according to an example embodiment. The transmission 297 may be communicated by the insertion server 308 and received by the receiving device 12. The transmission 297 may be generated based on the component transmission 291 received from the live feed 302, the component transmission 293 received from the entertainment server 296, and the component transmission 295 received from the advertisement server 304. The transmission 297 may include multiple channels 323 that may be selected by the user via the remote control 20 or the control buttons 19. The transmission 297 may carry entertainment assets 44 and corresponding secondary information 34, and advertisement assets 46 and corresponding secondary information 34. -
FIG. 17 is a block diagram illustrating multiple streams associated with a single channel 323, according to an example embodiment. The streams may include a video stream 327, an audio stream 329, and a metadata stream 331. Each stream may be embodied as packets 82 that may be received at the demultiplexer 217 as they enter the receiving device 12. The demultiplexer 217 may concatenate the payloads of the packets to generate frames 80. The frames 80 are shown to include reference frames 86 and reference frame changes 84 as previously described. The reference frames 86, the reference frame changes 84, and the metadata frames 87 may be descrambled and communicated to the decoder 230. The decoder 230 may decode the frames 80 into image data and sound data and communicate the image data and sound data to the render module 234 that renders the image and sound data to the output device 18 including the display device 26 and the sound device 24. -
FIG. 18 is a block diagram illustrating the packet 82, according to an example embodiment. The packet 82 is shown to include a header 340 and a payload 342. The header 340 may include a stream identifier 344 that may be used to identify packets 82 of a single stream. For example, a first stream identifier may identify a first stream carrying packets 82 with a video payload, a second stream identifier may identify a second stream that may include packets 82 carrying an audio payload, and a third stream identifier may identify a third stream that includes packets 82 carrying a metadata payload. The payload 342 may include frame information to construct the frames 80. -
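Demultiplexing by the stream identifier 344 can be sketched as follows; the one-byte header layout is a simplification for illustration and does not match real MPEG-2 transport packets.

```python
# Assumed toy layout for packet 82: one header byte carrying the
# stream identifier 344, followed by the payload 342.
VIDEO, AUDIO, METADATA = 1, 2, 3

def demultiplex(packets):
    """Group packet payloads by stream identifier, concatenating them
    in arrival order as the demultiplexer 217 does to build frames."""
    streams = {VIDEO: b"", AUDIO: b"", METADATA: b""}
    for packet in packets:
        stream_id = packet[0]         # header 340: stream identifier 344
        streams[stream_id] += packet[1:]  # payload 342
    return streams

packets = [bytes([VIDEO]) + b"v1", bytes([AUDIO]) + b"a1", bytes([VIDEO]) + b"v2"]
streams = demultiplex(packets)
```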
FIG. 19 is a block diagram illustratingsecondary information 34 in the form of a secondary information table 350, according to an example embodiment. The secondary information table 350 may be carried in themetadata stream 331 of achannel 323 and may be read by theprocessing module 322 responsive to the receivingdevice 12 receiving a trick mode request. The secondary information table 350 may be utilized by theprocessing module 322 to identify the location of additionalsecondary information 34. The secondary information table 350 may include entries that correspond to the type of trick mode request. For example, trick mode requests may include fast forward and rewind versions at accelerated speeds as previously described. Each trick mode request is associated with asecondary information identifier 352 and a secondary information offset 354. Thesecondary information identifier 352 may identify the location of thesecondary information 34. For example, the secondary information identifier may identify theaudio stream 329 andvideo stream 327 of a channel that may be currently rendered to theoutput device 18, themetadata stream 331 of a channel that may be currently rendered to theoutput device 18, achannel 323 that is different from thechannel 323 that is currently being rendered to theoutput device 18, a file on thelocal storage device 309 or a file on theremote storage device 316. The secondary information offset 354 may be utilized to identify an offset from the beginning of the identifiedsecondary information 34. For example, the secondary information offset 354 may be expressed in bytes or time from the start of the identifiedsecondary information 34. -
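The secondary information table 350 can be pictured as a dictionary keyed by trick mode request; the identifiers and offsets below are hypothetical placeholders for the locations the paragraph enumerates (the current streams, another channel, a local file, or remote storage).

```python
# Hypothetical secondary information table 350: trick mode request ->
# secondary information identifier 352 and offset 354 (in seconds).
secondary_info_table = {
    "FF_2X":  {"identifier": "channel:current", "offset_s": 0},
    "FF_6X":  {"identifier": "local-file:slide_show", "offset_s": 12},
    "REW_2X": {"identifier": "remote-storage:recording", "offset_s": 0},
}

def lookup(trick_mode_request):
    """Return the location and offset of the secondary information 34
    for a trick mode request, as the processing module 322 might."""
    entry = secondary_info_table[trick_mode_request]
    return entry["identifier"], entry["offset_s"]

identifier, offset = lookup("FF_6X")
```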
FIG. 20 is a block diagram illustratingprimary content 32 andsecondary information 34 communicated in thevideo stream 327 and theaudio stream 329 of asingle channel 323, according to an example embodiment. Thechannel 323 is shown to include thevideo stream 327 communicatingprimary content 32 andsecondary information 34, theaudio stream 329 communicatingprimary content 32 andsecondary information 34, and themetadata stream 331 communicating metadata and a secondary information table 350. Responsive to theprimary content 32 being rendered to theoutput device 18 and receipt of a trick mode request, the secondary information table 350 may be accessed by theprocessing module 322 to identify the location of thesecondary information 34 in thevideo stream 327 and audio stream of thesame channel 323. -
FIG. 21 is a block diagram illustrating primary content 32 communicated in a first channel 323 and secondary information 34 communicated in a second channel 323, according to an example embodiment. The first channel 323 is shown to include the video stream 327 communicating primary content 32, the audio stream 329 communicating primary content 32, and the metadata stream 331 communicating metadata and a secondary information table 350. Responsive to the primary content 32 being rendered to the output device 18 and receipt of a trick mode request, the secondary information table 350 may be accessed by the processing module 322 to identify the location of the secondary information 34 in the video stream 327 and audio stream 329 of the second channel 323. -
FIG. 22 is a block diagram illustrating theprimary content 32 communicated in thevideo stream 327 and theaudio stream 329 of achannel 323 and thesecondary information 34 communicated in ametadata stream 331 of thesame channel 323, according to an example embodiment. Thechannel 323 is shown to include thevideo stream 327 communicating theprimary content 32, theaudio stream 329 communicating theprimary content 32, and themetadata stream 331 communicating metadata, a secondary information table 350, andsecondary information 34. Responsive to theprimary content 32 being rendered to theoutput device 18 and receipt of a trick mode request, the secondary information table 350 may be accessed by theprocessing module 322 to identify the location of thesecondary information 34 in themetadata stream 331 of thesame channel 323. -
FIG. 23 is a block diagram illustrating atransmission 297 includingprimary content 32 that includes end ofprimary content markers 361, according to an example embodiment. Thetransmission 297 is shown to includeprimary content 32 in the form of anentertainment asset 44 and anadvertisement asset 46 and respectively correspondingsecondary content 35 in the form of anentertainment recording 52 and anadvertisement recording 54. The end ofprimary content markers 361 may be used by theprocessing module 322 to identify a location in theprimary content 32 to resume play. For example, responsive to receipt of a play request while rendering theentertainment recording 52 to theoutput device 18, theprocessing module 322 may skip to the end ofprimary content marker 361 associated with theentertainment asset 44. Also for example, responsive to receipt of a play request while rendering the advertisement recording 54 to theoutput device 18, theprocessing module 322 may skip to the end ofprimary content marker 361 associated with theadvertisement asset 46. Other example embodiments may utilize other forms of secondary content 35 (e.g.,advertisement slide show 70 and entertainment slide show 62). -
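Skipping to the end-of-primary-content marker on a play request can be sketched as a scan for the first marker at or past the current position; the offsets below are hypothetical.

```python
def next_marker(markers, position):
    """Return the first end of primary content marker 361 at or after
    the current playback position, or None if none remains; markers
    are sorted offsets in seconds."""
    for marker in markers:
        if marker >= position:
            return marker
    return None

# Entertainment asset 44 ends at 1800 s, advertisement asset 46 at
# 1830 s; a play request arrives while secondary content renders at 905 s.
resume_point = next_marker([1800, 1830], position=905)
```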
FIG. 24 is a flowchart illustrating a method 370, according to an example embodiment, to modify playback of primary content 32 at a receiving device 12. The method 370 commences at operation 374 with the demultiplexer 217 receiving the transmission 297 via the interface 223. The transmission 297 may include primary content 32 and a secondary information table 350 that may include secondary information identifiers 352. The demultiplexer 217 may demultiplex the transmission 297 according to channels 323 and store the demultiplexed transmission 297 as packets 82 in the local storage device 309. For example, the demultiplexer 217 may utilize the audio module 219, the video module 221, and the descrambler 225 to store the demultiplexed transmission 297. Other example embodiments may include a demultiplexer 217 that further depacketizes the transmission 297 and concatenates the payloads 342 to generate frames 80 that may be stored in the local storage device 309. - At
operation 376, the descrambler 225 may identify the streams in the transmission 297 associated with the most recent channel request received at the receiving device 12 and descramble the identified streams utilizing descrambling information in the metadata stream 331. For example, the user may have requested the channel 323 that carries ESPN (e.g., the ESPN channel). Further, the processor 224, in the decoder system 208, may communicate the descrambled streams to the decoder 230. - At
operation 380, the decoder 230 decodes the primary content 32 in the identified streams and communicates the primary content 32 to the render module 234. - At
operation 382, the rendermodule 234 renders theprimary content 32 to theoutput device 18 that may include thedisplay device 26 and thesound device 24. For example, the rendermodule 234 may render an entertainment asset 44 (e.g., 2006 World Cup Soccer Game) to theoutput device 18. - At
operation 384, the processing module 322 receives a pause request via the control buttons 19 to pause the rendering of the 2006 World Cup Soccer Game to the output device 18. The processing module 322, in turn, may communicate the request to the descrambler 225 and the decoder system 208. The descrambler 225 stops descrambling packets 82 and the decoder system 208 stops retrieving the descrambled streams from the local storage device 309. Accordingly, the demultiplexer 217 continues to store the transmission 297 to the memory 226 with possible overflow to the database 22. - At
operation 386, the processing module 322 receives a play request via the control buttons 19. The processing module 322, in turn, may communicate the play request to the decoder system 208 and the descrambler 225. The descrambler 225 may respond by descrambling. The processor 224, in the decoder system 208, in turn, may respond by retrieving or reading the descrambled streams (e.g., transmission 297) from the local storage device 309 that may subsequently be utilized to render primary content 32 to the output device 18 at a normal speed for the primary content 32. - At
operation 388, the processing module 322 receives a trick mode request via the remote control 20 to render the primary content 32 at the output device 18 at an accelerated speed. For example, the processing module 322 may receive a request to fast forward the primary content 32 at six times the normal speed (e.g., 6× FF VERSION). - At
operation 390, the processing module 322 may modify the playback of primary content 32 by associating the primary content 32 to the secondary content 35 responsive to receiving the trick mode request. For example, the processing module 322 may retrieve the secondary information table 350 from the metadata stream 331 associated with the channel 323 that carries ESPN (e.g., primary content 32). Further, the processing module 322 may identify the secondary information identifier 352 and the secondary information offset 354 in the secondary information table 350 based on the trick mode request (e.g., 6× FF VERSION). In the present example embodiment, the secondary information table 350 may identify the secondary information 34 as located in a video stream 327 and an audio stream 329 of a channel 323 different from the channel 323 that carries ESPN. Accordingly, the processing module 322 may, in an example embodiment, communicate the identified channel 323 to the descrambler 225 that, in turn, processes the corresponding metadata stream 331, video stream 327, and audio stream 329. For example, the descrambler 225 may utilize the descrambling information in the metadata stream 331 to descramble the video stream 327 and audio stream 329. In the present example, the descrambler 225 descrambles secondary information 34 in the form of an entertainment application 68. - At
operation 391, theprocessing module 322 completes the association ofprimary content 32 tosecondary content 35 by causing theentertainment application 68 to execute. Theentertainment application 68 executes to generatesecondary content 35 in the form of anentertainment recording 52. - At
operation 392, the decoder 230 decodes the entertainment recording 52 and communicates the decoded entertainment recording 52 to the render module 234. - At
operation 393, the render module 234 may render the entertainment recording 52 to the output device 18, including the display device 26 and the sound device 24, at a normal speed of the entertainment recording 52. For example, the entertainment recording 52 may introduce the players of the teams participating in the 2006 World Cup Soccer Game. - At
operation 394, the processing module 322 may receive a play request via the control buttons 19. The processing module 322, in turn, may communicate the ESPN channel 323 to the descrambler 228 that, in turn, descrambles the streams associated with the ESPN channel 323. Next, the processing module 322 identifies the end of primary content marker 361 in the primary content 32 (e.g., 2006 World Cup Soccer Game) and communicates the identified location to the decoder system 208. The processor 224, in the decoder system 208, in turn, communicates the video, audio, and metadata streams to the decoder 230. - At
operation 395, the decoder 230 decodes the primary content 32 (e.g., 2006 World Cup Soccer Game). - At
operation 396, the render module 234 renders the primary content 32 in the form of the entertainment asset 44 (e.g., 2006 World Cup Soccer Game) to the output device 18. - The
processing module 322 in the above described example embodiment utilized a secondary information identifier 352 to identify a channel 323 in the transmission 297 that carried secondary information 34 in the form of the entertainment application 68. Other example embodiments, however, may identify other locations from which to retrieve the secondary information 34 (e.g., entertainment application 68). For example, the secondary information identifier 352 may further identify the secondary information 34 as located in the audio stream 329 and video stream 327 of the channel 323 that is currently being rendered to the output device 18 (e.g., ESPN channel), the metadata stream 331 of the channel 323 that is currently being rendered to the output device 18, the local storage device 309, or the remote storage device 316. - In this example embodiment the
processing module 322 may utilize the secondary information identifier 352 (e.g., stream, channel) to retrieve the secondary information 34 (e.g., entertainment application 68) from the audio stream 329 and the video stream 327 of the ESPN channel 323 responsive to receipt of a trick mode request. Further, the decoder 230 may retrieve the primary content 32 from the audio stream 329 and the video stream 327 in the absence of processing a trick mode request. - In this example embodiment the
processing module 322 may utilize the secondary information identifier 352 (e.g., stream, channel) to retrieve the secondary information 34 (e.g., entertainment application 68) from the metadata stream 331 of the ESPN channel 323 responsive to receipt of a trick mode request. Further, the processing module 322 may retrieve the primary content 32 from the metadata stream 331 in the absence of processing a trick mode request. - In this example embodiment the
processing module 322 may utilize the secondary information identifier (e.g., URL) to retrieve the secondary information 34 (e.g., entertainment application 68) from the local storage device 309. Accordingly, this example embodiment requires the demultiplexer 217 to retrieve the secondary information 34 (e.g., entertainment application 68) from the transmission 297 and to store the retrieved secondary information 34 in the form of an addressable file on the local storage device 309. It will be appreciated that the secondary information 34 (e.g., entertainment application 68) may be stored on the local storage device 309 asynchronous to receipt of the corresponding primary content 32. For example, as described above, the secondary information 34 (e.g., entertainment application 68) utilized by the processing module 322 to generate the entertainment recording 52 may have been received and stored on the local storage device 309 three days before the receiving device 12 received the entertainment asset 44 (e.g., 2006 World Cup Soccer Game). Indeed, the secondary information 34 (e.g., entertainment application 68) may be stored on the local storage device 309 any time (e.g., seconds, hours, days, months, etc.) prior to receipt of the corresponding primary content 32. - In this example embodiment the
processing module 322 may utilize the secondary information identifier (e.g., URL) to retrieve a file from a remote storage device 316 that contains the secondary information 34 (e.g., entertainment application 68). Secondary information 34 may be stored on the remote storage device 316 asynchronous to receipt of the associated primary content 32 at the receiving device 12. - The
processing module 322 in the above described example embodiment associated primary content 32 in the form of an entertainment asset 44 (e.g., 2006 World Cup Soccer Game) with corresponding secondary content 35 in the form of an entertainment recording 52 (e.g., an introduction of the players of the teams participating in the 2006 World Cup Soccer Game). The processing module 322 generated the secondary content 35 by executing the entertainment application 68. Other example embodiments may utilize other types of secondary information 34. For example, other secondary information 34 may include secondary content 35, secondary metadata 58, or a secondary application 60 to generate an entertainment slide show 62. - The
secondary content 35 may include an entertainment recording 52 or an entertainment slide show 62. The processing module 322 may immediately utilize the secondary content 35. - The
secondary metadata 58 may include entertainment recording metadata 64 or entertainment slide show metadata 66 that may be utilized by the processing module 322 to generate secondary content 35. For example, the processing module 322 may use the secondary metadata 58 in the form of entertainment recording metadata 64 to identify reference frames 86 and reference frame changes 84 in the primary content 32 to generate an entertainment recording 52. In another example, the processing module 322 may use the secondary metadata 58 to identify reference frames 86 and add fade-ins and fade-outs to generate an entertainment slide show 62. - Finally, the
secondary application 60 may further be executed by the processing module 322 to generate an entertainment slide show 62. - Further, it will be appreciated by one skilled in the art that
primary content 32 may also include an advertisement asset 46. Accordingly, the advertisement asset 46 may be associated with corresponding advertisement secondary information 39 (e.g., advertisement recording 54, advertisement slide show 70, advertisement recording metadata 72, advertisement slide show metadata, advertisement application 76). - Other example embodiments may include
primary content 32 and secondary content 35 that may be embodied in one or more mediums (e.g., visual, audio, kinetic, etc.), the visual medium presented as motion or still. It will be appreciated by one skilled in the art that the medium and presentation of the primary content 32 do not necessarily determine the medium and presentation of the secondary content 35, and that any combination of the medium and presentation of the primary content 32 may be associated with secondary content 35 in any combination of medium and presentation. For example, primary content 32 embodied solely in audio may be associated with secondary content 35 embodied as audio and visual (e.g., motion or still). In another embodiment, secondary content 35 may include non-derivative secondary content 35 and derivative secondary content 35. For example, secondary content 35 may include video that may be derived from the corresponding primary content 32 and audio that may not be derived from the corresponding primary content 32. - In response to the trick mode request, in the above described example embodiment, the
processing module 322 generated derivative secondary content (e.g., entertainment recording 52) for rendering to anoutput device 18 at a normal speed for the derivative secondary content. In another example, theprocessing module 322 may generate non-derivative secondary content (e.g., advertisement recording 54) for rendering to theoutput device 18. - Other Examples—Offsets into Primary and Secondary Content
- As previously described, in like manner, the author of
secondary content 35 may exercise complete editorial control over selection of the offset into the secondary content 35 from which rendering is to begin based on the offset into the primary content 32 that may be detected responsive to the trick mode request. It will further be appreciated that the author of secondary metadata 58 and a secondary application 60 may exercise the same editorial control. - A user that continues to fast forward after the secondary content 35 (e.g., advertisement) has ended may, in one embodiment, view corresponding
primary content 32 that may be rendered at an accelerated speed. -
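The author-controlled offset selection described above can be sketched as a small lookup. This is a hypothetical illustration only: the table contents, key names, and function name are assumptions for the sketch, not data structures defined in this disclosure.

```python
import bisect

# For each trick mode, an author-chosen list of
# (primary_offset_seconds, secondary_offset_seconds) pairs. Rendering of the
# secondary content begins at the offset paired with the largest primary
# offset not exceeding the point where fast-forward was requested.
OFFSET_TABLE = {
    "6X_FF": [(0.0, 0.0), (600.0, 30.0), (1800.0, 75.0)],
}

def secondary_offset(trick_mode, primary_offset_s):
    """Offset into the secondary content at which rendering begins, or None
    if the trick mode has no associated secondary content (in which case the
    primary content may simply be rendered at an accelerated speed)."""
    pairs = OFFSET_TABLE.get(trick_mode)
    if not pairs:
        return None
    starts = [primary_start for primary_start, _ in pairs]
    i = bisect.bisect_right(starts, primary_offset_s) - 1
    return pairs[i][1]

print(secondary_offset("6X_FF", 700.0))  # falls in the 600 s segment
```

A fast-forward request at 700 seconds into the primary content selects the segment beginning at 600 seconds, so the secondary content starts 30 seconds into its own timeline.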
FIG. 25 is a flow chart illustrating a method 400, according to an example embodiment, to communicate a transmission 297 that facilitates modification of playback of primary content 32 at a receiving device 12. Illustrated on the far right are operations performed by the advertisement server 304. Illustrated on the center right are operations performed by the entertainment server 296. Illustrated on the center left are operations performed by the insertion server 308. Illustrated on the far left are operations performed by the receiving device 12. Illustrated in the center are operations performed by the live feed 302. - The
method 400 commences at operation 401 with the live feed 302 communicating a component transmission 291 to the insertion server 308. The component transmission 291 may include primary content 32 including entertainment assets 44 (e.g., movie, serial episode, documentary, etc.) and advertisement assets 46 (e.g., advertisement, public service announcement, infomercial, etc.). Further, the component transmission 291 may include secondary information 34 including a secondary information table 350. The secondary information table 350 may include a secondary information identifier 352 that may be utilized to associate the primary content 32 with secondary content 35, or secondary information 34 that may be utilized to generate the secondary content 35. - At
operation 402, the transport module 310 at the insertion server 308 may receive the component transmission 291 from the live feed 302. - At
operation 403, the entertainment server 296 communicates a component transmission 293 to the insertion server 308. The component transmission 293 may include primary content 32 including entertainment assets 44 (e.g., movie, serial episode, documentary, etc.) and secondary information 34 including a secondary information table 350. The secondary information table 350 may include a secondary information identifier 352 that may be utilized to associate the primary content 32 with secondary content 35, or secondary information 34 that may be utilized to generate the secondary content 35. - At
operation 404, the transport module 310 at the insertion server 308 may receive the component transmission 293 from the entertainment server 296. - At
operation 406, the advertisement server 304 communicates a component transmission 295 to the insertion server 308. The component transmission 295 may include primary content 32 including advertisement assets 46 (e.g., advertisement, public service announcement, infomercial, etc.) and secondary information 34 including a secondary information table 350. The secondary information table 350 may include a secondary information identifier 352 that may be utilized to associate the primary content 32 with secondary content 35 or secondary information 34 that may be utilized to generate the secondary content 35. - At
operation 408, at the insertion server 308, the transport module 310 may receive the component transmission 295 from the advertisement server 304. - At
operation 410, the transport module 310 may generate a transmission 297 based on the component transmissions received from the entertainment server 296 and the advertisement server 304. For example, the transmission 297 may include the primary content 32 and secondary information 34 from the component transmission 293 (e.g., entertainment assets 44 and associated secondary information 34) and the primary content 32 and secondary information 34 from the component transmission 295 (e.g., advertisement assets 46 and associated secondary information 34). - At
operation 412, the transmission module 312 communicates the transmission 297 to the receiving device 12. - At
operation 414, the receiving device 12 receives the transmission 297. As described above, the processing module 322 at the receiving device 12 may utilize the secondary information identifier 352 in the transmission 297 to associate the primary content 32 with secondary content 35. For example, the primary content 32 may include an entertainment asset 44 that may be associated with secondary content 35 in the form of an entertainment recording 52. Another example may include primary content 32 that may include an advertisement asset 46 that may be associated with secondary content 35 in the form of an advertisement recording 54. - In general, the
transmission 297 received from the insertion server 308 may support the association of primary content 32 with secondary content 35 as previously described by the method 370. -
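The insertion-server flow of method 400, merging component transmissions into a single transmission that carries primary content alongside its secondary information, can be sketched as follows. The dictionary layout and field names are assumptions made for illustration; the patent describes the behavior, not this representation.

```python
def build_transmission(component_transmissions):
    """Merge component transmissions (each carrying primary content and a
    secondary information table) into a single transmission, as the transport
    module at the insertion server might at operation 410."""
    transmission = {"primary": [], "secondary_info": {}}
    for component in component_transmissions:
        transmission["primary"].extend(component["primary"])
        # Each table entry maps a primary asset to its secondary information
        # identifier, which the receiving device consults on a trick mode request.
        transmission["secondary_info"].update(component["secondary_info"])
    return transmission

# Illustrative component transmissions from the entertainment and advertisement servers.
entertainment = {
    "primary": ["entertainment_asset_44"],
    "secondary_info": {"entertainment_asset_44": "entertainment_recording_52"},
}
advertisement = {
    "primary": ["advertisement_asset_46"],
    "secondary_info": {"advertisement_asset_46": "advertisement_recording_54"},
}

tx = build_transmission([entertainment, advertisement])

# Receiving device, on a fast-forward request over the advertisement asset:
print(tx["secondary_info"]["advertisement_asset_46"])
```

The receiving device would then render the identified secondary content at its normal speed instead of rendering the primary content at the accelerated speed.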
FIG. 26 is a display device 26 with an image 134, according to an example embodiment, that was rendered from an advertisement recording 54. The image 134 is shown to include a progress bar 136 that provides a visual indication to the user of the amount of time remaining to fast forward the entire advertisement asset 46. Specifically, the progress bar 136 provides the visual indication of the advertisement asset 46 fast forwarding at two times the normal speed. -
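The arithmetic behind such a progress bar is straightforward and can be sketched as below. The function name and parameters are illustrative assumptions; the only input taken from the description is that the asset fast forwards at two times the normal speed.

```python
def ff_progress(asset_duration_s, elapsed_ff_s, speed=2.0):
    """Return (seconds_remaining, fraction_complete) for fast-forwarding an
    entire asset at `speed` times normal playback."""
    total_ff_time = asset_duration_s / speed  # e.g., a 60 s asset takes 30 s at 2x
    remaining = max(total_ff_time - elapsed_ff_s, 0.0)
    return remaining, 1.0 - remaining / total_ff_time

# Ten seconds into fast-forwarding a 60-second advertisement asset at 2x:
remaining, fraction = ff_progress(60.0, 10.0, speed=2.0)
print(remaining, fraction)
```

A progress bar would draw `fraction` of its width filled and could display `remaining` as the time left before the entire advertisement asset has been fast forwarded.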
FIG. 27 shows a diagrammatic representation of a machine in the example form of a computer system 600 within which a set of instructions, for causing the machine to perform any one or more of the methodologies discussed herein, may be executed. In alternative example embodiments, the machine operates as a standalone device or may be connected (e.g., networked) to other machines. In a networked deployment, the machine may operate in the capacity of a server or a client machine in a server-client network environment, or as a peer machine in a peer-to-peer (or distributed) network environment. The machine may be a server computer, a client computer, a personal computer (PC), a tablet PC, a set-top box (STB), a personal digital assistant (PDA), a cellular telephone, a web appliance, a network router, switch or bridge, an iPod, a personal video recorder (PVR) (e.g., analog or digital input), a personal digital recorder (PDR) (e.g., analog or digital input), a mobile phone, a portable media player, a game console, or any machine capable of executing a set of instructions (sequential or otherwise) that specify actions to be taken by that machine. Further, while only a single machine is illustrated, the term “machine” shall also be taken to include any collection of machines that individually or jointly execute a set (or multiple sets) of instructions to perform any one or more of the methodologies discussed herein. - The
example computer system 600 includes a processor 602 (e.g., a central processing unit (CPU), a graphics processing unit (GPU), or both), a main memory 604 and a static memory 606, which communicate with each other via a bus 608. The computer system 600 may further include a video display unit 610 (e.g., a liquid crystal display (LCD) or a cathode ray tube (CRT)). The computer system 600 also includes an alphanumeric input device 612 (e.g., a keyboard), a cursor control device 614 (e.g., a mouse), a disk drive unit 616, a signal generation device 618 (e.g., a speaker) and a network interface device 620. - The
disk drive unit 616 includes a machine-readable medium 622 on which is stored one or more sets of instructions (e.g., software 624) embodying any one or more of the methodologies or functions described herein. The software 624 may also reside, completely or at least partially, within the main memory 604 and/or within the processor 602 during execution thereof by the computer system 600, the main memory 604 and the processor 602 also constituting machine-readable media. - The
software 624 may further be transmitted or received over a network 626 via the network interface device 620. - While the machine-readable medium 622 is shown in an example embodiment to be a single medium, the term “machine-readable medium” should be taken to include a single medium or multiple media (e.g., a centralized or distributed database, and/or associated caches and servers) that store the one or more sets of instructions. The term “machine-readable medium” shall also be taken to include any medium that is capable of storing, encoding or carrying a set of instructions for execution by the machine and that causes the machine to perform any one or more of the methodologies of the present disclosure. The term “machine-readable medium” shall accordingly be taken to include, but not be limited to, solid-state memories, optical and magnetic media, and carrier wave signals. - Thus, systems and methods to modify playout or playback have been described. Although the present disclosure has been described with reference to specific example embodiments, it will be evident that various modifications and changes may be made to these example embodiments without departing from the broader spirit and scope of the disclosure. Accordingly, the specification and drawings are to be regarded in an illustrative rather than a restrictive sense.
Claims (21)
1-9. (canceled)
10. A system including:
a transport module to generate a transmission that includes primary content and a secondary information identifier; and
a transmission module to communicate the transmission to a receiving device that stores the transmission in a local storage device, the receiving device to retrieve the transmission from the local storage device, the receiving device to utilize the secondary information identifier to associate the primary content with a secondary content, the receiving device to render the secondary content, instead of the primary content, to an output device at the receiving device at a normal speed of the secondary content responsive to receipt of a request to render the primary content to the output device at an accelerated speed of the primary content.
11. The system of claim 10 , wherein the secondary content includes any one from a group including an entertainment recording, an advertisement recording, an entertainment slide show, an advertisement slide show, entertainment recording metadata, advertisement recording metadata, entertainment slide show metadata, advertisement slide show metadata, an entertainment application, and an advertisement application.
12. The system of claim 10 , wherein the primary content includes an entertainment asset.
13. The system of claim 10 , wherein the primary content includes an advertisement asset.
14. The system of claim 10 , wherein the transmission includes a stream, and wherein the stream includes any one stream from a group including an MPEG-2 compression stream, an MPEG-4 compression stream, and a VC1 compression stream.
15. The system of claim 14 , wherein the stream is embedded in a transport that includes any one from a group of transports including an MPEG transport and an IP transport.
16. The system of claim 10 , wherein the secondary information identifier includes a universal resource locator.
17. The system of claim 16 , wherein the universal resource locator identifies a file on a remote storage device.
18. The system of claim 16 , wherein the universal resource locator identifies a file on the local storage device.
19. A method including:
generating a transmission that includes primary content and a secondary information identifier; and
communicating the transmission to a receiving device that stores the transmission in a local storage device, retrieves the transmission from the local storage device, and utilizes the secondary information identifier to associate the primary content with a secondary content, the receiving device to render the secondary content, instead of the primary content, to an output device at the receiving device at a normal speed of the secondary content responsive to receipt of a request to render the primary content to the output device at an accelerated speed of the primary content.
20. The method of claim 19 , wherein the secondary content includes any one from a group of content including an entertainment recording, an advertisement recording, an entertainment slide show, an advertisement slide show, entertainment recording metadata, advertisement recording metadata, entertainment slide show metadata, advertisement slide show metadata, an entertainment application, and an advertisement application.
21. The method of claim 19 , wherein the primary content includes an entertainment asset.
22. The method of claim 19 , wherein the primary content includes an advertisement asset.
23. The method of claim 19 , wherein the transmission includes a transport stream, wherein the transport stream includes any one from a group of transport streams comprising an MPEG-2 transport stream, an MPEG-4 transport stream, and a VC1 transport stream.
24. The method of claim 23 , wherein the transport stream is embedded in a transport that includes any one of a group of transports including an MPEG transport and an IP transport.
25. The method of claim 19 , wherein the secondary information identifier includes a universal resource locator.
26. The method of claim 25 , wherein the universal resource locator identifies a file on a remote storage device.
27. The method of claim 25 , wherein the universal resource locator identifies a file on the local storage device.
28. A tangible machine readable medium storing a set of instructions that, when executed by a machine, cause the machine to:
generate a transmission that includes primary content and a secondary information identifier; and
communicate the transmission to a receiving device that stores the transmission in a local storage device, retrieves the transmission from the local storage device, and utilizes the secondary information identifier to associate the primary content with a secondary content, the receiving device to render the secondary content, instead of the primary content, to an output device at the receiving device at a normal speed of the secondary content responsive to receipt of a request to render the primary content to the output device at the receiving device at an accelerated speed of the primary content.
29. A system including:
a transport module to generate a transmission that includes primary content and a secondary information identifier;
a means for communicating over a network; and
a transmission module to communicate the transmission, via the means, to a receiving device that stores the transmission in a local storage device, the receiving device to retrieve the transmission from the local storage device, the receiving device to utilize the secondary information identifier to associate the primary content with a secondary content, the receiving device to render the secondary content, instead of the primary content, to an output device at the receiving device at a normal speed of the secondary content responsive to receipt of a request to render the primary content to the output device at an accelerated speed of the primary content.
Priority Applications (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US13/950,218 US20140119709A1 (en) | 2006-08-31 | 2013-07-24 | Systems and methods to modify playout or playback |
US14/215,596 US20140199053A1 (en) | 2006-08-31 | 2014-03-17 | Systems and methods to modify playout or playback |
US15/438,973 US20170221520A1 (en) | 2006-08-31 | 2017-02-22 | Systems and methods to play secondary media content |
Applications Claiming Priority (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US11/469,195 US8107786B2 (en) | 2006-08-31 | 2006-08-31 | Systems and methods to modify playout or playback |
US13/316,099 US8521009B2 (en) | 2006-08-31 | 2011-12-09 | Systems and methods to modify playout or playback |
US13/950,218 US20140119709A1 (en) | 2006-08-31 | 2013-07-24 | Systems and methods to modify playout or playback |
Related Parent Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US13/316,099 Continuation US8521009B2 (en) | 2006-08-31 | 2011-12-09 | Systems and methods to modify playout or playback |
Related Child Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US14/215,596 Continuation US20140199053A1 (en) | 2006-08-31 | 2014-03-17 | Systems and methods to modify playout or playback |
Publications (1)
Publication Number | Publication Date |
---|---|
US20140119709A1 true US20140119709A1 (en) | 2014-05-01 |
Family
ID=38657442
Family Applications (5)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US11/469,195 Active 2030-08-06 US8107786B2 (en) | 2006-08-31 | 2006-08-31 | Systems and methods to modify playout or playback |
US13/316,099 Active 2026-09-25 US8521009B2 (en) | 2006-08-31 | 2011-12-09 | Systems and methods to modify playout or playback |
US13/950,218 Abandoned US20140119709A1 (en) | 2006-08-31 | 2013-07-24 | Systems and methods to modify playout or playback |
US14/215,596 Abandoned US20140199053A1 (en) | 2006-08-31 | 2014-03-17 | Systems and methods to modify playout or playback |
US15/438,973 Abandoned US20170221520A1 (en) | 2006-08-31 | 2017-02-22 | Systems and methods to play secondary media content |
Family Applications Before (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US11/469,195 Active 2030-08-06 US8107786B2 (en) | 2006-08-31 | 2006-08-31 | Systems and methods to modify playout or playback |
US13/316,099 Active 2026-09-25 US8521009B2 (en) | 2006-08-31 | 2011-12-09 | Systems and methods to modify playout or playback |
Family Applications After (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US14/215,596 Abandoned US20140199053A1 (en) | 2006-08-31 | 2014-03-17 | Systems and methods to modify playout or playback |
US15/438,973 Abandoned US20170221520A1 (en) | 2006-08-31 | 2017-02-22 | Systems and methods to play secondary media content |
Country Status (2)
Country | Link |
---|---|
US (5) | US8107786B2 (en) |
EP (1) | EP1895536A1 (en) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20160234295A1 (en) * | 2015-02-05 | 2016-08-11 | Comcast Cable Communications, Llc | Correlation of Actionable Events To An Actionable Instruction |
US9916077B1 (en) | 2014-11-03 | 2018-03-13 | Google Llc | Systems and methods for controlling network usage during content presentation |
Families Citing this family (28)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6791020B2 (en) * | 2002-08-14 | 2004-09-14 | Sony Corporation | System and method for filling content gaps |
US11259059B2 (en) | 2004-07-30 | 2022-02-22 | Broadband Itv, Inc. | System for addressing on-demand TV program content on TV services platform of a digital TV services provider |
US7631336B2 (en) | 2004-07-30 | 2009-12-08 | Broadband Itv, Inc. | Method for converting, navigating and displaying video content uploaded from the internet to a digital TV video-on-demand platform |
US7590997B2 (en) | 2004-07-30 | 2009-09-15 | Broadband Itv, Inc. | System and method for managing, converting and displaying video content on a video-on-demand platform, including ads used for drill-down navigation and consumer-generated classified ads |
US9584868B2 (en) | 2004-07-30 | 2017-02-28 | Broadband Itv, Inc. | Dynamic adjustment of electronic program guide displays based on viewer preferences for minimizing navigation in VOD program selection |
US8107786B2 (en) | 2006-08-31 | 2012-01-31 | Open Tv, Inc. | Systems and methods to modify playout or playback |
US7853822B2 (en) * | 2006-12-05 | 2010-12-14 | Hitachi Global Storage Technologies Netherlands, B.V. | Techniques for enhancing the functionality of file systems |
EP2506569B1 (en) * | 2007-04-04 | 2015-06-03 | Visible World, Inc. | Computer program product and methods for modifying commercials during fast-forward reproduction |
US11570521B2 (en) | 2007-06-26 | 2023-01-31 | Broadband Itv, Inc. | Dynamic adjustment of electronic program guide displays based on viewer preferences for minimizing navigation in VOD program selection |
US9654833B2 (en) | 2007-06-26 | 2017-05-16 | Broadband Itv, Inc. | Dynamic adjustment of electronic program guide displays based on viewer preferences for minimizing navigation in VOD program selection |
US20090006189A1 (en) * | 2007-06-27 | 2009-01-01 | Microsoft Corporation | Displaying of advertisement-infused thumbnails of images |
US8060407B1 (en) | 2007-09-04 | 2011-11-15 | Sprint Communications Company L.P. | Method for providing personalized, targeted advertisements during playback of media |
US8949718B2 (en) | 2008-09-05 | 2015-02-03 | Lemi Technology, Llc | Visual audio links for digital audio content |
CN101729875A (en) * | 2008-10-24 | 2010-06-09 | 鸿富锦精密工业(深圳)有限公司 | Multimedia file playing method and media playing device |
US8200602B2 (en) * | 2009-02-02 | 2012-06-12 | Napo Enterprises, Llc | System and method for creating thematic listening experiences in a networked peer media recommendation environment |
US9183881B2 (en) | 2009-02-02 | 2015-11-10 | Porto Technology, Llc | System and method for semantic trick play |
US9055085B2 (en) * | 2009-03-31 | 2015-06-09 | Comcast Cable Communications, Llc | Dynamic generation of media content assets for a content delivery network |
US8990104B1 (en) * | 2009-10-27 | 2015-03-24 | Sprint Communications Company L.P. | Multimedia product placement marketplace |
US9813767B2 (en) * | 2010-07-08 | 2017-11-07 | Disney Enterprises, Inc. | System and method for multiple rights based video |
US8984144B2 (en) | 2011-03-02 | 2015-03-17 | Comcast Cable Communications, Llc | Delivery of content |
ES2629195T3 (en) * | 2013-01-21 | 2017-08-07 | Dolby Laboratories Licensing Corporation | Encoding and decoding of a bit sequence according to a confidence level |
US9674255B1 (en) * | 2014-03-26 | 2017-06-06 | Amazon Technologies, Inc. | Systems, devices and methods for presenting content |
US9786028B2 (en) | 2014-08-05 | 2017-10-10 | International Business Machines Corporation | Accelerated frame rate advertising-prioritized video frame alignment |
US20170063969A1 (en) * | 2015-06-30 | 2017-03-02 | Mobex, Inc. | Systems and methods for content distribution |
US10728624B2 (en) * | 2017-12-29 | 2020-07-28 | Rovi Guides, Inc. | Systems and methods for modifying fast-forward speeds based on the user's reaction time when detecting points of interest in content |
US10805651B2 (en) * | 2018-10-26 | 2020-10-13 | International Business Machines Corporation | Adaptive synchronization with live media stream |
GB2580647A (en) * | 2019-01-18 | 2020-07-29 | Cobalt Aerospace Ltd | A device for creating photoluminescent floor path marking elements |
WO2022161940A1 (en) | 2021-01-28 | 2022-08-04 | Biotronik Se & Co. Kg | Computer implemented method and electronic device for adjusting a delivery of audio and/or audiovisual content using a graphical user interface |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6614844B1 (en) * | 2000-11-14 | 2003-09-02 | Sony Corporation | Method for watermarking a video display based on viewing mode |
US20040103429A1 (en) * | 2002-11-25 | 2004-05-27 | John Carlucci | Technique for delivering entertainment programming content including commercial content therein over a communications network |
Family Cites Families (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP3603364B2 (en) * | 1994-11-14 | 2004-12-22 | ソニー株式会社 | Digital data recording / reproducing apparatus and method |
JP3329979B2 (en) | 1995-02-24 | 2002-09-30 | 株式会社日立製作所 | Optical disk and optical disk reproducing device |
US6041067A (en) * | 1996-10-04 | 2000-03-21 | Matsushita Electric Industrial Co., Ltd. | Device for synchronizing data processing |
US7245664B1 (en) * | 1999-05-20 | 2007-07-17 | Hitachi Kokusai Electric Inc. | Transmission control method of coded video signals and transmission system |
JP4407007B2 (en) * | 2000-05-02 | 2010-02-03 | ソニー株式会社 | Data transmission apparatus and method |
US20020007379A1 (en) * | 2000-05-19 | 2002-01-17 | Zhi Wang | System and method for transcoding information for an audio or limited display user interface |
US20030233422A1 (en) * | 2002-06-12 | 2003-12-18 | Andras Csaszar | Method and apparatus for creation, publication and distribution of digital objects through digital networks |
JP2007510316A (en) | 2003-09-12 | 2007-04-19 | オープン ティーヴィー インコーポレイテッド | Control method and system for recording and playback of interactive application |
AU2004285259A1 (en) * | 2003-10-29 | 2005-05-12 | Benjamin M. W. Carpenter | System and method for managing documents |
US8107786B2 (en) | 2006-08-31 | 2012-01-31 | Open Tv, Inc. | Systems and methods to modify playout or playback |
- 2006
  - 2006-08-31: US application US11/469,195, patent US8107786B2 (en), status: Active
- 2007
  - 2007-08-29: EP application EP07115246A, publication EP1895536A1 (en), status: Withdrawn
- 2011
  - 2011-12-09: US application US13/316,099, patent US8521009B2 (en), status: Active
- 2013
  - 2013-07-24: US application US13/950,218, publication US20140119709A1 (en), status: Abandoned
- 2014
  - 2014-03-17: US application US14/215,596, publication US20140199053A1 (en), status: Abandoned
- 2017
  - 2017-02-22: US application US15/438,973, publication US20170221520A1 (en), status: Abandoned
Patent Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6614844B1 (en) * | 2000-11-14 | 2003-09-02 | Sony Corporation | Method for watermarking a video display based on viewing mode |
US20040028257A1 (en) * | 2000-11-14 | 2004-02-12 | Proehl Andrew M. | Method for watermarking a video display based on viewing mode |
US20040103429A1 (en) * | 2002-11-25 | 2004-05-27 | John Carlucci | Technique for delivering entertainment programming content including commercial content therein over a communications network |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US9916077B1 (en) | 2014-11-03 | 2018-03-13 | Google Llc | Systems and methods for controlling network usage during content presentation |
US20160234295A1 (en) * | 2015-02-05 | 2016-08-11 | Comcast Cable Communications, Llc | Correlation of Actionable Events To An Actionable Instruction |
US11818203B2 (en) * | 2015-02-05 | 2023-11-14 | Comcast Cable Communications, Llc | Methods for determining second screen content based on data events at primary content output device |
Also Published As
Publication number | Publication date |
---|---|
US8521009B2 (en) | 2013-08-27 |
EP1895536A1 (en) | 2008-03-05 |
US20120082428A1 (en) | 2012-04-05 |
US20080124052A1 (en) | 2008-05-29 |
US8107786B2 (en) | 2012-01-31 |
US20140199053A1 (en) | 2014-07-17 |
US20170221520A1 (en) | 2017-08-03 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US8521009B2 (en) | 2013-08-27 | Systems and methods to modify playout or playback |
US11503244B2 (en) | | Systems and methods to position and play content |
US20090119723A1 (en) | | Systems and methods to play out advertisements |
US10244291B2 (en) | | Authoring system for IPTV network |
US11930250B2 (en) | | Video assets having associated graphical descriptor data |
US9955107B2 (en) | | Digital video recorder recording and rendering programs formed from spliced segments |
US20030095790A1 (en) | | Methods and apparatus for generating navigation information on the fly |
US20060070106A1 (en) | | Method, apparatus and program for recording and playing back content data, method, apparatus and program for playing back content data, and method, apparatus and program for recording content data |
WO1999066722A1 (en) | | Broadcasting method and broadcast receiver |
US20100172625A1 (en) | | Client-side Ad Insertion During Trick Mode Playback |
US20080155581A1 (en) | | Method and Apparatus for Providing Commercials Suitable for Viewing When Fast-Forwarding Through a Digitally Recorded Program |
JP2000013759A (en) | | Device and method for transmitting information, device and method for receiving information, and providing medium |
EP2085972A1 (en) | | Apparatus for recording digital broadcast and method of searching for final playback location |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
 | STCB | Information on status: application discontinuation | Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION |