US10455284B2 - Dynamic customization and monetization of audio-visual content - Google Patents


Info

Publication number
US10455284B2
US10455284B2 US13/602,058 US201213602058A US10455284B2 US 10455284 B2 US10455284 B2 US 10455284B2 US 201213602058 A US201213602058 A US 201213602058A US 10455284 B2 US10455284 B2 US 10455284B2
Authority
US
United States
Prior art keywords
audio
viewer
visual
circuitry
selection signal
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
US13/602,058
Other versions
US20140068661A1 (en
Inventor
Daniel A. Gerrity
William H. Gates, III
Paul Holman
Roderick A. Hyde
Edward K. Y. Jung
Jordin T. Kare
Royce A. Levien
Richard T. Lord
Robert W. Lord
Mark A. Malamud
Nathan P. Myhrvold
John D. Rinaldo, Jr.
Keith D. Rosema
Clarence T. Tegreene
Lowell L. Wood, Jr.
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Elwha LLC
Original Assignee
Elwha LLC
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Elwha LLC filed Critical Elwha LLC
Priority to US13/602,058 priority Critical patent/US10455284B2/en
Priority to US13/689,488 priority patent/US9300994B2/en
Priority to US13/708,632 priority patent/US10237613B2/en
Priority to US13/714,195 priority patent/US20140039991A1/en
Priority to US13/720,727 priority patent/US20140040039A1/en
Priority to US13/801,079 priority patent/US20140040945A1/en
Priority to US13/827,167 priority patent/US20140040946A1/en
Assigned to Elwha LLC, a limited liability company of the State of Delaware reassignment Elwha LLC, a limited liability company of the State of Delaware ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: GERRITY, DANIEL A., TEGREENE, CLARENCE T., ROSEMA, KEITH D., RINALDO, JOHN D., JR., KARE, JORDIN T., MALAMUD, MARK A., LORD, RICHARD T., LORD, ROBERT W., GATES, WILLIAM H., III, JUNG, EDWARD K.Y., MYHRVOLD, NATHAN P., HOLMAN, Paul, LEVIEN, ROYCE A., WOOD, LOWELL L., JR., HYDE, RODERICK A.
Priority to PCT/US2013/053444 priority patent/WO2014022783A2/en
Publication of US20140068661A1 publication Critical patent/US20140068661A1/en
Application granted granted Critical
Publication of US10455284B2 publication Critical patent/US10455284B2/en
Expired - Fee Related legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/45Management operations performed by the client for facilitating the reception of or the interaction with the content or administrating data related to the end-user or to the client device itself, e.g. learning user preferences for recommending movies, resolving scheduling conflicts
    • H04N21/458Scheduling content for creating a personalised stream, e.g. by combining a locally stored advertisement with an incoming stream; Updating operations, e.g. for OS modules ; time-related management operations
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/25Management operations performed by the server for facilitating the content distribution or administrating data related to end-users or client devices, e.g. end-user or client device authentication, learning user preferences for recommending movies
    • H04N21/266Channel or content management, e.g. generation and management of keys and entitlement messages in a conditional access system, merging a VOD unicast channel into a multicast channel
    • H04N21/2668Creating a channel for a dedicated end-user group, e.g. insertion of targeted commercials based on end-user profiles
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/44Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream, rendering scenes according to MPEG-4 scene graphs
    • H04N21/44016Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream, rendering scenes according to MPEG-4 scene graphs involving splicing one content stream with another content stream, e.g. for substituting a video clip
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/441Acquiring end-user identification, e.g. using personal code sent by the remote control or by inserting a card
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/442Monitoring of processes or resources, e.g. detecting the failure of a recording device, monitoring the downstream bandwidth, the number of times a movie has been viewed, the storage space available from the internal hard disk
    • H04N21/44213Monitoring of end-user related data
    • H04N21/44218Detecting physical presence or behaviour of the user, e.g. using sensors to detect if the user is leaving the room or changes his face expression during a TV program
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/442Monitoring of processes or resources, e.g. detecting the failure of a recording device, monitoring the downstream bandwidth, the number of times a movie has been viewed, the storage space available from the internal hard disk
    • H04N21/44213Monitoring of end-user related data
    • H04N21/44222Analytics of user selections, e.g. selection of programs or purchase activity
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/442Monitoring of processes or resources, e.g. detecting the failure of a recording device, monitoring the downstream bandwidth, the number of times a movie has been viewed, the storage space available from the internal hard disk
    • H04N21/44213Monitoring of end-user related data
    • H04N21/44222Analytics of user selections, e.g. selection of programs or purchase activity
    • H04N21/44224Monitoring of user activity on external systems, e.g. Internet browsing
    • H04N21/44226Monitoring of user activity on external systems, e.g. Internet browsing on social networks
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/45Management operations performed by the client for facilitating the reception of or the interaction with the content or administrating data related to the end-user or to the client device itself, e.g. learning user preferences for recommending movies, resolving scheduling conflicts
    • H04N21/4508Management of client data or end-user data
    • H04N21/4524Management of client data or end-user data involving the geographical location of the client
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/45Management operations performed by the client for facilitating the reception of or the interaction with the content or administrating data related to the end-user or to the client device itself, e.g. learning user preferences for recommending movies, resolving scheduling conflicts
    • H04N21/4508Management of client data or end-user data
    • H04N21/4532Management of client data or end-user data involving end-user characteristics, e.g. viewer profile, preferences
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/45Management operations performed by the client for facilitating the reception of or the interaction with the content or administrating data related to the end-user or to the client device itself, e.g. learning user preferences for recommending movies, resolving scheduling conflicts
    • H04N21/454Content or additional data filtering, e.g. blocking advertisements
    • H04N21/4542Blocking scenes or portions of the received content, e.g. censoring scenes
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/60Network structure or processes for video distribution between server and client or between remote clients; Control signalling between clients, server and network components; Transmission of management data between server and client, e.g. sending from server to client commands for recording incoming content stream; Communication details between server and client 
    • H04N21/65Transmission of management data between client and server
    • H04N21/658Transmission by the client directed to the server
    • H04N21/6582Data stored in the client, e.g. viewing habits, hardware capabilities, credit card number

Definitions

  • the present application is related to and claims the benefit of the earliest available effective filing date(s) from the following listed application(s) (the “Related Applications”) (e.g., claims earliest available priority dates for other than provisional patent applications or claims benefits under 35 USC § 119(e) for provisional patent applications, for any and all parent, grandparent, great-grandparent, etc. applications of the Related Application(s)). All subject matter of the Related Applications and of any and all parent, grandparent, great-grandparent, etc. applications of the Related Applications, including any priority claims, is incorporated herein by reference to the extent such subject matter is not inconsistent herewith.
  • the present disclosure relates generally to dynamic customization of audio-visual broadcasts (e.g. television broadcasts, data streams, etc.), and more specifically, to monetization of dynamically customized audio-visual broadcasts.
  • Conventional audio-visual content streams typically consist of either pre-recorded content or live events that do not allow viewers to interact with or control any of the audio-visual content that is displayed.
  • Various concepts have recently been introduced that allow for television broadcasts to be modified to a limited degree to accommodate viewer choices, as disclosed by U.S. Pat. Nos. 7,945,926 and 7,631,327 entitled “Enhanced Custom Content Television” issued to Dempski et al.
  • Such prior art systems and methods are relatively limited, however, in their ability to accommodate and assimilate viewer-related information to provide a dynamically tailored audio-visual content stream.
  • Systems and methods for monetization of dynamically customized audio-visual broadcasts that provide an improved degree of accommodation or assimilation of viewer-related choices and characteristics would have considerable utility.
  • a process for providing audio-visual content in accordance with the teachings of the present disclosure may include receiving at least one audio-visual core portion, receiving at least one selection signal indicative of a viewer preference, modifying the audio-visual core portion with at least one revised content portion in accordance with the at least one selection signal to create a dynamically customized audio-visual content, outputting the dynamically-customized audio-visual content, and receiving a consideration for the dynamically-customized audio-visual content.
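The claimed process reads as a five-step pipeline: receive a core portion, receive a selection signal, modify, output, and receive consideration. The following Python sketch is purely illustrative; every name (`provide_customized_content`, `revised_library`, `billing_fn`, etc.) is hypothetical and not taken from the patent:

```python
# Hypothetical sketch of the claimed process; all names are illustrative.
def provide_customized_content(core_portion, selection_signal,
                               revised_library, output_fn, billing_fn):
    """Modify a core portion per a viewer selection signal, output the
    result, and collect a consideration for it."""
    # Look up a revised content portion matching the viewer preference.
    revised = revised_library.get(selection_signal)
    customized = dict(core_portion)
    if revised is not None:
        customized.update(revised)   # replace the selected portions
    output_fn(customized)            # output the dynamically customized content
    return billing_fn(customized)    # receive a consideration (e.g., a fee)
```

A caller might supply a display callback for `output_fn` and a payment hook for `billing_fn`; absent a matching selection, the core portion passes through unmodified.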
  • FIGS. 1-5 show schematic views of systems for dynamic customization and monetization of audio-visual content in accordance with possible implementations of the present disclosure.
  • FIGS. 6 through 33 are flowcharts of processes for dynamic customization and monetization of audio-visual content in accordance with further possible implementations of the present disclosure.
  • Embodiments of methods and systems in accordance with the present disclosure may be implemented in a variety of environments. Initially, methods and systems in accordance with the present disclosure will be described in terms of dynamic customization of broadcasts. It should be remembered, however, that inventive aspects of such methods and systems may be applied to other environments that involve audio-visual content streams, and are not necessarily limited to the specific audio-visual broadcast implementations shown herein.
  • FIG. 1 is a schematic view of a representative system 100 for dynamic customization and monetization of audio-visual content in accordance with an implementation of the present disclosure.
  • the system 100 includes a processing component 110 that receives an audio-visual core portion 102 , such as a television broadcast, and provides a dynamically customized audio-visual content 112 to a display 130 .
  • a viewer 140 uses a control device 142 to provide one or more selection signals 144 to a sensor 150 which, in turn, provides inputs corresponding to the selection signals 144 to the processing component 110 .
  • the processing component 110 may operate without selection signals 144 , such as by accessing default inputs stored within a memory.
  • the sensor 150 may receive further supplemental selection signals 145 from a processing device 146 (e.g. laptop, desktop, personal data assistant, cell phone, iPad, iPhone, etc.) associated with the viewer 140 .
  • the processing component 110 may modify one or more aspects of the incoming audio-visual core portion 102 to provide the dynamically customized audio-visual content 112 that is shown on the display 130 .
  • the processing component 110 may access a data store 120 having revised content portions stored therein to perform one or more aspects of the processes described below.
  • the processing component 110 may modify the core portion 102 by a rendering process.
  • the rendering process is preferably a real-time (or approximately real-time) process.
  • the rendering process may receive the core portion 102 as a digital signal stream, and may modify one or more aspects of the core portion 102 , such as by replacing one or more portions of the core portion 102 with one or more revised content portions retrieved from the data store 120 , in accordance with the selection signals 144 (and/or default inputs).
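The replacement step of the rendering process can be pictured as a loop over tagged stream segments, swapping in revised content portions retrieved from the data store whenever a selection signal matches. This is a hypothetical sketch under assumed segment and store shapes, not the patent's implementation:

```python
# Illustrative (near) real-time replacement loop; all names hypothetical.
def render_stream(core_segments, selections, data_store):
    """Yield a customized stream, swapping segments per viewer selections."""
    for segment in core_segments:
        choice = selections.get(segment["tag"])
        # If the viewer selected an alternative for this tagged segment,
        # retrieve the revised content portion from the data store.
        revised = data_store.get((segment["tag"], choice)) if choice else None
        yield revised if revised is not None else segment
```

Because the function is a generator, segments stream through one at a time, which is compatible with the approximately real-time rendering the disclosure describes.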
  • the audio-visual core portion 102 may consist solely of an audio portion, or solely of a visual (or video) portion, or may include a separate audio portion and a separate visual portion.
  • the audio-visual core portion 102 may include a plurality of audio portions or a plurality of visual portions, or any suitable combination thereof.
  • the term “visual” in such phrases as “audio-visual portion,” “audio-visual core portion,” “visual portion,” etc. is used broadly to refer to signals, data, information, or portions thereof that are associated with something which may eventually be viewed on a suitable display device by a viewer (e.g. video, photographs, images, etc.). It should be understood that a “visual portion” is not intended to mean that the signals, data, information, or portions thereof are themselves visible to a viewer.
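Under these definitions, one plausible (and entirely hypothetical) data model for an audio-visual core portion simply carries independent collections of audio and visual portions, either of which may be empty:

```python
# Hypothetical data model reflecting that a core portion may contain any
# combination of audio and visual portions; names are illustrative.
from dataclasses import dataclass, field
from typing import List

@dataclass
class CorePortion:
    audio_portions: List[bytes] = field(default_factory=list)
    visual_portions: List[bytes] = field(default_factory=list)

    def is_audio_only(self) -> bool:
        # "Visual" here means displayable data, not data visible in itself.
        return bool(self.audio_portions) and not self.visual_portions
```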
  • each of the components of the system 100 may be implemented using software, hardware, firmware, or any suitable combinations thereof.
  • one or more of the components of the system 100 may be combined, or may be divided or separated into additional components, or additional components may be added, or one or more of the components may simply be eliminated, depending upon the particular requirements or specifications of the operating environment.
  • the display 130 may be that associated with a conventional television or other conventional audio-visual display device
  • the processing component 110 may be a separate component, such as a gaming device (e.g. Microsoft Xbox®, Sony Playstation®, Nintendo Wii®, etc.), a media player (e.g. DVD player, Blu Ray device, Tivo, etc.), or any other suitable component.
  • the sensor 150 may be a separate component or may alternately be integrated into the same component with the display 130 or the processing component 110 .
  • the information store 120 may be a separate component or may alternately be integrated into the same component with the processing component 110 , the display 130 , or the sensor 150 . Alternately, some or all of the components (e.g. the processing component 110 , the information store 120 , the display 130 , the sensor 150 , etc.) may be integrated into a common component 160 .
  • FIG. 2 is a schematic view of another representative system 200 for dynamic customization of television broadcasts in accordance with an implementation of the present disclosure.
  • the system 200 includes a processing component 210 that receives an audio-visual core portion 202 , and provides a dynamically customized audio-visual content 212 to a display 230 .
  • a viewer 240 uses a control device 242 to provide one or more selection signals 244 to a sensor 250 which, in turn, provides inputs corresponding to the selection signals 244 to the processing component 210 .
  • the processing component 210 may also operate without selection signals 244 , such as by accessing default inputs stored within a memory 220 .
  • the sensor 250 may sense a field of view 260 to detect the viewer 240 or one or more other persons 262 .
  • the processing component 210 , the memory 220 , and the sensor 250 are housed within a single device 225 .
  • the processing component 210 may modify one or more aspects of the incoming audio-visual core portion 202 to provide the dynamically customized audio-visual content 212 that is shown on the display 230 .
  • the processing component 210 may also modify one or more aspects of the incoming audio-visual core portion 202 based on one or more persons (e.g. viewer 240 , other person 262 ) sensed within the field of view 260 .
  • the processing component 210 may retrieve revised content portions stored in the memory 220 to perform one or more aspects of the processes described below.
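One way to picture the field-of-view behavior is a content-rating policy in which the most restrictive rating among detected persons governs the customization, falling back to stored default inputs when no one is detected. This is a hypothetical sketch, not the patent's algorithm:

```python
# Hypothetical policy: adapt content when persons are detected in the
# sensor's field of view (e.g., soften content if a child is present).
def effective_rating(detected_persons, default_rating="adult"):
    ratings_order = ["child", "teen", "adult"]  # most to least restrictive
    if not detected_persons:
        return default_rating  # fall back to stored default inputs
    # The most restrictive rating among detected persons governs.
    idx = min(ratings_order.index(p["rating"]) for p in detected_persons)
    return ratings_order[idx]
```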
  • FIG. 3 shows another representative implementation of a system 300 for dynamic customization of audio-visual content in accordance with another possible embodiment.
  • the system 300 may include one or more processors (or processing units) 302 , special purpose circuitry 382 , a memory 304 , and a bus 306 that couples various system components, including the memory 304 , to the one or more processors 302 and special purpose circuitry 382 (e.g. ASIC, FPGA, etc.).
  • the bus 306 represents one or more of any of several types of bus structures, including a memory bus or memory controller, a peripheral bus, an accelerated graphics port, and a processor or local bus using any of a variety of bus architectures.
  • the memory 304 includes read only memory (ROM) 308 and random access memory (RAM) 310 .
  • a basic input/output system (BIOS) 312 containing the basic routines that help to transfer information between elements within the system 300 , such as during start-up, is stored in ROM 308 .
  • the exemplary system 300 further includes a hard disk drive 314 for reading from and writing to a hard disk (not shown); the hard disk drive 314 is connected to the bus 306 via a hard disk drive interface 316 (e.g., a SCSI, ATA, or other type of interface).
  • a magnetic disk drive 318 for reading from and writing to a removable magnetic disk 320 is connected to the system bus 306 via a magnetic disk drive interface 322 .
  • an optical disk drive 324 for reading from or writing to a removable optical disk 326 , such as a CD ROM, DVD, or other optical media, is connected to the bus 306 via an optical drive interface 328 .
  • the drives and their associated computer-readable media provide nonvolatile storage of computer readable instructions, data structures, program modules and other data for the system 300 .
  • Although the exemplary system 300 described herein employs a hard disk, a removable magnetic disk 320 , and a removable optical disk 326 , it should be appreciated by those skilled in the art that other types of computer readable media which can store data that is accessible by a computer, such as magnetic cassettes, flash memory cards, digital video disks, random access memories (RAMs), read only memories (ROMs), and the like, may also be used.
  • a number of program modules may be stored on the memory 304 (e.g. the ROM 308 or the RAM 310 ) including an operating system 330 , one or more application programs 332 , other program modules 334 , and program data 336 (e.g. the data store 320 , image data, audio data, three dimensional object models, etc.). Alternately, these program modules may be stored on other computer-readable media, including the hard disk, the magnetic disk 320 , or the optical disk 326 .
  • programs and other executable program components such as the operating system 330 , are illustrated in FIG. 3 as discrete blocks, although it is recognized that such programs and components reside at various times in different storage components of the system 300 , and may be executed by the processor(s) 302 or the special purpose circuitry 382 of the system 300 .
  • a user may enter commands and information into the system 300 through input devices such as a keyboard 338 and a pointing device 340 .
  • Other input devices may include a microphone, joystick, game pad, satellite dish, scanner, or the like.
  • These and other input devices are connected to the processing unit 302 and special purpose circuitry 382 through an interface 342 that is coupled to the system bus 306 .
  • a monitor 325 (e.g. display 130 , display 230 , or any other display device) may also be connected to the system 300 .
  • the system 300 may also include other peripheral output devices (not shown) such as speakers and printers.
  • the system 300 may operate in a networked environment using logical connections to one or more remote computers (or servers) 358 .
  • Each such remote computer (or server) 358 may be a personal computer, a server, a router, a network PC, a peer device, or other common network node, and may include many or all of the elements described above relative to the system 300 .
  • the logical connections depicted in FIG. 3 may include one or more of a local area network (LAN) 348 and a wide area network (WAN) 350 .
  • Such networking environments are commonplace in offices, enterprise-wide computer networks, intranets, and the Internet.
  • the system 300 also includes one or more broadcast tuners 356 .
  • the broadcast tuner 356 may receive broadcast signals directly (e.g., analog or digital cable transmissions fed directly into the tuner 356 ) or via a reception device (e.g., via sensor 150 , sensor 250 , an antenna, a satellite dish, etc.).
  • When used in a LAN networking environment, the system 300 may be connected to the local network 348 through a network interface (or adapter) 352 .
  • When used in a WAN networking environment, the system 300 typically includes a modem 354 or other means for establishing communications over the wide area network 350 , such as the Internet.
  • the modem 354 which may be internal or external, may be connected to the bus 306 via the serial port interface 342 .
  • the system 300 may exchange (send or receive) wireless signals 353 (e.g. selection signals 144 , signals 244 , core portion 102 , core portion 202 , etc.) with one or more remote devices using a wireless interface 355 coupled to a wireless communicator 357 (e.g., sensor 150 , sensor 250 , an antenna, a satellite dish, a transmitter, a receiver, a transceiver, a photoreceptor, a photodiode, an emitter, a receptor, etc.).
  • program modules depicted relative to the system 300 may be stored in the memory 304 , or in a remote memory storage device. More specifically, as further shown in FIG. 3 , a dynamic customization component 380 may be stored in the memory 304 of the system 300 .
  • the dynamic customization component 380 may be implemented using software, hardware, firmware, or any suitable combination thereof.
  • the dynamic customization component 380 may be operable to perform one or more implementations of processes for dynamic customization in accordance with the present disclosure.
  • While the system 300 shown in FIG. 3 is capable of receiving an audio-visual core portion (e.g. core portion 102 , core portion 202 , etc.) from an external source (e.g. via the wireless device 357 , the LAN 348 , the WAN 350 , etc.), in further embodiments, the audio-visual core portion may itself be generated within the system 300 , such as by playing media stored within the system memory 304 , or stored within the hard disk drive 314 , or played on the disk drive 318 , the optical drive 328 , or any other suitable component of the system 300 . In some implementations, the audio-visual core portion may be generated by suitable software routines operating within the system 300 .
  • one or more of the core content providers 410 may be based or partially based in what is referred to as the “cloud” or “cloud computing,” or may be provided using one or more “cloud services.”
  • cloud computing is the delivery of computational capacity and/or storage capacity as a service.
  • the “cloud” refers to one or more hardware and/or software components that deliver or assist in the delivery of computational and/or storage capacity, including, but not limited to, one or more of a client, an application, a platform, an infrastructure, and a server, and associated hardware and/or software.
  • Cloud and cloud computing may refer to one or more of a computer, a processor, a storage medium, a router, a modem, a virtual machine (e.g., a virtual server), a data center, an operating system, a middleware, a hardware back-end, a software back-end, and a software application.
  • a cloud may refer to a private cloud, a public cloud, a hybrid cloud, and/or a community cloud.
  • a cloud may be a shared pool of configurable computing resources, which may be public, private, semi-private, distributable, scaleable, flexible, temporary, virtual, and/or physical.
  • a cloud or cloud service may be delivered over one or more types of network, e.g., the Internet.
  • a cloud or cloud services may include one or more of infrastructure-as-a-service (“IaaS”), platform-as-a-service (“PaaS”), software-as-a-service (“SaaS”), and desktop-as-a-service (“DaaS”).
  • IaaS may include, e.g., one or more virtual server instantiations that may start, stop, access, and configure virtual servers and/or storage centers (e.g., providing one or more processors, storage space, and network resources on-demand, e.g., GoGrid and Rackspace).
  • PaaS may include, e.g., one or more software and/or development tools hosted on an infrastructure (e.g., a computing platform and/or a solution stack from which the client can create software interfaces and applications, e.g., Microsoft Azure).
  • SaaS may include, e.g., software hosted by a service provider and accessible over a network (e.g., the software for the application and the data associated with that software application are kept on the network, e.g., Google Apps, SalesForce).
  • DaaS may include, e.g., providing desktop, applications, data, and services for the user over a network (e.g., providing a multi-application framework, the applications in the framework, the data associated with the applications, and services related to the applications and/or the data over the network, e.g., Citrix).
  • the foregoing is intended to be exemplary of the types of systems referred to in this application as “cloud” or “cloud computing” and should not be considered complete or exhaustive.
  • a viewer 440 may provide one or more selection signals 444 using a manual input device 442 .
  • the one or more selection signals 444 may be provided to a sensor 450 which, in turn, provides selection inputs 452 corresponding to the selection signals 444 to the one or more dynamic customization service providers 420 .
  • the sensor 450 may be eliminated, and the selection signals 444 may be communicated directly to the one or more dynamic customization service providers 420 .
  • the sensor 450 may receive one or more supplemental selection signals 445 from one or more electronic devices 446 (e.g. laptop, desktop, personal data assistant, cell phone, iPad, iPhone, etc.) associated with the viewer 440 .
  • the one or more supplemental selection signals 445 may be based on a variety of suitable information, including, for example, browsing histories, purchase records, call records, downloaded content, or any other suitable information or data.
  • one or more supplemental selection signals 445 may be automatically determined from one or more characteristics of a viewing area 460 , such as a presence of one or more additional viewers 442 (e.g. a child, spouse, friend, visitor, etc.).
  • the one or more customization service providers 420 receive the one or more selection inputs 452 (or default inputs if specific inputs are not provided), and the audio-visual core portion 412 from the one or more core content providers 410 , and using the one or more dynamic customization systems 422 , provide a dynamically customized audio-visual content 470 to a display 472 visible to the one or more viewers 440 , 442 in the viewing area 460 .
  • one or more viewers 440 , 442 may provide one or more payments (or other consideration) 480 to the one or more customization service providers 420 in exchange for the dynamically customized audio-visual content 470 .
  • the one or more customization service providers 420 may provide one or more payments (or other consideration) 482 to the one or more core content providers 410 in exchange for the core audio-visual content 412 .
  • the amounts of at least one of the one or more payments 480 , or the one or more payments 482 may be at least partially determined using one or more processes in accordance with the teachings of the present disclosure, as described more fully below.
  • the audio-visual core portion 412 may consist solely of an audio portion, solely of a visual (or video) portion, or may include a separate audio portion, a separate visual portion, a plurality of audio portions, a plurality of visual portions, or any suitable combination thereof.
  • the dynamically customized audio-visual content 470 may consist solely of an audio portion, solely of a visual (or video) portion, or may include a separate audio portion, a separate visual portion, a plurality of audio portions, a plurality of visual portions, or any suitable combination thereof.
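The FIG. 4 signal flow described above (selection signals 444 pass through a sensor 450, become selection inputs 452, and drive a dynamic customization system 422 acting on a core portion 412) can be sketched in outline. The function names and data shapes below are illustrative assumptions, not part of the disclosure:

```python
# Hypothetical sketch of the FIG. 4 data flow. Reference numerals follow
# the description; the dict-based content model is an assumption.

def sense(selection_signals):
    # Sensor 450: turn raw selection signals 444 into selection inputs 452.
    return {"inputs": list(selection_signals)}

def customize(core_portion, selection_inputs):
    # Dynamic customization system 422: apply each selection input to the
    # audio-visual core portion 412, leaving unselected aspects unchanged.
    customized = dict(core_portion)
    for pref in selection_inputs["inputs"]:
        customized[pref["aspect"]] = pref["value"]
    return customized

core = {"title": "Example Broadcast", "lead_actor": "Actor A"}   # core portion 412
signals = [{"aspect": "lead_actor", "value": "Actor B"}]         # signals 444
content = customize(core, sense(signals))                        # content 470
print(content)  # only the selected aspect is replaced
```

Note that the core portion is copied before modification, mirroring the idea that one core portion can serve many viewers, each receiving a differently customized content 470.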
  • FIG. 5 shows a schematic view of another representative system 500 for dynamic customization of audio-visual broadcasts in accordance with an alternate implementation of the present disclosure.
  • the system 500 includes several of the same components as described above for the system 400 shown in FIG. 4 ; however, the one or more customization service providers 420 have been eliminated.
  • the components described above with respect to FIG. 4 will not be repeated, but rather, the significant new aspects of the system 500 shown in FIG. 5 will be described.
  • the one or more selection inputs 552 are provided to one or more core content providers 510 .
  • the one or more core content providers 510 have one or more dynamic customization systems 512 .
  • the one or more core content providers 510 receive the one or more selection inputs 552 (or default inputs if specific inputs are not provided), and modify an audio-visual core portion using the one or more dynamic customization systems 512 to provide a dynamically customized audio-visual content 470 to a display 472 visible to one or more viewers 440 , 442 in a viewing area 460 .
  • the one or more customization service providers 420 shown in FIG. 4 may be eliminated, and the same one or more entities that normally provide an audio-visual core portion (e.g. normal television broadcasts, etc.) may perform the dynamic customization to provide the desired dynamically customized audio-visual content to viewers.
  • the one or more viewers 440 , 442 may provide one or more payments (or other consideration) 490 to the one or more core content providers 510 in exchange for the dynamically customized audio-visual content 470 .
  • the amount of the one or more payments 490 may be defined using one or more processes in accordance with the teachings of the present disclosure, as described more fully below.
  • FIG. 6 shows a flowchart of a process 600 for dynamic customization of audio-visual content in accordance with an implementation of the present disclosure.
  • the process 600 includes receiving at least one audio-visual core portion at 610 , receiving at least one selection signal indicative of a viewer preference at 620 , modifying the audio-visual core portion with at least one revised content portion in accordance with the at least one selection signal to create a dynamically customized audio-visual content at 630 , outputting the dynamically-customized audio-visual content at 640 , and receiving a consideration for the dynamically-customized audio-visual content at 650 .
  • an incoming audio-visual core portion may be dynamically customized in accordance with a viewer's preferences, thereby increasing the viewer's satisfaction.
  • the viewer (e.g. viewer 140 ) may indicate preferences for actresses (and actors) 132 , vehicles 134 , depicted products (or props) 135 , environmental aspects 136 (e.g. buildings, scenery, setting, background, lighting, etc.), language 138 , or other suitable preferences.
  • virtually any desired aspect of the incoming core portion 102 may be dynamically customized in accordance with the viewer's selections, preferences, or characteristics as implemented by the selection signals 144 .
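The five steps of process 600 (receiving a core portion at 610, receiving a selection signal at 620, modifying at 630, outputting at 640, and receiving consideration at 650) can be condensed into a toy pipeline. The `process_600` helper and the dictionary-based content and consideration records are hypothetical, offered only to make the step ordering concrete:

```python
# Toy pipeline for process 600 (steps 610-650); all names are illustrative.

def process_600(core_portion, preferences, consideration):
    customized = dict(core_portion)   # 610: the received core portion
    customized.update(preferences)    # 620/630: apply the selection signals
    return {                          # 640/650: output content, record payment
        "content": customized,
        "consideration": consideration,
    }

result = process_600(
    {"vehicle": "sedan", "language": "French"},
    {"language": "English"},                          # viewer preference
    {"type": "monthly_subscription", "amount": 9.99},
)
print(result["content"]["language"])  # → English
```

Aspects the viewer leaves unspecified (here, the vehicle) pass through from the core portion unchanged, which is the sense in which the customization is a modification rather than a wholesale replacement.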
  • receiving at least one audio-visual core portion at 610 may include receiving at least one audio-visual core portion at a dynamic customization system proximate to a viewer at 702 (e.g. dynamic customization system 100 shown in FIG. 1 , a gaming console or other suitable processing device located in a viewer's home, etc.).
  • receiving at least one audio-visual core portion at 610 may include receiving at least one audio-visual core portion at a dynamic customization service that provides a dynamically customized audio-visual content to a viewer at 704 (e.g. customization service provider 420 shown in FIG. 4 ).
  • receiving at least one audio-visual core portion at 610 may include generating at least one audio-visual core portion by a core content provider at 706 (e.g. core content provider 410 shown in FIG. 4 ). In additional implementations, receiving at least one audio-visual core portion at 610 may include providing at least one audio-visual core portion from a memory device by a core content provider at 708 (e.g. core content provider 510 shown in FIG. 5 ).
  • receiving at least one selection signal indicative of a viewer preference at 620 may include receiving at least one selection signal indicative of a viewer preference at a dynamic customization system proximate to a viewer at 712 (e.g. dynamic customization system 100 shown in FIG. 1 , an Xbox®, PlayStation®, Wii®, personal computer, Mac®, or other suitable processing device located within a viewer's living space or sphere of influence, etc.).
  • receiving at least one selection signal indicative of a viewer preference at 620 may include receiving at least one selection signal indicative of a viewer preference at a dynamic customization service that provides a dynamically customized audio-visual content to a viewer at 714 (e.g. customization service provider 420 shown in FIG. 4 ).
  • receiving at least one selection signal indicative of a viewer preference at 620 may include receiving at least one selection signal indicative of a viewer preference by a core content provider at 716 (e.g. core content provider 510 shown in FIG. 5 ).
  • modifying the audio-visual core portion with at least one revised content portion in accordance with the at least one selection signal to create a dynamically customized audio-visual content at 630 may include modifying the audio-visual core portion at a dynamic customization system proximate to a viewer at 722 (e.g. dynamic customization system 100 shown in FIG. 1 ).
  • modifying the audio-visual core portion with at least one revised content portion in accordance with the at least one selection signal to create a dynamically customized audio-visual content at 630 may include modifying the audio-visual core portion at a dynamic customization service that provides a dynamically customized audio-visual content to a viewer at 724 (e.g. customization service provider 420 shown in FIG. 4 ).
  • modifying the audio-visual core portion with at least one revised content portion in accordance with the at least one selection signal to create a dynamically customized audio-visual content at 630 may include modifying the audio-visual core portion by a core content provider that provides the audio-visual core portion at 726 (e.g. core content provider 510 shown in FIG. 5 ).
  • outputting the dynamically-customized audio-visual content at 640 may include outputting the dynamically-customized audio-visual content from a dynamic customization system proximate to a viewer at 732 (e.g. dynamic customization system 100 shown in FIG. 1 , at the viewer's television set, at the viewer's viewing room, within the viewer's dwelling, etc.).
  • outputting the dynamically-customized audio-visual content at 640 may include outputting the dynamically-customized audio-visual content from a dynamic customization service that provides the dynamically-customized audio-visual content to a viewer at 734 (e.g. customization service provider 420 shown in FIG. 4 ).
  • outputting the dynamically-customized audio-visual content at 640 may include outputting the dynamically-customized audio-visual content from a core content provider that provides the audio-visual core portion at 736 (e.g. core content provider 510 shown in FIG. 5 ).
  • receiving a consideration for the dynamically-customized audio-visual content at 650 may include receiving at least one of a payment, a promise to pay, a promise to perform a deed, or a grant of a right at 741 .
  • the payment may be a one-time payment, a monthly subscription payment, a use-based or on-demand type of payment, or any other suitable payment.
  • the promise to pay may be a contractual commitment to provide future payment (or payments) based on amount or frequency of usage, or any other suitable terms.
  • the promise to perform a deed may include a promise to send payment, a promise to enable access to private information, a promise to allow data gathering regarding viewing habits or preferences, or any other suitable promises.
  • the grant of a right may include a grant of access to gather personal data, a grant to share data gathered, a grant to perform market testing or market analysis, or any other suitable grant of one or more rights.
  • these examples are merely illustrative, and the consideration received at 650 may be any suitable consideration as that term is generally understood in accordance with the principles of contracts and contract law, and as described more fully below.
  • receiving a consideration for the dynamically-customized audio-visual content at 650 may include receiving one or more payments from a viewer of the dynamically-customized audio-visual content at 746 (e.g. viewer 140 , viewer 440 , etc.).
  • receiving at least one audio-visual core portion at 610 may include receiving a television broadcast at 802 (e.g. conventional wireless television broadcast, cable television broadcast, satellite television broadcast, etc.).
  • receiving at least one audio-visual core portion at 610 may include receiving an audio-visual data stream at 804 (e.g. streaming audio-visual content via Internet, audio-visual data stream via LAN, etc.).
  • receiving at least one audio-visual core portion at 610 may include receiving at least one audio core portion and receiving at least one visual core portion at 806 .
  • receiving at least one audio-visual core portion at 610 may include receiving an internally-generated audio-visual core portion at 808 (e.g. receiving an audio-visual core portion from an internal media player, generating an audio-visual core portion using an internally-executing software routine, etc.).
  • receiving at least one audio-visual core portion at 610 may include receiving a virtual reality portion at 810 .
  • receiving at least one audio-visual core portion at 610 may include receiving a video game data stream portion at 812 (e.g. receiving a video game signal as a data stream and where a server determines what is displayed on a viewer's display device based on the at least one selection signal, etc.).
  • receiving at least one selection signal indicative of a viewer preference at 620 may include receiving at least one selection signal generated by a user input device at 820 (e.g. receiving a signal generated by a keyboard, a joystick, a microphone, a touch screen, etc.).
  • receiving at least one selection signal indicative of a viewer preference at 620 may include receiving at least one selection signal based on a pre-determined default value at 822 (e.g. receiving one or more signals based on a user's previous selections stored in memory, or a pre-defined profile for a user stored in memory, etc.).
  • receiving at least one selection signal indicative of a viewer preference at 620 may include sensing one or more viewers present within a viewing area and determining at least one selection signal based on the one or more viewers sensed within the viewing area at 824 (e.g. sensing a parent and a child within a television viewing area, and determining a first selection signal based on the parent and a second selection signal based on the child, sensing a female and a male within a television viewing area, and determining a first selection signal based on the female and a second selection signal based on the male, etc.).
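Step 824 above (sensing one or more viewers in the viewing area and determining a selection signal per sensed viewer) might look like the following sketch. The viewer records and the rating policy are invented for illustration and are not taken from the disclosure:

```python
# Hypothetical sketch of step 824: derive one selection signal per viewer
# sensed within the viewing area.

def determine_selection_signals(sensed_viewers):
    signals = []
    for viewer in sensed_viewers:
        # A sensed child yields a restrictive signal; an adult a permissive one.
        max_rating = "G" if viewer["type"] == "child" else "R"
        signals.append({"viewer": viewer["id"], "max_rating": max_rating})
    return signals

viewers = [{"id": "parent-1", "type": "adult"},
           {"id": "child-1", "type": "child"}]
signals = determine_selection_signals(viewers)
print(signals)
```

Producing one signal per sensed viewer, rather than a single merged signal, is what later makes arbitration between conflicting signals necessary.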
  • receiving a consideration for the dynamically-customized audio-visual content at 650 may be implemented in accordance with the various implementations of receiving at least one selection signal indicative of a viewer preference at 620 .
  • receiving a consideration for the dynamically-customized audio-visual content at 650 may include receiving at least a portion of a consideration based at least partially on the receiving at least one selection signal generated by the user input device at 830 (e.g. receiving a payment at least partially based on receiving a signal generated by a keyboard, a joystick, a microphone, a touch screen, etc).
  • receiving a consideration for the dynamically-customized audio-visual content at 650 may include receiving at least a portion of a consideration based at least partially on the receiving at least one selection signal based on a pre-determined default value at 832 (e.g. receiving a payment at least partially based on receiving one or more signals based on a user's previous selections stored in memory, or a pre-defined profile for a user stored in memory, etc.).
  • receiving a consideration for the dynamically-customized audio-visual content at 650 may include receiving at least a portion of a consideration based at least partially on at least one of sensing one or more viewers present within a viewing area or determining at least one selection signal based on the one or more viewers sensed within the viewing area at 834 .
  • receiving a consideration for the dynamically-customized audio-visual content at 650 may include receiving at least a portion of a consideration based at least partially on at least one of receiving at least one supplemental signal from an electronic device associated with a viewer or determining at least one selection signal based on the at least one supplemental signal at 836 (e.g. receiving a payment based at least partially on receiving at least one supplemental signal from a cell phone, personal data assistant, laptop computer, desktop computer, smart phone, tablet, Apple iPhone, Apple iPad, Microsoft Surface, Kindle Fire, etc. associated with a viewer, and/or determine at least one selection signal based on such a supplemental signal).
  • receiving at least one selection signal indicative of a viewer preference at 620 may include scanning an electronic device associated with a viewer (e.g. a cell phone, personal data assistant, laptop computer, desktop computer, smart phone, tablet, Apple iPhone®, Apple iPad®, Microsoft Surface®, Kindle Fire®, etc.) and determining at least one selection signal based on the scanning at 902 .
  • receiving at least one selection signal indicative of a viewer preference at 620 may include querying an electronic device associated with a viewer (e.g. a cell phone, personal data assistant, laptop computer, desktop computer, smart phone, tablet, Apple iPhone®, Apple iPad®, Microsoft Surface®, Kindle Fire®, etc.) and determining at least one selection signal based on the querying at 906 .
  • receiving a consideration for the dynamically-customized audio-visual content at 650 may be implemented in accordance with the various implementations of receiving at least one selection signal indicative of a viewer preference at 620 .
  • receiving a consideration for the dynamically-customized audio-visual content at 650 may include receiving at least a portion of a consideration based at least partially on at least one of scanning an electronic device associated with a viewer or determining at least one selection signal based on the scanning at 912 .
  • receiving a consideration for the dynamically-customized audio-visual content at 650 may include receiving at least a portion of a consideration based at least partially on at least one of querying an electronic device associated with a viewer or determining at least one selection signal based on the querying at 914 .
  • one or more incoming signals may conflict with one or more other incoming signals.
  • Such conflicts may be resolved in a variety of suitable ways.
  • receiving at least one selection signal indicative of a viewer preference at 620 may include receiving at least two selection signals, and arbitrating between at least two conflicting selection signals at 1002 (e.g. receiving a first selection signal indicating a desire to view R-rated subject matter, and a second selection signal indicating that a child is in the viewing area, and arbitrating between the first and second selection signals such that the R-rated subject matter is not shown).
  • receiving at least one selection signal indicative of a viewer preference at 620 may include receiving at least two selection signals, and between at least two conflicting selection signals, determining which signal to apply based on a pre-determined ranking at 1004 .
  • receiving at least one selection signal indicative of a viewer preference at 620 may include receiving at least two selection signals, and between at least two conflicting selection signals, determining which signal to apply based on one or more rules at 1006 (e.g. receiving a first selection signal from a manual input device indicating a desire to view R-rated content, and a second selection signal from a scanning of a viewing area indicating a child in a viewing area, and determining not to display the R-rated content based on a rule that indicates that R-rated content will not be displayed when any child is present; receiving a first selection signal from a manual input device indicating a desire to view a first actor, and a second selection signal from an Android phone indicating a desire to view a second actor, and determining to apply the first selection signal based on a rule that gives priority to a manual input over an input determined from querying an electronic device, etc.).
  • receiving at least one selection signal indicative of a viewer preference at 620 may include receiving a selection signal, and determining whether to apply the selection signal based on an authorization level at 1008 (e.g. receiving a selection signal from a scanning of a viewer's electronic device indicating a desire to view R-rated content, and determining not to display the R-rated content based on a lack of authorization by an owner of the electronic device).
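The conflict-resolution strategies at 1002 through 1008 above (arbitration, pre-determined ranking, rules, and authorization level) might be combined as in the following sketch. The rating scale, the specific rule set, and the signal format are illustrative assumptions only:

```python
# Minimal sketch of the conflict-resolution strategies at 1002-1008.

RATING_ORDER = ["G", "PG", "PG-13", "R"]  # most to least restrictive

def arbitrate(signals, child_present=False, authorized=True):
    """Return the content rating to apply after resolving conflicts."""
    # Rule (1006): never show R-rated content when a child is present.
    if child_present:
        return "G"
    # Authorization (1008): an unauthorized request falls back to a default.
    if not authorized:
        return "PG"
    # Ranking (1004): among conflicting requests, apply the most restrictive.
    requested = [s["rating"] for s in signals]
    return min(requested, key=RATING_ORDER.index)

signals = [{"rating": "R"}, {"rating": "PG-13"}]
print(arbitrate(signals, child_present=True))   # → G
print(arbitrate(signals))                       # → PG-13
```

Ordering the checks from hard rules down to rankings reflects the priority scheme the examples suggest: a child-protection rule overrides viewer requests, and authorization is checked before preferences are compared.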
  • receiving a consideration for the dynamically-customized audio-visual content at 650 may be implemented in accordance with the various implementations of receiving at least one selection signal indicative of a viewer preference at 620 .
  • receiving a consideration for the dynamically-customized audio-visual content at 650 may include receiving at least a portion of a consideration based at least partially on at least one of receiving at least two selection signals or arbitrating between at least two conflicting selection signals at 1012 .
  • receiving a consideration for the dynamically-customized audio-visual content at 650 may include receiving at least a portion of a consideration based at least partially on at least one of receiving at least two selection signals or, between at least two conflicting selection signals, determining which signal to apply based on a pre-determined ranking at 1014 (e.g. receiving a payment based at least partially on receiving and/or determining which of two conflicting signals to apply based on a ranking hierarchy, etc.).
  • receiving a consideration for the dynamically-customized audio-visual content at 650 may include receiving at least a portion of a consideration based at least partially on at least one of receiving at least two conflicting selection signals or determining which signal to apply based on one or more rules at 1016 (e.g. receiving a payment based at least partially on receiving first and second selection signals that conflict, and/or determining which to apply based on one or more rules regarding a content maturity level, a language preference, a content violence level, etc.).
  • receiving a consideration for the dynamically-customized audio-visual content at 650 may include receiving at least a portion of a consideration based at least partially on at least one of receiving a selection signal, and determining whether to apply the selection signal based on an authorization level at 1018 (e.g. receiving a payment based at least partially on receiving a selection signal from a scanning of a viewer's electronic device indicating a desire to view R-rated content and determining not to display the R-rated content based on a lack of authorization by an owner of the electronic device, etc.).
  • modifying the audio-visual core portion with at least one revised content portion in accordance with the at least one selection signal to create a dynamically customized audio-visual content at 630 may include replacing at least one actor of the audio-visual core portion with at least one replacement actor at 1102 (e.g. replacing the actor Brad Pitt in the movie Troy with replacement actor Mel Gibson, replacing the actor Meryl Streep in the movie The Manchurian Candidate with replacement actor Jessica Alba, the term “actor” being used herein in a gender-neutral manner to include both males and females, etc.).
  • modifying the audio-visual core portion with at least one revised content portion in accordance with the at least one selection signal to create a dynamically customized audio-visual content at 630 may include replacing one or more of a facial appearance, a voice, a body appearance, or an apparel with a corresponding one or more of a replacement facial appearance, a replacement voice, a replacement body appearance, or a replacement apparel at 1104 .
  • modifying the audio-visual core portion with at least one revised content portion in accordance with the at least one selection signal to create a dynamically customized audio-visual content at 630 may include replacing at least one consumer product depicted in the audio-visual core portion with at least one replacement consumer product at 1106 (e.g. replacing a can of Coke® held by an actor in a television sitcom with a can of Dr. Pepper®, replacing a hamburger eaten by a character in a movie with a taco, replacing a Gibson® guitar played by a character in a podcast with a Fender® guitar, etc.).
  • replacing at least one consumer product depicted in the audio-visual core portion with at least one replacement consumer product at 1106 may include replacing at least one of a beverage product, a food product, a vehicle, an article of clothing, an article of jewelry, a musical instrument, an electronic device, a household appliance, an article of furniture, an artwork, an office equipment, or an article of manufacture at 1108 .
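The product substitution at 1106 amounts to a lookup-and-replace over the items depicted in a scene. The product identifiers and the list-of-items scene model below are placeholders, not part of the disclosure:

```python
# Illustrative substitution table for step 1106: depicted consumer
# products are swapped according to viewer preference; items with no
# replacement entry are left as depicted in the core portion.

replacements = {
    "cola_brand_a": "cola_brand_b",
    "guitar_brand_x": "guitar_brand_y",
}

def replace_products(scene_products, replacements):
    return [replacements.get(p, p) for p in scene_products]

scene = ["cola_brand_a", "hamburger", "guitar_brand_x"]
print(replace_products(scene, replacements))
# → ['cola_brand_b', 'hamburger', 'guitar_brand_y']
```

The same table-driven pattern extends naturally to the other replacement categories listed at 1108 (vehicles, apparel, appliances, and so on), since each is a keyed substitution over depicted items.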
  • receiving a consideration for the dynamically-customized audio-visual content at 650 may be implemented in accordance with the various implementations of modifying the audio-visual core portion with at least one revised content portion in accordance with the at least one selection signal to create a dynamically customized audio-visual content at 630 .
  • receiving a consideration for the dynamically-customized audio-visual content at 650 may include receiving at least a portion of a consideration based at least partially on replacing at least one actor of the audio-visual core portion with at least one replacement actor at 1122 (e.g. receiving a payment based at least partially on replacing an actor with a replacement actor, receiving a relatively higher payment based on replacing a lower-popularity actor with a higher-popularity actor, etc.).
  • receiving a consideration for the dynamically-customized audio-visual content at 650 may include receiving at least a portion of a consideration based at least partially on replacing one or more of a facial appearance, a voice, a body appearance, or an apparel with a corresponding one or more of a replacement facial appearance, a replacement voice, a replacement body appearance, or a replacement apparel at 1124 (e.g. receiving a payment based on replacing a facial appearance and a voice of a first actor with a second actor, receiving a relatively higher payment based at least partially on replacing a first body appearance of a lower-popularity actress with a body appearance of a higher-popularity actress, etc.).
  • receiving a consideration for the dynamically-customized audio-visual content at 650 may include receiving at least a portion of a consideration based at least partially on replacing at least one consumer product depicted in the audio-visual core portion with at least one replacement consumer product at 1126 (e.g. receiving a payment based at least partially on replacing a can of Coke® held by an actor in a television sitcom with a can of Dr. Pepper®, receiving a payment based at least partially on replacing a hamburger eaten by a character in a movie with a taco, replacing a Gibson® guitar played by a character in a podcast with a Fender® guitar, etc.).
  • receiving a consideration for the dynamically-customized audio-visual content at 650 may include receiving at least a portion of a consideration based at least partially on replacing at least one of a beverage product, a food product, a vehicle, an article of clothing, an article of jewelry, a musical instrument, an electronic device, a household appliance, an article of furniture, an artwork, an office equipment, or an article of manufacture at 1128 .
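One plausible, purely illustrative way to compute a consideration that is "relatively higher" for a higher-popularity replacement, as suggested at 1122 above, is to scale a base fee by the popularity difference between the original and the replacement. The pricing formula and popularity scores are assumptions, not from the disclosure:

```python
# Hedged sketch of a popularity-based consideration calculation.
# Popularity is modeled as a score in [0, 1]; the formula is invented.

def replacement_fee(base_fee, original_popularity, replacement_popularity):
    """Scale the base fee by how much more popular the replacement is."""
    premium = max(0.0, replacement_popularity - original_popularity)
    return round(base_fee * (1.0 + premium), 2)

# Replacing a lower-popularity actor with a higher-popularity one
# commands a higher payment; the reverse commands only the base fee.
print(replacement_fee(2.00, 0.4, 0.9))  # → 3.0
print(replacement_fee(2.00, 0.9, 0.4))  # → 2.0
```

Clamping the premium at zero means a downgrade never costs less than the base fee, which matches the text's one-directional "relatively higher payment" framing.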
  • modifying the audio-visual core portion with at least one revised content portion in accordance with the at least one selection signal to create a dynamically customized audio-visual content at 630 may include replacing at least one of a setting aspect, an environmental aspect, or a background aspect of the audio-visual core portion with a corresponding at least one of a replacement setting aspect, a replacement environmental aspect, or a replacement background aspect at 1202 .
  • one or more scenes from a movie may be set in a different location (e.g. scenes from Sleepless in Seattle may be set in Cleveland, or a background with the Golden Gate bridge may be replaced with the Tower Bridge over the Thames River, etc.).
  • a weather condition may be replaced with a different weather condition (e.g. a surfing scene from Baywatch may take place in a snowstorm instead of a sunny day, etc.), or buildings in a background may be replaced with mountains or open countryside.
  • replacing at least one of a setting aspect, an environmental aspect, or a background aspect of the audio-visual core portion with a corresponding at least one of a replacement setting aspect, a replacement environmental aspect, or a replacement background aspect at 1202 may include replacing at least one of a city in which at least one scene is set, a country in which at least one scene is set, a weather condition in which at least one scene is set, a time of day in which at least one scene is set, or a landscape in which at least one scene is set at 1204 .
  • modifying the audio-visual core portion with at least one revised content portion in accordance with the at least one selection signal to create a dynamically customized audio-visual content at 630 may include replacing at least one animated character with at least one replacement animated character at 1206 (e.g. replacing a cartoon Snow White from Snow White and the Seven Dwarfs with a cartoon Alice from Alice in Wonderland, replacing an animated elf with an animated dwarf, etc.).
  • receiving a consideration for the dynamically-customized audio-visual content at 650 may be implemented in accordance with the various implementations of modifying the audio-visual core portion with at least one revised content portion in accordance with the at least one selection signal to create a dynamically customized audio-visual content at 630 .
  • receiving a consideration for the dynamically-customized audio-visual content at 650 may include receiving at least a portion of a consideration based at least partially on replacing at least one of a setting aspect, an environmental aspect, or a background aspect of the audio-visual core portion with a corresponding at least one of a replacement setting aspect, a replacement environmental aspect, or a replacement background aspect at 1212 (e.g. receiving a payment based at least partially on replacing a background depicting the Golden Gate Bridge with a background depicting the Tower Bridge over the Thames River, etc.).
  • receiving a consideration for the dynamically-customized audio-visual content at 650 may include receiving at least a portion of a consideration based at least partially on replacing at least one of a city in which at least one scene is set, a country in which at least one scene is set, a weather condition in which at least one scene is set, a time of day in which at least one scene is set, or a landscape in which at least one scene is set at 1214 .
  • receiving a consideration for the dynamically-customized audio-visual content at 650 may include receiving at least a portion of a consideration based at least partially on replacing at least one animated character with at least one replacement animated character at 1216 (e.g. receiving a payment based at least partially on replacing a cartoon Snow White with a cartoon Alice, receiving a payment based at least partially on replacing a cartoon Cartman with a cartoon Kenny, etc.).
  • modifying the audio-visual core portion with at least one revised content portion in accordance with the at least one selection signal to create a dynamically customized audio-visual content at 630 may include replacing at least one virtual character with at least one replacement virtual character at 1302 (e.g. replacing a virtual warrior with a virtual wizard, etc.).
  • modifying the audio-visual core portion with at least one revised content portion in accordance with the at least one selection signal to create a dynamically customized audio-visual content at 630 may include replacing at least one industrial product depicted in the audio-visual core portion with at least one replacement industrial product at 1304 (e.g. replacing a nameplate on a milling machine in a factory scene, replacing the name and colors of a shipping line on a container ship, etc.).
  • modifying the audio-visual core portion with at least one revised content portion in accordance with the at least one selection signal to create a dynamically customized audio-visual content at 630 may include replacing at least one name brand depicted in the audio-visual core portion with at least one replacement name brand at 1306 (e.g. replacing a leather label on a character's pants from “Levis” to “J Brand,” replacing an Izod alligator on a character's shirt with a Ralph Lauren horse logo, replacing a shoe logo from “Gucci” to “Calvin Klein,” etc.).
  • modifying the audio-visual core portion with at least one revised content portion in accordance with the at least one selection signal to create a dynamically customized audio-visual content at 630 may include replacing at least one trade dress depicted in the audio-visual core portion with at least one replacement trade dress at 1308 (e.g. replacing uniforms, packaging, colors, signs, logos, and any other items associated with a trade dress of “McDonald's” restaurant with corresponding trade dress items associated with “Burger King” restaurant, replacing brown trucks and uniforms associated with the “UPS” delivery company with red and yellow trucks and uniforms associated with the “DHL Express” delivery company, replacing helmets and jerseys associated with the Minnesota Vikings with replacement helmets and jerseys associated with the Seattle Seahawks, etc.).
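The brand and trade-dress replacement operations at 1306 and 1308 can likewise be sketched, non-authoritatively, as a sponsor-driven substitution table applied to the items depicted in a scene. All names below are illustrative assumptions:

```python
# Illustrative sketch only: depicted brand/trade-dress items are modeled as
# strings; a table (e.g. populated by advertisers) maps each to a substitute.
# Table contents are assumptions drawn from the examples above.

BRAND_SWAPS = {
    "Levis": "J Brand",
    "Gucci": "Calvin Klein",
    "McDonald's uniform": "Burger King uniform",
}

def swap_brands(depicted_items):
    """Replace each depicted item that has a sponsored substitute;
    items without a substitute pass through unchanged."""
    return [BRAND_SWAPS.get(item, item) for item in depicted_items]
```

A consideration (operations 1316/1318) could then be computed from the number of substitutions actually made.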
  • receiving a consideration for the dynamically-customized audio-visual content at 650 may be implemented in accordance with the various implementations of modifying the audio-visual core portion with at least one revised content portion in accordance with the at least one selection signal to create a dynamically customized audio-visual content at 630 .
  • receiving a consideration for the dynamically-customized audio-visual content at 650 may include receiving at least a portion of a consideration based at least partially on replacing at least one virtual character with at least one replacement virtual character at 1312 (e.g. receiving a payment based on replacing a virtual warrior with a virtual wizard, etc.).
  • receiving a consideration for the dynamically-customized audio-visual content at 650 may include receiving at least a portion of a consideration based at least partially on replacing at least one industrial product depicted in the audio-visual core portion with at least one replacement industrial product at 1314 (e.g. receiving a payment based on replacing a nameplate on a milling machine from “Cincinnati” to “Bridgeport” in a factory scene, replacing a name of a shipping line and/or the colors on a container ship from “Maersk” to “Evergreen,” etc.).
  • receiving a consideration for the dynamically-customized audio-visual content at 650 may include receiving at least a portion of a consideration based at least partially on replacing at least one name brand depicted in the audio-visual core portion with at least one replacement name brand at 1316 (e.g. receiving a payment based at least partially on replacing a leather label on a character's pants, replacing a trademark on a character's shirt, or replacing a logo on a character's computer, etc.).
  • receiving a consideration for the dynamically-customized audio-visual content at 650 may include receiving at least a portion of a consideration based at least partially on replacing at least one trade dress depicted in the audio-visual core portion with at least one replacement trade dress at 1318 (e.g. receiving a payment based at least partially on replacing trade dress items associated with one restaurant chain with corresponding trade dress items associated with another restaurant chain, etc.).
  • further implementations of modifying the audio-visual core portion with at least one revised content portion in accordance with the at least one selection signal to create a dynamically customized audio-visual content at 630 are shown in FIG. 14 .
  • modifying the audio-visual core portion with at least one revised content portion in accordance with the at least one selection signal to create a dynamically customized audio-visual content at 630 may include replacing at least a portion of dialogue of the audio-visual core portion with a revised dialogue portion at 1402 .
  • for example, in accordance with the at least one selection signal indicative of a viewer selection, a portion of dialogue of a movie that contains profanity or that may otherwise be offensive to the viewer is replaced with a replacement portion of dialogue that is not offensive to the viewer (e.g. a dialogue of a movie is modified from an R-rated dialogue to a lower-rated dialogue, such as a PG-13-rated dialogue or a G-rated dialogue, such as “Frankly, my dear, I don't give a damn” being replaced with “Frankly, my dear, I don't really care”; a dialogue that is threatening or violent may be replaced with a less-threatening or less-violent dialogue; etc.).
  • modifying the audio-visual core portion with at least one revised content portion in accordance with the at least one selection signal to create a dynamically customized audio-visual content at 630 may include replacing one or more spoken portions with one or more replacement spoken portions (e.g. replacing a profane word, such as “damn,” with a non-profane word, such as “darn,” replacing a first laughter, such as a “tee hee hee,” with a second laughter, such as a “ha ha ha,” etc.) and modifying one or more facial movements corresponding to the one or more spoken portions with one or more replacement facial movements corresponding to the one or more replacement spoken portions (e.g. replacing one or more lip movements corresponding with the profane word with one or more replacement lip movements corresponding with the non-profane word, etc.) at 1404 .
  • replacing one or more spoken portions with one or more replacement spoken portions and modifying one or more facial movements corresponding to the one or more spoken portions with one or more replacement facial movements corresponding to the one or more replacement spoken portions at 1404 may include replacing one or more words spoken in a first language with one or more replacement words spoken in a second language (e.g. replacing “no” with “nyet,” replacing “yes” with “oui,” etc.), and modifying one or more facial movements corresponding to the one or more words spoken in the first language with one or more replacement facial movements corresponding to the one or more words spoken in the second language (e.g. replacing lip movements corresponding with “yes” with replacement lip movements corresponding with “oui,” etc.).
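The linked word-and-facial-movement replacement at 1404 can be sketched, non-authoritatively, as a timed dialogue track in which each word carries its mouth-shape ("viseme") sequence, so that substituting a word also substitutes the facial movements for that interval. The data layout and viseme labels are illustrative assumptions:

```python
# Illustrative sketch only: a dialogue track is a list of
# (start_time, word, visemes) tuples; the substitution table pairs each
# flagged word with a replacement word and its replacement visemes.

SUBSTITUTIONS = {
    "damn": ("darn", ["D", "AA", "R", "N"]),
}

def revise_dialogue(track):
    """Return a revised track in which each flagged word and its
    corresponding visemes are replaced together, keeping lips in sync."""
    revised = []
    for start, word, visemes in track:
        if word in SUBSTITUTIONS:
            new_word, new_visemes = SUBSTITUTIONS[word]
            revised.append((start, new_word, new_visemes))
        else:
            revised.append((start, word, visemes))
    return revised
```

Cross-language replacement (e.g. "yes" to "oui") would use the same structure with a per-language substitution table.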
  • receiving a consideration for the dynamically-customized audio-visual content at 650 may be implemented in accordance with the various implementations of modifying the audio-visual core portion with at least one revised content portion in accordance with the at least one selection signal to create a dynamically customized audio-visual content at 630 .
  • receiving a consideration for the dynamically-customized audio-visual content at 650 may include receiving at least a portion of a consideration based at least partially on replacing at least a portion of dialogue of the audio-visual core portion with a revised dialogue portion at 1412 (e.g. receiving payment based on modifying an audio-visual content to accommodate a viewer selection indicating a desire for no profanity, or based on automatic detection using a sensor of a child entering a viewing area, etc.).
  • receiving a consideration for the dynamically-customized audio-visual content at 650 may include receiving at least a portion of a consideration based at least partially on at least one of replacing one or more spoken portions with one or more replacement spoken portions or modifying one or more facial movements corresponding to the one or more spoken portions with one or more replacement facial movements corresponding to the one or more replacement spoken portions at 1414 (e.g. receiving payment for replacing a profane word with a non-profane word, and replacing one or more lip movements corresponding with the profane word with one or more replacement lip movements corresponding with the non-profane word, etc.).
  • receiving at least a portion of a consideration based at least partially on at least one of replacing one or more spoken portions with one or more replacement spoken portions or modifying one or more facial movements corresponding to the one or more spoken portions with one or more replacement facial movements corresponding to the one or more replacement spoken portions at 1414 may include receiving at least a portion of a consideration based at least partially on at least one of replacing one or more words spoken in a first language with one or more replacement words spoken in a second language, or modifying one or more facial movements corresponding to the one or more words spoken in the first language with one or more replacement facial movements corresponding to the one or more words spoken in the second language at 1416 (e.g. receiving payment for replacing sounds and facial movements corresponding to Japanese speech with those corresponding to English speech, receiving payment for replacing sounds and facial movements corresponding to English speech with those corresponding to Chinese speech, etc.).
  • modifying the audio-visual core portion with at least one revised content portion in accordance with the at least one selection signal to create a dynamically customized audio-visual content at 630 may include replacing one or more audible portions with one or more replacement audible portions (e.g. replacing a sound of a hand clap with a sound of snapping fingers, replacing a sound of a cough with a sound of a sneeze, replacing the sound of a piano with the sound of a violin, etc.) and modifying one or more body movements corresponding to the one or more audible portions with one or more replacement body movements corresponding to the one or more replacement audible portions (e.g. replacing body movements associated with a hand clap with replacement body movements associated with snapping fingers, etc.).
  • modifying the audio-visual core portion with at least one revised content portion in accordance with the at least one selection signal to create a dynamically customized audio-visual content at 630 may include replacing one or more background noises with one or more replacement background noises (e.g. replacing a sound of a bird singing with a sound of a dog barking, replacing a sound of an avalanche with a sound of an erupting volcano, etc.) at 1504 .
  • modifying the audio-visual core portion with at least one revised content portion in accordance with the at least one selection signal to create a dynamically customized audio-visual content at 630 may include replacing one or more background noises with one or more replacement background noises (e.g. replacing a sound of a lion roaring with a sound of an elephant trumpeting, replacing a sound of an avalanche with a sound of an erupting volcano, etc.), and replacing one or more background visual components with one or more replacement background visual components (e.g. replacing a visual image of a lion roaring with a visual image of an elephant trumpeting, replacing a visual depiction of an avalanche with a visual depiction of an erupting volcano, etc.) at 1506 .
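The paired replacement at 1506, in which a background noise and the visual component it belongs to are swapped together, can be sketched non-authoritatively as a table keyed on (audio, visual) pairs. The asset names are illustrative assumptions:

```python
# Illustrative sketch only: background audio and visual components are
# modeled as asset names and replaced as a linked pair, so a roaring lion
# is never heard over footage of an elephant.

PAIRED_SWAPS = {
    ("lion_roar.wav", "lion.png"): ("elephant_trumpet.wav", "elephant.png"),
    ("avalanche.wav", "avalanche.png"): ("eruption.wav", "volcano.png"),
}

def swap_background(audio, visual):
    """Return the replacement (audio, visual) pair, or the originals
    unchanged when no paired substitute is defined."""
    return PAIRED_SWAPS.get((audio, visual), (audio, visual))
```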
  • receiving a consideration for the dynamically-customized audio-visual content at 650 may include receiving at least a portion of a consideration based at least partially on at least one of replacing one or more audible portions with one or more replacement audible portions, or modifying one or more body movements corresponding to the one or more audible portions with one or more replacement body movements corresponding to the one or more replacement audible portions at 1512 (e.g. receiving payment based on replacing sounds and body movements associated with a hand clap with replacement sounds and body movements associated with snapping fingers, receiving payment based on replacing sounds and body movements associated with a cough with replacement sounds and movements associated with a sneeze, etc.).
  • receiving a consideration for the dynamically-customized audio-visual content at 650 may include receiving at least a portion of a consideration based at least partially on replacing one or more background noises with one or more replacement background noises at 1514 (e.g. receiving payment based on replacing jungle sounds with urban sounds, receiving payment based on replacing crowd noise with sounds of ocean surf, etc.).
  • receiving a consideration for the dynamically-customized audio-visual content at 650 may include receiving at least a portion of a consideration based at least partially on at least one of replacing one or more background noises with one or more replacement background noises, or replacing one or more background visual components with one or more replacement background visual components at 1516 (e.g. receiving payment based on replacing sounds and images of a lion roaring with replacement sounds and images of an elephant trumpeting, receiving payment based on replacing sounds and video of an avalanche with replacement sounds and video of an erupting volcano, etc.).
  • content that is categorized as being culturally inappropriate may be either omitted (or deleted or removed), or may be replaced with alternate content that is categorized as being culturally appropriate, such as by retrieving replacement content from a library of lookup tables, or any other suitable source. For example, as shown in FIG. 16 , in some implementations, modifying the audio-visual core portion with at least one revised content portion in accordance with the at least one selection signal to create a dynamically customized audio-visual content at 630 may include at least one of replacing a culturally inappropriate portion with a culturally appropriate portion or omitting the culturally inappropriate portion at 1602 (e.g. replacing terminology that may be considered a racial slur in a particular culture with replacement terminology that is not considered a racial slur in the particular culture, removing a content portion that includes a hand gesture that is insulting to a particular culture; etc.).
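The replace-or-omit decision at 1602 can be sketched, non-authoritatively, with the "library of lookup tables" mentioned above: a per-culture table maps each flagged portion either to a substitute or to omission. Table contents and names are illustrative assumptions:

```python
# Illustrative sketch only: for each culture, flagged content portions map
# to a replacement, or to None to indicate the portion should be omitted.

CULTURAL_TABLE = {
    "culture_A": {
        "offensive_gesture": None,              # no substitute: omit
        "offensive_phrase": "neutral_phrase",   # substitute: replace
    },
}

def localize(portions, culture):
    """Return the content portions with culturally inappropriate ones
    replaced when a substitute exists, or omitted when none does."""
    table = CULTURAL_TABLE.get(culture, {})
    result = []
    for portion in portions:
        if portion in table:
            replacement = table[portion]
            if replacement is not None:
                result.append(replacement)
            # else: drop the portion entirely
        else:
            result.append(portion)
    return result
```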
  • receiving at least one selection signal indicative of a viewer preference at 620 may include receiving a selection signal indicative of a cultural heritage of at least one viewer at 1604 , and modifying the audio-visual core portion with at least one revised content portion in accordance with the at least one selection signal to create a dynamically customized audio-visual content at 630 may include at least one of replacing a portion considered inappropriate with respect to the cultural heritage of the at least one viewer with a replacement portion considered appropriate with respect to the cultural heritage of the at least one viewer, or omitting the inappropriate portion at 1606 (e.g. receiving a signal indicating that a viewer is Chinese, and replacing a reference to “Taiwan” with a reference to “Chinese Taipei;” receiving an indication that a viewer is Islamic, and replacing a reference to the Bible with a reference to the Quran; etc.).
  • receiving a consideration for the dynamically-customized audio-visual content at 650 may include receiving at least a portion of a consideration based at least partially on at least one of replacing a culturally inappropriate portion with a culturally appropriate portion or omitting the culturally inappropriate portion at 1608 (e.g. receiving payment based on replacing terminology that may be considered in poor taste in Iceland with replacement terminology that is not considered in poor taste, etc.).
  • receiving a consideration for the dynamically-customized audio-visual content at 650 may include receiving at least a portion of a consideration based at least partially on at least one of replacing a portion considered inappropriate with respect to the cultural heritage of the at least one viewer with a replacement portion considered appropriate with respect to the cultural heritage of the at least one viewer, or omitting the inappropriate portion at 1610 (e.g. receiving payment based on receiving a signal indicating that a viewer is Chinese, and replacing a reference to “Taiwan” with a reference to “Chinese Taipei;” receiving payment based on receiving an indication that a viewer is Islamic, and replacing a reference to the Bible with a reference to the Quran; etc.).
  • receiving at least one selection signal indicative of a viewer preference at 620 may include receiving a selection signal indicative of a geographic location of at least one viewer at 1702 , and modifying the audio-visual core portion with at least one revised content portion in accordance with the at least one selection signal to create a dynamically customized audio-visual content at 630 may include at least one of replacing a portion considered inappropriate with respect to the geographic location of the at least one viewer with a replacement portion considered appropriate with respect to the geographic location of the at least one viewer, or omitting the inappropriate portion at 1704 (e.g. receiving a signal, such as a GPS signal from a viewer's cell phone, indicating that the viewer is located in Brazil, and replacing a content portion that includes a hand gesture that is offensive in Brazil, such as a Texas Longhorns “hook-em-horns” hand gesture, with a benign hand gesture appropriate for the viewer located in Brazil; receiving a signal, such as a location of an IP address of a local Internet service provider, that indicates that a viewer is located within a Native American reservation, and replacing content that includes terminology offensive to Native Americans with replacement content that includes non-offensive terminology; etc.).
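Non-authoritatively, deriving a viewer's region from whichever location signals are available (a GPS fix, an IP geolocation, a device language setting) and then selecting region-specific edits can be sketched as follows. The signal names, the priority order, and the edits table are all illustrative assumptions:

```python
# Illustrative sketch only: selection signals arrive as a dict of optional
# location clues; the most direct clue available determines the region,
# and the region selects a list of (original, replacement) edits.

REGION_EDITS = {
    "BR": [("hook_em_horns_gesture", "benign_gesture")],
}

def resolve_region(signals):
    """Prefer the most direct evidence: GPS, then IP, then locale."""
    for key in ("gps_country", "ip_country", "language_region"):
        if signals.get(key):
            return signals[key]
    return None

def edits_for(signals):
    """Return the region-specific content edits, or none if unknown."""
    return REGION_EDITS.get(resolve_region(signals), [])
```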
  • receiving at least one selection signal indicative of a viewer preference at 620 may include receiving a selection signal indicative of a cultural identity of at least one viewer at 1706 , and modifying the audio-visual core portion with at least one revised content portion in accordance with the at least one selection signal to create a dynamically customized audio-visual content at 630 may include at least one of replacing at least a portion of content inappropriate for the cultural identity of the at least one viewer with an appropriate portion of content, or omitting the inappropriate portion at 1708 (e.g. receiving a signal, such as a language selection of a software installed on a viewer's electronic device, indicating that the viewer is Arabic, and removing a content portion that is inappropriate to the Arabic culture; etc.).
  • receiving a consideration for the dynamically-customized audio-visual content at 650 may include receiving at least a portion of a consideration based at least partially on at least one of replacing a portion considered inappropriate with respect to the geographic location of the at least one viewer with a replacement portion considered appropriate with respect to the geographic location of the at least one viewer, or omitting the inappropriate portion at 1710 (e.g. receiving payment based on receiving a signal, such as a GPS signal from a viewer's cell phone, indicating that the viewer is located in Brazil, and replacing a content portion that includes a hand gesture that is offensive in Brazil with a benign hand gesture appropriate for the viewer located in Brazil; etc.).
  • receiving a consideration for the dynamically-customized audio-visual content at 650 may include receiving at least a portion of a consideration based at least partially on at least one of replacing at least a portion of content inappropriate for the cultural identity of the at least one viewer with an appropriate portion of content, or omitting the inappropriate portion at 1712 (e.g. receiving a signal, such as a language selection of a software installed on a viewer's electronic device, indicating that the viewer is Arabic, and removing a content portion that is inappropriate to the Arabic culture; etc.).
  • modifying the audio-visual core portion with at least one revised content portion in accordance with the at least one selection signal to create a dynamically customized audio-visual content at 630 may be accomplished in various ways. For example, as shown in FIG. 18 , in some implementations, modifying the audio-visual core portion with at least one revised content portion in accordance with the at least one selection signal to create a dynamically customized audio-visual content at 630 may include changing at least one portion of a digital signal stream in accordance with the at least one selection signal (e.g. replacing original digitized signals of the audio-visual core portion with replacement digitized signals of the audio-visual core portion, supplementing original digitized signals of the audio-visual core portion with supplemental digitized signals, etc.) at 1802 .
  • modifying the audio-visual core portion with at least one revised content portion in accordance with the at least one selection signal to create a dynamically customized audio-visual content at 630 may include digitizing at least a portion of an audio-visual core portion, and changing at least one portion of the digitized portion in accordance with the at least one selection signal at 1804 .
  • modifying the audio-visual core portion with at least one revised content portion in accordance with the at least one selection signal to create a dynamically customized audio-visual content at 630 may include replacing at least a portion of an audio-visual core portion with a view of a three dimensional model of a replacement portion in accordance with the at least one selection signal at 1806 .
  • for example, a viewer may select a dynamically-customized movie (e.g. the movie Cleopatra) in which a desired lead actress (or actor) (e.g. Angelina Jolie) replaces an original lead actress (or actor). In such implementations, the processing component 110 may retrieve a digital model of the desired lead actress (or actor) and may substitute appropriate portions of the incoming core portion 102 with appropriate views of the digital model of the desired lead actress (or actor).
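The three-dimensional-model substitution at 1806 can be sketched, non-authoritatively, as a per-frame pass: wherever the original actor appears, a view of the replacement digital model is rendered for that frame's pose and substituted in. The frame representation and the stub renderer are illustrative assumptions:

```python
# Illustrative sketch only: frames are dicts carrying the depicted actor
# and pose; render_model_view stands in for a real 3-D renderer.

def render_model_view(model, pose):
    """Stand-in renderer: produce a labeled view of the model at a pose."""
    return f"{model}@{pose}"

def substitute_actor(frames, original_actor, replacement_model):
    """Return frames in which `original_actor` is replaced by rendered
    views of `replacement_model`; other frames pass through unchanged."""
    out = []
    for frame in frames:
        frame = dict(frame)
        if frame.get("actor") == original_actor:
            frame["actor"] = replacement_model
            frame["image"] = render_model_view(replacement_model, frame["pose"])
        out.append(frame)
    return out
```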
  • modifying the audio-visual core portion with at least one revised content portion in accordance with the at least one selection signal to create a dynamically customized audio-visual content at 630 may include rendering at least a portion of an audio-visual core portion in accordance with the at least one selection signal to create the dynamically-customized audio-visual content at 1808 .
  • receiving a consideration for the dynamically-customized audio-visual content at 650 may include receiving at least a portion of a consideration based at least partially on changing at least one portion of a digital signal stream in accordance with the at least one selection signal at 1812 (e.g. receiving a payment portion based on replacing digitized signals with replacement digitized signals, etc.).
  • receiving a consideration for the dynamically-customized audio-visual content at 650 may include receiving at least a portion of a consideration based at least partially on at least one of digitizing at least a portion of an audio-visual core portion, or changing at least one portion of the digitized portion in accordance with the at least one selection signal at 1814 .
  • receiving a consideration for the dynamically-customized audio-visual content at 650 may include receiving at least a portion of a consideration based at least partially on replacing at least a portion of an audio-visual core portion with a view of a three dimensional model of a replacement portion in accordance with the at least one selection signal at 1816 (e.g. receiving payment based on replacing a first actor with a 3D model of a replacement actor).
  • receiving a consideration for the dynamically-customized audio-visual content at 650 may include receiving at least a portion of a consideration based at least partially on rendering at least a portion of an audio-visual core portion in accordance with the at least one selection signal to create the dynamically-customized audio-visual content at 1818 .
  • modifying the audio-visual core portion with at least one revised content portion in accordance with the at least one selection signal to create a dynamically customized audio-visual content at 630 may include re-rendering at least a portion of an audio-visual core portion in accordance with the at least one selection signal to create the dynamically-customized audio-visual content at 1902 .
  • modifying the audio-visual core portion with at least one revised content portion in accordance with the at least one selection signal to create a dynamically customized audio-visual content at 630 may include rendering at least a replacement portion in accordance with the at least one selection signal, and combining the at least a replacement portion with the audio-visual core portion at 1904 .
  • modifying the audio-visual core portion with at least one revised content portion in accordance with the at least one selection signal to create a dynamically customized audio-visual content at 630 may include re-rendering at least a portion of an audio-visual core portion in accordance with the at least one selection signal to create a replacement portion, and combining the replacement portion with the audio-visual core portion at 1906 .
  • receiving a consideration for the dynamically-customized audio-visual content at 650 may include receiving at least a portion of a consideration based at least partially on re-rendering at least a portion of an audio-visual core portion in accordance with the at least one selection signal to create the dynamically-customized audio-visual content at 1912 .
  • receiving a consideration for the dynamically-customized audio-visual content at 650 may include receiving at least a portion of a consideration based at least partially on at least one of rendering at least a replacement portion in accordance with the at least one selection signal, or combining the at least a replacement portion with the audio-visual core portion at 1914 .
  • receiving a consideration for the dynamically-customized audio-visual content at 650 may include receiving at least a portion of a consideration based at least partially on at least one of re-rendering at least a portion of an audio-visual core portion in accordance with the at least one selection signal to create a replacement portion, or combining the replacement portion with the audio-visual core portion at 1916 .
  • modifying the audio-visual core portion with at least one revised content portion in accordance with the at least one selection signal to create a dynamically customized audio-visual content at 630 may include rendering a plurality of frames of video data to form a first rendered stream, rendering a plurality of frames of video data to form a second rendered stream, and combining the first rendered stream and the second rendered stream for substantially simultaneous display on a display device (e.g. multiplexing the first and second rendered streams) at 2002 .
  • the operations at 2002 may include, for example, those techniques disclosed in U.S. Pat. No. 8,059,201 issued to Aarts et al. (disclosing techniques for real-time and non-real-time rendering of video data streams), which patent is incorporated herein by reference.
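The combining step at 2002 can be sketched, non-authoritatively, as frame-level multiplexing of two rendered streams; a real system would multiplex at the container or signal level rather than over Python lists, and the representation below is an illustrative assumption:

```python
# Illustrative sketch only: two equal-length rendered frame streams are
# interleaved (multiplexed) for substantially simultaneous display.

def multiplex(stream_a, stream_b):
    """Interleave frames from two rendered streams, A first."""
    muxed = []
    for frame_a, frame_b in zip(stream_a, stream_b):
        muxed.append(frame_a)
        muxed.append(frame_b)
    return muxed
```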
  • modifying the audio-visual core portion with at least one revised content portion in accordance with the at least one selection signal to create a dynamically customized audio-visual content at 630 may include modeling at least one object using a wireframe model including a plurality of polygons, and applying texture data to the plurality of polygons to provide a three-dimensional appearance to the wireframe model for display on a display device at 2004 .
  • the operations at 2004 may include, for example, those techniques disclosed in U.S. Pat. No. 8,016,653 issued to Pendleton et al. (disclosing techniques for three dimensional rendering of live events), which patent is incorporated herein by reference.
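The wireframe-and-texture modeling at 2004 might be represented as in the following sketch. The dictionary layout and the texture placeholder are assumptions for illustration only; production renderers use per-polygon UV-mapped texture buffers.

```python
# Illustrative sketch of the operations at 2004: an object is modeled as a
# wireframe (a list of polygons, each a tuple of 3-D vertices), and texture
# data is applied to each polygon to give the model a surfaced,
# three-dimensional appearance.

def make_wireframe(polygons):
    """Build a wireframe model: polygons plus an empty texture slot per polygon."""
    return {"polygons": list(polygons), "textures": [None] * len(polygons)}

def apply_texture(model, texture_data):
    """Assign texture data to every polygon of the wireframe model."""
    model["textures"] = [texture_data] * len(model["polygons"])
    return model

# Two triangles forming a quad in the z = 0 plane.
quad = [((0, 0, 0), (1, 0, 0), (1, 1, 0)),
        ((0, 0, 0), (1, 1, 0), (0, 1, 0))]
model = apply_texture(make_wireframe(quad), texture_data="skin_rgb_bytes")
print(len(model["polygons"]), model["textures"][0])
```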
  • modifying the audio-visual core portion with at least one revised content portion in accordance with the at least one selection signal to create a dynamically customized audio-visual content at 630 may include rendering a supplemental video stream, blocking a portion of the audio-visual core portion, and combining the supplemental video stream with at least an unblocked portion of the audio-visual core portion at 2006 .
  • the operations at 2006 may include, for example, those techniques disclosed in U.S. Pat. Nos. 7,945,926 and 7,631,327 issued to Dempski et al. (disclosing techniques for video animation and merging with television broadcasts and supplemental content sources), which patents are incorporated herein by reference.
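The blocking-and-combining step at 2006 amounts to masked compositing, which can be sketched as below. Frames are flattened lists of pixel values and the mask semantics are an assumption; this is not the patented method itself.

```python
# Hedged sketch of the operations at 2006: a supplemental video frame replaces
# the core frame wherever a blocking mask marks the core pixel as blocked;
# unblocked core pixels pass through unchanged.

def combine(core_frame, supplemental_frame, blocked_mask):
    """Composite the supplemental frame over the blocked region of the core."""
    return [supp if blocked else core
            for core, supp, blocked in zip(core_frame, supplemental_frame, blocked_mask)]

core = [10, 20, 30, 40]
supplemental = [99, 99, 99, 99]
mask = [False, True, True, False]   # middle region of the core is blocked
print(combine(core, supplemental, mask))   # [10, 99, 99, 40]
```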
  • receiving a consideration for the dynamically-customized audio-visual content at 650 may include receiving at least a portion of a consideration based at least partially on at least one of rendering a plurality of frames of video data to form a first rendered stream, rendering a plurality of frames of video data to form a second rendered stream, or combining the first rendered stream and the second rendered stream for substantially simultaneous display on a display device at 2012 (e.g. receiving a payment based on multiplexing first and second rendered streams).
  • receiving a consideration for the dynamically-customized audio-visual content at 650 may include receiving at least a portion of a consideration based at least partially on at least one of modeling at least one object using a wireframe model including a plurality of polygons, or applying texture data to the plurality of polygons to provide a three-dimensional appearance to the wireframe model for display on a display device at 2014 .
  • receiving a consideration for the dynamically-customized audio-visual content at 650 may include receiving at least a portion of a consideration based at least partially on at least one of rendering a supplemental video stream, blocking a portion of the audio-visual core portion, or combining the supplemental video stream with at least an unblocked portion of the audio-visual core portion at 2016 .
  • modifying the audio-visual core portion with at least one revised content portion in accordance with the at least one selection signal to create a dynamically customized audio-visual content at 630 may include rendering a supplemental video stream, blocking a portion of the audio-visual core portion, combining the supplemental video stream with at least an unblocked portion of the audio-visual core portion, and using an area outside a letterboxed portion to display a supplemental content at 2102 .
  • the operations at 2102 may include, for example, those techniques disclosed in U.S. Pat. Nos. 7,945,926 and 7,631,327 issued to Dempski et al. (disclosing techniques for video animation and merging with television broadcasts and supplemental content sources), which patents were previously incorporated herein by reference.
  • modifying the audio-visual core portion with at least one revised content portion in accordance with the at least one selection signal to create a dynamically customized audio-visual content at 630 may include providing a three-dimensional model of a first object having one or more characteristics to be modified, providing a three-dimensional model of a second object having one or more characteristics that are to be adopted, and replacing the one or more characteristics to be modified with the one or more characteristics that are to be adopted to provide a modified model of the first object at 2104 .
  • the “providing” operations at 2104 may, in at least some implementations, be accomplished by a dynamic customization system (e.g. system 160).
  • the “adopting” operations at 2104 may include one or more of reusing operations, copying operations, grafting operations, re-skinning operations, illuminating operations, or any other suitable operations.
  • the operations at 2104 may include, for example, those techniques disclosed in U.S. Pat. No. 7,109,993 and U.S. Patent Publication No. 20070165022 by Peleg et al. (disclosing generating a head model and modifying portions of facial features), which patent and pending application are incorporated herein by reference.
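At its core, the replacement step at 2104 swaps selected characteristics of a first model for those of a second. The following sketch uses plain dictionaries with invented keys; real head models would carry mesh and texture data per characteristic.

```python
# Illustrative sketch of the operations at 2104: one or more characteristics
# of a second 3-D model (e.g. a viewer-selected actor) replace the
# corresponding characteristics of a first model, yielding a modified model.

def replace_characteristics(first_model, second_model, keys_to_adopt):
    """Adopt the named characteristics from the second model into the first."""
    modified = dict(first_model)
    for key in keys_to_adopt:
        modified[key] = second_model[key]
    return modified

original_actor = {"face": "face_A", "voice": "voice_A", "build": "build_A"}
selected_actor = {"face": "face_B", "voice": "voice_B", "build": "build_B"}
modified = replace_characteristics(original_actor, selected_actor, ["face", "voice"])
print(modified)   # {'face': 'face_B', 'voice': 'voice_B', 'build': 'build_A'}
```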
  • modifying the audio-visual core portion with at least one revised content portion in accordance with the at least one selection signal to create a dynamically customized audio-visual content at 630 may include modeling at least one object to be modified using a plurality of sections, and at least one of replacing, adjusting, moving, or modifying at least one of the plurality of sections in accordance with a stored information, the stored information being determined at least partially based on the at least one selection signal at 2106 .
  • the operations at 2106 may include, for example, those techniques disclosed in U.S. Pat. No. 6,054,999 issued to Strandberg (disclosing producing graphic movement sequences from recordings of measured data from strategic parts of actors), which patent is incorporated herein by reference.
  • receiving a consideration for the dynamically-customized audio-visual content at 650 may include receiving at least a portion of a consideration based at least partially on at least one of rendering a supplemental video stream, blocking a portion of the audio-visual core portion, combining the supplemental video stream with at least an unblocked portion of the audio-visual core portion, or using an area outside a letterboxed portion to display a supplemental content at 2112 .
  • receiving a consideration for the dynamically-customized audio-visual content at 650 may include receiving at least a portion of a consideration based at least partially on at least one of providing a three-dimensional model of a first object having one or more characteristics to be modified, providing a three-dimensional model of a second object having one or more characteristics that are to be adopted, or replacing the one or more characteristics to be modified with the one or more characteristics that are to be adopted to provide a modified model of the first object at 2114 .
  • receiving a consideration for the dynamically-customized audio-visual content at 650 may include receiving at least a portion of a consideration based at least partially on at least one of modeling at least one object to be modified using a plurality of sections, or at least one of replacing, adjusting, moving, or modifying at least one of the plurality of sections in accordance with a stored information, the stored information being determined at least partially based on the at least one selection signal at 2116 .
  • modifying the audio-visual core portion with at least one revised content portion in accordance with the at least one selection signal to create a dynamically customized audio-visual content at 630 may include providing a first wire-frame model of a first object that is to be modified and a second wire-frame model of a second object having one or more characteristics that are to be mapped onto the first wire-frame model, obtaining a fitting function for mapping the one or more characteristics from the second wire-frame model onto the first wire-frame model, the one or more characteristics being at least partially determined in accordance with the at least one selection signal, and mapping the one or more characteristics from the second wire-frame model onto the first wire-frame model using the fitting function at 2202 .
  • the operations at 2202 may include, for example, those techniques disclosed in U.S. Pat. No. 5,926,575 issued to Ohzeki et al. (disclosing techniques for image deformation or distortion based on correspondence to a reference image, wire-frame modeling of images and texture mapping), which patent is incorporated herein by reference.
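One plausible fitting function for the mapping at 2202 is a per-axis scale fit between the two wire-frames' extents, sketched below. This particular fit is an assumption for illustration, not the method of the cited patent; practical systems fit full affine or deformation transforms over corresponding vertices.

```python
# Hedged sketch of the operations at 2202: obtain a fitting function that maps
# feature offsets measured on a second wire-frame into the coordinate space of
# a first wire-frame, here via a simple ratio of the models' x-extents.

def extent(vertices):
    """Width of a wire-frame along the x axis."""
    xs = [v[0] for v in vertices]
    return max(xs) - min(xs)

def fit(first_frame, second_frame):
    """Return a function scaling second-frame x-offsets into first-frame space."""
    scale = extent(first_frame) / extent(second_frame)
    return lambda dx: dx * scale

first = [(0.0, 0.0), (2.0, 0.0)]      # first wire-frame, width 2
second = [(0.0, 0.0), (4.0, 0.0)]     # second wire-frame, width 4
fitting = fit(first, second)
print(fitting(1.0))   # 0.5: a 1-unit feature offset on the second frame maps to 0.5
```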
  • modifying the audio-visual core portion with at least one revised content portion in accordance with the at least one selection signal to create a dynamically customized audio-visual content at 630 may include providing at least one background image portion that includes at least a portion of an object to be modified, and at least one foreground image portion that includes at least one aspect that is to be adapted to at least part of the object to be modified, at least one of scaling, translating, rotating, or distorting the at least one foreground image portion to substantially conform the at least one foreground image portion with the at least one background image portion, and merging the at least one foreground image portion with the at least one background image portion for display on a display device at 2204 .
  • the operations at 2204 may include, for example, those techniques disclosed in U.S. Pat. No. 5,623,587 issued to Bulman (disclosing techniques for creation of composite electronic images from multiple individual images), which patent is incorporated herein by reference.
  • receiving a consideration for the dynamically-customized audio-visual content at 650 may include receiving at least a portion of a consideration based at least partially on at least one of providing a first wire-frame model of a first object that is to be modified and a second wire-frame model of a second object having one or more characteristics that are to be mapped onto the first wire-frame model, obtaining a fitting function for mapping the one or more characteristics from the second wire-frame model onto the first wire-frame model, the one or more characteristics being at least partially determined in accordance with the at least one selection signal, or mapping the one or more characteristics from the second wire-frame model onto the first wire-frame model using the fitting function at 2202 .
  • receiving a consideration for the dynamically-customized audio-visual content at 650 may include receiving at least a portion of a consideration based at least partially on at least one of providing at least one background image portion that includes at least a portion of an object to be modified, and at least one foreground image portion that includes at least one aspect that is to be adapted to at least part of the object to be modified, at least one of scaling, translating, rotating, or distorting the at least one foreground image portion to substantially conform the at least one foreground image portion with the at least one background image portion, or merging the at least one foreground image portion with the at least one background image portion for display on a display device at 2214 .
  • modifying the audio-visual core portion with at least one revised content portion in accordance with the at least one selection signal to create a dynamically customized audio-visual content at 630 may include combining a plurality of images to provide a synthesized object having at least one of an animation capability, a sound capability, or a synchronized animation and sound capability, and commanding at least one of a movement, a sound, or a synchronized movement and sound of the synthesized object using a script file at least partially based on the at least one selection signal at 2302 .
  • the operations at 2302 may include, for example, those techniques disclosed in U.S. Pat. No. 5,111,409 issued to Gasper et al.
  • modifying the audio-visual core portion with at least one revised content portion in accordance with the at least one selection signal to create a dynamically customized audio-visual content at 630 may include altering a plurality of light intensities at a plurality of pixel locations corresponding to one or more aspects of an object to be modified at least partially based on the at least one selection signal at 2304 .
  • modifying the audio-visual core portion with at least one revised content portion in accordance with the at least one selection signal to create a dynamically customized audio-visual content at 630 may include determining a plurality of pixels of at least one digital image that are to be adjusted based on at least a portion of a speaker changing from speaking a first dialogue portion to a second dialogue portion, and altering one or more light intensities of at least some of the plurality of pixels to adjust the at least one digital image to depict the at least a portion of the speaker speaking the second dialogue portion at 2306 .
  • the operations at 2304 and 2306 may include, for example, those techniques disclosed in U.S. Pat. Nos. 4,827,532 and 4,600,281 and 4,260,229 issued to Bloomstein (disclosing techniques for substitution of sound track language and corresponding lip movements), which patents are incorporated herein by reference.
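The two-step structure at 2306 — first determining which pixels must change when the speaker's dialogue changes, then altering only those intensities — can be sketched as follows. Frames are flat lists of grayscale intensities and all values are invented.

```python
# Illustrative sketch of the operations at 2306: locate the pixels (e.g. in the
# mouth region) that differ between the speaker mouthing a first dialogue
# portion and a second dialogue portion, then alter only those intensities.

def pixels_to_adjust(frame_first_dialogue, frame_second_dialogue):
    """Indices of pixels whose intensity must change for the new dialogue."""
    return [i for i, (a, b) in enumerate(zip(frame_first_dialogue,
                                             frame_second_dialogue)) if a != b]

def alter(frame, target_frame, indices):
    """Alter only the identified pixel intensities toward the target frame."""
    adjusted = list(frame)
    for i in indices:
        adjusted[i] = target_frame[i]
    return adjusted

saying_first = [5, 5, 120, 130, 5]    # speaker mouthing the first dialogue
saying_second = [5, 5, 90, 140, 5]    # same speaker mouthing the second dialogue
indices = pixels_to_adjust(saying_first, saying_second)
print(indices, alter(saying_first, saying_second, indices))
```

Restricting the alteration to the differing pixels leaves the rest of the image untouched, which is what makes dialogue substitution visually seamless.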
  • receiving a consideration for the dynamically-customized audio-visual content at 650 may include receiving at least a portion of a consideration based at least partially on at least one of combining a plurality of images to provide a synthesized object having at least one of an animation capability, a sound capability, or a synchronized animation and sound capability, or commanding at least one of a movement, a sound, or a synchronized movement and sound of the synthesized object using a script file at least partially based on the at least one selection signal at 2312 .
  • receiving a consideration for the dynamically-customized audio-visual content at 650 may include receiving at least a portion of a consideration based at least partially on altering a plurality of light intensities at a plurality of pixel locations corresponding to one or more aspects of an object to be modified at least partially based on the at least one selection signal at 2314 .
  • receiving a consideration for the dynamically-customized audio-visual content at 650 may include receiving at least a portion of a consideration based at least partially on at least one of determining a plurality of pixels of at least one digital image that are to be adjusted based on at least a portion of a speaker changing from speaking a first dialogue portion to a second dialogue portion, or altering one or more light intensities of at least some of the plurality of pixels to adjust the at least one digital image to depict the at least a portion of the speaker speaking the second dialogue portion at 2316 .
  • modifying the audio-visual core portion with at least one revised content portion in accordance with the at least one selection signal to create a dynamically customized audio-visual content at 630 may include replacing a portion of the audio-visual core portion with a replacement audio-visual portion based on a selection of at least one of an alternative story line or an alternative plot, the selection being at least partially based on the at least one selection signal at 2402 .
  • the operations at 2402 may include, for example, those techniques disclosed in U.S. Pat. No. 4,569,026 issued to Best (disclosing techniques for interactive entertainment systems), which patent is incorporated herein by reference.
  • modifying the audio-visual core portion with at least one revised content portion in accordance with the at least one selection signal to create a dynamically customized audio-visual content at 630 may include annotating a portion of the audio-visual core portion with an annotation portion at least partially based on the at least one selection signal at 2404 .
  • the operations at 2404 may include, for example, those techniques disclosed in U.S. Patent Publication No. 20040181592 by Samra et al. (disclosing techniques for annotating and versioning digital media), which pending patent application is incorporated herein by reference.
  • modifying the audio-visual core portion with at least one revised content portion in accordance with the at least one selection signal to create a dynamically customized audio-visual content at 630 may include determining one or more control parameters associated with a control event available for modification, determining one or more additional parameters of at least one additional event influenced upon modification of the one or more control parameters associated with the control event, and modifying at least some of the one or more control parameters and the one or more additional parameters at least partially based on the at least one selection signal at 2406 .
  • the operations at 2406 may include, for example, those techniques disclosed in U.S. Patent Publication No. 20110029099 by Benson (disclosing techniques for providing audio visual content), which pending patent application is incorporated herein by reference.
  • receiving a consideration for the dynamically-customized audio-visual content at 650 may include receiving at least a portion of a consideration based at least partially on replacing a portion of the audio-visual core portion with a replacement audio-visual portion based on a selection of at least one of an alternative story line or an alternative plot, the selection being at least partially based on the at least one selection signal at 2412 .
  • receiving a consideration for the dynamically-customized audio-visual content at 650 may include receiving at least a portion of a consideration based at least partially on annotating a portion of the audio-visual core portion with an annotation portion at least partially based on the at least one selection signal at 2414 .
  • receiving a consideration for the dynamically-customized audio-visual content at 650 may include receiving at least a portion of a consideration based at least partially on at least one of determining one or more control parameters associated with a control event available for modification, determining one or more additional parameters of at least one additional event influenced upon modification of the one or more control parameters associated with the control event, or modifying at least some of the one or more control parameters and the one or more additional parameters at least partially based on the at least one selection signal at 2416 .
  • receiving at least one audio-visual core portion at 610 may involve a variety of different ways and aspects. For example, in some implementations, receiving at least one audio-visual core portion at 610 may include receiving an audio portion and not a visual portion at 2502 . In other implementations, receiving at least one audio-visual core portion at 610 may include receiving a visual portion and not an audio portion at 2504 . In still other implementations, receiving at least one audio-visual core portion at 610 may include receiving a separate audio portion and a separate visual portion at 2506 . In further implementations, receiving at least one audio-visual core portion at 610 may include receiving a combined audio and visual portion at 2508 .
  • receiving at least one audio-visual core portion at 610 may include receiving one or more audio portions and one or more visual portions at 2510 (e.g. receiving a plurality of audio portions and a single video portion, receiving a single audio portion and a plurality of video portions, etc.).
  • modifying the audio-visual core portion with at least one revised content portion in accordance with the at least one selection signal to create a dynamically customized audio-visual content at 630 may involve a variety of different ways and aspects.
  • modifying the audio-visual core portion with at least one revised content portion in accordance with the at least one selection signal to create a dynamically customized audio-visual content at 630 may include modifying an audio portion and not a visual portion at 2522 .
  • modifying the audio-visual core portion with at least one revised content portion in accordance with the at least one selection signal to create a dynamically customized audio-visual content at 630 may include modifying a visual portion and not an audio portion at 2524 .
  • modifying the audio-visual core portion with at least one revised content portion in accordance with the at least one selection signal to create a dynamically customized audio-visual content at 630 may include modifying a separate audio portion and modifying a separate visual portion at 2526 .
  • modifying the audio-visual core portion with at least one revised content portion in accordance with the at least one selection signal to create a dynamically customized audio-visual content at 630 may include modifying a combined audio and visual portion at 2528 .
  • modifying the audio-visual core portion with at least one revised content portion in accordance with the at least one selection signal to create a dynamically customized audio-visual content at 630 may include modifying one or more audio portions and modifying one or more visual portions at 2530 (e.g. modifying a plurality of audio portions and modifying a single video portion, modifying a single audio portion and modifying a plurality of video portions, etc.).
  • receiving a consideration for the dynamically-customized audio-visual content at 650 may include receiving at least a portion of a consideration based at least partially on modifying an audio portion and not a visual portion at 2532 .
  • receiving a consideration for the dynamically-customized audio-visual content at 650 may include receiving at least a portion of a consideration based at least partially on modifying a visual portion and not an audio portion at 2534 .
  • receiving a consideration for the dynamically-customized audio-visual content at 650 may include receiving at least a portion of a consideration based at least partially on modifying a separate audio portion and modifying a separate visual portion at 2536 .
  • receiving a consideration for the dynamically-customized audio-visual content at 650 may include receiving at least a portion of a consideration based at least partially on modifying a combined audio and visual portion at 2538 .
  • receiving a consideration for the dynamically-customized audio-visual content at 650 may include receiving at least a portion of a consideration based at least partially on modifying one or more audio portions and modifying one or more visual portions at 2540 (e.g. receiving payment for modifying a plurality of audio portions and modifying a single video portion, modifying a single audio portion and modifying a plurality of video portions, etc.).
  • outputting the dynamically-customized audio-visual content at 640 may involve a variety of different ways and aspects.
  • outputting the dynamically-customized audio-visual content at 640 may include outputting a dynamically-customized audio portion and not a dynamically-customized visual portion at 2602 .
  • outputting the dynamically-customized audio-visual content at 640 may include outputting a dynamically-customized visual portion and not a dynamically-customized audio portion at 2604 .
  • outputting the dynamically-customized audio-visual content at 640 may include outputting a separate dynamically-customized audio portion and a separate dynamically-customized visual portion at 2606 .
  • outputting the dynamically-customized audio-visual content at 640 may include outputting a combined dynamically-customized audio and visual portion at 2608 .
  • outputting the dynamically-customized audio-visual content at 640 may include outputting one or more dynamically-customized audio portions and one or more dynamically-customized visual portions at 2610 (e.g. outputting a plurality of audio portions and outputting a single video portion, outputting a single audio portion and outputting a plurality of video portions, etc.).
  • receiving a consideration for the dynamically-customized audio-visual content at 650 may include receiving at least a portion of a consideration based at least partially on outputting a dynamically-customized audio portion and not a dynamically-customized visual portion at 2612 .
  • receiving a consideration for the dynamically-customized audio-visual content at 650 may include receiving at least a portion of a consideration based at least partially on outputting a dynamically-customized visual portion and not a dynamically-customized audio portion at 2614 .
  • receiving a consideration for the dynamically-customized audio-visual content at 650 may include receiving at least a portion of a consideration based at least partially on outputting a separate dynamically-customized audio portion and a separate dynamically-customized visual portion at 2616 .
  • receiving a consideration for the dynamically-customized audio-visual content at 650 may include receiving at least a portion of a consideration based at least partially on outputting a combined dynamically-customized audio and visual portion at 2618 .
  • receiving a consideration for the dynamically-customized audio-visual content at 650 may include receiving at least a portion of a consideration based at least partially on outputting one or more dynamically-customized audio portions and one or more dynamically-customized visual portions at 2620 (e.g. receiving payment for outputting a plurality of audio portions and outputting a single video portion, outputting a single audio portion and outputting a plurality of video portions, etc.).
  • receiving at least one selection signal indicative of a viewer preference at 620 may include receiving an input from a viewer indicative of a desired setting selected from at least one sliding scale of at least one viewing aspect at 2702 .
  • FIG. 28 shows one possible implementation of a user interface 2800 in accordance with the teachings of the present disclosure.
  • the user interface 2800 displays a plurality of customization aspects 2810 having a corresponding plurality of sliding scales 2820 (e.g. comedy scale, action scale, drama scale, etc.).
  • a viewer may position each selector 2822 associated with each sliding scale 2820 to indicate their desired preferences associated with each customization aspect 2810 , resulting in a suitably customized audio-visual content.
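The sliding-scale interface of FIG. 28 can be reduced to a simple mapping from selector positions to a selection signal, sketched below. The 0-100 position range and the dictionary signal format are assumptions; the aspect names follow the examples in the text.

```python
# Hypothetical sketch of the interface at 2702 / FIG. 28: each sliding scale's
# selector position becomes one field of the selection signal that is sent to
# the dynamic customization system.

def selection_signal(scales):
    """Clamp each selector position into [0, 100] and emit the signal."""
    return {aspect: max(0, min(100, pos)) for aspect, pos in scales.items()}

signal = selection_signal({"comedy": 80, "action": 45, "drama": 120})
print(signal)   # {'comedy': 80, 'action': 45, 'drama': 100}
```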
  • receiving at least one selection signal indicative of a viewer preference at 620 may include receiving an input from a viewer indicative of a desired viewing profile selected from a plurality of viewing profiles associated with the viewer at 2704 .
  • FIG. 29 shows one possible implementation of a user interface 2900 in accordance with the teachings of the present disclosure.
  • the user interface 2900 displays a plurality of customization profiles 2910 (e.g. family time, viewing with spouse, viewing alone, etc.) associated with a particular viewer 2920 (e.g. “Arnold”).
  • the particular viewer 2920 may select the desired profile 2910 depending upon who else (if anyone) may be present in the viewing area with the particular viewer 2920 , resulting in a suitably customized audio-visual content.
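Profile selection as in FIG. 29 amounts to a lookup keyed by the viewer's choice, sketched below. The profile names follow the examples in the text, but the profile contents (aspect levels) are invented for illustration.

```python
# Hedged sketch of the interface at 2704 / FIG. 29: a viewer picks one of
# several stored viewing profiles, and the chosen profile supplies the
# selection signal used for dynamic customization.

PROFILES = {
    "family time":         {"profanity": 0,  "violence": 10},
    "viewing with spouse": {"profanity": 30, "violence": 40},
    "viewing alone":       {"profanity": 60, "violence": 80},
}

def select_profile(viewer, choice):
    """Return the selection signal for the viewer's chosen profile."""
    if choice not in PROFILES:
        raise ValueError(f"unknown profile for {viewer}: {choice}")
    return PROFILES[choice]

print(select_profile("Arnold", "family time"))   # {'profanity': 0, 'violence': 10}
```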
  • receiving at least one selection signal indicative of a viewer preference at 620 may include monitoring at least one characteristic of at least one viewer at 2706 (e.g. facial features, smile, frown, scowl, displeasure, interest, lack of interest, laughter, tears, fear, anxiety, sadness, disgust, shock, distaste, etc.), and modifying the audio-visual core portion with at least one revised content portion in accordance with the at least one selection signal to create a dynamically customized audio-visual content at 630 may include automatically adjusting at least one customization aspect in response to the at least one characteristic of the at least one viewer (e.g. increasing comedy aspects, reducing horror aspects, increasing dramatic aspects, reducing profanity aspects, etc.) at 2708 .
  • a monitoring device may sense facial features associated with displeasure at particular occurrences of profane dialogue, and may automatically reduce the amount of profanity contained in the dialogue.
  • the monitoring device may sense a higher-than-desired level of fear, and may automatically reduce the horror aspects of the dynamically customized audio-visual content to provide a desired level of fear to the viewer.
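The monitoring-driven feedback loop described above can be sketched as a rule table mapping a sensed viewer characteristic to an aspect adjustment. Both the rules and the numeric levels are hypothetical; sensing itself (facial-feature detection, etc.) is out of scope here.

```python
# Illustrative sketch of the loop at 2706 / 2708: a sensed viewer
# characteristic automatically adjusts a customization aspect, e.g.
# displeasure at profanity lowers the profanity level.

ADJUSTMENTS = {            # sensed characteristic -> (aspect, delta)
    "displeasure": ("profanity", -10),
    "fear":        ("horror",    -10),
    "laughter":    ("comedy",    +10),
}

def adjust(aspects, sensed_characteristic):
    """Apply the adjustment rule for the sensed characteristic, floored at 0."""
    aspect, delta = ADJUSTMENTS[sensed_characteristic]
    updated = dict(aspects)
    updated[aspect] = max(0, updated[aspect] + delta)
    return updated

aspects = {"profanity": 30, "horror": 50, "comedy": 40}
print(adjust(aspects, "displeasure"))   # {'profanity': 20, 'horror': 50, 'comedy': 40}
```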
  • receiving a consideration for the dynamically-customized audio-visual content at 650 may include receiving at least a portion of a consideration based at least partially on at least one of monitoring at least one characteristic of at least one viewer, or automatically adjusting at least one customization aspect in response to the at least one characteristic of the at least one viewer (e.g. receiving payment for increasing comedy aspects, receiving payment for reducing horror aspects, receiving payment for increasing dramatic aspects, receiving payment for reducing profanity aspects, etc.) at 2718 .
  • receiving at least one selection signal indicative of a viewer preference at 620 may include sensing at least one characteristic of at least one viewer at 3002 , and modifying the audio-visual core portion with at least one revised content portion in accordance with the at least one selection signal to create a dynamically customized audio-visual content at 630 may include automatically changing a viewing profile associated with the viewer in response to the sensed at least one characteristic of the at least one viewer at 3012 .
  • a sensing device (e.g. a Kinect® device, a Nintendo Wii® device, etc.) may be used to sense the at least one characteristic of the at least one viewer.
  • receiving at least one selection signal indicative of a viewer preference at 620 may include monitoring a viewing area into which a dynamically-customized audio-visual content is to be displayed at 3004 , and modifying the audio-visual core portion with at least one revised content portion in accordance with the at least one selection signal to create a dynamically customized audio-visual content at 630 may include automatically adjusting at least one customization aspect in response to a change in at least one characteristic of the viewing area at 3014 .
  • a monitoring device may sense that a less than desired amount of laughter is occurring in the viewing area (e.g. using pattern recognition techniques, etc.), and may automatically increase a comedy level of the dynamically customized audio-visual content.
  • the sensing device may sense that more than a desired level of screaming is occurring within the viewing area, and may automatically reduce a horror level of the dynamically customized audio-visual content.
  • receiving at least one selection signal indicative of a viewer preference at 620 may include sensing a change in a number of viewers in a viewing area into which a dynamically-customized audio-visual content is to be displayed at 3006 , and modifying the audio-visual core portion with at least one revised content portion in accordance with the at least one selection signal to create a dynamically customized audio-visual content at 630 may include automatically adjusting at least one customization aspect in response to a change in the number of viewers in the viewing area at 3016 .
  • a monitoring device may sense that a viewer's spouse has entered the viewing area (e.g. using facial recognition techniques, body recognition techniques, voice recognition techniques, etc.), and may automatically change from a first viewing profile (e.g. a “viewing alone” profile) to a second viewing profile (e.g. a “viewing with spouse” profile).
  • the sensing device may sense that a viewer's children have departed from the viewing area, and may automatically change from a family-oriented viewing profile to an individual-oriented viewing profile.
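The profile switching at 3006/3016 can be sketched as a rule over the set of people recognized in the viewing area. Recognition itself (facial, body, voice) is out of scope, and the rule table and profile names are taken from the examples in the text rather than from any specified algorithm.

```python
# Hedged sketch of the operations at 3006 / 3016: the set of recognized people
# in the viewing area selects which viewing profile is active.

def profile_for(viewers_present):
    """Pick a viewing profile from who is currently in the viewing area."""
    present = set(viewers_present)
    if "children" in present:
        return "family time"
    if "spouse" in present:
        return "viewing with spouse"
    return "viewing alone"

print(profile_for(["viewer"]))                        # viewing alone
print(profile_for(["viewer", "spouse"]))              # viewing with spouse
print(profile_for(["viewer", "spouse", "children"]))  # family time
```

Re-evaluating this rule whenever the monitored set changes implements the automatic profile change described above.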
  • receiving a consideration for the dynamically-customized audio-visual content at 650 may include receiving at least a portion of a consideration based at least partially on at least one of sensing at least one characteristic of at least one viewer, or automatically changing a viewing profile associated with the viewer in response to the sensed at least one characteristic of the at least one viewer at 3022 (e.g. receiving payment for sensing a viewer's emotion with a Kinect® device, and automatically changing from a first viewing profile to a second viewing profile that better fits the viewer's emotion).
  • receiving a consideration for the dynamically-customized audio-visual content at 650 may include receiving at least a portion of a consideration based at least partially on at least one of monitoring a viewing area into which a dynamically-customized audio-visual content is to be displayed, or automatically adjusting at least one customization aspect in response to a change in at least one characteristic of the viewing area at 3024 (e.g. receiving payment for a monitoring device indicating that more than a desired level of screaming is occurring within the viewing area, and automatically reducing a horror level of the dynamically customized audio-visual content).
  • receiving a consideration for the dynamically-customized audio-visual content at 650 may include receiving at least a portion of a consideration based at least partially on at least one of sensing a change in a number of viewers in a viewing area into which a dynamically-customized audio-visual content is to be displayed, or automatically adjusting at least one customization aspect in response to a change in the number of viewers in the viewing area at 3016 (e.g. receiving payment for a monitoring device sensing that a viewer's spouse has entered the viewing area, and automatically changing from a “viewing alone” profile to a “viewing with spouse” profile, etc.).
  • FIG. 31 shows additional embodiments of processes for dynamic customization of audio-visual content in accordance with the present disclosure. More specifically, in some implementations, receiving at least one selection signal indicative of a viewer preference at 620 may include receiving at least one input indicative of one or more other viewer reactions to a portion of audio-visual content at 3102 , and modifying the audio-visual core portion with at least one revised content portion in accordance with the at least one selection signal to create a dynamically customized audio-visual content at 630 may include adjusting at least one customization aspect in response to the at least one input indicative of one or more other viewer reactions at 3112 . For example, in some implementations, an input signal may be received (e.g.
  • receiving at least one selection signal indicative of a viewer preference at 620 may include receiving at least one input indicative of one or more other parent reactions to a portion of audio-visual content at 3104 , and modifying the audio-visual core portion with at least one revised content portion in accordance with the at least one selection signal to create a dynamically customized audio-visual content at 630 may include modifying a portion of audio-visual content in response to the at least one input indicative of one or more other parent reactions at 3114 .
  • an input may be received indicating that a majority of parents reacted negatively to a particular portion of audio-visual content (e.g.
  • receiving at least one selection signal indicative of a viewer preference at 620 may include receiving at least one input indicative of a viewing history of at least one viewer within a viewing area into which a dynamically customized audio-visual content is to be displayed at 3106 , and modifying the audio-visual core portion with at least one revised content portion in accordance with the at least one selection signal to create a dynamically customized audio-visual content at 630 may include modifying a portion of audio-visual content in response to the at least one input indicative of a viewing history at 3116 .
  • an input may be received indicating that a viewer has repeatedly changed a channel whenever a particular portion of audio-visual content has been displayed, and in response to the at least one input, the particular portion of audio-visual content is automatically replaced with a replacement portion of content.
  • receiving a consideration for the dynamically-customized audio-visual content at 650 may include receiving at least a portion of a consideration based at least partially on at least one of receiving at least one input indicative of one or more other viewer reactions to a portion of audio-visual content, or adjusting at least one customization aspect in response to the at least one input indicative of one or more other viewer reactions at 3122 (e.g. receiving a payment for receiving an input from a service that assesses viewer reactions, and modifying content based on other demographically-similar viewers, etc.).
  • receiving a consideration for the dynamically-customized audio-visual content at 650 may include receiving at least a portion of a consideration based at least partially on at least one of receiving at least one input indicative of one or more other parent reactions to a portion of audio-visual content, or modifying a portion of audio-visual content in response to the at least one input indicative of one or more other parent reactions at 3124 (e.g. receiving a payment for receiving an input indicating that a majority of parents reacted negatively to a particular portion of audio-visual content, and automatically modifying one or more aspects of the content to improve parental satisfaction, etc.).
  • receiving a consideration for the dynamically-customized audio-visual content at 650 may include receiving at least a portion of a consideration based at least partially on at least one of receiving at least one input indicative of a viewing history of at least one viewer within a viewing area into which a dynamically customized audio-visual content is to be displayed, or modifying a portion of audio-visual content in response to the at least one input indicative of a viewing history at 3126 (e.g. receiving a payment for determining that a viewer has repeatedly changed a channel whenever a particular actor has appeared, and automatically replacing the particular actor with a replacement actor based on the viewer's history).
  • receiving at least one selection signal indicative of a viewer preference at 620 may include receiving at least one input indicative that at least one viewer has not viewed one or more prerequisite content portions at 3202 , and modifying the audio-visual core portion with at least one revised content portion in accordance with the at least one selection signal to create a dynamically customized audio-visual content at 630 may include supplementing at least a portion of audio-visual content with at least some of the one or more prerequisite content portions in response to the at least one input at 3212 .
  • an input may be received indicating that a viewer has missed previous episodes of a series, and in response to the at least one input, the audio-visual core portion is automatically supplemented with one or more scenes that provide essential plot points that the viewer will need to view in order to be brought up to speed for the upcoming episode.
  • receiving at least one selection signal indicative of a viewer preference at 620 may include receiving at least one input indicative of one or more preferences of at least one viewer based on previous viewing behavior at 3204 , and modifying the audio-visual core portion with at least one revised content portion in accordance with the at least one selection signal to create a dynamically customized audio-visual content at 630 may include automatically adjusting a plot direction of at least a portion of audio-visual content in response to the at least one input at 3214 . For example, in some implementations, an input may be received indicating that a viewer prefers sad endings over happy endings, and in response to the at least one input, the audio-visual core portion is automatically modified to provide a plot direction that ends up with a sad ending rather than a happy ending.
  • receiving at least one selection signal indicative of a viewer preference at 620 may include receiving at least one input indicative of a preferred point of view of at least one viewer at 3206 , and modifying the audio-visual core portion with at least one revised content portion in accordance with the at least one selection signal to create a dynamically customized audio-visual content at 630 may include adjusting the point of view of at least a portion of the audio-visual core portion in response to the at least one input at 3216 . For example, in some implementations, a viewer may manually select from a menu of available points of view (e.g.
  • the audio-visual core portion is automatically adjusted to show content from the selected perspective (e.g. a fight scene from the perspective of one of the fighters, etc.).
  • receiving a consideration for the dynamically-customized audio-visual content at 650 may include receiving at least a portion of a consideration based at least partially on at least one of receiving at least one input indicative that at least one viewer has not viewed one or more prerequisite content portions, or supplementing at least a portion of audio-visual content with at least some of the one or more prerequisite content portions in response to the at least one input at 3222 (e.g. receiving payment for receiving an indication that a viewer has missed previous episodes of a series, and automatically supplementing the content with one or more scenes that provide essential plot points).
  • receiving a consideration for the dynamically-customized audio-visual content at 650 may include receiving at least a portion of a consideration based at least partially on at least one of receiving at least one input indicative of one or more preferences of at least one viewer based on previous viewing behavior, or automatically adjusting a plot direction of at least a portion of audio-visual content in response to the at least one input at 3224 (e.g. receiving payment for receiving an indication that a viewer prefers sad endings over happy endings, and automatically modifying the content to provide a plot direction that ends up with a sad ending rather than a happy ending).
  • receiving a consideration for the dynamically-customized audio-visual content at 650 may include receiving at least a portion of a consideration based at least partially on at least one of receiving at least one input indicative of a preferred point of view of at least one viewer, or adjusting the point of view of at least a portion of the audio-visual core portion in response to the at least one input at 3216 (e.g. receiving payment for receiving an indication that a viewer prefers viewing fighting scenes from a top view, and automatically adjusting a perspective of a fight scene accordingly).
  • receiving at least one selection signal indicative of a viewer preference at 620 may include receiving at least one input indicative of at least one preferred display characteristic at 3302 , and modifying the audio-visual core portion with at least one revised content portion in accordance with the at least one selection signal to create a dynamically customized audio-visual content at 630 may include adjusting at least one display characteristic of at least a portion of the audio-visual core portion in response to the at least one input at 3312 .
  • an input may be received that indicates a display characteristic suitable to a particular viewing environment (e.g. a brightness, a contrast, a volume level, an outdoor viewing environment, etc.) or suitable to a particular viewing device (e.g. an aspect ratio, a display resolution value, a screen size, etc.), and the audio-visual core portion may be adjusted to be optimally displayed in accordance with the display characteristic.
  • receiving at least one selection signal indicative of a viewer preference at 620 may include receiving from a non-private source of information at least one input indicative of a preference of at least one viewer at 3204 (e.g. receiving an input from a viewer's public blog indicating a preference, receiving an input from a viewer's public information placed on a social networking site indicating a preference, etc.), and modifying the audio-visual core portion with at least one revised content portion in accordance with the at least one selection signal to create a dynamically customized audio-visual content at 630 may include adjusting at least a portion of the audio-visual core portion in response to the at least one input at 3214 .
  • receiving at least one selection signal indicative of a viewer preference at 620 may include receiving at least one input indicative of a time period available for viewing for at least one viewer at 3206 (e.g. receiving a manual input from a viewer, reading a viewer's calendar or scheduling software, etc.), and modifying the audio-visual core portion with at least one revised content portion in accordance with the at least one selection signal to create a dynamically customized audio-visual content at 630 may include adjusting at least a portion of the audio-visual core portion to fit the at least one time period available for viewing at 3216 (e.g. omitting a non-essential portion of the audio-visual core portion, etc.).
  • receiving at least one selection signal indicative of a viewer preference at 620 may include receiving at least one input indicative of a preference of at least one viewer with a prior consent from the at least one viewer at 3208 (e.g. receiving an input indicating a preference after a viewer “opts in”).
  • receiving a consideration for the dynamically-customized audio-visual content at 650 may include receiving at least a portion of a consideration based at least partially on at least one of receiving at least one input indicative of at least one preferred display characteristic, or adjusting at least one display characteristic of at least a portion of the audio-visual core portion in response to the at least one input at 3322 .
  • receiving a consideration for the dynamically-customized audio-visual content at 650 may include receiving at least a portion of a consideration based at least partially on at least one of receiving from a non-private source of information at least one input indicative of a preference of at least one viewer, or adjusting at least a portion of the audio-visual core portion in response to the at least one input at 3224 .
  • receiving a consideration for the dynamically-customized audio-visual content at 650 may include receiving at least a portion of a consideration based at least partially on at least one of receiving at least one input indicative of a time period available for viewing for at least one viewer, or adjusting at least a portion of the audio-visual core portion to fit the at least one time period available for viewing at 3226 .
  • program modules include routines, programs, objects, components, data structures, etc. that perform particular tasks or implement particular abstract data types.
  • functionality of the program modules may be combined or distributed as desired in various alternate embodiments.
  • embodiments of these methods, systems, and techniques may be stored on or transmitted across some form of computer readable media.
  • the implementer may opt for a mainly software implementation.
  • the implementer may opt for some combination of hardware, software, and/or firmware.
  • there are various vehicles by which the processes and/or devices and/or other technologies described herein may be effected, and which vehicle may be preferred over another is a choice dependent upon the context in which the vehicle will be deployed and the specific concerns (e.g., speed, flexibility, or predictability) of the implementer, any of which may vary.
  • optical aspects of implementations will typically employ optically-oriented hardware, software, and/or firmware.
  • any two components so associated can also be viewed as being “operably connected” or “operably coupled” (or “operatively connected,” or “operatively coupled”) to each other to achieve the desired functionality, and any two components capable of being so associated can also be viewed as being “operably couplable” (or “operatively couplable”) to each other to achieve the desired functionality.
  • examples of operably couplable components include, but are not limited to, physically mateable and/or physically interacting components and/or wirelessly interactable and/or wirelessly interacting components and/or logically interacting and/or logically interactable components.
  • examples of signal bearing media include, but are not limited to, the following: recordable type media such as floppy disks, hard disk drives, CD ROMs, digital tape, and computer memory; and transmission type media such as digital and analog communication links using TDM or IP based communication links (e.g., packet links).
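Several of the monitoring-driven variants listed above follow a common pattern: compare a sensed quantity against a target and nudge a customization aspect, or trim content to fit a constraint. The following is a minimal illustrative sketch of two such variants (the laughter/screaming adjustment and the time-period fitting); all function names, field names, and numeric thresholds here are assumptions for illustration, not details taken from the disclosure.

```python
def adjust_levels(levels, laughter, screaming,
                  laughter_target=0.5, screaming_limit=0.7, step=0.1):
    """Nudge content customization levels toward sensed audience reactions:
    raise the comedy level when sensed laughter is below target, and lower
    the horror level when sensed screaming exceeds the limit."""
    adjusted = dict(levels)
    if laughter < laughter_target:
        adjusted["comedy"] = min(1.0, adjusted["comedy"] + step)
    if screaming > screaming_limit:
        adjusted["horror"] = max(0.0, adjusted["horror"] - step)
    return adjusted


def fit_to_time(segments, available_minutes):
    """Omit non-essential segments so the program fits the viewer's
    available time period (ordering of segments is simplified here)."""
    kept = [s for s in segments if s["essential"]]
    used = sum(s["minutes"] for s in kept)
    for seg in segments:
        if not seg["essential"] and used + seg["minutes"] <= available_minutes:
            kept.append(seg)
            used += seg["minutes"]
    return kept
```

A processing component of the kind described below could apply such adjustments each time the monitoring device reports new sensed values.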

Abstract

Systems and methods for dynamic customization of audio-visual content are described. In some implementations, a process may include receiving at least one audio-visual core portion, receiving at least one selection signal indicative of a viewer preference, modifying the audio-visual core portion with at least one revised content portion in accordance with the at least one selection signal to create a dynamically customized audio-visual content, outputting the dynamically-customized audio-visual content, and receiving a consideration for the dynamically-customized audio-visual content.

Description

CROSS-REFERENCE TO RELATED APPLICATIONS
The present application is related to and claims the benefit of the earliest available effective filing date(s) from the following listed application(s) (the “Related Applications”) (e.g., claims earliest available priority dates for other than provisional patent applications or claims benefits under 35 USC §119(e) for provisional patent applications, for any and all parent, grandparent, great-grandparent, etc. applications of the Related Application(s)). All subject matter of the Related Applications and of any and all parent, grandparent, great-grandparent, etc. applications of the Related Applications, including any priority claims, is incorporated herein by reference to the extent such subject matter is not inconsistent herewith.
RELATED APPLICATIONS
For purposes of the USPTO extra-statutory requirements:
    • (1) the present application is related to, and claims the benefit of priority of, U.S. patent application Ser. No. 13/566,723, entitled “Dynamic Customization and Monetization of Audio-Visual Content,” naming William H. Gates, III, Nathan P. Myhrvold, Edward K. Y. Jung, Casey Tegreene, Roderick A. Hyde, Lowell L. Wood, Jr., Keith D. Rosema, Pablos Holman, Daniel A. Gerrity, Jordin T. Kare, Royce A. Levien, Robert W. Lord, Richard T. Lord, Mark A. Malamud, and John D. Rinaldo, Jr., filed on Aug. 3, 2012, which is currently co-pending or which is an application of which a currently co-pending application is entitled to the benefit of the filing date;
The United States Patent Office (USPTO) has published a notice to the effect that the USPTO's computer programs require that patent applicants reference both a serial number and indicate whether an application is a continuation, continuation-in-part, or divisional of a parent application. Stephen G. Kunin, Benefit of Prior-Filed Application, USPTO Official Gazette Mar. 18, 2003. The present Applicant Entity (hereinafter “Applicant”) has provided above a specific reference to the application(s) from which priority is being claimed as recited by statute. Applicant understands that the statute is unambiguous in its specific reference language and does not require either a serial number or any characterization, such as “continuation” or “continuation-in-part,” for claiming priority to U.S. patent applications. Notwithstanding the foregoing, Applicant understands that the USPTO's computer programs have certain data entry requirements, and hence Applicant has provided designation(s) of a relationship between the present application and its parent application(s) as set forth above, but expressly points out that such designation(s) are not to be construed in any way as any type of commentary and/or admission as to whether or not the present application contains any new matter in addition to the matter of its parent application(s).
FIELD OF THE DISCLOSURE
The present disclosure relates generally to dynamic customization of audio-visual broadcasts (e.g. television broadcasts, data streams, etc.), and more specifically, to monetization of dynamically customized audio-visual broadcasts.
BACKGROUND
Conventional audio-visual content streams, including television broadcasts or the like, typically consist of either pre-recorded content or live events that do not allow viewers to interact with or control any of the audio-visual content that is displayed. Various concepts have recently been introduced that allow for television broadcasts to be modified to a limited degree to accommodate viewer choices, as disclosed by U.S. Pat. Nos. 7,945,926 and 7,631,327 entitled “Enhanced Custom Content Television” issued to Dempski et al. Such prior art systems and methods are relatively limited, however, in their ability to accommodate and assimilate viewer-related information to provide a dynamically tailored audio-visual content stream. Systems and methods for monetization of dynamically customized audio-visual broadcasts that provide an improved degree of accommodation or assimilation of viewer-related choices and characteristics would have considerable utility.
SUMMARY
The present disclosure teaches systems and methods for dynamic customization and monetization of audio-visual content, such as television broadcasts, internet streams, podcasts, audio broadcasts, and the like. For example, in at least some implementations, a process for providing audio-visual content in accordance with the teachings of the present disclosure may include receiving at least one audio-visual core portion, receiving at least one selection signal indicative of a viewer preference, modifying the audio-visual core portion with at least one revised content portion in accordance with the at least one selection signal to create a dynamically customized audio-visual content, outputting the dynamically-customized audio-visual content, and receiving a consideration for the dynamically-customized audio-visual content.
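The five operations of the summarized process can be pictured end-to-end as a simple sketch. This is a hypothetical illustration only: the segment keys, the replacement map keyed by (segment, selection), and the price are invented for the example and are not specified by the disclosure.

```python
def provide_customized_content(core, selection, revised_portions, price):
    """Sketch of the summarized process: receive a core portion and a
    selection signal, swap in revised content portions chosen by the
    selection, output the result, and receive a consideration."""
    customized = [revised_portions.get((segment, selection), segment)
                  for segment in core]
    return customized, price  # (output content, consideration received)


# Hypothetical usage: a viewer who prefers sad endings gets a revised ending.
core = ["intro", "chase_scene", "ending"]
revised = {("ending", "prefers_sad"): "sad_ending"}
content, fee = provide_customized_content(core, "prefers_sad", revised, price=0.99)
```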
This summary is intended to provide an introduction of a few exemplary aspects of implementations in accordance with the present disclosure. It is not intended to provide an exhaustive explanation of all possible implementations, and should thus be construed as merely introductory, rather than limiting, of the following disclosure.
BRIEF DESCRIPTION OF THE DRAWINGS
FIGS. 1-5 show schematic views of systems for dynamic customization and monetization of audio-visual content in accordance with possible implementations of the present disclosure.
FIGS. 6 through 33 are flowcharts of processes for dynamic customization and monetization of audio-visual content in accordance with further possible implementations of the present disclosure.
DETAILED DESCRIPTION
Techniques for dynamic customization and monetization of audio-visual content, such as television broadcasts or other audio-visual content streams, will now be disclosed in the following detailed description. It will be appreciated that many specific details of certain implementations will be described and shown in FIGS. 1 through 33 to provide a thorough understanding of such implementations. One skilled in the art will understand, however, that the present disclosure may have other possible implementations, and that such other implementations may be practiced with or without some of the particular details set forth in the following description.
In the following discussion, exemplary systems or environments for implementing one or more of the teachings of the present disclosure are described first. Next, exemplary flow charts showing various embodiments of processes for dynamic customization and monetization of audio-visual content in accordance with one or more of the teachings of the present disclosure are described.
Exemplary Systems for Dynamic Customization and Monetization of Audio-Visual Content
Embodiments of methods and systems in accordance with the present disclosure may be implemented in a variety of environments. Initially, methods and systems in accordance with the present disclosure will be described in terms of dynamic customization of broadcasts. It should be remembered, however, that inventive aspects of such methods and systems may be applied to other environments that involve audio-visual content streams, and are not necessarily limited to the specific audio-visual broadcast implementations shown herein.
FIG. 1 is a schematic view of a representative system 100 for dynamic customization and monetization of audio-visual content in accordance with an implementation of the present disclosure. In this implementation, the system 100 includes a processing component 110 that receives an audio-visual core portion 102, such as a television broadcast, and provides a dynamically customized audio-visual content 112 to a display 130. In some implementations, a viewer 140 uses a control device 142 to provide one or more selection signals 144 to a sensor 150 which, in turn, provides inputs corresponding to the selection signals 144 to the processing component 110. Alternately, the processing component 110 may operate without selection signals 144, such as by accessing default inputs stored within a memory. In some embodiments, the sensor 150 may receive further supplemental selection signals 145 from a processing device 146 (e.g. laptop, desktop, personal data assistant, cell phone, iPad, iPhone, etc.) associated with the viewer 140.
As described more fully below, based on the one or more selection signals 144 (or default inputs if specific inputs are not provided), the processing component 110 may modify one or more aspects of the incoming audio-visual core portion 102 to provide the dynamically customized audio-visual content 112 that is shown on the display 130. In at least some implementations, the processing component 110 may access a data store 120 having revised content portions stored therein to perform one or more aspects of the processes described below.
In at least some implementations, the processing component 110 may modify the core portion 102 by a rendering process. The rendering process is preferably a real-time (or approximately real-time) process. The rendering process may receive the core portion 102 as a digital signal stream, and may modify one or more aspects of the core portion 102, such as by replacing one or more portions of the core portion 102 with one or more revised content portions retrieved from the data store 120, in accordance with the selection signals 144 (and/or default inputs). It should be appreciated that, in some embodiments, the audio-visual core portion 102 may consist solely of an audio portion, or solely of a visual (or video) portion, or may include a separate audio portion and a separate visual portion. In further embodiments, the audio-visual core portion 102 may include a plurality of audio portions or a plurality of visual portions, or any suitable combination thereof.
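One way to picture this (approximately) real-time rendering is a generator that passes the incoming digital stream through, substituting a revised content portion from the data store whenever one is keyed to an incoming segment. The segment dictionaries and the keying by segment id are assumptions made for illustration.

```python
def render(core_stream, data_store):
    """Yield each incoming segment of the core portion as it arrives,
    replacing it with a revised content portion from the data store
    when one is keyed to that segment's id."""
    for segment in core_stream:
        yield data_store.get(segment["id"], segment)
```

Because the generator consumes and emits one segment at a time, the customized stream can be displayed while later portions of the broadcast are still arriving.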
As used herein, the term “visual” in such phrases as “audio-visual portion,” “audio-visual core portion,” “visual portion,” etc. is used broadly to refer to signals, data, information, or portions thereof that are associated with something which may eventually be viewed on a suitable display device by a viewer (e.g. video, photographs, images, etc.). It should be understood that a “visual portion” is not intended to mean that the signals, data, information, or portions thereof are themselves visible to a viewer. Similarly, as used herein, the term “audio” in such phrases as “audio-visual portion,” “audio-visual core portion,” “audio portion,” etc. is used broadly to refer to signals, data, information, or portions thereof that are associated with something which may eventually produce sound on a suitable output device to a listener, and are not intended to mean that the signals, data, information, or portions thereof are themselves audible to a listener.
It will be appreciated that the components of the system 100 shown in FIG. 1 are merely exemplary, and represent one possible implementation of a system in accordance with the present disclosure. The various components of the system 100 may communicate and exchange information as needed to perform the functions and operations described herein. More specifically, in various implementations, each of the components of the system 100 may be implemented using software, hardware, firmware, or any suitable combinations thereof. Similarly, one or more of the components of the system 100 may be combined, or may be divided or separated into additional components, or additional components may be added, or one or more of the components may simply be eliminated, depending upon the particular requirements or specifications of the operating environment.
It will be appreciated that other suitable embodiments of systems for dynamic customization of audio-visual broadcasts may be conceived. For example, in some embodiments, the display 130 may be that associated with a conventional television or other conventional audio-visual display device, and the processing component 110 may be a separate component, such as a gaming device (e.g. Microsoft Xbox®, Sony Playstation®, Nintendo Wii®, etc.), a media player (e.g. DVD player, Blu Ray device, Tivo, etc.), or any other suitable component. Similarly, the sensor 150 may be a separate component or may alternately be integrated into the same component with the display 130 or the processing component 110. Similarly, the information store 120 may be a separate component or may alternately be integrated into the same component with the processing component 110, the display 130, or the sensor 150. Alternately, some or all of the components (e.g. the processing component 110, the information store 120, the display 130, the sensor 150, etc.) may be integrated into a common component 160.
FIG. 2 is a schematic view of another representative system 200 for dynamic customization of television broadcasts in accordance with an implementation of the present disclosure. In this implementation, the system 200 includes a processing component 210 that receives an audio-visual core portion 202, and provides a dynamically customized audio-visual content 212 to a display 230. A viewer 240 uses a control device 242 to provide one or more selection signals 244 to a sensor 250 which, in turn, provides inputs corresponding to the selection signals 244 to the processing component 210. As described above, the processing component 210 may also operate without selection signals 244, such as by accessing default inputs stored within a memory 220. The sensor 250 may sense a field of view 260 to detect the viewer 240 or one or more other persons 262. In the implementation shown in FIG. 2, the processing component 210, the memory 220, and the sensor 250 are housed within a single device 225.
As described more fully below, based on the one or more selection signals 244 (or default inputs if specific inputs are not provided), the processing component 210 may modify one or more aspects of the incoming audio-visual core portion 202 to provide the dynamically customized audio-visual content 212 that is shown on the display 230. The processing component 210 may also modify one or more aspects of the incoming audio-visual core portion 202 based on one or more persons (e.g. viewer 240, other person 262) sensed within the field of view 260. In at least some implementations, the processing component 210 may retrieve revised content portions stored in the memory 220 to perform one or more aspects of the processes described below.
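The field-of-view sensing in FIG. 2 amounts to mapping the set of detected persons to a viewing profile. The following is a minimal illustrative mapping; the profile names follow the "viewing alone" / "viewing with spouse" / family-oriented examples given elsewhere in this disclosure, while the function name and the person labels are assumptions.

```python
def select_profile(persons_in_view):
    """Map the set of persons sensed in the field of view (e.g. via facial,
    body, or voice recognition) to a viewing profile."""
    if "children" in persons_in_view:
        return "family-oriented"
    if "spouse" in persons_in_view:
        return "viewing with spouse"
    return "viewing alone"
```

Re-evaluating this mapping whenever the sensed set changes yields the automatic profile switches described above, such as moving from a family-oriented profile to an individual-oriented one when a viewer's children leave the viewing area.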
FIG. 3 shows another representative implementation of a system 300 for dynamic customization of audio-visual content in accordance with another possible embodiment. In this implementation, the system 300 may include one or more processors (or processing units) 302, special purpose circuitry 382, a memory 304, and a bus 306 that couples various system components, including the memory 304, to the one or more processors 302 and special purpose circuitry 382 (e.g. ASIC, FPGA, etc.). The bus 306 represents one or more of any of several types of bus structures, including a memory bus or memory controller, a peripheral bus, an accelerated graphics port, and a processor or local bus using any of a variety of bus architectures. In this implementation, the memory 304 includes read only memory (ROM) 308 and random access memory (RAM) 310. A basic input/output system (BIOS) 312, containing the basic routines that help to transfer information between elements within the system 300, such as during start-up, is stored in ROM 308.
The exemplary system 300 further includes a hard disk drive 314 for reading from and writing to a hard disk (not shown), which is connected to the bus 306 via a hard disk drive interface 316 (e.g., a SCSI, ATA, or other type of interface). A magnetic disk drive 318 for reading from and writing to a removable magnetic disk 320 is connected to the system bus 306 via a magnetic disk drive interface 322. Similarly, an optical disk drive 324 for reading from or writing to a removable optical disk 326, such as a CD ROM, DVD, or other optical media, is connected to the bus 306 via an optical drive interface 328. The drives and their associated computer-readable media provide nonvolatile storage of computer-readable instructions, data structures, program modules, and other data for the system 300. Although the exemplary system 300 described herein employs a hard disk, a removable magnetic disk 320, and a removable optical disk 326, it should be appreciated by those skilled in the art that other types of computer-readable media which can store data that is accessible by a computer, such as magnetic cassettes, flash memory cards, digital video disks, random access memories (RAMs), read-only memories (ROMs), and the like, may also be used.
As further shown in FIG. 3, a number of program modules may be stored on the memory 304 (e.g. the ROM 308 or the RAM 310) including an operating system 330, one or more application programs 332, other program modules 334, and program data 336 (e.g. the data store 320, image data, audio data, three dimensional object models, etc.). Alternately, these program modules may be stored on other computer-readable media, including the hard disk, the magnetic disk 320, or the optical disk 326. For purposes of illustration, programs and other executable program components, such as the operating system 330, are illustrated in FIG. 3 as discrete blocks, although it is recognized that such programs and components reside at various times in different storage components of the system 300, and may be executed by the processor(s) 302 or the special purpose circuitry 382 of the system 300.
A user may enter commands and information into the system 300 through input devices such as a keyboard 338 and a pointing device 340. Other input devices (not shown) may include a microphone, joystick, game pad, satellite dish, scanner, or the like. These and other input devices are connected to the processing unit 302 and special purpose circuitry 382 through an interface 342 that is coupled to the system bus 306. A monitor 325 (e.g. display 130, display 230, or any other display device) may be connected to the bus 306 via an interface, such as a video adapter 346. In addition, the system 300 may also include other peripheral output devices (not shown) such as speakers and printers.
The system 300 may operate in a networked environment using logical connections to one or more remote computers (or servers) 358. Such remote computers (or servers) 358 may be a personal computer, a server, a router, a network PC, a peer device or other common network node, and may include many or all of the elements described above relative to system 300. The logical connections depicted in FIG. 3 may include one or more of a local area network (LAN) 348 and a wide area network (WAN) 350. Such networking environments are commonplace in offices, enterprise-wide computer networks, intranets, and the Internet. In this embodiment, the system 300 also includes one or more broadcast tuners 356. The broadcast tuner 356 may receive broadcast signals directly (e.g., analog or digital cable transmissions fed directly into the tuner 356) or via a reception device (e.g., via sensor 150, sensor 250, an antenna, a satellite dish, etc.).
When used in a LAN networking environment, the system 300 may be connected to the local network 348 through a network interface (or adapter) 352. When used in a WAN networking environment, the system 300 typically includes a modem 354 or other means for establishing communications over the wide area network 350, such as the Internet. The modem 354, which may be internal or external, may be connected to the bus 306 via the serial port interface 342. Similarly, the system 300 may exchange (send or receive) wireless signals 353 (e.g. selection signals 144, signals 244, core portion 102, core portion 202, etc.) with one or more remote devices (e.g. remote 142, remote 242, computers 358, etc.), using a wireless interface 355 coupled to a wireless communicator 357 (e.g., sensor 150, sensor 250, an antenna, a satellite dish, a transmitter, a receiver, a transceiver, a photoreceptor, a photodiode, an emitter, a receptor, etc.).
In a networked environment, program modules depicted relative to the system 300, or portions thereof, may be stored in the memory 304, or in a remote memory storage device. More specifically, as further shown in FIG. 3, a dynamic customization component 380 may be stored in the memory 304 of the system 300. The dynamic customization component 380 may be implemented using software, hardware, firmware, or any suitable combination thereof. In cooperation with the other components of the system 300, such as the processing unit 302 or the special purpose circuitry 382, the dynamic customization component 380 may be operable to perform one or more implementations of processes for dynamic customization in accordance with the present disclosure.
It will be appreciated that while the system 300 shown in FIG. 3 is capable of receiving an audio-visual core portion (e.g. core portion 102, core portion 202, etc.) from an external source (e.g. via the wireless communicator 357, the LAN 348, the WAN 350, etc.), in further embodiments, the audio-visual core portion may itself be generated within the system 300, such as by playing media stored within the system memory 304, or stored within the hard disk drive 314, or played on the disk drive 318, the optical drive 324, or any other suitable component of the system 300. In some implementations, the audio-visual core portion may be generated by suitable software routines operating within the system 300.
FIG. 4 is a schematic view of a representative system 400 for dynamic customization of audio-visual content in accordance with an alternate implementation of the present disclosure. In this implementation, the system 400 includes one or more core content providers 410 that provide one or more audio-visual core portions 412 to one or more customization service providers 420. The one or more customization service providers 420 include at least one dynamic customization system 422, which may include one or more of the components described above with respect to FIGS. 1-3.
It will be appreciated that, in at least some implementations, one or more of the core content providers 410, or one or more of the customization service providers 420, may be based or partially based in what is referred to as the “cloud” or “cloud computing,” or may be provided using one or more “cloud services.” For the purposes of this application, cloud computing is the delivery of computational capacity and/or storage capacity as a service. The “cloud” refers to one or more hardware and/or software components that deliver or assist in the delivery of computational and/or storage capacity, including, but not limited to, one or more of a client, an application, a platform, an infrastructure, and a server, and associated hardware and/or software. Cloud and cloud computing may refer to one or more of a computer, a processor, a storage medium, a router, a modem, a virtual machine (e.g., a virtual server), a data center, an operating system, a middleware, a hardware back-end, a software back-end, and a software application. A cloud may refer to a private cloud, a public cloud, a hybrid cloud, and/or a community cloud. A cloud may be a shared pool of configurable computing resources, which may be public, private, semi-private, distributable, scalable, flexible, temporary, virtual, and/or physical. A cloud or cloud service may be delivered over one or more types of network, e.g., the Internet.
As used in this application, a cloud or cloud services may include one or more of infrastructure-as-a-service (“IaaS”), platform-as-a-service (“PaaS”), software-as-a-service (“SaaS”), and desktop-as-a-service (“DaaS”). As a non-exclusive example, IaaS may include, e.g., one or more virtual server instantiations that may start, stop, access, and configure virtual servers and/or storage centers (e.g., providing one or more processors, storage space, and network resources on-demand, e.g., GoGrid and Rackspace). PaaS may include, e.g., one or more software and/or development tools hosted on an infrastructure (e.g., a computing platform and/or a solution stack from which the client can create software interfaces and applications, e.g., Microsoft Azure). SaaS may include, e.g., software hosted by a service provider and accessible over a network (e.g., the software for the application and the data associated with that software application are kept on the network, e.g., Google Apps, SalesForce). DaaS may include, e.g., providing desktops, applications, data, and services for the user over a network (e.g., providing a multi-application framework, the applications in the framework, the data associated with the applications, and services related to the applications and/or the data over the network, e.g., Citrix). The foregoing is intended to be exemplary of the types of systems referred to in this application as “cloud” or “cloud computing” and should not be considered complete or exhaustive.
As further shown in FIG. 4, a viewer 440 may provide one or more selection signals 444 using a manual input device 442. In some implementations, the one or more selection signals 444 may be provided to a sensor 450 which, in turn, provides selection inputs 452 corresponding to the selection signals 444 to the one or more dynamic customization service providers 420. Alternately, the sensor 450 may be eliminated, and the selection signals 444 may be communicated directly to the one or more dynamic customization service providers 420.
As further shown in FIG. 4, in some embodiments, the sensor 450 may receive one or more supplemental selection signals 445 from one or more electronic devices 446 (e.g. laptop, desktop, personal data assistant, cell phone, iPad, iPhone, etc.) associated with the viewer 440. As described above, the one or more supplemental selection signals 445 may be based on a variety of suitable information, including, for example, browsing histories, purchase records, call records, downloaded content, or any other suitable information or data. In some implementations, one or more supplemental selection signals 445 may be automatically determined from one or more characteristics of a viewing area 460, such as a presence of one or more additional viewers 442 (e.g. a child, spouse, friend, visitor, etc.).
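Purely as an illustrative sketch (not part of the disclosure), deriving a supplemental selection signal 445 from data found on a viewer's electronic device — such as a browsing history — might look like the following. The keyword-to-preference mapping and all names are invented assumptions.

```python
# Hypothetical sketch: infer a coarse viewer preference from browsing-history
# entries on an associated electronic device, for use as a supplemental
# selection signal. The interest keywords below are illustrative only.
def supplemental_signal(browsing_history):
    """Return the preference most frequently indicated by the history, if any."""
    interests = {"cars": "vehicles", "travel": "scenery", "cuisine": "food"}
    counts = {}
    for entry in browsing_history:
        for keyword, preference in interests.items():
            if keyword in entry.lower():
                counts[preference] = counts.get(preference, 0) + 1
    return max(counts, key=counts.get) if counts else None

print(supplemental_signal(["Classic Cars Weekly", "cars forum", "travel blog"]))
```

In a fuller system, such an inferred preference would be only one input among many (purchase records, call records, downloaded content, etc.), as the paragraph above notes.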
In operation, the one or more customization service providers 420 receive the one or more selection inputs 452 (or default inputs if specific inputs are not provided), and the audio-visual core portion 412 from the one or more core content providers 410, and using the one or more dynamic customization systems 422, provide a dynamically customized audio-visual content 470 to a display 472 visible to the one or more viewers 440, 442 in the viewing area 460.
In at least some embodiments, one or more viewers 440, 442 may provide one or more payments (or other consideration) 480 to the one or more customization service providers 420 in exchange for the dynamically customized audio-visual content 470. Similarly, in at least some embodiments the one or more customization service providers 420 may provide one or more payments (or other consideration) 482 to the one or more core content providers 410 in exchange for the core audio-visual content 412. In some embodiments, the amounts of at least one of the one or more payments 480, or the one or more payments 482, may be at least partially determined using one or more processes in accordance with the teachings of the present disclosure, as described more fully below.
Again, it should be appreciated that, in some embodiments, the audio-visual core portion 412 may consist solely of an audio portion, solely a visual (or video) portion, a separate audio portion, a separate visual portion, a plurality of audio portions, a plurality of visual portions, or any suitable combination thereof. Similarly, in various embodiments, the dynamically customized audio-visual content 470 may consist solely of an audio portion, solely a visual (or video) portion, a separate audio portion, a separate visual portion, a plurality of audio portions, a plurality of visual portions, or any suitable combination thereof.
FIG. 5 shows a schematic view of another representative system 500 for dynamic customization of audio-visual broadcasts in accordance with an alternate implementation of the present disclosure. It will be appreciated that, in this implementation, the system 500 includes several of the same components as described above for the system 400 shown in FIG. 4; however, the one or more customization service providers 420 have been eliminated. For the sake of brevity, a description of the components described above with respect to FIG. 4 will not be repeated, but rather, the significant new aspects of the system 500 shown in FIG. 5 will be described.
As shown in FIG. 5, in some implementations, the one or more selection inputs 552 are provided to one or more core content providers 510. The one or more core content providers 510 have one or more dynamic customization systems 512. In operation, the one or more core content providers 510 receive the one or more selection inputs 552 (or default inputs if specific inputs are not provided), and modify an audio-visual core portion using the one or more dynamic customization systems 512 to provide a dynamically customized audio-visual content 470 to a display 472 visible to one or more viewers 440, 442 in a viewing area 460. Thus, in at least some implementations, the one or more customization service providers 420 shown in FIG. 4 may be eliminated, and the same one or more entities that normally provide an audio-visual core portion (e.g. normal television broadcasts, etc.) may perform the dynamic customization to provide the desired dynamically customized audio-visual content to viewers.
In at least some embodiments, the one or more viewers 440, 442 may provide one or more payments (or other consideration) 490 to the one or more core content providers 510 in exchange for the dynamically customized audio-visual content 470. In some embodiments, the amount of the one or more payments 490 may be defined using one or more processes in accordance with the teachings of the present disclosure, as described more fully below.
Of course, other environments may be implemented to perform the dynamic customization of audio-visual content in accordance with the present disclosure, and systems in accordance with the present disclosure are not necessarily limited to the specific implementations shown and described herein. Additional functions and operational aspects of systems in accordance with the teachings of the present disclosure are described more fully below.
Exemplary Processes for Dynamic Customization and Monetization of Audio-Visual Content
In the following description of exemplary processes for dynamic-customization of audio-visual content, reference will be made to specific components of the exemplary systems described above and shown in FIGS. 1 through 5. It will be appreciated, however, that such references are merely exemplary, and that the inventive processes are not limited to being implemented on the specific systems described above, but rather, the processes described herein may be implemented on a wide variety of suitable systems and in a wide variety of suitable environments.
FIG. 6 shows a flowchart of a process 600 for dynamic-customization of audio-visual content in accordance with an implementation of the present disclosure. In this implementation, the process 600 includes receiving at least one audio-visual core portion at 610, receiving at least one selection signal indicative of a viewer preference at 620, modifying the audio-visual core portion with at least one revised content portion in accordance with the at least one selection signal to create a dynamically customized audio-visual content at 630, outputting the dynamically-customized audio-visual content at 640, and receiving a consideration for the dynamically-customized audio-visual content at 650.
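The five operations of process 600 can be sketched end-to-end as a simple pipeline. This is a minimal, non-authoritative illustration: the dictionary representation of a core portion, the revised-content table, and all names are assumptions made for the example, not structures defined by the disclosure.

```python
# Hypothetical sketch of process 600: receive a core portion (610), apply a
# selection signal (620), substitute revised content portions (630), and
# return the dynamically customized content for output (640).
def dynamic_customization(core_portion, selection_signal, revised_content):
    """Replace aspects of the core portion per the viewer's selections."""
    customized = dict(core_portion)                      # 610: received core
    for aspect, preference in selection_signal.items():  # 620: selection signal
        if (aspect, preference) in revised_content:      # 630: revised portion
            customized[aspect] = revised_content[(aspect, preference)]
    return customized                                    # 640: output content

core = {"actor": "Actor A", "language": "English"}
revised = {("actor", "favorite"): "Actor B"}
print(dynamic_customization(core, {"actor": "favorite"}, revised))
```

Receiving consideration at 650 is omitted from the sketch; it is discussed separately below.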
It will be appreciated that in accordance with the present disclosure, an incoming audio-visual core portion may be dynamically customized in accordance with a viewer's preferences, thereby increasing the viewer's satisfaction. The viewer (e.g. viewer 140) may indicate preferences for actresses (and actors) 132, vehicles 134, depicted products (or props) 135, environmental aspects 136 (e.g. buildings, scenery, setting, background, lighting, etc.), language 138, or other suitable preferences. In further implementations, virtually any desired aspect of the incoming core portion 102 may be dynamically customized in accordance with the viewer's selections, preferences, or characteristics as implemented by the selection signals 144.
As shown in FIG. 7, in some implementations, receiving at least one audio-visual core portion at 610 may include receiving at least one audio-visual core portion at a dynamic customization system proximate to a viewer at 702 (e.g. dynamic customization system 100 shown in FIG. 1, a gaming console or other suitable processing device located in a viewer's home, etc.). In other implementations, receiving at least one audio-visual core portion at 610 may include receiving at least one audio-visual core portion at a dynamic customization service that provides a dynamically customized audio-visual content to a viewer at 704 (e.g. customization service provider 420 shown in FIG. 4). In still other implementations, receiving at least one audio-visual core portion at 610 may include generating at least one audio-visual core portion by a core content provider at 706 (e.g. core content provider 410 shown in FIG. 4). In additional implementations, receiving at least one audio-visual core portion at 610 may include providing at least one audio-visual core portion from a memory device by a core content provider at 708 (e.g. core content provider 510 shown in FIG. 5).
In still other implementations, receiving at least one selection signal indicative of a viewer preference at 620 may include receiving at least one selection signal indicative of a viewer preference at a dynamic customization system proximate to a viewer at 712 (e.g. dynamic customization system 100 shown in FIG. 1, an Xbox®, Playstation®, Wii®, personal computer, Mac®, or other suitable processing device located within a viewer's living space or sphere of influence, etc.). In further implementations, receiving at least one selection signal indicative of a viewer preference at 620 may include receiving at least one selection signal indicative of a viewer preference at a dynamic customization service that provides a dynamically customized audio-visual content to a viewer at 714 (e.g. customization service provider 420 shown in FIG. 4). In still further implementations, receiving at least one selection signal indicative of a viewer preference at 620 may include receiving at least one selection signal indicative of a viewer preference by a core content provider at 716 (e.g. core content provider 510 shown in FIG. 5).
As further shown in FIG. 7, in other implementations, modifying the audio-visual core portion with at least one revised content portion in accordance with the at least one selection signal to create a dynamically customized audio-visual content at 630 may include modifying the audio-visual core portion at a dynamic customization system proximate to a viewer at 722 (e.g. dynamic customization system 100 shown in FIG. 1). In further implementations, modifying the audio-visual core portion with at least one revised content portion in accordance with the at least one selection signal to create a dynamically customized audio-visual content at 630 may include modifying the audio-visual core portion at a dynamic customization service that provides a dynamically customized audio-visual content to a viewer at 724 (e.g. customization service provider 420 shown in FIG. 4). In still further implementations, modifying the audio-visual core portion with at least one revised content portion in accordance with the at least one selection signal to create a dynamically customized audio-visual content at 630 may include modifying the audio-visual core portion by a core content provider that provides the audio-visual core portion at 726 (e.g. core content provider 510 shown in FIG. 5).
In additional implementations, outputting the dynamically-customized audio-visual content at 640 may include outputting the dynamically-customized audio-visual content from a dynamic customization system proximate to a viewer at 732 (e.g. dynamic customization system 100 shown in FIG. 1, at the viewer's television set, at the viewer's viewing room, within the viewer's dwelling, etc.). In further implementations, outputting the dynamically-customized audio-visual content at 640 may include outputting the dynamically-customized audio-visual content from a dynamic customization service that provides the dynamically-customized audio-visual content to a viewer at 734 (e.g. customization service provider 420 shown in FIG. 4). In still further implementations, outputting the dynamically-customized audio-visual content at 640 may include outputting the dynamically-customized audio-visual content from a core content provider that provides the audio-visual core portion at 736 (e.g. core content provider 510 shown in FIG. 5).
As further shown in FIG. 7, in alternate implementations, receiving a consideration for the dynamically-customized audio-visual content at 650 may include receiving at least one of a payment, a promise to pay, a promise to perform a deed, or a grant of a right at 741. For example, in some implementations, the payment may be a one-time payment, a monthly subscription payment, a use-based or on-demand type of payment, or any other suitable payment. Similarly, in some implementations, the promise to pay may be a contractual commitment to provide future payment (or payments) based on amount or frequency of usage, or any other suitable terms. Further, the promise to perform a deed may include a promise to send payment, a promise to enable access to private information, a promise to allow data gathering regarding viewing habits or preferences, or any other suitable promises. And the grant of a right may include a grant of access to gather personal data, a grant to share data gathered, a grant to perform market testing or market analysis, or any other suitable grant of one or more rights. Of course, these examples are merely exemplary, and the consideration received at 650 may be any suitable consideration as that term is generally understood in accordance with the principles of contracts and contract law, and as described more fully below.
As further shown in FIG. 7, in some implementations, receiving a consideration for the dynamically-customized audio-visual content at 650 may include receiving one or more payments at a dynamic customization service that provides a dynamically customized audio-visual content to a viewer at 742 (e.g. customization service provider 420 shown in FIG. 4). In further alternate implementations, receiving a consideration for the dynamically-customized audio-visual content at 650 may include receiving one or more payments by a core content provider that provides the audio-visual core portion at 744 (e.g. core content provider 510 shown in FIG. 5). Finally, in additional embodiments, receiving a consideration for the dynamically-customized audio-visual content at 650 may include receiving one or more payments from a viewer of the dynamically-customized audio-visual content at 746 (e.g. viewer 140, viewer 440, etc.).
A wide variety of different types of input may serve as the audio-visual core portion. For example, as shown in FIG. 8, in some implementations, receiving at least one audio-visual core portion at 610 may include receiving a television broadcast at 802 (e.g. conventional wireless television broadcast, cable television broadcast, satellite television broadcast, etc.). In further implementations, receiving at least one audio-visual core portion at 610 may include receiving an audio-visual data stream at 804 (e.g. streaming audio-visual content via Internet, audio-visual data stream via LAN, etc.). In still further implementations, receiving at least one audio-visual core portion at 610 may include receiving at least one audio core portion and receiving at least one visual core portion at 806 (e.g. receiving an audio signal via a wireless connection and receiving a video data stream via a cable or vice versa, receiving an audio signal via a first wireless connection and receiving a video signal via a second wireless connection, etc.). In still further embodiments, receiving at least one audio-visual core portion at 610 may include receiving an internally-generated audio-visual core portion at 808 (e.g. receiving an audio-visual core portion from an internal media player, generating an audio-visual core portion using an internally-executing software routine, etc.). In additional implementations, receiving at least one audio-visual core portion at 610 may include receiving a virtual reality portion at 810 (e.g. receiving a core portion that provides a structure of a virtual reality, generating a virtual reality core portion using an internally-executing software routine or local processing unit, etc.). In still other implementations, receiving at least one audio-visual core portion at 610 may include receiving a video game data stream portion at 812 (e.g. receiving a video game signal as a data stream, wherein a server determines what is displayed on a viewer's display device based on the at least one selection signal, etc.).
As further shown in FIG. 8, a variety of different selection signals may be received in accordance with the present disclosure, and a variety of different payment schemes may be devised based on the different selection signal varieties. For example, in some implementations, receiving at least one selection signal indicative of a viewer preference at 620 may include receiving at least one selection signal generated by a user input device at 820 (e.g. receiving a signal generated by a keyboard, a joystick, a microphone, a touch screen, etc.). In further implementations, receiving at least one selection signal indicative of a viewer preference at 620 may include receiving at least one selection signal based on a pre-determined default value at 822 (e.g. receiving one or more signals based on a user's previous selections stored in memory, or a pre-defined profile for a user stored in memory, etc.).
In other implementations, receiving at least one selection signal indicative of a viewer preference at 620 may include sensing one or more viewers present within a viewing area and determining at least one selection signal based on the one or more viewers sensed within the viewing area at 824 (e.g. sensing a parent and a child within a television viewing area, and determining a first selection signal based on the parent and a second selection signal based on the child, sensing a female and a male within a television viewing area, and determining a first selection signal based on the female and a second selection signal based on the male, etc.). In still other implementations, receiving at least one selection signal indicative of a viewer preference at 620 may include receiving at least one supplemental signal from an electronic device associated with a viewer (e.g. a cell phone, personal data assistant, laptop computer, desktop computer, smart phone, tablet, Apple iPhone, Apple iPad, Microsoft Surface, Kindle Fire, etc.) and determining at least one selection signal based on the at least one supplemental signal at 826.
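As a hedged illustration of block 824 — sensing viewers in the viewing area and determining one selection signal per sensed viewer — the following sketch derives a per-viewer content-rating signal. The viewer attributes, the age threshold, and the rating values are all invented for the example.

```python
# Hypothetical sketch of block 824: one selection signal per sensed viewer.
# A child in the viewing area yields a more restrictive rating signal.
def signals_from_viewers(sensed_viewers):
    """Derive one selection signal per viewer sensed in the viewing area."""
    signals = []
    for viewer in sensed_viewers:
        if viewer.get("age", 18) < 13:
            signals.append({"rating_limit": "G"})  # child sensed
        else:
            signals.append({"rating_limit": "R"})  # adult sensed
    return signals

print(signals_from_viewers([{"age": 40}, {"age": 8}]))
```

When the resulting signals conflict (as here, "R" versus "G"), a downstream arbitration step would reconcile them, as discussed with respect to FIG. 10 below.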
It will be appreciated that various implementations of receiving a consideration for the dynamically-customized audio-visual content at 650 may be implemented in accordance with the various implementations of receiving at least one selection signal indicative of a viewer preference at 620. For example, as shown in FIG. 8, in some implementations, receiving a consideration for the dynamically-customized audio-visual content at 650 may include receiving at least a portion of a consideration based at least partially on the receiving at least one selection signal generated by the user input device at 830 (e.g. receiving a payment at least partially based on receiving a signal generated by a keyboard, a joystick, a microphone, a touch screen, etc.). In other implementations, receiving a consideration for the dynamically-customized audio-visual content at 650 may include receiving at least a portion of a consideration based at least partially on the receiving at least one selection signal based on a pre-determined default value at 832 (e.g. receiving a payment at least partially based on receiving one or more signals based on a user's previous selections stored in memory, or a pre-defined profile for a user stored in memory, etc.).
In other implementations, receiving a consideration for the dynamically-customized audio-visual content at 650 may include receiving at least a portion of a consideration based at least partially on at least one of sensing one or more viewers present within a viewing area or determining at least one selection signal based on the one or more viewers sensed within the viewing area at 834 (e.g. receiving a payment at least partially based on sensing a parent and a child within a television viewing area and/or determining a first selection signal based on the parent and a second selection signal based on the child, receiving a payment at least partially based on sensing a female and a male within a television viewing area, and/or determining a first selection signal based on the female and a second selection signal based on the male, etc.). In still other implementations, receiving a consideration for the dynamically-customized audio-visual content at 650 may include receiving at least a portion of a consideration based at least partially on at least one of receiving at least one supplemental signal from an electronic device associated with a viewer or determining at least one selection signal based on the at least one supplemental signal at 836 (e.g. receiving a payment based at least partially on receiving at least one supplemental signal from a cell phone, personal data assistant, laptop computer, desktop computer, smart phone, tablet, Apple iPhone, Apple iPad, Microsoft Surface, Kindle Fire, etc. associated with a viewer, and/or determining at least one selection signal based on such a supplemental signal).
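One way to read blocks 830 through 836 together is that the consideration charged may depend in part on which mechanisms supplied the selection signals. The sketch below is an invented pricing model under that assumption; the per-source fee values are not from the disclosure.

```python
# Hypothetical sketch: consideration computed partly from the sources of the
# selection signals used (blocks 830-836). All fee values are illustrative.
SOURCE_FEES = {
    "user_input": 0.50,     # 830: signal from keyboard/joystick/touch screen
    "default": 0.10,        # 832: signal from a pre-determined default value
    "sensed_viewer": 0.25,  # 834: signal from viewers sensed in viewing area
    "supplemental": 0.30,   # 836: supplemental signal from viewer's device
}

def consideration(signal_sources, base_fee=1.00):
    """Total payment = base fee plus a surcharge per signal source used."""
    return round(base_fee + sum(SOURCE_FEES.get(s, 0.0)
                                for s in signal_sources), 2)

print(consideration(["user_input", "sensed_viewer"]))  # 1.75
```

A real implementation could equally make such payments flow between viewer, customization service provider, and core content provider (payments 480, 482, 490 above), which this sketch does not model.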
As shown in FIG. 9, in other implementations, receiving at least one selection signal indicative of a viewer preference at 620 may include scanning an electronic device associated with a viewer (e.g. a cell phone, personal data assistant, laptop computer, desktop computer, smart phone, tablet, Apple iPhone®, Apple iPad®, Microsoft Surface®, Kindle Fire®, etc.) and determining at least one selection signal based on the scanning at 902. And in other implementations, receiving at least one selection signal indicative of a viewer preference at 620 may include querying an electronic device associated with a viewer (e.g. a cell phone, personal data assistant, laptop computer, desktop computer, smart phone, tablet, Apple iPhone®, Apple iPad®, Microsoft Surface®, Kindle Fire®, etc.) and determining at least one selection signal based on the querying at 906.
As noted above, various implementations of receiving a consideration for the dynamically-customized audio-visual content at 650 may be implemented in accordance with the various implementations of receiving at least one selection signal indicative of a viewer preference at 620. For example, as shown in FIG. 9, in some implementations, receiving a consideration for the dynamically-customized audio-visual content at 650 may include receiving at least a portion of a consideration based at least partially on at least one of scanning an electronic device associated with a viewer or determining at least one selection signal based on the scanning at 912 (e.g. receiving a payment based at least partially on scanning a viewer's cell phone, personal data assistant, laptop computer, desktop computer, smart phone, tablet, Apple iPhone®, Apple iPad®, Microsoft Surface®, Kindle Fire®, etc., and/or determining a selection signal based on the scanning). And in other implementations, receiving a consideration for the dynamically-customized audio-visual content at 650 may include receiving at least a portion of a consideration based at least partially on at least one of querying an electronic device associated with a viewer or determining at least one selection signal based on the querying at 914 (e.g. receiving a payment based at least partially on a querying of a viewer's cell phone, personal data assistant, laptop computer, desktop computer, smart phone, tablet, Apple iPhone®, Apple iPad®, Microsoft Surface®, Kindle Fire®, etc., and/or determining a selection signal based on the querying).
In some instances, one or more incoming signals may conflict with one or more other incoming signals. Such conflicts may be resolved in a variety of suitable ways. For example, as shown in FIG. 10, in some implementations, receiving at least one selection signal indicative of a viewer preference at 620 may include receiving at least two selection signals, and arbitrating between at least two conflicting selection signals at 1002 (e.g. receiving a first selection signal indicating a desire to view R-rated subject matter, and a second selection signal indicating that a child is in the viewing area, and arbitrating between the first and second selection signals such that the R-rated subject matter is not shown). In at least some implementations, receiving at least one selection signal indicative of a viewer preference at 620 may include receiving at least two selection signals, and between at least two conflicting selection signals, determining which signal to apply based on a pre-determined ranking at 1004 (e.g. receiving a first selection signal from a manual input device to view a movie in English and a second selection signal from a scanning of a laptop computer indicating a preference for French, and determining to apply the first selection signal based on a pre-determined ranking that gives higher ranking to manually input signals over signals determined by scanning; receiving a first selection signal from a parent's electronic device and a second selection signal from a child's electronic device, and determining to apply the first selection signal based on a ranking that gives priority to signals from the parent's electronic device over the child's electronic device, etc.).
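By way of illustration only, the pre-determined ranking at 1004 might be sketched as follows; the source names and the ranking itself are hypothetical assumptions:

```python
# Hypothetical sketch of arbitration by pre-determined ranking (1004):
# when selection signals conflict, apply the one whose source ranks
# highest. Source names and their ordering are illustrative assumptions.
SOURCE_RANKING = ["manual_input", "parent_device", "child_device", "device_scan"]

def arbitrate_by_ranking(signals):
    """signals: list of dicts with 'source' and 'value' keys.
    Returns the signal from the highest-ranked (lowest-index) source."""
    return min(signals, key=lambda s: SOURCE_RANKING.index(s["source"]))

conflict = [
    {"source": "device_scan", "value": "French"},
    {"source": "manual_input", "value": "English"},
]
print(arbitrate_by_ranking(conflict)["value"])  # "English"
```

Here the manually input language preference prevails over the preference determined by scanning, matching the example above.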
In further implementations, receiving at least one selection signal indicative of a viewer preference at 620 may include receiving at least two selection signals, and between at least two conflicting selection signals, determining which signal to apply based on one or more rules at 1006 (e.g. receiving a first selection signal from a manual input device indicating a desire to view R-rated content, and a second selection signal from a scanning of a viewing area indicating a child in a viewing area, and determining not to display the R-rated content based on a rule that indicates that R-rated content will not be displayed when any child is present; receiving a first selection signal from a manual input device indicating a desire to view a first actor, and a second selection signal from an Android phone indicating a desire to view a second actor, and determining to apply the first selection signal based on a rule that gives priority to a manual input over an input determined from querying an electronic device, etc.). In still other implementations, receiving at least one selection signal indicative of a viewer preference at 620 may include receiving a selection signal, and determining whether to apply the selection signal based on an authorization level at 1008 (e.g. receiving a selection signal from a scanning of a viewer's electronic device indicating a desire to view R-rated content, and determining not to display the R-rated content based on a lack of authorization by an owner of the electronic device).
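By way of illustration only, the rule-based resolution at 1006 and the authorization check at 1008 might be sketched as follows; the specific rule and authorization model are hypothetical assumptions:

```python
# Hypothetical sketch of rule-based conflict resolution (1006) and an
# authorization check (1008). The rule and fallback rating are
# illustrative assumptions, not prescribed by the specification.
def resolve_rating(requested, child_present):
    """Rule: R-rated content is never shown when any child is present;
    fall back to a lower rating instead."""
    if child_present and requested == "R":
        return "PG-13"
    return requested

def apply_if_authorized(signal, authorized_by_owner):
    """Apply a scanned-device signal only if the device owner has
    authorized its use; otherwise discard it."""
    return signal if authorized_by_owner else None

print(resolve_rating("R", child_present=True))                          # "PG-13"
print(apply_if_authorized({"rating": "R"}, authorized_by_owner=False))  # None
```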
Again, it will be appreciated that various implementations of receiving a consideration for the dynamically-customized audio-visual content at 650 may be implemented in accordance with the various implementations of receiving at least one selection signal indicative of a viewer preference at 620. For example, as further shown in FIG. 10, in some implementations, receiving a consideration for the dynamically-customized audio-visual content at 650 may include receiving at least a portion of a consideration based at least partially on at least one of receiving at least two selection signals or arbitrating between at least two conflicting selection signals at 1012 (e.g. receiving a payment based at least partially on receiving and/or arbitrating between the first and second selection signals that conflict with respect to a preferred maturity level of content, a preferred language of content, a preferred setting of content, etc.). In at least some implementations, receiving a consideration for the dynamically-customized audio-visual content at 650 may include receiving at least a portion of a consideration based at least partially on at least one of receiving at least two selection signals or, between at least two conflicting selection signals, determining which signal to apply based on a pre-determined ranking at 1014 (e.g. receiving a payment based at least partially on receiving and/or determining which of two conflicting signals to apply based on a ranking hierarchy, etc.).
In further implementations, receiving a consideration for the dynamically-customized audio-visual content at 650 may include receiving at least a portion of a consideration based at least partially on at least one of receiving at least two selection signals or, between at least two conflicting selection signals, determining which signal to apply based on one or more rules at 1016 (e.g. receiving a payment based at least partially on receiving first and second selection signals that conflict, and/or determining which to apply based on one or more rules regarding a content maturity level, a language preference, a content violence level, etc.). In still other implementations, receiving a consideration for the dynamically-customized audio-visual content at 650 may include receiving at least a portion of a consideration based at least partially on at least one of receiving a selection signal or determining whether to apply the selection signal based on an authorization level at 1018 (e.g. receiving a payment based at least partially on receiving a selection signal from a scanning of a viewer's electronic device indicating a desire to view R-rated content and determining not to display the R-rated content based on a lack of authorization by an owner of the electronic device, etc.).
As noted above, a wide variety of aspects of audio-visual core portions may be dynamically customized in accordance with the preferences of a viewer. For example, as shown in FIG. 11, in at least some implementations, modifying the audio-visual core portion with at least one revised content portion in accordance with the at least one selection signal to create a dynamically customized audio-visual content at 630 may include replacing at least one actor of the audio-visual core portion with at least one replacement actor at 1102 (e.g. replacing the actor Brad Pitt in the movie Troy with replacement actor Mel Gibson, replacing the actor Meryl Streep in the movie The Manchurian Candidate with replacement actor Jessica Alba, the term "actor" being used herein in a gender-neutral manner to include both males and females, etc.).
In further implementations, modifying the audio-visual core portion with at least one revised content portion in accordance with the at least one selection signal to create a dynamically customized audio-visual content at 630 may include replacing one or more of a facial appearance, a voice, a body appearance, or an apparel with a corresponding one or more of a replacement facial appearance, a replacement voice, a replacement body appearance, or a replacement apparel at 1104 (e.g. replacing a facial appearance and a voice of the actor Brad Pitt in the movie Troy with a replacement facial appearance of actor Mel Gibson and a replacement voice of actor Chris Rock, replacing a body appearance and an apparel of actor Meryl Streep in the movie The Manchurian Candidate with a replacement body appearance of actor Jessica Alba and a replacement apparel based on a browsing history of online clothing shopping recently viewed by the viewer as indicated by supplemental signals from the viewer's laptop computer, etc.).
As further shown in FIG. 11, in still other implementations, modifying the audio-visual core portion with at least one revised content portion in accordance with the at least one selection signal to create a dynamically customized audio-visual content at 630 may include replacing at least one consumer product depicted in the audio-visual core portion with at least one replacement consumer product at 1106 (e.g. replacing a can of Coke® held by an actor in a television sitcom with a can of Dr. Pepper®, replacing a hamburger eaten by a character in a movie with a taco, replacing a Gibson® guitar played by a character in a podcast with a Fender® guitar, etc.). In further implementations, replacing at least one consumer product depicted in the audio-visual core portion with at least one replacement consumer product at 1106 may include replacing at least one of a beverage product, a food product, a vehicle, an article of clothing, an article of jewelry, a musical instrument, an electronic device, a household appliance, an article of furniture, an artwork, an office equipment, or an article of manufacture at 1108.
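By way of illustration only, the consumer-product replacement at 1106 might be sketched as a replacement table applied to items detected in the audio-visual core portion; the item identifiers and the table itself are hypothetical assumptions:

```python
# Hypothetical sketch of consumer-product replacement (1106): a lookup
# table maps each depicted item to its replacement; items without an
# entry are left unchanged. All item names are illustrative assumptions.
REPLACEMENTS = {
    "coke_can": "dr_pepper_can",
    "hamburger": "taco",
    "gibson_guitar": "fender_guitar",
}

def customize_items(depicted_items):
    """Replace each depicted item that has a configured replacement."""
    return [REPLACEMENTS.get(item, item) for item in depicted_items]

scene_items = ["coke_can", "coffee_mug", "gibson_guitar"]
print(customize_items(scene_items))  # ['dr_pepper_can', 'coffee_mug', 'fender_guitar']
```

In practice such a table might itself be selected per viewer, in accordance with the at least one selection signal.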
It will be appreciated that various implementations of receiving a consideration for the dynamically-customized audio-visual content at 650 may be implemented in accordance with the various implementations of modifying the audio-visual core portion with at least one revised content portion in accordance with the at least one selection signal to create a dynamically customized audio-visual content at 630. For example, as further shown in FIG. 11, in some implementations, receiving a consideration for the dynamically-customized audio-visual content at 650 may include receiving at least a portion of a consideration based at least partially on replacing at least one actor of the audio-visual core portion with at least one replacement actor at 1122 (e.g. receiving a payment based at least partially on replacing an actor with a replacement actor, receiving a relatively higher payment based on replacing a lower-popularity actor with a higher-popularity actor, etc.).
In further implementations, receiving a consideration for the dynamically-customized audio-visual content at 650 may include receiving at least a portion of a consideration based at least partially on replacing one or more of a facial appearance, a voice, a body appearance, or an apparel with a corresponding one or more of a replacement facial appearance, a replacement voice, a replacement body appearance, or a replacement apparel at 1124 (e.g. receiving a payment based on replacing a facial appearance and a voice of a first actor with a second actor, receiving a relatively higher payment based at least partially on replacing a first body appearance of a lower-popularity actress with a body appearance of a higher-popularity actress, etc.).
In yet other implementations, receiving a consideration for the dynamically-customized audio-visual content at 650 may include receiving at least a portion of a consideration based at least partially on replacing at least one consumer product depicted in the audio-visual core portion with at least one replacement consumer product at 1126 (e.g. receiving a payment based at least partially on replacing a can of Coke® held by an actor in a television sitcom with a can of Dr. Pepper®, receiving a payment based at least partially on replacing a hamburger eaten by a character in a movie with a taco, receiving a payment based at least partially on replacing a Gibson® guitar played by a character in a podcast with a Fender® guitar, etc.). In further implementations, receiving a consideration for the dynamically-customized audio-visual content at 650 may include receiving at least a portion of a consideration based at least partially on replacing at least one of a beverage product, a food product, a vehicle, an article of clothing, an article of jewelry, a musical instrument, an electronic device, a household appliance, an article of furniture, an artwork, an office equipment, or an article of manufacture at 1128.
Referring now to FIG. 12, in additional implementations, modifying the audio-visual core portion with at least one revised content portion in accordance with the at least one selection signal to create a dynamically customized audio-visual content at 630 may include replacing at least one of a setting aspect, an environmental aspect, or a background aspect of the audio-visual core portion with a corresponding at least one of a replacement setting aspect, a replacement environmental aspect, or a replacement background aspect at 1202. For example, one or more scenes from a movie may be set in a different location (e.g. scenes from Sleepless in Seattle may be set in Cleveland, or a background with the Golden Gate bridge may be replaced with the Tower Bridge over the Thames River, etc.). Alternately, a weather condition may be replaced with a different weather condition (e.g. a surfing scene from Baywatch may take place in a snowstorm instead of a sunny day, etc.), or buildings in a background may be replaced with mountains or open countryside.
In some implementations, replacing at least one of a setting aspect, an environmental aspect, or a background aspect of the audio-visual core portion with a corresponding at least one of a replacement setting aspect, a replacement environmental aspect, or a replacement background aspect at 1202 may include replacing at least one of a city in which at least one scene is set, a country in which at least one scene is set, a weather condition in which at least one scene is set, a time of day in which at least one scene is set, or a landscape in which at least one scene is set at 1204.
As further shown in FIG. 12, in other implementations, modifying the audio-visual core portion with at least one revised content portion in accordance with the at least one selection signal to create a dynamically customized audio-visual content at 630 may include replacing at least one animated character with at least one replacement animated character at 1206 (e.g. replacing a cartoon Snow White from Snow White and the Seven Dwarfs with a cartoon Alice from Alice in Wonderland, replacing an animated elf with an animated dwarf, etc.).
Again, various implementations of receiving a consideration for the dynamically-customized audio-visual content at 650 may be implemented in accordance with the various implementations of modifying the audio-visual core portion with at least one revised content portion in accordance with the at least one selection signal to create a dynamically customized audio-visual content at 630. For example, as further shown in FIG. 12, in some implementations, receiving a consideration for the dynamically-customized audio-visual content at 650 may include receiving at least a portion of a consideration based at least partially on replacing at least one of a setting aspect, an environmental aspect, or a background aspect of the audio-visual core portion with a corresponding at least one of a replacement setting aspect, a replacement environmental aspect, or a replacement background aspect at 1212 (e.g. receiving a payment based at least partially on replacing scenes set in a first building setting with scenes set in a second building setting, etc.). In further implementations, receiving a consideration for the dynamically-customized audio-visual content at 650 may include receiving at least a portion of a consideration based at least partially on replacing at least one of a city in which at least one scene is set, a country in which at least one scene is set, a weather condition in which at least one scene is set, a time of day in which at least one scene is set, or a landscape in which at least one scene is set at 1214.
In still other implementations, receiving a consideration for the dynamically-customized audio-visual content at 650 may include receiving at least a portion of a consideration based at least partially on replacing at least one animated character with at least one replacement animated character at 1216 (e.g. receiving a payment based at least partially on replacing a cartoon Snow White with a cartoon Alice, receiving a payment based at least partially on replacing a cartoon Cartman with a cartoon Kenny, etc.).
With reference to FIG. 13, in further implementations, modifying the audio-visual core portion with at least one revised content portion in accordance with the at least one selection signal to create a dynamically customized audio-visual content at 630 may include replacing at least one virtual character with at least one replacement virtual character at 1302 (e.g. replacing a virtual warrior with a virtual wizard, etc.). In still other implementations, modifying the audio-visual core portion with at least one revised content portion in accordance with the at least one selection signal to create a dynamically customized audio-visual content at 630 may include replacing at least one industrial product depicted in the audio-visual core portion with at least one replacement industrial product at 1304 (e.g. replacing a nameplate on a milling machine from "Cincinnati" to "Bridgeport" in a factory scene, replacing a name of a shipping line and/or the colors on a container ship from "Maersk" to "Evergreen," etc.).
In still further implementations, modifying the audio-visual core portion with at least one revised content portion in accordance with the at least one selection signal to create a dynamically customized audio-visual content at 630 may include replacing at least one name brand depicted in the audio-visual core portion with at least one replacement name brand at 1306 (e.g. replacing a leather label on a character's pants from "Levis" to "J Brand," replacing an Izod alligator on a character's shirt with a Ralph Lauren horse logo, replacing a shoe logo from "Gucci" to "Calvin Klein," etc.). In yet other implementations, modifying the audio-visual core portion with at least one revised content portion in accordance with the at least one selection signal to create a dynamically customized audio-visual content at 630 may include replacing at least one trade dress depicted in the audio-visual core portion with at least one replacement trade dress at 1308 (e.g. replacing uniforms, packaging, colors, signs, logos, and any other items associated with a trade dress of "McDonald's" restaurant with corresponding trade dress items associated with "Burger King" restaurant, replacing brown trucks and uniforms associated with the "UPS" delivery company with red and yellow trucks and uniforms associated with the "DHL Express" delivery company, replacing helmets and jerseys associated with the Minnesota Vikings with replacement helmets and jerseys associated with the Seattle Seahawks, etc.).
Again, various implementations of receiving a consideration for the dynamically-customized audio-visual content at 650 may be implemented in accordance with the various implementations of modifying the audio-visual core portion with at least one revised content portion in accordance with the at least one selection signal to create a dynamically customized audio-visual content at 630. For example, as further shown in FIG. 13, in some implementations, receiving a consideration for the dynamically-customized audio-visual content at 650 may include receiving at least a portion of a consideration based at least partially on replacing at least one virtual character with at least one replacement virtual character at 1312 (e.g. receiving a payment based on replacing a virtual warrior with a virtual wizard, etc.). In still other implementations, receiving a consideration for the dynamically-customized audio-visual content at 650 may include receiving at least a portion of a consideration based at least partially on replacing at least one industrial product depicted in the audio-visual core portion with at least one replacement industrial product at 1314 (e.g. receiving a payment based on replacing a nameplate on a milling machine from "Cincinnati" to "Bridgeport" in a factory scene, replacing a name of a shipping line and/or the colors on a container ship from "Maersk" to "Evergreen," etc.).
In other implementations, receiving a consideration for the dynamically-customized audio-visual content at 650 may include receiving at least a portion of a consideration based at least partially on replacing at least one name brand depicted in the audio-visual core portion with at least one replacement name brand at 1316 (e.g. receiving a payment based at least partially on replacing a leather label on a character's pants, replacing a trademark on a character's shirt, or replacing a logo on a character's computer, etc.). In yet other implementations, receiving a consideration for the dynamically-customized audio-visual content at 650 may include receiving at least a portion of a consideration based at least partially on replacing at least one trade dress depicted in the audio-visual core portion with at least one replacement trade dress at 1318 (e.g. receiving payment based at least partially on replacing uniforms, packaging, colors, signs, logos, and any other items associated with a trade dress of "McDonald's" restaurant with corresponding trade dress items associated with "Burger King" restaurant, receiving payment based on replacing helmets and jerseys associated with the Dallas Cowboys with those of the Detroit Lions so a viewer may watch a depiction of the Lions winning a Super Bowl, etc.).
Additional possible implementations of modifying the audio-visual core portion with at least one revised content portion in accordance with the at least one selection signal to create a dynamically customized audio-visual content at 630 are shown in FIG. 14. For example, in some implementations, modifying the audio-visual core portion with at least one revised content portion in accordance with the at least one selection signal to create a dynamically customized audio-visual content at 630 may include replacing at least a portion of dialogue of the audio-visual core portion with a revised dialogue portion at 1402. For example, based on the at least one selection signal indicative of a viewer selection (e.g. a viewer selection indicating a desire for no profanity, or based on automatic detection using a sensor of a child entering a viewing area, etc.) at 620, a portion of dialogue of a movie that contains profanity or that may otherwise be offensive to the viewer is replaced with a replacement portion of dialogue that is not offensive to the viewer (e.g. a dialogue of a movie is modified from an R-rated dialogue to a lower-rated dialogue, such as PG-13-rated dialogue or a G-rated dialogue, such as “Frankly, my dear, I don't give a damn” being replaced with “Frankly, my dear, I don't really care”, a dialogue that is threatening or violent may be replaced with a less-threatening or less-violent dialogue, etc.).
In some implementations, modifying the audio-visual core portion with at least one revised content portion in accordance with the at least one selection signal to create a dynamically customized audio-visual content at 630 may include replacing one or more spoken portions with one or more replacement spoken portions (e.g. replacing a profane word, such as "damn," with a non-profane word, such as "darn," replacing a first laughter, such as a "tee hee hee," with a second laughter, such as a "ha ha ha," etc.) and modifying one or more facial movements corresponding to the one or more spoken portions with one or more replacement facial movements corresponding to the one or more replacement spoken portions (e.g. replacing one or more lip movements corresponding with the profane word with one or more replacement lip movements corresponding with the non-profane word, replacing lip and eye movements corresponding with the first laughter with replacement lip and eye movements corresponding with the second laughter, etc.) at 1404. Accordingly, unlike conventional editing practices that change spoken words but leave facial movements unchanged, in accordance with at least some implementations, by replacing both the audible portions and the corresponding facial movements, it is not apparent to a viewer that any changes have been made to the dialogue of the audio-visual core portion.
As further shown in FIG. 14, in further implementations, replacing one or more spoken portions with one or more replacement spoken portions and modifying one or more facial movements corresponding to the one or more spoken portions with one or more replacement facial movements corresponding to the one or more replacement spoken portions at 1404 may include replacing one or more words spoken in a first language with one or more replacement words spoken in a second language (e.g. replacing “no” with “nyet,” replacing “yes” with “oui,” etc.), and modifying one or more facial movements corresponding to the one or more words spoken in the first language with one or more replacement facial movements corresponding to the one or more words spoken in the second language (e.g. replacing facial movements corresponding to “no” with replacement facial movements corresponding to “nyet,” replacing facial movements corresponding to “yes” with replacement facial movements corresponding to “oui,” etc.) at 1406. Again, in this way, it will not be apparent to a viewer that an actor was originally speaking a first language but the movie has been dubbed with a second language, and instead, it will appear to the viewer that the actor was originally speaking the second language.
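By way of illustration only, the paired replacement at 1404 and 1406 might be sketched as follows, replacing each spoken segment together with its matching facial-movement clip so that audio and video remain consistent; the segment representation and all identifiers are hypothetical assumptions:

```python
# Hypothetical sketch of paired audio/facial replacement (1404, 1406):
# each spoken segment carries a matching facial-movement clip, and both
# are replaced together. All identifiers are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Segment:
    word: str        # spoken word in this segment
    face_clip: str   # identifier of the matching facial-movement clip

# Replacement pairs: word -> (replacement word, replacement facial clip)
DIALOGUE_REPLACEMENTS = {
    "damn": ("darn", "face_darn"),
    "no": ("nyet", "face_nyet"),
}

def replace_dialogue(segments):
    out = []
    for seg in segments:
        if seg.word in DIALOGUE_REPLACEMENTS:
            word, clip = DIALOGUE_REPLACEMENTS[seg.word]
            out.append(Segment(word, clip))  # audio AND video replaced together
        else:
            out.append(seg)
    return out

track = [Segment("no", "face_no"), Segment("way", "face_way")]
print([(s.word, s.face_clip) for s in replace_dialogue(track)])
```

Because the facial clip is replaced whenever the word is, the sketch never produces the mismatch of conventional dubbing, where new audio plays over the original lip movements.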
As previously noted, various implementations of receiving a consideration for the dynamically-customized audio-visual content at 650 may be implemented in accordance with the various implementations of modifying the audio-visual core portion with at least one revised content portion in accordance with the at least one selection signal to create a dynamically customized audio-visual content at 630. For example, as further shown in FIG. 14, in some implementations, receiving a consideration for the dynamically-customized audio-visual content at 650 may include receiving at least a portion of a consideration based at least partially on replacing at least a portion of dialogue of the audio-visual core portion with a revised dialogue portion at 1412 (e.g. receiving payment based on modifying an audio-visual content to accommodate a viewer selection indicating a desire for no profanity, or based on automatic detection using a sensor of a child entering a viewing area, etc.).
In some implementations, receiving a consideration for the dynamically-customized audio-visual content at 650 may include receiving at least a portion of a consideration based at least partially on at least one of replacing one or more spoken portions with one or more replacement spoken portions or modifying one or more facial movements corresponding to the one or more spoken portions with one or more replacement facial movements corresponding to the one or more replacement spoken portions at 1414 (e.g. receiving payment for replacing a profane word with a non-profane word, and replacing one or more lip movements corresponding with the profane word with one or more replacement lip movements corresponding with the non-profane word, etc.). In further implementations, receiving at least a portion of a consideration based at least partially on at least one of replacing one or more spoken portions with one or more replacement spoken portions or modifying one or more facial movements corresponding to the one or more spoken portions with one or more replacement facial movements corresponding to the one or more replacement spoken portions at 1414 may include receiving at least a portion of a consideration based at least partially on at least one of replacing one or more words spoken in a first language with one or more replacement words spoken in a second language, or modifying one or more facial movements corresponding to the one or more words spoken in the first language with one or more replacement facial movements corresponding to the one or more words spoken in the second language at 1416 (e.g. receiving payment for replacing sounds and facial movements corresponding to Japanese speech with those corresponding to English speech, receiving payment for replacing sounds and facial movements corresponding to English speech with those corresponding to Chinese speech, etc.).
With reference to FIG. 15, in some implementations, modifying the audio-visual core portion with at least one revised content portion in accordance with the at least one selection signal to create a dynamically customized audio-visual content at 630 may include replacing one or more audible portions with one or more replacement audible portions (e.g. replacing a sound of a hand clap with a sound of snapping fingers, replacing a sound of a cough with a sound of a sneeze, replacing the sound of a piano with the sound of a violin, etc.) and modifying one or more body movements corresponding to the one or more audible portions with one or more replacement body movements corresponding to the one or more replacement audible portions (e.g. replacing two hands striking with two fingers snapping, replacing facial movements associated with a cough with facial movements associated with a sneeze, replacing visual components associated with a piano being played with replacement visual components associated with a violin being played, etc.) at 1502. Accordingly, by replacing both the audible and visual portions, it may not be apparent to the viewer that any changes have been made to the audio-visual core portion.
In other implementations, modifying the audio-visual core portion with at least one revised content portion in accordance with the at least one selection signal to create a dynamically customized audio-visual content at 630 may include replacing one or more background noises with one or more replacement background noises (e.g. replacing a sound of a bird singing with a sound of a dog barking, replacing a sound of an avalanche with a sound of an erupting volcano, etc.) at 1504.
In further implementations, modifying the audio-visual core portion with at least one revised content portion in accordance with the at least one selection signal to create a dynamically customized audio-visual content at 630 may include replacing one or more background noises with one or more replacement background noises (e.g. replacing a sound of a lion roaring with a sound of an elephant trumpeting, replacing a sound of an avalanche with a sound of an erupting volcano, etc.), and replacing one or more background visual components with one or more replacement background visual components (e.g. replacing a visual image of a lion roaring with a visual image of an elephant trumpeting, replacing a visual depiction of an avalanche with a visual depiction of an erupting volcano, etc.) at 1506.
With continued reference to FIG. 15, in some implementations, receiving a consideration for the dynamically-customized audio-visual content at 650 may include receiving at least a portion of a consideration based at least partially on at least one of replacing one or more audible portions with one or more replacement audible portions, or modifying one or more body movements corresponding to the one or more audible portions with one or more replacement body movements corresponding to the one or more replacement audible portions at 1512 (e.g. receiving payment based on replacing sounds and body movements associated with a hand clap with replacement sounds and body movements associated with snapping fingers, receiving payment based on replacing sounds and body movements associated with a cough with replacement sounds and movements associated with a sneeze, etc.).
In other implementations, receiving a consideration for the dynamically-customized audio-visual content at 650 may include receiving at least a portion of a consideration based at least partially on replacing one or more background noises with one or more replacement background noises at 1514 (e.g. receiving payment based on replacing jungle sounds with urban sounds, receiving payment based on replacing crowd noise with sounds of ocean surf, etc.). In further implementations, receiving a consideration for the dynamically-customized audio-visual content at 650 may include receiving at least a portion of a consideration based at least partially on at least one of replacing one or more background noises with one or more replacement background noises, or replacing one or more background visual components with one or more replacement background visual components at 1516 (e.g. receiving payment based on replacing sounds and images of a lion roaring with replacement sounds and images of an elephant trumpeting, receiving payment based on replacing sounds and video of an avalanche with replacement sounds and video of an erupting volcano, etc.).
It will be appreciated that systems and methods in accordance with the present disclosure may be utilized to adjust content to accommodate cultural differences. In some implementations, content that is categorized as being culturally inappropriate (e.g. vulgar, offensive, racist, derogatory, degrading, stereotypical, distasteful, etc.) may be either omitted (or deleted or removed), or may be replaced with alternate content that is categorized as being culturally appropriate, such as by retrieving replacement content from a library of lookup tables, or any other suitable source. For example, as shown in FIG. 16, in some implementations, modifying the audio-visual core portion with at least one revised content portion in accordance with the at least one selection signal to create a dynamically customized audio-visual content at 630 may include at least one of replacing a culturally inappropriate portion with a culturally appropriate portion or omitting the culturally inappropriate portion at 1602 (e.g. replacing terminology that may be considered a racial slur in a particular culture with replacement terminology that is not considered a racial slur in the particular culture, removing a content portion that includes a hand gesture that is insulting to a particular culture; etc.).
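The omit-or-replace decision at 1602 might be sketched as a lookup-table filter; the table contents and function name below are illustrative assumptions, not from the disclosure:

```python
# Illustrative sketch: content tagged culturally inappropriate is replaced
# from a lookup table when a substitute exists, and omitted otherwise.
# Tags and table entries are hypothetical.
REPLACEMENT_TABLE = {
    "insulting_gesture": None,       # no substitute available: omit the portion
    "slur_term": "neutral_term",     # substitute available: replace the portion
}

def filter_portions(portions: list[str]) -> list[str]:
    out = []
    for p in portions:
        if p not in REPLACEMENT_TABLE:
            out.append(p)            # culturally appropriate: keep as-is
        elif REPLACEMENT_TABLE[p] is not None:
            out.append(REPLACEMENT_TABLE[p])
        # else: omit (drop the portion entirely)
    return out
```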
In other implementations, receiving at least one selection signal indicative of a viewer preference at 620 may include receiving a selection signal indicative of a cultural heritage of at least one viewer at 1604, and modifying the audio-visual core portion with at least one revised content portion in accordance with the at least one selection signal to create a dynamically customized audio-visual content at 630 may include at least one of replacing a portion considered inappropriate with respect to the cultural heritage of the at least one viewer with a replacement portion considered appropriate with respect to the cultural heritage of the at least one viewer, or omitting the inappropriate portion at 1606 (e.g. receiving a signal indicating that a viewer is Chinese, and replacing a reference to “Taiwan” with a reference to “Chinese Taipei;” receiving an indication that a viewer is Islamic, and replacing a reference to the Bible with a reference to the Quran; etc.).
With continued reference to FIG. 16, in other implementations, receiving a consideration for the dynamically-customized audio-visual content at 650 may include receiving at least a portion of a consideration based at least partially on at least one of replacing a culturally inappropriate portion with a culturally appropriate portion or omitting the culturally inappropriate portion at 1608 (e.g. receiving payment based on replacing terminology that may be considered in poor taste in Iceland with replacement terminology that is not considered in poor taste, etc.).
In other implementations, receiving a consideration for the dynamically-customized audio-visual content at 650 may include receiving at least a portion of a consideration based at least partially on at least one of replacing a portion considered inappropriate with respect to the cultural heritage of the at least one viewer with a replacement portion considered appropriate with respect to the cultural heritage of the at least one viewer, or omitting the inappropriate portion at 1610 (e.g. receiving payment based on receiving a signal indicating that a viewer is Chinese, and replacing a reference to “Taiwan” with a reference to “Chinese Taipei;” receiving payment based on receiving an indication that a viewer is Islamic, and replacing a reference to the Bible with a reference to the Quran; etc.).
As shown in FIG. 17, in further implementations, receiving at least one selection signal indicative of a viewer preference at 620 may include receiving a selection signal indicative of a geographic location of at least one viewer at 1702, and modifying the audio-visual core portion with at least one revised content portion in accordance with the at least one selection signal to create a dynamically customized audio-visual content at 630 may include at least one of replacing a portion considered inappropriate with respect to the geographic location of the at least one viewer with a replacement portion considered appropriate with respect to the geographic location of the at least one viewer, or omitting the inappropriate portion at 1704 (e.g. receiving a signal, such as a GPS signal from a viewer's cell phone, indicating that the viewer is located in Brazil, and replacing a content portion that includes a hand gesture that is offensive in Brazil, such as a Texas Longhorns “hook-em-horns” hand gesture, with a benign hand gesture appropriate for the viewer located in Brazil; receiving a signal, such as a location of an IP address of a local Internet service provider, that indicates that a viewer is located within a Native American reservation, and replacing content that includes terminology offensive to Native Americans with replacement content that includes non-offensive terminology; etc.).
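The location-driven substitution at 1702 and 1704 can be sketched as a per-region rules table keyed by a location derived from the selection signal (e.g. GPS or IP geolocation); the region codes and content labels below are hypothetical:

```python
# Illustrative sketch: a location-bearing selection signal selects a
# per-region table of content portions considered inappropriate there.
# Region codes and gesture labels are invented for illustration.
REGION_RULES = {
    "BR": {"hook_em_horns": "benign_wave"},  # gesture offensive in Brazil
}

def localize(portions: list[str], region: str) -> list[str]:
    """Replace region-inappropriate portions; pass others through unchanged."""
    rules = REGION_RULES.get(region, {})
    return [rules.get(p, p) for p in portions]
```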
And in other implementations, receiving at least one selection signal indicative of a viewer preference at 620 may include receiving a selection signal indicative of a cultural identity of at least one viewer at 1706, and modifying the audio-visual core portion with at least one revised content portion in accordance with the at least one selection signal to create a dynamically customized audio-visual content at 630 may include at least one of replacing at least a portion of content inappropriate for the cultural identity of the at least one viewer with an appropriate portion of content, or omitting the inappropriate portion at 1708 (e.g. receiving a signal, such as a language selection of software installed on a viewer's electronic device, indicating that the viewer is Arabic, and removing a content portion that is inappropriate to the Arabic culture; etc.).
With continued reference to FIG. 17, in some implementations, receiving a consideration for the dynamically-customized audio-visual content at 650 may include receiving at least a portion of a consideration based at least partially on at least one of replacing a portion considered inappropriate with respect to the geographic location of the at least one viewer with a replacement portion considered appropriate with respect to the geographic location of the at least one viewer, or omitting the inappropriate portion at 1710 (e.g. receiving payment based on receiving a signal, such as a GPS signal from a viewer's cell phone, indicating that the viewer is located in Brazil, and replacing a content portion that includes a hand gesture that is offensive in Brazil with a benign hand gesture appropriate for the viewer located in Brazil; etc.).
And in other implementations, receiving a consideration for the dynamically-customized audio-visual content at 650 may include receiving at least a portion of a consideration based at least partially on at least one of replacing at least a portion of content inappropriate for the cultural identity of the at least one viewer with an appropriate portion of content, or omitting the inappropriate portion at 1712 (e.g. receiving a signal, such as a language selection of software installed on a viewer's electronic device, indicating that the viewer is Arabic, and removing a content portion that is inappropriate to the Arabic culture; etc.).
It will be appreciated that modifying the audio-visual core portion with at least one revised content portion in accordance with the at least one selection signal to create a dynamically customized audio-visual content at 630 may be accomplished in various ways. For example, as shown in FIG. 18, in some implementations, modifying the audio-visual core portion with at least one revised content portion in accordance with the at least one selection signal to create a dynamically customized audio-visual content at 630 may include changing at least one portion of a digital signal stream in accordance with the at least one selection signal (e.g. replacing original digitized signals of the audio-visual core portion with replacement digitized signals of the audio-visual core portion, supplementing original digitized signals of the audio-visual core portion with supplemental digitized signals, etc.) at 1802. In other implementations, modifying the audio-visual core portion with at least one revised content portion in accordance with the at least one selection signal to create a dynamically customized audio-visual content at 630 may include digitizing at least a portion of an audio-visual core portion, and changing at least one portion of the digitized portion in accordance with the at least one selection signal at 1804.
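The stream-level modification at 1802 amounts to splicing replacement digitized signals over a span of the original stream; a minimal sketch, with the span assumed to be identified from the selection signal:

```python
# Illustrative sketch: replace a span of an original digitized stream
# (here modeled as raw bytes) with replacement digitized signals.
def splice(stream: bytes, start: int, end: int, replacement: bytes) -> bytes:
    """Replace stream[start:end] with the replacement samples."""
    return stream[:start] + replacement + stream[end:]
```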
In further implementations, modifying the audio-visual core portion with at least one revised content portion in accordance with the at least one selection signal to create a dynamically customized audio-visual content at 630 may include replacing at least a portion of an audio-visual core portion with a view of a three dimensional model of a replacement portion in accordance with the at least one selection signal at 1806. Thus, if the one or more selection signals 144 indicate that the user prefers to see a dynamically-customized movie (e.g. the movie Cleopatra) with a desired lead actress (or actor) (e.g. Angelina Jolie) rather than an original lead actress (or actor) (e.g. Elizabeth Taylor), the processing component 110 may retrieve a digital model of the desired lead actress (or actor) and may substitute appropriate portions of the incoming core portion 102 with appropriate views of the digital model of the desired lead actress (or actor). In still further implementations, modifying the audio-visual core portion with at least one revised content portion in accordance with the at least one selection signal to create a dynamically customized audio-visual content at 630 may include rendering at least a portion of an audio-visual core portion in accordance with the at least one selection signal to create the dynamically-customized audio-visual content at 1808.
With continued reference to FIG. 18, in some implementations, receiving a consideration for the dynamically-customized audio-visual content at 650 may include receiving at least a portion of a consideration based at least partially on changing at least one portion of a digital signal stream in accordance with the at least one selection signal at 1812 (e.g. receiving a payment portion based on replacing digitized signals with replacement digitized signals, etc.). In other implementations, receiving a consideration for the dynamically-customized audio-visual content at 650 may include receiving at least a portion of a consideration based at least partially on at least one of digitizing at least a portion of an audio-visual core portion, or changing at least one portion of the digitized portion in accordance with the at least one selection signal at 1814.
In further implementations, receiving a consideration for the dynamically-customized audio-visual content at 650 may include receiving at least a portion of a consideration based at least partially on replacing at least a portion of an audio-visual core portion with a view of a three dimensional model of a replacement portion in accordance with the at least one selection signal at 1816 (e.g. receiving payment based on replacing a first actor with a 3D model of a replacement actor). In still further implementations, receiving a consideration for the dynamically-customized audio-visual content at 650 may include receiving at least a portion of a consideration based at least partially on rendering at least a portion of an audio-visual core portion in accordance with the at least one selection signal to create the dynamically-customized audio-visual content at 1818.
As shown in FIG. 19, in other implementations, modifying the audio-visual core portion with at least one revised content portion in accordance with the at least one selection signal to create a dynamically customized audio-visual content at 630 may include re-rendering at least a portion of an audio-visual core portion in accordance with the at least one selection signal to create the dynamically-customized audio-visual content at 1902. In additional implementations, modifying the audio-visual core portion with at least one revised content portion in accordance with the at least one selection signal to create a dynamically customized audio-visual content at 630 may include rendering at least a replacement portion in accordance with the at least one selection signal, and combining the at least a replacement portion with the audio-visual core portion at 1904. In further implementations, modifying the audio-visual core portion with at least one revised content portion in accordance with the at least one selection signal to create a dynamically customized audio-visual content at 630 may include re-rendering at least a portion of an audio-visual core portion in accordance with the at least one selection signal to create a replacement portion, and combining the replacement portion with the audio-visual core portion at 1906.
With continued reference to FIG. 19, in still other implementations, receiving a consideration for the dynamically-customized audio-visual content at 650 may include receiving at least a portion of a consideration based at least partially on re-rendering at least a portion of an audio-visual core portion in accordance with the at least one selection signal to create the dynamically-customized audio-visual content at 1912. In additional implementations, receiving a consideration for the dynamically-customized audio-visual content at 650 may include receiving at least a portion of a consideration based at least partially on at least one of rendering at least a replacement portion in accordance with the at least one selection signal, or combining the at least a replacement portion with the audio-visual core portion at 1914. In further implementations, receiving a consideration for the dynamically-customized audio-visual content at 650 may include receiving at least a portion of a consideration based at least partially on at least one of re-rendering at least a portion of an audio-visual core portion in accordance with the at least one selection signal to create a replacement portion, or combining the replacement portion with the audio-visual core portion at 1916.
With reference to FIG. 20, in some implementations, modifying the audio-visual core portion with at least one revised content portion in accordance with the at least one selection signal to create a dynamically customized audio-visual content at 630 may include rendering a plurality of frames of video data to form a first rendered stream, rendering a plurality of frames of video data to form a second rendered stream, and combining the first rendered stream and the second rendered stream for substantially simultaneous display on a display device (e.g. multiplexing the first and second rendered streams) at 2002. In at least some implementations, the operations at 2002 may include, for example, those techniques disclosed in U.S. Pat. No. 8,059,201 issued to Aarts et al. (disclosing techniques for real-time and non-real-time rendering of video data streams), which patent is incorporated herein by reference.
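One simple reading of combining two rendered streams for substantially simultaneous display at 2002 is frame-interleaved multiplexing; a sketch under that assumption (frames modeled as opaque list items):

```python
# Illustrative sketch: interleave two independently rendered frame streams
# into one multiplexed sequence for substantially simultaneous display.
def multiplex(first: list, second: list) -> list:
    out = []
    for a, b in zip(first, second):
        out.extend((a, b))   # alternate frames from each rendered stream
    return out
```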
In other implementations, modifying the audio-visual core portion with at least one revised content portion in accordance with the at least one selection signal to create a dynamically customized audio-visual content at 630 may include modeling at least one object using a wireframe model including a plurality of polygons, and applying texture data to the plurality of polygons to provide a three-dimensional appearance to the wireframe model for display on a display device at 2004. In at least some implementations, the operations at 2004 may include, for example, those techniques disclosed in U.S. Pat. No. 8,016,653 issued to Pendleton et al. (disclosing techniques for three dimensional rendering of live events), which patent is incorporated herein by reference.
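The wireframe-plus-texture modeling at 2004 can be sketched with a minimal polygon container; the `Wireframe` class and texture identifiers below are illustrative assumptions:

```python
# Illustrative sketch: an object modeled as a wireframe of polygons, with
# texture data applied per polygon to give a three-dimensional appearance.
from dataclasses import dataclass, field

@dataclass
class Wireframe:
    polygons: list                  # each polygon: a tuple of (x, y, z) vertices
    textures: dict = field(default_factory=dict)  # polygon index -> texture id

    def apply_texture(self, poly_index: int, texture_id: str) -> None:
        """Associate texture data with one polygon of the model."""
        self.textures[poly_index] = texture_id

model = Wireframe(polygons=[((0, 0, 0), (1, 0, 0), (0, 1, 0))])
model.apply_texture(0, "skin_tex_01")
```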
In still other implementations, modifying the audio-visual core portion with at least one revised content portion in accordance with the at least one selection signal to create a dynamically customized audio-visual content at 630 may include rendering a supplemental video stream, blocking a portion of the audio-visual core portion, and combining the supplemental video stream with at least an unblocked portion of the audio-visual core portion at 2006. In at least some implementations, the operations at 2006 may include, for example, those techniques disclosed in U.S. Pat. Nos. 7,945,926 and 7,631,327 issued to Dempski et al. (disclosing techniques for video animation and merging with television broadcasts and supplemental content sources), which patents are incorporated herein by reference.
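The block-and-combine operation at 2006 is essentially masked compositing: blocked pixels of the core frame are overwritten by the supplemental stream, unblocked pixels pass through. A per-pixel sketch, with frames modeled as flat intensity lists (an assumption):

```python
# Illustrative sketch: combine a supplemental frame with the unblocked
# portion of a core frame. mask=1 marks a blocked (replaced) pixel.
def composite(core_frame: list, supplemental: list, mask: list) -> list:
    """Per pixel: take the supplemental value where blocked, core elsewhere."""
    return [s if m else c for c, s, m in zip(core_frame, supplemental, mask)]
```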
With continued reference to FIG. 20, in some implementations, receiving a consideration for the dynamically-customized audio-visual content at 650 may include receiving at least a portion of a consideration based at least partially on at least one of rendering a plurality of frames of video data to form a first rendered stream, rendering a plurality of frames of video data to form a second rendered stream, or combining the first rendered stream and the second rendered stream for substantially simultaneous display on a display device at 2012 (e.g. receiving a payment based on multiplexing first and second rendered streams).
In other implementations, receiving a consideration for the dynamically-customized audio-visual content at 650 may include receiving at least a portion of a consideration based at least partially on at least one of modeling at least one object using a wireframe model including a plurality of polygons, or applying texture data to the plurality of polygons to provide a three-dimensional appearance to the wireframe model for display on a display device at 2014. In still other implementations, receiving a consideration for the dynamically-customized audio-visual content at 650 may include receiving at least a portion of a consideration based at least partially on at least one of rendering a supplemental video stream, blocking a portion of the audio-visual core portion, or combining the supplemental video stream with at least an unblocked portion of the audio-visual core portion at 2016.
As shown in FIG. 21, in other implementations, modifying the audio-visual core portion with at least one revised content portion in accordance with the at least one selection signal to create a dynamically customized audio-visual content at 630 may include rendering a supplemental video stream, blocking a portion of the audio-visual core portion, combining the supplemental video stream with at least an unblocked portion of the audio-visual core portion, and using an area outside a letterboxed portion to display a supplemental content at 2102. In at least some implementations, the operations at 2102 may include, for example, those techniques disclosed in U.S. Pat. Nos. 7,945,926 and 7,631,327 issued to Dempski et al. (disclosing techniques for video animation and merging with television broadcasts and supplemental content sources), which patents were previously incorporated herein by reference.
In further implementations, modifying the audio-visual core portion with at least one revised content portion in accordance with the at least one selection signal to create a dynamically customized audio-visual content at 630 may include providing a three-dimensional model of a first object having one or more characteristics to be modified, providing a three-dimensional model of a second object having one or more characteristics that are to be adopted, and replacing the one or more characteristics to be modified with the one or more characteristics that are to be adopted to provide a modified model of the first object at 2104. For example, the “providing” operations at 2104 may, in at least some implementations, be accomplished by a dynamic customization system (e.g. system 160 of FIG. 1), and may include executing one or more instructions that create a three-dimensional (3D) model, or may involve operations similar to those commonly referred to as “drag and drop” in commercially-available software (e.g. Microsoft Visio, etc.) to select pre-formed objects from a series of graphical menus, databases, or other suitable storage structures, and may also include a capability for alteration, modification, or individualization by a viewer. In particular implementations, the “adopting” operations at 2104 may include one or more of reusing operations, copying operations, grafting operations, re-skinning operations, illuminating operations, or any other suitable operations. In at least some implementations, the operations at 2104 may include, for example, those techniques disclosed in U.S. Pat. No. 7,109,993 and U.S. Patent Publication No. 20070165022 by Peleg et al. (disclosing generating a head model and modifying portions of facial features), which patent and pending application are incorporated herein by reference.
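The adopt-and-replace step at 2104 can be sketched as copying selected characteristics from the second model onto the first; representing a model's characteristics as a dictionary is an assumption made for illustration:

```python
# Illustrative sketch: characteristics to be modified on a first model are
# replaced with characteristics adopted (copied/grafted) from a second model.
def adopt(first: dict, second: dict, keys: list) -> dict:
    """Return a modified copy of the first model with the named
    characteristics taken from the second model."""
    modified = dict(first)
    for k in keys:
        modified[k] = second[k]
    return modified
```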
In additional implementations, modifying the audio-visual core portion with at least one revised content portion in accordance with the at least one selection signal to create a dynamically customized audio-visual content at 630 may include modeling at least one object to be modified using a plurality of sections, and at least one of replacing, adjusting, moving, or modifying at least one of the plurality of sections in accordance with a stored information, the stored information being determined at least partially based on the at least one selection signal at 2106. In at least some implementations, the operations at 2106 may include, for example, those techniques disclosed in U.S. Pat. No. 6,054,999 issued to Strandberg (disclosing producing graphic movement sequences from recordings of measured data from strategic parts of actors), which patent is incorporated herein by reference.
With continued reference to FIG. 21, in still other implementations, receiving a consideration for the dynamically-customized audio-visual content at 650 may include receiving at least a portion of a consideration based at least partially on at least one of rendering a supplemental video stream, blocking a portion of the audio-visual core portion, combining the supplemental video stream with at least an unblocked portion of the audio-visual core portion, or using an area outside a letterboxed portion to display a supplemental content at 2112. In further implementations, receiving a consideration for the dynamically-customized audio-visual content at 650 may include receiving at least a portion of a consideration based at least partially on at least one of providing a three-dimensional model of a first object having one or more characteristics to be modified, providing a three-dimensional model of a second object having one or more characteristics that are to be adopted, or replacing the one or more characteristics to be modified with the one or more characteristics that are to be adopted to provide a modified model of the first object at 2114. In additional implementations, receiving a consideration for the dynamically-customized audio-visual content at 650 may include receiving at least a portion of a consideration based at least partially on at least one of modeling at least one object to be modified using a plurality of sections, or at least one of replacing, adjusting, moving, or modifying at least one of the plurality of sections in accordance with a stored information, the stored information being determined at least partially based on the at least one selection signal at 2116.
As shown in FIG. 22, in other implementations, modifying the audio-visual core portion with at least one revised content portion in accordance with the at least one selection signal to create a dynamically customized audio-visual content at 630 may include providing a first wire-frame model of a first object that is to be modified and a second wire-frame model of a second object having one or more characteristics that are to be mapped onto the first wire-frame model, obtaining a fitting function for mapping the one or more characteristics from the second wire-frame model onto the first wire-frame model, the one or more characteristics being at least partially determined in accordance with the at least one selection signal, and mapping the one or more characteristics from the second wire-frame model onto the first wire-frame model using the fitting function at 2202. In at least some implementations, the operations at 2202 may include, for example, those techniques disclosed in U.S. Pat. No. 5,926,575 issued to Ohzeki et al. (disclosing techniques for image deformation or distortion based on correspondence to a reference image, wire-frame modeling of images and texture mapping), which patent is incorporated herein by reference.
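The fitting function at 2202 could be as simple as a least-squares scale-and-offset fit that maps characteristic points of the second wire-frame onto corresponding points of the first; a one-dimensional sketch (the real mapping would be multi-dimensional):

```python
# Illustrative sketch: obtain a fitting function (scale, offset) by
# least squares, then use it to map points from one model onto another.
def fit_affine(src: list, dst: list):
    """Return (scale, offset) minimizing squared error of scale*s + offset ~ d."""
    n = len(src)
    ms, md = sum(src) / n, sum(dst) / n
    var = sum((s - ms) ** 2 for s in src)
    cov = sum((s - ms) * (d - md) for s, d in zip(src, dst))
    scale = cov / var
    return scale, md - scale * ms

def map_points(points: list, scale: float, offset: float) -> list:
    """Apply the fitted mapping to further characteristic points."""
    return [scale * p + offset for p in points]
```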
In still other implementations, modifying the audio-visual core portion with at least one revised content portion in accordance with the at least one selection signal to create a dynamically customized audio-visual content at 630 may include providing at least one background image portion that includes at least a portion of an object to be modified, and at least one foreground image portion that includes at least one aspect that is to be adapted to at least part of the object to be modified, at least one of scaling, translating, rotating, or distorting the at least one foreground image portion to substantially conform the at least one foreground image portion with the at least one background image portion, and merging the at least one foreground image portion with the at least one background image portion for display on a display device at 2204. In at least some implementations, the operations at 2204 may include, for example, those techniques disclosed in U.S. Pat. No. 5,623,587 issued to Bulman (disclosing techniques for creation of composite electronic images from multiple individual images), which patent is incorporated herein by reference.
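The conform-and-merge operation at 2204 can be sketched as a similarity transform on foreground coordinates followed by a foreground-over-background merge; the point-list and pixel-dictionary representations are assumptions:

```python
# Illustrative sketch: scale and translate a foreground image portion to
# conform it to the background, then merge (foreground wins per pixel).
def conform(points: list, scale: float, dx: float, dy: float) -> list:
    """Scale then translate 2-D foreground coordinates."""
    return [(x * scale + dx, y * scale + dy) for x, y in points]

def merge(background: dict, foreground: dict) -> dict:
    """Foreground pixels (keyed by (x, y)) overwrite the background."""
    merged = dict(background)
    merged.update(foreground)
    return merged
```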
With continued reference to FIG. 22, in still other implementations, receiving a consideration for the dynamically-customized audio-visual content at 650 may include receiving at least a portion of a consideration based at least partially on at least one of providing a first wire-frame model of a first object that is to be modified and a second wire-frame model of a second object having one or more characteristics that are to be mapped onto the first wire-frame model, obtaining a fitting function for mapping the one or more characteristics from the second wire-frame model onto the first wire-frame model, the one or more characteristics being at least partially determined in accordance with the at least one selection signal, or mapping the one or more characteristics from the second wire-frame model onto the first wire-frame model using the fitting function at 2212. In still other implementations, receiving a consideration for the dynamically-customized audio-visual content at 650 may include receiving at least a portion of a consideration based at least partially on at least one of providing at least one background image portion that includes at least a portion of an object to be modified, and at least one foreground image portion that includes at least one aspect that is to be adapted to at least part of the object to be modified, at least one of scaling, translating, rotating, or distorting the at least one foreground image portion to substantially conform the at least one foreground image portion with the at least one background image portion, or merging the at least one foreground image portion with the at least one background image portion for display on a display device at 2214.
As shown in FIG. 23, in further implementations, modifying the audio-visual core portion with at least one revised content portion in accordance with the at least one selection signal to create a dynamically customized audio-visual content at 630 may include combining a plurality of images to provide a synthesized object having at least one of an animation capability, a sound capability, or a synchronized animation and sound capability, and commanding at least one of a movement, a sound, or a synchronized movement and sound of the synthesized object using a script file at least partially based on the at least one selection signal at 2302. In at least some implementations, the operations at 2302 may include, for example, those techniques disclosed in U.S. Pat. No. 5,111,409 issued to Gasper et al. (disclosing techniques for synchronization of synthesized actors), and U.S. Pat. No. 4,884,972 issued to Gasper (disclosing techniques for synchronization of animated objects), which patents are incorporated herein by reference.
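Commanding a synthesized object from a script file at 2302 can be sketched as parsing a timed event list; the JSON script format shown is purely hypothetical:

```python
# Illustrative sketch: a synthesized object is commanded by a script file of
# timed movement/sound events. The JSON schema here is an assumption.
import json

def run_script(script_text: str) -> list:
    """Parse a JSON script and return the (time, action) commands in order."""
    events = json.loads(script_text)
    return sorted((e["t"], e["action"]) for e in events)

script = '[{"t": 1.0, "action": "wave"}, {"t": 0.5, "action": "speak"}]'
```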
In other implementations, modifying the audio-visual core portion with at least one revised content portion in accordance with the at least one selection signal to create a dynamically customized audio-visual content at 630 may include altering a plurality of light intensities at a plurality of pixel locations corresponding to one or more aspects of an object to be modified at least partially based on the at least one selection signal at 2304. In further implementations, modifying the audio-visual core portion with at least one revised content portion in accordance with the at least one selection signal to create a dynamically customized audio-visual content at 630 may include determining a plurality of pixels of at least one digital image that are to be adjusted based on at least a portion of a speaker changing from speaking a first dialogue portion to a second dialogue portion, and altering one or more light intensities of at least some of the plurality of pixels to adjust the at least one digital image to depict the at least a portion of the speaker speaking the second dialogue portion at 2306. In at least some implementations, the operations at 2304 and 2306 may include, for example, those techniques disclosed in U.S. Pat. Nos. 4,827,532 and 4,600,281 and 4,260,229 issued to Bloomstein (disclosing techniques for substitution of sound track language and corresponding lip movements), which patents are incorporated herein by reference.
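The operations at 2306 reduce to copying target-frame intensities into only those pixels determined to change between the two dialogue portions; a sketch with frames modeled as flat intensity lists (an assumption):

```python
# Illustrative sketch: alter the light intensities of only the pixels
# (e.g. a mouth region) that differ between two dialogue frames.
def adjust_pixels(frame: list, target: list, region: list) -> list:
    """Copy target intensities into the frame at the given pixel indices."""
    out = list(frame)
    for i in region:
        out[i] = target[i]
    return out
```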
With continued reference to FIG. 23, in further implementations, receiving a consideration for the dynamically-customized audio-visual content at 650 may include receiving at least a portion of a consideration based at least partially on at least one of combining a plurality of images to provide a synthesized object having at least one of an animation capability, a sound capability, or a synchronized animation and sound capability, or commanding at least one of a movement, a sound, or a synchronized movement and sound of the synthesized object using a script file at least partially based on the at least one selection signal at 2312. In other implementations, receiving a consideration for the dynamically-customized audio-visual content at 650 may include receiving at least a portion of a consideration based at least partially on altering a plurality of light intensities at a plurality of pixel locations corresponding to one or more aspects of an object to be modified at least partially based on the at least one selection signal at 2314. In further implementations, receiving a consideration for the dynamically-customized audio-visual content at 650 may include receiving at least a portion of a consideration based at least partially on at least one of determining a plurality of pixels of at least one digital image that are to be adjusted based on at least a portion of a speaker changing from speaking a first dialogue portion to a second dialogue portion, or altering one or more light intensities of at least some of the plurality of pixels to adjust the at least one digital image to depict the at least a portion of the speaker speaking the second dialogue portion at 2316.
As shown in FIG. 24, in further implementations, modifying the audio-visual core portion with at least one revised content portion in accordance with the at least one selection signal to create a dynamically customized audio-visual content at 630 may include replacing a portion of the audio-visual core portion with a replacement audio-visual portion based on a selection of at least one of an alternative story line or an alternative plot, the selection being at least partially based on the at least one selection signal at 2402. In at least some implementations, the operations at 2402 may include, for example, those techniques disclosed in U.S. Pat. No. 4,569,026 issued to Best (disclosing techniques for interactive entertainment systems), which patent is incorporated herein by reference.
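The storyline-replacement operation at 2402 might be sketched as follows (non-limiting Python illustration; the segment list and alternative-plot table are hypothetical constructs, not disclosed data structures):

```python
# Hypothetical sketch of operation 2402: the core content is a sequence of
# segments; the selection signal names an alternative story line or plot,
# and the matching replacement segment is swapped into the sequence.

def apply_storyline(core_segments, alternatives, selection):
    # alternatives: {plot_name: (index_to_replace, replacement_segment)}
    out = list(core_segments)
    if selection in alternatives:
        index, replacement = alternatives[selection]
        out[index] = replacement
    return out

core = ["opening", "chase", "standard_ending"]
alts = {"noir": (2, "noir_ending"), "comedy": (2, "comedy_ending")}
customized = apply_storyline(core, alts, "noir")
```

If the selection signal names no known alternative, the core portion passes through unmodified.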
In still further implementations, modifying the audio-visual core portion with at least one revised content portion in accordance with the at least one selection signal to create a dynamically customized audio-visual content at 630 may include annotating a portion of the audio-visual core portion with an annotation portion at least partially based on the at least one selection signal at 2404. In at least some implementations, the operations at 2404 may include, for example, those techniques disclosed in U.S. Patent Publication No. 20040181592 by Samra et al. (disclosing techniques for annotating and versioning digital media), which pending patent application is incorporated herein by reference.
In yet other implementations, modifying the audio-visual core portion with at least one revised content portion in accordance with the at least one selection signal to create a dynamically customized audio-visual content at 630 may include determining one or more control parameters associated with a control event available for modification, determining one or more additional parameters of at least one additional event influenced upon modification of the one or more control parameters associated with the control event, and modifying at least some of the one or more control parameters and the one or more additional parameters at least partially based on the at least one selection signal at 2406. In at least some implementations, the operations at 2406 may include, for example, those techniques disclosed in U.S. Patent Publication No. 20110029099 by Benson (disclosing techniques for providing audio visual content), which pending patent application is incorporated herein by reference.
With continued reference to FIG. 24, in other implementations, receiving a consideration for the dynamically-customized audio-visual content at 650 may include receiving at least a portion of a consideration based at least partially on replacing a portion of the audio-visual core portion with a replacement audio-visual portion based on a selection of at least one of an alternative story line or an alternative plot, the selection being at least partially based on the at least one selection signal at 2412. In still further implementations, receiving a consideration for the dynamically-customized audio-visual content at 650 may include receiving at least a portion of a consideration based at least partially on annotating a portion of the audio-visual core portion with an annotation portion at least partially based on the at least one selection signal at 2414. In yet other implementations, receiving a consideration for the dynamically-customized audio-visual content at 650 may include receiving at least a portion of a consideration based at least partially on at least one of determining one or more control parameters associated with a control event available for modification, determining one or more additional parameters of at least one additional event influenced upon modification of the one or more control parameters, or modifying at least some of the one or more control parameters and the one or more additional parameters at least partially based on the at least one selection signal at 2416.
As shown in FIG. 25, receiving at least one audio-visual core portion at 610 may involve a variety of different ways and aspects. For example, in some implementations, receiving at least one audio-visual core portion at 610 may include receiving an audio portion and not a visual portion at 2502. In other implementations, receiving at least one audio-visual core portion at 610 may include receiving a visual portion and not an audio portion at 2504. In still other implementations, receiving at least one audio-visual core portion at 610 may include receiving a separate audio portion and a separate visual portion at 2506. In further implementations, receiving at least one audio-visual core portion at 610 may include receiving a combined audio and visual portion at 2508. In additional implementations, receiving at least one audio-visual core portion at 610 may include receiving one or more audio portions and one or more visual portions at 2510 (e.g. receiving a plurality of audio portions and a single video portion, receiving a single audio portion and a plurality of video portions, etc.).
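The receiving variants at 2502 through 2510 might be captured by the following non-limiting sketch (the class, track lists, and labels are illustrative assumptions):

```python
# Hypothetical sketch of the variants at 2502-2510: an audio-visual core
# portion may carry any mix of audio and visual tracks, and its variant can
# be classified from which track lists are populated.
from dataclasses import dataclass, field

@dataclass
class CorePortion:
    audio_tracks: list = field(default_factory=list)
    visual_tracks: list = field(default_factory=list)

    def kind(self):
        if self.audio_tracks and not self.visual_tracks:
            return "audio-only"            # cf. 2502
        if self.visual_tracks and not self.audio_tracks:
            return "visual-only"           # cf. 2504
        return "audio+visual"              # cf. 2506-2510

k_audio = CorePortion(audio_tracks=["a1"]).kind()
k_visual = CorePortion(visual_tracks=["v1"]).kind()
k_both = CorePortion(audio_tracks=["a1", "a2"], visual_tracks=["v1"]).kind()
```

The "audio+visual" case covers separate, combined, and plural-track receptions alike; a fuller model could distinguish them by track count.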
Similarly, modifying the audio-visual core portion with at least one revised content portion in accordance with the at least one selection signal to create a dynamically customized audio-visual content at 630 may involve a variety of different ways and aspects. For example, in some implementations, modifying the audio-visual core portion with at least one revised content portion in accordance with the at least one selection signal to create a dynamically customized audio-visual content at 630 may include modifying an audio portion and not a visual portion at 2522. In other implementations, modifying the audio-visual core portion with at least one revised content portion in accordance with the at least one selection signal to create a dynamically customized audio-visual content at 630 may include modifying a visual portion and not an audio portion at 2524. In still other implementations, modifying the audio-visual core portion with at least one revised content portion in accordance with the at least one selection signal to create a dynamically customized audio-visual content at 630 may include modifying a separate audio portion and modifying a separate visual portion at 2526. In further implementations, modifying the audio-visual core portion with at least one revised content portion in accordance with the at least one selection signal to create a dynamically customized audio-visual content at 630 may include modifying a combined audio and visual portion at 2528. In additional implementations, modifying the audio-visual core portion with at least one revised content portion in accordance with the at least one selection signal to create a dynamically customized audio-visual content at 630 may include modifying one or more audio portions and modifying one or more visual portions at 2530 (e.g. modifying a plurality of audio portions and modifying a single video portion, modifying a single audio portion and modifying a plurality of video portions, etc.).
With continued reference to FIG. 25, in some implementations, receiving a consideration for the dynamically-customized audio-visual content at 650 may include receiving at least a portion of a consideration based at least partially on modifying an audio portion and not a visual portion at 2532. In other implementations, receiving a consideration for the dynamically-customized audio-visual content at 650 may include receiving at least a portion of a consideration based at least partially on modifying a visual portion and not an audio portion at 2534. In still other implementations, receiving a consideration for the dynamically-customized audio-visual content at 650 may include receiving at least a portion of a consideration based at least partially on modifying a separate audio portion and modifying a separate visual portion at 2536. In further implementations, receiving a consideration for the dynamically-customized audio-visual content at 650 may include receiving at least a portion of a consideration based at least partially on modifying a combined audio and visual portion at 2538. In additional implementations, receiving a consideration for the dynamically-customized audio-visual content at 650 may include receiving at least a portion of a consideration based at least partially on modifying one or more audio portions and modifying one or more visual portions at 2540 (e.g. receiving payment for modifying a plurality of audio portions and modifying a single video portion, modifying a single audio portion and modifying a plurality of video portions, etc.).
As shown in FIG. 26, outputting the dynamically-customized audio-visual content at 640 may involve a variety of different ways and aspects. For example, in some implementations, outputting the dynamically-customized audio-visual content at 640 may include outputting a dynamically-customized audio portion and not a dynamically-customized visual portion at 2602. In other implementations, outputting the dynamically-customized audio-visual content at 640 may include outputting a dynamically-customized visual portion and not a dynamically-customized audio portion at 2604. In still other implementations, outputting the dynamically-customized audio-visual content at 640 may include outputting a separate dynamically-customized audio portion and a separate dynamically-customized visual portion at 2606. In further implementations, outputting the dynamically-customized audio-visual content at 640 may include outputting a combined dynamically-customized audio and visual portion at 2608. In additional implementations, outputting the dynamically-customized audio-visual content at 640 may include outputting one or more dynamically-customized audio portions and one or more dynamically-customized visual portions at 2610 (e.g. outputting a plurality of audio portions and outputting a single video portion, outputting a single audio portion and outputting a plurality of video portions, etc.).
With continued reference to FIG. 26, in other implementations, receiving a consideration for the dynamically-customized audio-visual content at 650 may include receiving at least a portion of a consideration based at least partially on outputting a dynamically-customized audio portion and not a dynamically-customized visual portion at 2612. In other implementations, receiving a consideration for the dynamically-customized audio-visual content at 650 may include receiving at least a portion of a consideration based at least partially on outputting a dynamically-customized visual portion and not a dynamically-customized audio portion at 2614. In still other implementations, receiving a consideration for the dynamically-customized audio-visual content at 650 may include receiving at least a portion of a consideration based at least partially on outputting a separate dynamically-customized audio portion and a separate dynamically-customized visual portion at 2616. In further implementations, receiving a consideration for the dynamically-customized audio-visual content at 650 may include receiving at least a portion of a consideration based at least partially on outputting a combined dynamically-customized audio and visual portion at 2628. In additional implementations, receiving a consideration for the dynamically-customized audio-visual content at 650 may include receiving at least a portion of a consideration based at least partially on outputting one or more dynamically-customized audio portions and one or more dynamically-customized visual portions at 2630 (e.g. receiving payment for outputting a plurality of audio portions and outputting a single video portion, outputting a single audio portion and outputting a plurality of video portions, etc.).
A variety of alternate embodiments of receiving at least one selection signal indicative of a viewer preference for dynamic customization of audio-visual content in accordance with the present disclosure may be conceived. For example, as shown in FIG. 27, in some implementations, receiving at least one selection signal indicative of a viewer preference at 620 may include receiving an input from a viewer indicative of a desired setting selected from at least one sliding scale of at least one viewing aspect at 2702. FIG. 28 shows one possible implementation of a user interface 2800 in accordance with the teachings of the present disclosure. In this implementation, the user interface 2800 displays a plurality of customization aspects 2810 having a corresponding plurality of sliding scales 2820 (e.g. comedy scale, action scale, drama scale, etc.). In operation, a viewer may position each selector 2822 associated with each sliding scale 2820 to indicate their desired preferences associated with each customization aspect 2810, resulting in a suitably customized audio-visual content.
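The sliding-scale interface of FIG. 28 might reduce to a selection signal as in the following non-limiting sketch (slider range, aspect names, and clamping behavior are illustrative assumptions):

```python
# Hypothetical sketch of operation 2702 and the sliding scales 2820: each
# slider position (assumed 0-100) for a customization aspect 2810 becomes
# one field of the selection signal, clamped to the valid range.

def selection_signal(sliders):
    # sliders: {"comedy": 80, "action": 20, "drama": 55, ...}
    return {aspect: max(0, min(100, value)) for aspect, value in sliders.items()}

signal = selection_signal({"comedy": 80, "action": -5, "drama": 120})
```

Out-of-range selector positions (here −5 and 120) are clamped, so downstream customization logic can rely on a bounded scale.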
Referring again to FIG. 27, in further implementations, receiving at least one selection signal indicative of a viewer preference at 620 may include receiving an input from a viewer indicative of a desired viewing profile selected from a plurality of viewing profiles associated with the viewer at 2704. For example, FIG. 29 shows one possible implementation of a user interface 2900 in accordance with the teachings of the present disclosure. In this implementation, the user interface 2900 displays a plurality of customization profiles 2910 (e.g. family time, viewing with spouse, viewing alone, etc.) associated with a particular viewer 2920 (e.g. "Arnold"). In operation, the particular viewer 2920 may select the desired profile 2910 depending upon who else (if anyone) may be present in the viewing area with the particular viewer 2920, resulting in a suitably customized audio-visual content.
In still other implementations, receiving at least one selection signal indicative of a viewer preference at 620 may include monitoring at least one characteristic of at least one viewer at 2706 (e.g. facial features, smile, frown, scowl, displeasure, interest, lack of interest, laughter, tears, fear, anxiety, sadness, disgust, shock, distaste, etc.), and modifying the audio-visual core portion with at least one revised content portion in accordance with the at least one selection signal to create a dynamically customized audio-visual content at 630 may include automatically adjusting at least one customization aspect in response to the at least one characteristic of the at least one viewer (e.g. increasing comedy aspects, reducing horror aspects, increasing dramatic aspects, reducing profanity aspects, etc.) at 2708. For example, in some implementations, a monitoring device (e.g. the sensor 250, Microsoft Kinect®, Nintendo Wii®, etc.) may sense facial features associated with displeasure at particular occurrences of profane dialogue, and may automatically reduce the amount of profanity contained in the dialogue. Alternately, the monitoring device may sense a higher-than-desired level of fear, and may automatically reduce the horror aspects of the dynamically customized audio-visual content to provide a desired level of fear to the viewer.
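The monitor-and-adjust loop at 2706/2708 might be sketched as follows (non-limiting Python illustration; the observation labels, rule table, and deltas are hypothetical):

```python
# Hypothetical sketch of operations 2706/2708: a sensed viewer characteristic
# maps, via a rule table, to an automatic adjustment of one customization
# aspect on an assumed 0-100 scale.

RULES = {
    "displeasure_at_profanity": ("profanity", -10),
    "fear_above_threshold":     ("horror",    -15),
    "laughter":                 ("comedy",    +10),
}

def auto_adjust(aspects, observation):
    aspect, delta = RULES.get(observation, (None, 0))
    out = dict(aspects)
    if aspect is not None:
        out[aspect] = max(0, min(100, out.get(aspect, 50) + delta))
    return out

state = auto_adjust({"horror": 60, "comedy": 40}, "fear_above_threshold")
```

Unrecognized observations leave the aspect settings unchanged, so a noisy sensor cannot drive the customization off its scale.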
With continued reference to FIG. 27, in further implementations, receiving a consideration for the dynamically-customized audio-visual content at 650 may include receiving at least a portion of a consideration based at least partially on at least one of monitoring at least one characteristic of at least one viewer, or automatically adjusting at least one customization aspect in response to the at least one characteristic of the at least one viewer (e.g. receiving payment for increasing comedy aspects, receiving payment for reducing horror aspects, receiving payment for increasing dramatic aspects, receiving payment for reducing profanity aspects, etc.) at 2718.
As shown in FIG. 30, in still further implementations, receiving at least one selection signal indicative of a viewer preference at 620 may include sensing at least one characteristic of at least one viewer at 3002, and modifying the audio-visual core portion with at least one revised content portion in accordance with the at least one selection signal to create a dynamically customized audio-visual content at 630 may include automatically changing a viewing profile associated with the viewer in response to the sensed at least one characteristic of the at least one viewer at 3012. For example, in some implementations, a sensing device (e.g. a Kinect® device, Nintendo Wii®, etc.) may sense interest from the viewer in particular occurrences of content being displayed (e.g. history-related content), and may automatically change from a first viewing profile (e.g. a profile that has increased emphasis on comedy) to a second viewing profile (e.g. a profile that has increased emphasis on historical topics or documentary topics). Alternately, the sensing device may sense a higher-than-desired level of fear, and may automatically change to a viewing profile having reduced emphasis on horror aspects to provide a desired level of fear to the viewer.
With continued reference to FIG. 30, in other implementations, receiving at least one selection signal indicative of a viewer preference at 620 may include monitoring a viewing area into which a dynamically-customized audio-visual content is to be displayed at 3004, and modifying the audio-visual core portion with at least one revised content portion in accordance with the at least one selection signal to create a dynamically customized audio-visual content at 630 may include automatically adjusting at least one customization aspect in response to a change in at least one characteristic of the viewing area at 3014. For example, in some implementations, a monitoring device may sense that a less than desired amount of laughter is occurring in the viewing area (e.g. using pattern recognition techniques, etc.), and may automatically increase a comedy level of the dynamically customized audio-visual content. Alternately, the sensing device may sense that more than a desired level of screaming is occurring within the viewing area, and may automatically reduce a horror level of the dynamically customized audio-visual content.
In additional implementations, receiving at least one selection signal indicative of a viewer preference at 620 may include sensing a change in a number of viewers in a viewing area into which a dynamically-customized audio-visual content is to be displayed at 3006, and modifying the audio-visual core portion with at least one revised content portion in accordance with the at least one selection signal to create a dynamically customized audio-visual content at 630 may include automatically adjusting at least one customization aspect in response to a change in the number of viewers in the viewing area at 3016. For example, in some implementations, a monitoring device may sense that a viewer's spouse has entered the viewing area (e.g. using facial recognition techniques, body recognition techniques, voice recognition techniques, etc.), and may automatically change from a first viewing profile (e.g. a profile associated with “viewing alone”) to a second viewing profile (e.g. a profile associated with “viewing with spouse”). Alternately, the sensing device may sense that a viewer's children have departed from the viewing area, and may automatically change from a family-oriented viewing profile to an individual-oriented viewing profile.
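The viewer-count-driven profile switch at 3006/3016 might be sketched as follows (non-limiting illustration; the viewer labels and profile names echo FIG. 29 but the matching logic is an assumption):

```python
# Hypothetical sketch of operations 3006/3016: the set of viewers recognized
# in the viewing area (e.g. by facial, body, or voice recognition) selects a
# viewing profile, with the most restrictive audience taking precedence.

def pick_profile(viewers):
    if "child" in viewers:
        return "family time"
    if "spouse" in viewers:
        return "viewing with spouse"
    return "viewing alone"

before = pick_profile({"arnold"})              # viewer alone
after = pick_profile({"arnold", "spouse"})     # spouse enters the area
```

When the children depart, the same function naturally falls back from the family-oriented profile to an individual-oriented one.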
With continued reference to FIG. 30, in other implementations, receiving a consideration for the dynamically-customized audio-visual content at 650 may include receiving at least a portion of a consideration based at least partially on at least one of sensing at least one characteristic of at least one viewer, or automatically changing a viewing profile associated with the viewer in response to the sensed at least one characteristic of the at least one viewer at 3022 (e.g. receiving payment for sensing a viewer's emotion with a Kinect® device, and automatically changing from a first viewing profile to a second viewing profile that better fits the viewer's emotion). In further implementations, receiving a consideration for the dynamically-customized audio-visual content at 650 may include receiving at least a portion of a consideration based at least partially on at least one of monitoring a viewing area into which a dynamically-customized audio-visual content is to be displayed, or automatically adjusting at least one customization aspect in response to a change in at least one characteristic of the viewing area at 3024 (e.g. receiving payment for a monitoring device sensing that more than a desired level of screaming is occurring within the viewing area, and automatically reducing a horror level of the dynamically customized audio-visual content).
In additional implementations, receiving a consideration for the dynamically-customized audio-visual content at 650 may include receiving at least a portion of a consideration based at least partially on at least one of sensing a change in a number of viewers in a viewing area into which a dynamically-customized audio-visual content is to be displayed, or automatically adjusting at least one customization aspect in response to a change in the number of viewers in the viewing area at 3026 (e.g. receiving payment for a monitoring device sensing that a viewer's spouse has entered the viewing area, and automatically changing from a "viewing alone" profile to a "viewing with spouse" profile, etc.).
FIG. 31 shows additional embodiments of processes for dynamic customization of audio-visual content in accordance with the present disclosure. More specifically, in some implementations, receiving at least one selection signal indicative of a viewer preference at 620 may include receiving at least one input indicative of one or more other viewer reactions to a portion of audio-visual content at 3102, and modifying the audio-visual core portion with at least one revised content portion in accordance with the at least one selection signal to create a dynamically customized audio-visual content at 630 may include adjusting at least one customization aspect in response to the at least one input indicative of one or more other viewer reactions at 3112. For example, in some implementations, an input signal may be received (e.g. from a repository of information on viewer reactions, from a service that assesses viewer reactions, etc.) that indicates that other demographically-similar viewers (e.g. other viewers of same age, other viewers of same gender, other viewers of same ethnic heritage, etc.) reacted negatively to a particular portion of audio-visual content (e.g. a scene, a portion of dialogue, a visual image, etc.), and in response to the at least one input, automatically adjusting at least one customization aspect (e.g. deleting a scene, changing a dialogue, changing an actor ethnicity, etc.) of the dynamically customized audio-visual content.
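The reaction-driven adjustment at 3102/3112 might be sketched as follows (non-limiting illustration; the scene identifiers, reaction scores, and threshold are hypothetical, and a real system would query a reaction-assessment service rather than a local dictionary):

```python
# Hypothetical sketch of operations 3102/3112: aggregate reactions of
# demographically similar viewers decide whether each scene of the core
# portion is kept or dropped from the customized content.

def adjust_for_reactions(scenes, reactions, threshold=0.5):
    # reactions: {scene_id: fraction of similar viewers reacting negatively}
    return [s for s in scenes if reactions.get(s, 0.0) < threshold]

kept = adjust_for_reactions(["s1", "s2", "s3"], {"s2": 0.7, "s3": 0.2})
```

Scenes with no recorded reactions default to a negative-reaction fraction of zero and are retained.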
In other implementations, receiving at least one selection signal indicative of a viewer preference at 620 may include receiving at least one input indicative of one or more other parent reactions to a portion of audio-visual content at 3104, and modifying the audio-visual core portion with at least one revised content portion in accordance with the at least one selection signal to create a dynamically customized audio-visual content at 630 may include modifying a portion of audio-visual content in response to the at least one input indicative of one or more other parent reactions at 3114. For example, in some implementations, an input may be received indicating that a majority of parents reacted negatively to a particular portion of audio-visual content (e.g. dialogue that includes profanity, scenes that include violent content, scenes that include adult situations, etc.), and in response to the at least one input, automatically modifying one or more aspects (e.g. deleting a scene, changing a dialogue, adjusting a clothing of actors, etc.) of the dynamically customized audio-visual content in response to the at least one input indicative of one or more other parent reactions.
In further implementations, receiving at least one selection signal indicative of a viewer preference at 620 may include receiving at least one input indicative of a viewing history of at least one viewer within a viewing area into which a dynamically customized audio-visual content is to be displayed at 3106, and modifying the audio-visual core portion with at least one revised content portion in accordance with the at least one selection signal to create a dynamically customized audio-visual content at 630 may include modifying a portion of audio-visual content in response to the at least one input indicative of a viewing history at 3116. For example, in some implementations, an input may be received indicating that a viewer has repeatedly changed a channel whenever a particular portion of audio-visual content has been displayed, and in response to the at least one input, the particular portion of audio-visual content is automatically replaced with a replacement portion of content.
With continued reference to FIG. 31, in some implementations, receiving a consideration for the dynamically-customized audio-visual content at 650 may include receiving at least a portion of a consideration based at least partially on at least one of receiving at least one input indicative of one or more other viewer reactions to a portion of audio-visual content, or adjusting at least one customization aspect in response to the at least one input indicative of one or more other viewer reactions at 3122 (e.g. receiving a payment for receiving an input from a service that assesses viewer reactions, and modifying content based on other demographically-similar viewers, etc.). In other implementations, receiving a consideration for the dynamically-customized audio-visual content at 650 may include receiving at least a portion of a consideration based at least partially on at least one of receiving at least one input indicative of one or more other parent reactions to a portion of audio-visual content, or modifying a portion of audio-visual content in response to the at least one input indicative of one or more other parent reactions at 3124 (e.g. receiving a payment for receiving an input indicating that a majority of parents reacted negatively to a particular portion of audio-visual content, and automatically modifying one or more aspects of the content to improve parental satisfaction, etc.). In further implementations, receiving a consideration for the dynamically-customized audio-visual content at 650 may include receiving at least a portion of a consideration based at least partially on at least one of receiving at least one input indicative of a viewing history of at least one viewer within a viewing area into which a dynamically customized audio-visual content is to be displayed, or modifying a portion of audio-visual content in response to the at least one input indicative of a viewing history at 3126 (e.g. receiving a payment for determining that a viewer has repeatedly changed a channel whenever a particular actor has appeared, and automatically replacing the particular actor with a replacement actor based on the viewer's history).
As shown in FIG. 32, in still further implementations, receiving at least one selection signal indicative of a viewer preference at 620 may include receiving at least one input indicative that at least one viewer has not viewed one or more prerequisite content portions at 3202, and modifying the audio-visual core portion with at least one revised content portion in accordance with the at least one selection signal to create a dynamically customized audio-visual content at 630 may include supplementing at least a portion of audio-visual content with at least some of the one or more prerequisite content portions in response to the at least one input at 3212. For example, in some implementations, an input may be received indicating that a viewer has missed previous episodes of a series, and in response to the at least one input, the audio-visual core portion is automatically supplemented with one or more scenes that provide essential plot points that the viewer will need to view in order to be brought up to speed for the upcoming episode.
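The prerequisite-supplementation operations at 3202/3212 might be sketched as follows (non-limiting illustration; the episode and recap identifiers are hypothetical):

```python
# Hypothetical sketch of operations 3202/3212: episodes the viewer has not
# yet viewed yield recap scenes with essential plot points, prepended to the
# core portion of the upcoming episode.

def supplement(core_scenes, viewed_episodes, recaps):
    # recaps: {episode_id: recap_scene} for essential plot points
    missing = [ep for ep in sorted(recaps) if ep not in viewed_episodes]
    return [recaps[ep] for ep in missing] + core_scenes

program = supplement(
    ["e3_scene1"],
    viewed_episodes={"e1"},
    recaps={"e1": "recap1", "e2": "recap2"},
)
```

Here the viewer has seen episode e1 but missed e2, so only the e2 recap is prepended before the episode-3 content.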
In additional implementations, receiving at least one selection signal indicative of a viewer preference at 620 may include receiving at least one input indicative of one or more preferences of at least one viewer based on previous viewing behavior at 3204, and modifying the audio-visual core portion with at least one revised content portion in accordance with the at least one selection signal to create a dynamically customized audio-visual content at 630 may include automatically adjusting a plot direction of at least a portion of audio-visual content in response to the at least one input at 3214. For example, in some implementations, an input may be received indicating that a viewer prefers sad endings over happy endings, and in response to the at least one input, the audio-visual core portion is automatically modified to provide a plot direction that ends up with a sad ending rather than a happy ending.
In still other implementations, receiving at least one selection signal indicative of a viewer preference at 620 may include receiving at least one input indicative of a preferred point of view of at least one viewer at 3206, and modifying the audio-visual core portion with at least one revised content portion in accordance with the at least one selection signal to create a dynamically customized audio-visual content at 630 may include adjusting the point of view of at least a portion of the audio-visual core portion in response to the at least one input at 3216. For example, in some implementations, a viewer may manually select from a menu of available points of view (e.g. from a first person perspective of one of the characters, from a third party perspective, a top view, side view, etc.), and in response to the at least one input, the audio-visual core portion is automatically adjusted to show content from the selected perspective (e.g. a fight scene from the perspective of one of the fighters, etc.).
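The point-of-view selection at 3206/3216 might be sketched as follows (non-limiting illustration; storing pre-rendered footage per perspective is an assumption, and the names are hypothetical):

```python
# Hypothetical sketch of operations 3206/3216: each scene stores footage for
# several camera perspectives; the selection signal picks one, falling back
# to a default perspective when the preferred one is unavailable.

def render(scene, preferred_pov, default="third_person"):
    # scene: {pov_name: footage_id}
    return scene.get(preferred_pov, scene[default])

fight = {"third_person": "wide_cut", "fighter_one": "pov_cut", "top": "overhead_cut"}
chosen = render(fight, "top")        # viewer prefers the top view
fallback = render(fight, "drone")    # unavailable view falls back to default
```

An alternative design would synthesize the requested perspective from scene geometry instead of selecting among pre-rendered cuts.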
With continued reference to FIG. 32, in some implementations, receiving a consideration for the dynamically-customized audio-visual content at 650 may include receiving at least a portion of a consideration based at least partially on at least one of receiving at least one input indicative that at least one viewer has not viewed one or more prerequisite content portions, or supplementing at least a portion of audio-visual content with at least some of the one or more prerequisite content portions in response to the at least one input at 3222 (e.g. receiving payment for receiving an indication that a viewer has missed previous episodes of a series, and automatically supplementing the content with one or more scenes that provide essential plot points). In additional implementations, receiving a consideration for the dynamically-customized audio-visual content at 650 may include receiving at least a portion of a consideration based at least partially on at least one of receiving at least one input indicative of one or more preferences of at least one viewer based on previous viewing behavior, or automatically adjusting a plot direction of at least a portion of audio-visual content in response to the at least one input at 3224 (e.g. receiving payment for receiving an indication that a viewer prefers sad endings over happy endings, and automatically modifying the content to provide a plot direction that ends up with a sad ending rather than a happy ending). In still other implementations, receiving a consideration for the dynamically-customized audio-visual content at 650 may include receiving at least a portion of a consideration based at least partially on at least one of receiving at least one input indicative of a preferred point of view of at least one viewer, or adjusting the point of view of at least a portion of the audio-visual core portion in response to the at least one input at 3216 (e.g. 
receiving payment for receiving an indication that a viewer prefers viewing fighting scenes from a top view, and automatically adjusting a perspective of a fight scene accordingly).
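The prerequisite-supplementation-with-consideration flow above can be sketched as follows. The function name, the per-scene fee, and the episode/scene representation are illustrative assumptions, not terms of the disclosure.

```python
# Illustrative sketch: supplement content with essential scenes from
# unviewed prerequisite episodes, and compute a consideration (fee)
# based on how many recap scenes were inserted.
def supplement_and_bill(core_scenes, viewed_episodes, prerequisites,
                        fee_per_scene=0.25):
    """Prepend essential plot-point scenes for missed episodes; return
    the supplemented content and the total fee for the supplementation."""
    missing = [ep for ep in prerequisites if ep not in viewed_episodes]
    recap = [scene for ep in missing for scene in prerequisites[ep]]
    return recap + core_scenes, fee_per_scene * len(recap)

prereqs = {"ep1": ["ep1_plot_point"], "ep2": ["ep2_plot_point"]}
content, fee = supplement_and_bill(["ep3_scene"], {"ep1"}, prereqs)
# content == ["ep2_plot_point", "ep3_scene"]; fee == 0.25
```

A real implementation would tie the fee to a payment-processing step rather than returning it directly; the sketch only shows the coupling of supplementation and consideration.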
As shown in FIG. 33, in other implementations, receiving at least one selection signal indicative of a viewer preference at 620 may include receiving at least one input indicative of at least one preferred display characteristic at 3302, and modifying the audio-visual core portion with at least one revised content portion in accordance with the at least one selection signal to create a dynamically customized audio-visual content at 630 may include adjusting at least one display characteristic of at least a portion of the audio-visual core portion in response to the at least one input at 3312. For example, in some implementations, an input may be received that indicates a display characteristic suitable to a particular viewing environment (e.g. a brightness, a contrast, a volume level, an outdoor viewing environment, etc.) or suitable to a particular viewing device (e.g. an aspect ratio, a display resolution value, a screen size, etc.), and the audio-visual core portion may be adjusted to be optimally displayed in accordance with the display characteristic.
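The display-characteristic adjustment described above can be sketched as a profile overlay clamped to device capabilities. The profile table, key names, and clamping rule are assumptions chosen for illustration.

```python
# Illustrative sketch: adjust display characteristics for a viewing
# environment, then clamp to what the viewing device supports.
ENVIRONMENT_PROFILES = {
    "outdoor": {"brightness": 0.9, "contrast": 0.8, "volume": 0.7},
    "indoor":  {"brightness": 0.5, "contrast": 0.6, "volume": 0.5},
}

def adjust_display(settings, environment, device_caps=None):
    """Overlay the environment profile onto the current settings,
    then limit resolution to the device's maximum if one is given."""
    adjusted = {**settings, **ENVIRONMENT_PROFILES.get(environment, {})}
    if device_caps:  # e.g. {"max_resolution": (1280, 720)}
        adjusted["resolution"] = min(
            adjusted.get("resolution", device_caps["max_resolution"]),
            device_caps["max_resolution"],
        )
    return adjusted

print(adjust_display({"brightness": 0.5}, "outdoor"))
```

An unknown environment leaves the settings unchanged, which matches the hedged "may be adjusted" language of the description.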
In additional implementations, receiving at least one selection signal indicative of a viewer preference at 620 may include receiving from a non-private source of information at least one input indicative of a preference of at least one viewer at 3204 (e.g. receiving an input from a viewer's public blog indicating a preference, receiving an input from a viewer's public information placed on a social networking site indicating a preference, etc.), and modifying the audio-visual core portion with at least one revised content portion in accordance with the at least one selection signal to create a dynamically customized audio-visual content at 630 may include adjusting at least a portion of the audio-visual core portion in response to the at least one input at 3214.
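Inferring a preference from a non-private source, as described above, might look like the following keyword-scoring sketch. The feed format, keyword table, and scoring rule are illustrative assumptions only.

```python
# Illustrative sketch: derive a viewer preference from public posts
# (e.g. a public blog or social-networking feed) by counting keyword
# hits per preference category.
def infer_preference(public_posts, keyword_map):
    """Return the preference category with the most keyword hits,
    or None if no keyword appears at all."""
    scores = {category: 0 for category in keyword_map}
    for post in public_posts:
        text = post.lower()
        for category, keywords in keyword_map.items():
            scores[category] += sum(text.count(k) for k in keywords)
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else None

posts = ["Loved the action sequel last night!", "Great fight choreography."]
prefs = {"action": ["action", "fight"], "romance": ["romance", "love story"]}
print(infer_preference(posts, prefs))  # action
```

The inferred category would then drive the adjustment of the audio-visual core portion at 3214; a production system would use a real classifier rather than keyword counts.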
In yet other implementations, receiving at least one selection signal indicative of a viewer preference at 620 may include receiving at least one input indicative of a time period available for viewing for at least one viewer at 3206 (e.g. receiving a manual input from a viewer, reading a viewer's calendar or scheduling software, etc.), and modifying the audio-visual core portion with at least one revised content portion in accordance with the at least one selection signal to create a dynamically customized audio-visual content at 630 may include adjusting at least a portion of the audio-visual core portion to fit the at least one time period available for viewing at 3216 (e.g. omitting a non-essential portion of the audio-visual core portion, etc.). In still other implementations, receiving at least one selection signal indicative of a viewer preference at 620 may include receiving at least one input indicative of a preference of at least one viewer with a prior consent from the at least one viewer at 3208 (e.g. receiving an input indicating a preference after a viewer “opts in”).
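Fitting the core portion to an available time window by omitting non-essential portions, as described above, can be sketched greedily. The `Scene` fields and the keep-essential-then-fill strategy are assumptions for demonstration.

```python
# Illustrative sketch: fit content into an available viewing window by
# keeping all essential scenes and omitting non-essential scenes that
# would overrun the window.
from dataclasses import dataclass

@dataclass
class Scene:
    name: str
    duration: int   # seconds
    essential: bool

def fit_to_window(scenes, window_seconds):
    """Keep every essential scene, then add non-essential scenes in
    order while the running total still fits within the window."""
    total = sum(s.duration for s in scenes if s.essential)
    kept = []
    for s in scenes:
        if s.essential:
            kept.append(s)
        elif total + s.duration <= window_seconds:
            kept.append(s)
            total += s.duration
    return kept

scenes = [Scene("open", 300, True), Scene("subplot", 600, False),
          Scene("finale", 400, True)]
print([s.name for s in fit_to_window(scenes, 800)])  # ['open', 'finale']
```

The window itself could come from a manual input or from the viewer's calendar software, as the description suggests.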
With continued reference to FIG. 33, in some implementations, receiving a consideration for the dynamically-customized audio-visual content at 650 may include receiving at least a portion of a consideration based at least partially on at least one of receiving at least one input indicative of at least one preferred display characteristic, or adjusting at least one display characteristic of at least a portion of the audio-visual core portion in response to the at least one input at 3322. In additional implementations, receiving a consideration for the dynamically-customized audio-visual content at 650 may include receiving at least a portion of a consideration based at least partially on at least one of receiving from a non-private source of information at least one input indicative of a preference of at least one viewer, or adjusting at least a portion of the audio-visual core portion in response to the at least one input at 3224. In yet other implementations, receiving a consideration for the dynamically-customized audio-visual content at 650 may include receiving at least a portion of a consideration based at least partially on at least one of receiving at least one input indicative of a time period available for viewing for at least one viewer, or adjusting at least a portion of the audio-visual core portion to fit the at least one time period available for viewing at 3226.
It should be appreciated that the particular embodiments of processes described herein are merely possible implementations of the present disclosure, and that the present disclosure is not limited to the particular implementations described herein and shown in the accompanying figures. For example, in alternate implementations, certain acts need not be performed in the order described, and may be modified, and/or may be omitted entirely, depending on the circumstances. Moreover, in various implementations, the acts described may be implemented by a computer, controller, processor, programmable device, or any other suitable device, and may be based on instructions stored on one or more computer-readable media or otherwise stored or programmed into such devices. In the event that computer-readable media are used, the computer-readable media can be any available media that can be accessed by a device to implement the instructions stored thereon.
Various methods, systems, and techniques have been described herein in the general context of computer-executable instructions, such as program modules, executed by one or more processors or other devices. Generally, program modules include routines, programs, objects, components, data structures, etc. that perform particular tasks or implement particular abstract data types. Typically, the functionality of the program modules may be combined or distributed as desired in various alternate embodiments. In addition, embodiments of these methods, systems, and techniques may be stored on or transmitted across some form of computer readable media.
It may also be appreciated that there may be little distinction between hardware and software implementations of aspects of systems and methods disclosed herein. The use of hardware or software may generally be a design choice representing cost vs. efficiency tradeoffs; however, in certain contexts the choice between hardware and software can become significant. Those having skill in the art will appreciate that there are various vehicles by which processes, systems, and technologies described herein can be effected (e.g., hardware, software, firmware, or combinations thereof), and that a preferred vehicle may vary depending upon the context in which the processes, systems, and technologies are deployed. For example, if an implementer determines that speed and accuracy are paramount, the implementer may opt for a mainly hardware and/or firmware vehicle. Alternatively, if flexibility is paramount, the implementer may opt for a mainly software implementation. In still other implementations, the implementer may opt for some combination of hardware, software, and/or firmware. Hence, there are several possible vehicles by which the processes and/or devices and/or other technologies described herein may be effected, and which vehicle is preferred over another may be a choice dependent upon the context in which the vehicle will be deployed and the specific concerns (e.g., speed, flexibility, or predictability) of the implementer, any of which may vary. Those skilled in the art will recognize that optical aspects of implementations will typically employ optically-oriented hardware, software, and/or firmware.
Those skilled in the art will recognize that it is common within the art to describe devices and/or processes in the fashion set forth herein, and thereafter use standard engineering practices to integrate such described devices and/or processes into workable systems having the described functionality. That is, at least a portion of the devices and/or processes described herein can be developed into a workable system via a reasonable amount of experimentation.
The herein described aspects and drawings illustrate different components contained within, or connected with, different other components. It is to be understood that such depicted architectures are merely exemplary, and that in fact many other architectures can be implemented which achieve the same functionality. In a conceptual sense, any arrangement of components to achieve the same functionality is effectively “associated” such that the desired functionality is achieved. Hence, any two components herein combined to achieve a particular functionality can be seen as “associated with” each other such that the desired functionality is achieved, irrespective of architectures or intermedial components. Likewise, any two components so associated can also be viewed as being “operably connected” or “operably coupled” (or “operatively connected,” or “operatively coupled”) to each other to achieve the desired functionality, and any two components capable of being so associated can also be viewed as being “operably couplable” (or “operatively couplable”) to each other to achieve the desired functionality. Specific examples of operably couplable include but are not limited to physically mateable and/or physically interacting components and/or wirelessly interactable and/or wirelessly interacting components and/or logically interacting and/or logically interactable components.
Those skilled in the art will recognize that some aspects of the embodiments disclosed herein can be implemented in standard integrated circuits, and also as one or more computer programs running on one or more computers, and also as one or more software programs running on one or more processors, and also as firmware, as well as virtually any combination thereof. It will be further understood that designing the circuitry and/or writing the code for the software and/or firmware could be accomplished by a person skilled in the art in light of the teachings and explanations of this disclosure.
The foregoing detailed description has set forth various embodiments of the devices and/or processes via the use of block diagrams, flowcharts, and/or examples. Insofar as such block diagrams, flowcharts, and/or examples contain one or more functions and/or operations, it will be understood by those within the art that each function and/or operation within such block diagrams, flowcharts, or examples can be implemented, individually and/or collectively, by a wide range of hardware, software, firmware, or virtually any combination thereof. For example, in some embodiments, several portions of the subject matter described herein may be implemented via Application Specific Integrated Circuits (ASICs), Field Programmable Gate Arrays (FPGAs), digital signal processors (DSPs), or other integrated formats. However, those skilled in the art will recognize that some aspects of the embodiments disclosed herein, in whole or in part, can be equivalently implemented in standard integrated circuits, as one or more computer programs running on one or more computers (e.g., as one or more programs running on one or more computer systems), as one or more programs running on one or more processors (e.g., as one or more programs running on one or more microprocessors), as firmware, or as virtually any combination thereof, and that designing the circuitry and/or writing the code for the software and/or firmware would be well within the skill of one skilled in the art in light of this disclosure.
In addition, those skilled in the art will appreciate that the mechanisms of the subject matter described herein are capable of being distributed as a program product in a variety of forms, and that an illustrative embodiment of the subject matter described herein applies equally regardless of the particular type of signal bearing media used to actually carry out the distribution. Examples of a signal bearing media include, but are not limited to, the following: recordable type media such as floppy disks, hard disk drives, CD ROMs, digital tape, and computer memory; and transmission type media such as digital and analog communication links using TDM or IP based communication links (e.g., packet links).
While particular aspects of the present subject matter described herein have been shown and described, it will be apparent to those skilled in the art that, based upon the teachings herein, changes and modifications may be made without departing from the subject matter described herein and its broader aspects and, therefore, the appended claims are to encompass within their scope all such changes and modifications as are within the true spirit and scope of this subject matter described herein. Furthermore, it is to be understood that the invention is defined by the appended claims. It will be understood by those within the art that, in general, terms used herein, and especially in the appended claims (e.g., bodies of the appended claims) are generally intended as “open” terms (e.g., the term “including” should be interpreted as “including but not limited to,” the term “having” should be interpreted as “having at least,” the term “includes” should be interpreted as “includes but is not limited to,” etc.). It will be further understood by those within the art that if a specific number of an introduced claim recitation is intended, such an intent will be explicitly recited in the claim, and in the absence of such recitation no such intent is present. For example, as an aid to understanding, the following appended claims may contain usage of the introductory phrases “at least one” and “one or more” to introduce claim recitations. 
However, the use of such phrases should not be construed to imply that the introduction of a claim recitation by the indefinite articles “a” or “an” limits any particular claim containing such introduced claim recitation to inventions containing only one such recitation, even when the same claim includes the introductory phrases “one or more” or “at least one” and indefinite articles such as “a” or “an” (e.g., “a” and/or “an” should typically be interpreted to mean “at least one” or “one or more”); the same holds true for the use of definite articles used to introduce claim recitations. In addition, even if a specific number of an introduced claim recitation is explicitly recited, those skilled in the art will recognize that such recitation should typically be interpreted to mean at least the recited number (e.g., the bare recitation of “two recitations,” without other modifiers, typically means at least two recitations, or two or more recitations). Furthermore, in those instances where a convention analogous to “at least one of A, B, and C, etc.” is used, in general such a construction is intended in the sense one having skill in the art would understand the convention (e.g., “a system having at least one of A, B, and C” would include but not be limited to systems that have A alone, B alone, C alone, A and B together, A and C together, B and C together, and/or A, B, and C together, etc.). In those instances where a convention analogous to “at least one of A, B, or C, etc.” is used, in general such a construction is intended in the sense one having skill in the art would understand the convention (e.g., “a system having at least one of A, B, or C” would include but not be limited to systems that have A alone, B alone, C alone, A and B together, A and C together, B and C together, and/or A, B, and C together, etc.).
As a further example of “open” terms in the present specification and claims, it will be understood that usage of a language construction “A or B” is generally interpreted as a non-exclusive “open term” meaning: A alone, B alone, and/or A and B together.
Although various features have been described in considerable detail with reference to certain preferred embodiments, other embodiments are possible. Therefore, the spirit or scope of the appended claims should not be limited to the description of the embodiments contained herein.

Claims (30)

What is claimed is:
1. A system for providing audio-visual content, comprising:
a device configured to be operatively positioned proximate to a display and to be operatively coupled to the display, the device including a processor and a memory configured with one or more instructions that, when executed by the processor, configure the processor to provide at least:
circuitry for receiving at least one audio-visual core portion at the device configured to be operatively positioned proximate to the display;
circuitry for determining at least one selection signal indicative of a viewer preference without requiring monitoring of a viewing area into which a dynamically-customized audio-visual content is to be displayed for presence of at least one viewer, the circuitry for determining at least one selection signal indicative of a viewer preference being based at least partially on:
determining information received from a memory of a communication device associated with the at least one viewer present within the viewing area, the information including at least one of an online shopping characteristic or a browsing history characteristic;
determining, based at least partially on the information, at least one similar viewer having at least one similarity to the at least one viewer; and
determining the at least one selection signal based on at least one indication of a reaction by the at least one similar viewer determined based at least partially on the information including at least one of the online shopping characteristic or the browsing history characteristic;
circuitry for modifying the audio-visual core portion, at the device configured to be operatively positioned proximate to the display, with at least one revised content portion in accordance with the at least one selection signal to create the dynamically customized audio-visual content, including at least:
circuitry for automatically adjusting at least one customization aspect in response to the information received from the memory of the communication device associated with the at least one viewer within the viewing area; and
circuitry for modifying the audio-visual core portion with at least one revised content portion, the at least one revised content portion including at least one of a revised facial appearance of an actor, a revised voice of an actor, a revised movement of an actor, a revised story-line, or a revised plot direction of at least one of a movie, a sitcom, or a story-related content within the audio-visual core portion;
circuitry for outputting the dynamically-customized audio-visual content for display within the viewing area; and
circuitry for facilitating a payment based on the modifying of the audio-visual core portion in accordance with the at least one selection signal based on the at least one indication of the reaction by the at least one similar viewer determined based at least partially on the information including at least one of the online shopping characteristic or the browsing history characteristic received from the memory of the communication device.
2. The system of claim 1, wherein circuitry for modifying the audio-visual core portion with at least one revised content portion in accordance with the at least one selection signal to create a dynamically customized audio-visual content comprises:
at least one of:
circuitry for replacing at least one actor of the audio-visual core portion with at least one replacement actor;
circuitry for replacing one or more of a facial appearance, a voice, a body appearance, or an apparel with a corresponding one or more of a replacement facial appearance, a replacement voice, a replacement body appearance, or a replacement apparel;
circuitry for replacing at least one of a setting aspect, an environmental aspect, or a background aspect of the audio-visual core portion with a corresponding at least one of a replacement setting aspect, a replacement environmental aspect, or a replacement background aspect;
circuitry for receiving an input from a viewer indicative of a desired setting selected from at least one sliding scale of at least one viewing aspect; or
circuitry for replacing at least a city in which at least one scene is set without requiring interaction by the at least one viewer.
3. The system of claim 1, wherein circuitry for modifying the audio-visual core portion with at least one revised content portion in accordance with the at least one selection signal to create a dynamically customized audio-visual content comprises:
circuitry for replacing one or more audible portions with one or more replacement audible portions; and
circuitry for modifying one or more body movements corresponding to the one or more audible portions with one or more replacement body movements corresponding to the one or more replacement audible portions.
4. The system of claim 1, wherein circuitry for determining at least one selection signal indicative of a viewer preference comprises:
circuitry for determining a selection signal indicative of a geographic location of at least one viewer; and
wherein circuitry for modifying the audio-visual core portion with at least one revised content portion in accordance with the at least one selection signal to create a dynamically customized audio-visual content comprises:
circuitry for at least one of replacing a portion considered inappropriate with respect to the geographic location of the at least one viewer with a replacement portion considered appropriate with respect to the geographic location of the at least one viewer, or omitting the inappropriate portion.
5. The system of claim 1, wherein circuitry for determining at least one selection signal indicative of a viewer preference comprises:
circuitry for determining a selection signal of a cultural identity of at least one viewer; and
wherein circuitry for modifying the audio-visual core portion with at least one revised content portion in accordance with the at least one selection signal to create a dynamically customized audio-visual content comprises:
circuitry for at least one of replacing at least a portion of content inappropriate for the cultural identity of the at least one viewer with an appropriate portion of content, or omitting the inappropriate portion.
6. The system of claim 1, wherein circuitry for modifying the audio-visual core portion with at least one revised content portion in accordance with the at least one selection signal to create a dynamically customized audio-visual content comprises:
circuitry for determining a plurality of pixels of at least one digital image that are to be adjusted based on at least a portion of a speaker changing from speaking a first dialogue portion to a second dialogue portion; and
circuitry for altering one or more light intensities of at least some of the plurality of pixels to adjust the at least one digital image to depict the at least a portion of the speaker speaking the second dialogue portion.
7. The system of claim 1, wherein circuitry for modifying the audio-visual core portion with at least one revised content portion within the audio-visual core portion in accordance with the at least one selection signal to create a dynamically customized audio-visual content comprises:
circuitry for replacing a portion of the audio-visual core portion with a replacement audio-visual portion based on a selection of at least one of an alternative story line or an alternative plot, the selection being at least partially based on the at least one selection signal.
8. The system of claim 1, wherein circuitry for determining at least one selection signal indicative of a viewer preference comprises:
circuitry for monitoring at least one characteristic of at least one viewer; and
wherein circuitry for modifying the audio-visual core portion with at least one revised content portion within the audio-visual core portion in accordance with the at least one selection signal to create a dynamically customized audio-visual content comprises:
circuitry for automatically adjusting at least one customization aspect in response to the at least one characteristic of the at least one viewer.
9. The system of claim 1, wherein determining information received from a memory of a device associated with the at least one viewer present within the viewing area, the information including at least one of an online shopping characteristic or a browsing history characteristic comprises:
determining at least one online shopping characteristic of the at least one viewer based at least partially on a content of a memory of at least one electronic device associated with the at least one viewer; and
wherein circuitry for modifying the audio-visual core portion with at least one revised content portion within the audio-visual core portion in accordance with the at least one selection signal to create a dynamically customized audio-visual content comprises:
circuitry for automatically adjusting an audio characteristic of the at least one revised content portion in response to the determined at least one online shopping characteristic of the viewer based at least partially on the at least one online shopping characteristic of the at least one viewer.
10. The system of claim 1, wherein circuitry for determining at least one selection signal indicative of a viewer preference comprises:
circuitry for detecting one or more actions of at least one viewer associated with at least one electronic device associated with the at least one viewer; and
wherein circuitry for modifying the audio-visual core portion with at least one revised content portion in accordance with the at least one selection signal to create a dynamically customized audio-visual content comprises:
circuitry for automatically adjusting at least one customization aspect in response to the detecting one or more actions of at least one viewer associated with at least one electronic device associated with the at least one viewer.
11. The system of claim 1, wherein circuitry for determining at least one selection signal indicative of a viewer preference comprises:
circuitry for determining at least one input indicative of a viewing history of at least one viewer within a viewing area into which a dynamically customized audio-visual content is to be displayed; and
wherein circuitry for modifying the audio-visual core portion with at least one revised content portion in accordance with the at least one selection signal to create a dynamically customized audio-visual content comprises:
circuitry for modifying a portion of audio-visual content in response to the at least one input indicative of a viewing history.
12. The system of claim 1, wherein circuitry for determining at least one selection signal indicative of a viewer preference comprises:
circuitry for determining at least one input indicative that at least one viewer has not viewed one or more prerequisite content portions; and
wherein circuitry for modifying the audio-visual core portion with at least one revised content portion in accordance with the at least one selection signal to create a dynamically customized audio-visual content comprises:
circuitry for supplementing at least a portion of audio-visual content with at least some of the one or more prerequisite content portions in response to the at least one input.
13. The system of claim 1, wherein circuitry for determining at least one selection signal indicative of a viewer preference comprises:
circuitry for determining at least one input indicative of one or more preferences of at least one viewer based on previous viewing behavior; and
wherein circuitry for modifying the audio-visual core portion with at least one revised content portion in accordance with the at least one selection signal to create a dynamically customized audio-visual content comprises:
circuitry for automatically adjusting a plot direction of at least a portion of audio-visual content in response to the at least one input.
14. The system of claim 1, wherein circuitry for determining at least one selection signal indicative of a viewer preference comprises:
circuitry for determining at least one input indicative of a preferred viewing perspective of at least one viewer; and
wherein circuitry for modifying the audio-visual core portion with at least one revised content portion in accordance with the at least one selection signal to create a dynamically customized audio-visual content comprises:
circuitry for adjusting the viewing perspective of at least a portion of the audio-visual core portion in response to the at least one input.
15. The system of claim 1, wherein circuitry for determining at least one selection signal indicative of a viewer preference comprises:
circuitry for determining from a non-private source of information at least one input indicative of a preference of at least one viewer; and
wherein circuitry for modifying the audio-visual core portion with at least one revised content portion in accordance with the at least one selection signal to create a dynamically customized audio-visual content comprises:
circuitry for adjusting at least a portion of the audio-visual core portion in response to the at least one input.
16. The system of claim 1, wherein determining information received from a memory of a communication device associated with the at least one viewer present within the viewing area, the information including at least one of an online shopping characteristic or a browsing history characteristic comprises:
determining information received from a memory of a communication device, including information from at least one of calendar or scheduling software, indicative of a time period available for viewing for at least one viewer; and
wherein circuitry for modifying the audio-visual core portion, at the device configured to be operatively positioned proximate to the display, with at least one revised content portion in accordance with the at least one selection signal to create a dynamically customized audio-visual content comprises:
circuitry for automatically adjusting at least a portion of the audio-visual core portion, at the device configured to be operatively positioned proximate to the display, to fit the at least one time period available for viewing based on the information received from the memory of the communication device, including information from at least one of calendar or scheduling software, indicative of the time period available for viewing for at least one viewer.
17. The system of claim 1, wherein determining information received from a memory of a device associated with the at least one viewer present within the viewing area, the information including at least one of an online shopping characteristic or a browsing history characteristic comprises:
determining information received from a memory of at least one portable communication device associated with the at least one viewer present within the viewing area, the at least one portable communication device being present within the viewing area simultaneously with the at least one viewer.
18. The system of claim 1, wherein determining information received from a memory of a device associated with the at least one viewer present within the viewing area comprises:
determining information received from a memory of at least one portable communication device associated with the at least one viewer present within the viewing area without regard to an identity of the at least one viewer, the at least one portable communication device being present within the viewing area simultaneously with the at least one viewer.
19. The system of claim 1, wherein determining information received from a memory of a device associated with the at least one viewer present within the viewing area, the information including at least one of an online shopping characteristic or a browsing history characteristic comprises:
determining information received from a memory of at least one portable communication device associated with the at least one viewer present within the viewing area, the at least one portable communication device being present within the viewing area simultaneously with the at least one viewer, the information being indicative of at least one online purchase associated with the at least one viewer.
20. The system of claim 1, wherein determining information received from a memory of a device associated with the at least one viewer present within the viewing area, the information including at least one of an online shopping characteristic or a browsing history characteristic comprises:
determining information received from a memory of at least one portable communication device associated with the at least one viewer present within the viewing area, the at least one portable communication device being present within the viewing area simultaneously with the at least one viewer, the information being indicative of at least one online browsing activity associated with the at least one viewer.
21. The system of claim 1, wherein circuitry for modifying the audio-visual core portion with at least one revised content portion in accordance with the at least one selection signal to create a dynamically customized audio-visual content comprises:
circuitry for replacing at least a landscape in which at least one scene is set without requiring interaction by the at least one viewer.
22. The system of claim 1, wherein circuitry for modifying the audio-visual core portion with at least one revised content portion in accordance with the at least one selection signal to create a dynamically customized audio-visual content comprises:
circuitry for replacing at least a weather condition in which at least one scene is set.
23. The system of claim 1, wherein circuitry for modifying the audio-visual core portion with at least one revised content portion in accordance with the at least one selection signal to create a dynamically customized audio-visual content comprises:
circuitry for replacing a portion considered inappropriate with respect to a cultural heritage of the at least one viewer with a replacement portion considered appropriate with respect to the cultural heritage of the at least one viewer.
24. The system of claim 1, wherein circuitry for modifying the audio-visual core portion with at least one revised content portion in accordance with the at least one selection signal to create a dynamically customized audio-visual content comprises:
circuitry for modifying the audio-visual core portion with at least one revised content portion in accordance with the at least one selection signal to create a dynamically customized audio-visual content without requiring interaction by the at least one viewer and without regard to an identity of the at least one viewer.
25. The system of claim 1, wherein the information determined from the memory of the communication device further includes at least an indication of a subscription to a customization service, and wherein the payment comprises a payment in accordance with the indication of the subscription to the customization service.
26. The system of claim 1, wherein the information determined from the memory of the communication device further includes at least an indication of a grant of access to personal data, and wherein the payment comprises a payment in accordance with the indication of the grant of access to personal data.
27. The system of claim 1, wherein the payment includes at least one of a payment, a promise to pay, a promise to perform a deed, a grant of a right, a one-time payment, a subscription payment, a use-based payment, or an on-demand type of payment.
28. The system of claim 1, wherein the payment includes at least one of a grant of access to gather personal data, a grant to share data gathered, a grant of access to private information, a grant to perform market testing, or a grant to perform market analysis.
29. A system for providing audio-visual content, comprising:
a device configured to be operatively positioned proximate to a display and to be operatively coupled to the display, the device including a processor and a memory configured with one or more instructions that, when executed by the processor, configure the processor to provide at least:
means for receiving at least one audio-visual core portion at the device configured to be operatively positioned proximate to the display;
means for determining at least one selection signal indicative of a viewer preference without requiring interaction by the at least one viewer, including at least means for monitoring a viewing area into which a dynamically-customized audio-visual content is to be displayed for presence of at least one viewer, the means for determining at least one selection signal indicative of a viewer preference being based at least partially on:
determining information received from a memory of a communication device associated with the at least one viewer present within the viewing area, the information including at least one of an online shopping characteristic or a browsing history characteristic;
determining, based at least partially on the information, at least one similar viewer having at least one similarity to the at least one viewer; and
determining the at least one selection signal based on at least one indication of a reaction by the at least one similar viewer determined based at least partially on the information including at least one of the online shopping characteristic or the browsing history characteristic;
means for modifying the audio-visual core portion, at the device configured to be operatively positioned proximate to the display, with at least one revised content portion in accordance with the at least one selection signal to create the dynamically customized audio-visual content, including at least:
means for automatically adjusting at least one customization aspect in response to the information received from the memory of the communication device associated with the at least one viewer within the viewing area; and
means for modifying the audio-visual core portion with the at least one revised content portion, the at least one revised content portion including at least one of a revised facial appearance of an actor, a revised voice of an actor, a revised movement of an actor, a revised story-line, or a revised plot direction of at least one of a movie, a sitcom, or a story-related content within the audio-visual core portion;
means for outputting the dynamically-customized audio-visual content for display within the viewing area; and
means for facilitating a payment based on the modifying of the audio-visual core portion in accordance with the at least one selection signal based on the at least one indication of the reaction by the at least one similar viewer determined based at least partially on the information including at least one of the online shopping characteristic or the browsing history characteristic received from the memory of the communication device.
30. One or more non-transitory computer-readable media comprising:
one or more instructions for receiving at least one audio-visual core portion at a device configured to be operatively positioned proximate to a display, the device including a processor configurable with one or more instructions, the at least one audio-visual core portion including at least one of a movie, a sitcom, or a story-related content;
one or more instructions for determining at least one selection signal indicative of a viewer preference without requiring interaction by the at least one viewer, including at least one or more instructions for monitoring a viewing area into which a dynamically-customized audio-visual content is to be displayed for presence of at least one electronic device associated with at least one viewer, the one or more instructions for determining at least one selection signal indicative of a viewer preference being based at least partially on:
determining information received from a memory of a communication device associated with the at least one viewer present within the viewing area, the information including at least one of an online shopping characteristic or a browsing history characteristic;
determining, based at least partially on the information, at least one similar viewer having at least one similarity to the at least one viewer; and
determining the at least one selection signal based on at least one indication of a reaction by the at least one similar viewer determined based at least partially on the information including at least one of the online shopping characteristic or the browsing history characteristic;
one or more instructions for modifying the audio-visual core portion, at the device configured to be operatively positioned proximate to the display, with at least one revised content portion in accordance with the at least one selection signal to create a dynamically customized audio-visual content, including at least:
one or more instructions for automatically adjusting at least one customization aspect in response to the information received from the memory of the communication device associated with the at least one viewer within the viewing area; and
one or more instructions for modifying the audio-visual core portion, the at least one revised content portion including at least one of a revised facial appearance of an actor, a revised voice of an actor, a revised movement of an actor, a revised story-line, or a revised plot direction of at least one of a movie, a sitcom, or a story-related content within the audio-visual core portion;
one or more instructions for outputting the dynamically-customized audio-visual content for display within the viewing area; and
one or more instructions for facilitating a payment based on the modifying of the audio-visual core portion in accordance with the at least one selection signal based on the at least one indication of the reaction by the at least one similar viewer determined based at least partially on the information including at least one of the online shopping characteristic or the browsing history characteristic received from the memory of the communication device.
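The independent claims above recite a single pipeline: receive a core content portion, determine a selection signal from shopping/browsing characteristics read from a viewer-associated device, match the viewer to a similar viewer whose recorded reaction drives the signal, modify the content accordingly, output it, and facilitate a payment. A minimal illustrative sketch of that flow (not part of the patent text; all names, data structures, and the trait-overlap similarity measure are hypothetical choices for illustration):

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class DeviceInfo:
    """Traits read from a viewer-associated device (the claims' 'online
    shopping characteristic or browsing history characteristic')."""
    shopping: frozenset
    browsing: frozenset


def similarity(a: DeviceInfo, b: DeviceInfo) -> int:
    # Count shared shopping and browsing traits between two viewers.
    return len(a.shopping & b.shopping) + len(a.browsing & b.browsing)


def determine_selection_signal(viewer: DeviceInfo, known_viewers: dict) -> str:
    # known_viewers maps a DeviceInfo to that viewer's recorded reaction;
    # the most similar viewer's reaction becomes the selection signal.
    similar = max(known_viewers, key=lambda v: similarity(viewer, v))
    return known_viewers[similar]


def modify_core(core: dict, signal: str) -> dict:
    # Replace the revisable portion (here, the plot direction) per the signal.
    customized = dict(core)
    customized["plot_direction"] = signal
    return customized


def facilitate_payment(ledger: list, signal: str, amount: float) -> None:
    # Record a payment tied to the customization that was performed.
    ledger.append({"for": signal, "amount": amount})


# Usage: a viewer whose device traits match the first known viewer.
viewer = DeviceInfo(frozenset({"sci-fi boxset"}), frozenset({"space news"}))
known = {
    DeviceInfo(frozenset({"sci-fi boxset"}), frozenset({"space news"})): "upbeat ending",
    DeviceInfo(frozenset({"gardening"}), frozenset({"recipes"})): "rural setting",
}
signal = determine_selection_signal(viewer, known)
content = modify_core({"title": "Example Movie", "plot_direction": "default"}, signal)
ledger: list = []
facilitate_payment(ledger, signal, 0.99)
```

The sketch deliberately requires no viewer interaction and no viewer identity, mirroring the "without requiring interaction" and "without regard to an identity" limitations of the claims.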
US13/602,058 2012-08-03 2012-08-31 Dynamic customization and monetization of audio-visual content Expired - Fee Related US10455284B2 (en)

Priority Applications (8)

Application Number Priority Date Filing Date Title
US13/602,058 US10455284B2 (en) 2012-08-31 2012-08-31 Dynamic customization and monetization of audio-visual content
US13/689,488 US9300994B2 (en) 2012-08-03 2012-11-29 Methods and systems for viewing dynamically customized audio-visual content
US13/708,632 US10237613B2 (en) 2012-08-03 2012-12-07 Methods and systems for viewing dynamically customized audio-visual content
US13/714,195 US20140039991A1 (en) 2012-08-03 2012-12-13 Dynamic customization of advertising content
US13/720,727 US20140040039A1 (en) 2012-08-03 2012-12-19 Methods and systems for viewing dynamically customized advertising content
US13/801,079 US20140040945A1 (en) 2012-08-03 2013-03-13 Dynamic customization of audio visual content using personalizing information
US13/827,167 US20140040946A1 (en) 2012-08-03 2013-03-14 Dynamic customization of audio visual content using personalizing information
PCT/US2013/053444 WO2014022783A2 (en) 2012-08-03 2013-08-02 Dynamic customization of audio visual content using personalizing information

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
US13/602,058 US10455284B2 (en) 2012-08-31 2012-08-31 Dynamic customization and monetization of audio-visual content

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
US13/566,723 Continuation-In-Part US20140040931A1 (en) 2012-08-03 2012-08-03 Dynamic customization and monetization of audio-visual content

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US13/689,488 Continuation-In-Part US9300994B2 (en) 2012-08-03 2012-11-29 Methods and systems for viewing dynamically customized audio-visual content

Publications (2)

Publication Number Publication Date
US20140068661A1 US20140068661A1 (en) 2014-03-06
US10455284B2 true US10455284B2 (en) 2019-10-22

Family

ID=50189378

Family Applications (1)

Application Number Title Priority Date Filing Date
US13/602,058 Expired - Fee Related US10455284B2 (en) 2012-08-03 2012-08-31 Dynamic customization and monetization of audio-visual content

Country Status (1)

Country Link
US (1) US10455284B2 (en)

Families Citing this family (54)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20080052104A1 (en) * 2005-07-01 2008-02-28 Searete Llc Group content substitution in media works
US9426387B2 (en) * 2005-07-01 2016-08-23 Invention Science Fund I, Llc Image anonymization
US20070266049A1 (en) * 2005-07-01 2007-11-15 Searete Llc, A Limited Liability Corporation Of The State Of Delaware Implementation of media content alteration
US8910033B2 (en) * 2005-07-01 2014-12-09 The Invention Science Fund I, Llc Implementing group content substitution in media works
US20090150444A1 (en) * 2005-07-01 2009-06-11 Searete Llc, A Limited Liability Corporation Of The State Of Delaware Media markup for audio content alteration
US20090151004A1 (en) * 2005-07-01 2009-06-11 Searete Llc, A Limited Liability Corporation Of The State Of Delaware Media markup for visual content alteration
US20090037278A1 (en) * 2005-07-01 2009-02-05 Searete Llc, A Limited Liability Corporation Of The State Of Delaware Implementing visual substitution options in media works
US20080052161A1 (en) * 2005-07-01 2008-02-28 Searete Llc Alteration of promotional content in media works
US9065979B2 (en) 2005-07-01 2015-06-23 The Invention Science Fund I, Llc Promotional placement in media works
US20070263865A1 (en) * 2005-07-01 2007-11-15 Searete Llc, A Limited Liability Corporation Of The State Of Delaware Authorization rights for substitute media content
US20070005423A1 (en) * 2005-07-01 2007-01-04 Searete Llc, A Limited Liability Corporation Of The State Of Delaware Providing promotional content
US20090235364A1 (en) * 2005-07-01 2009-09-17 Searete Llc, A Limited Liability Corporation Of The State Of Delaware Media markup for promotional content alteration
US9583141B2 (en) 2005-07-01 2017-02-28 Invention Science Fund I, Llc Implementing audio substitution options in media works
US20090037243A1 (en) * 2005-07-01 2009-02-05 Searete Llc, A Limited Liability Corporation Of The State Of Delaware Audio substitution options in media works
US20090210946A1 (en) * 2005-07-01 2009-08-20 Searete Llc, A Limited Liability Corporation Of The State Of Delaware Media markup for promotional audio content
US20070294720A1 (en) * 2005-07-01 2007-12-20 Searete Llc Promotional placement in media works
US20080086380A1 (en) * 2005-07-01 2008-04-10 Searete Llc, A Limited Liability Corporation Of The State Of Delaware Alteration of promotional content in media works
US20080013859A1 (en) * 2005-07-01 2008-01-17 Searete Llc, A Limited Liability Corporation Of The State Of Delaware Implementation of media content alteration
US9230601B2 (en) * 2005-07-01 2016-01-05 Invention Science Fund I, Llc Media markup system for content alteration in derivative works
US20080010083A1 (en) * 2005-07-01 2008-01-10 Searete Llc, A Limited Liability Corporation Of The State Of Delaware Approval technique for media content alteration
US20100154065A1 (en) * 2005-07-01 2010-06-17 Searete Llc, A Limited Liability Corporation Of The State Of Delaware Media markup for user-activated content alteration
US20070276757A1 (en) * 2005-07-01 2007-11-29 Searete Llc, A Limited Liability Corporation Of The State Of Delaware Approval technique for media content alteration
US20090204475A1 (en) * 2005-07-01 2009-08-13 Searete Llc, A Limited Liability Corporation Of The State Of Delaware Media markup for promotional visual content
US20090150199A1 (en) * 2005-07-01 2009-06-11 Searete Llc, A Limited Liability Corporation Of The State Of Delaware Visual substitution options in media works
US20100017885A1 (en) * 2005-07-01 2010-01-21 Searete Llc, A Limited Liability Corporation Of The State Of Delaware Media markup identifier for alterable promotional segments
US9092928B2 (en) * 2005-07-01 2015-07-28 The Invention Science Fund I, Llc Implementing group content substitution in media works
US20080180539A1 (en) * 2007-01-31 2008-07-31 Searete Llc, A Limited Liability Corporation Image anonymization
US20080244755A1 (en) * 2007-03-30 2008-10-02 Searete Llc, A Limited Liability Corporation Of The State Of Delaware Authorization for media content alteration
US20080270161A1 (en) * 2007-04-26 2008-10-30 Searete Llc, A Limited Liability Corporation Of The State Of Delaware Authorization rights for substitute media content
US9215512B2 (en) 2007-04-27 2015-12-15 Invention Science Fund I, Llc Implementation of media content alteration
TWI533685B (en) * 2012-10-31 2016-05-11 Inst Information Industry Scene control system, method and recording medium
US9357165B2 (en) * 2012-11-16 2016-05-31 At&T Intellectual Property I, Lp Method and apparatus for providing video conferencing
US20140181633A1 (en) * 2012-12-20 2014-06-26 Stanley Mo Method and apparatus for metadata directed dynamic and personal data curation
US9687745B2 (en) * 2013-08-22 2017-06-27 Riot Games, Inc. Systems and methods that enable customizable teams for multi-player online games
GB201317288D0 (en) * 2013-09-30 2013-11-13 Nokia Corp Editing image data
US20150170446A1 (en) * 2013-12-12 2015-06-18 Microsoft Corporation Access tracking and restriction
KR101891420B1 (en) * 2013-12-24 2018-08-23 인텔 코포레이션 Content protection for data as a service (daas)
US10575039B2 (en) * 2014-02-13 2020-02-25 Piksel, Inc. Delivering media content
CN107079185A (en) * 2014-09-26 2017-08-18 惠普发展公司有限责任合伙企业 Content is shown
US10063925B2 (en) 2014-12-23 2018-08-28 Western Digital Technologies, Inc. Providing digital video assets with multiple age rating levels
US9530426B1 (en) * 2015-06-24 2016-12-27 Microsoft Technology Licensing, Llc Filtering sounds for conferencing applications
US20170064405A1 (en) * 2015-08-26 2017-03-02 Caavo Inc System and method for personalizing and recommending content
US10579879B2 (en) * 2016-08-10 2020-03-03 Vivint, Inc. Sonic sensing
EP3616411A1 (en) * 2017-04-26 2020-03-04 Koninklijke KPN N.V. Personalized multicast content
US11606621B2 (en) * 2017-06-15 2023-03-14 At&T Intellectual Property I, L.P. Method of providing personalized channel change lists
EP3646608A1 (en) * 2017-06-29 2020-05-06 Telefonaktiebolaget LM Ericsson (PUBL) Adapting live content
US10715883B2 (en) 2017-09-06 2020-07-14 Rovi Guides, Inc. Systems and methods for generating summaries of missed portions of media assets
KR102429556B1 (en) * 2017-12-05 2022-08-04 삼성전자주식회사 Display apparatus and audio outputting method
US11153254B2 (en) * 2018-01-02 2021-10-19 International Business Machines Corporation Meme intelligent conversion
US11252483B2 (en) * 2018-11-29 2022-02-15 Rovi Guides, Inc. Systems and methods for summarizing missed portions of storylines
US11232129B2 (en) * 2019-03-26 2022-01-25 At&T Intellectual Property I, L.P. Method for content synchronization and replacement
US11818426B2 (en) * 2019-11-14 2023-11-14 Dish Network L.L.C. Method and system for adaptive audio modification
EP3944100A1 (en) * 2020-07-20 2022-01-26 Mimi Hearing Technologies GmbH Method of selecting a suitable content for subjective preference judgement
US11673059B2 (en) 2021-05-18 2023-06-13 Roblox Corporation Automatic presentation of suitable content

Citations (133)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4260229A (en) 1978-01-23 1981-04-07 Bloomstein Richard W Creating visual images of lip movements
US4569026A (en) 1979-02-05 1986-02-04 Best Robert M TV Movies that talk back
US4600281A (en) 1985-03-29 1986-07-15 Bloomstein Richard W Altering facial displays in cinematic works
US4884972A (en) 1986-11-26 1989-12-05 Bright Star Technology, Inc. Speech synchronized animation
US5111409A (en) 1989-07-21 1992-05-05 Elon Gasper Authoring and use systems for sound synchronized animation
US5623587A (en) 1993-10-15 1997-04-22 Kideo Productions, Inc. Method and apparatus for producing an electronic image
US5926575A (en) 1995-11-07 1999-07-20 Telecommunications Advancement Organization Of Japan Model-based coding/decoding method and system
US6054999A (en) 1988-03-22 2000-04-25 Strandberg; Oerjan Method and apparatus for computer supported animation
US6317593B1 (en) 1996-08-12 2001-11-13 Gateway, Inc. Intelligent cellular telephone function
US20020029384A1 (en) 2000-07-20 2002-03-07 Griggs Theodore L. Mechanism for distributing content data
US20020063714A1 (en) 2000-10-04 2002-05-30 Michael Haas Interactive, multimedia advertising systems and methods
US20020075318A1 (en) 2000-12-20 2002-06-20 Hong Yang System and method for providing adaptive scaling of selected features in an integrated receiver decoder
US20020077900A1 (en) 2000-12-14 2002-06-20 Thompson Tiffany A. Internet protocol-based interstitial advertising
US20020120931A1 (en) 2001-02-20 2002-08-29 Thomas Huber Content based video selection
US20020133397A1 (en) 2001-01-16 2002-09-19 Wilkins Christopher M. Distributed ad flight management
US20030051256A1 (en) 2001-09-07 2003-03-13 Akira Uesaki Video distribution device and a video receiving device
US20030163371A1 (en) 2000-04-11 2003-08-28 Jeremy Beard System and method for presenting information over time to a user
US20040181592A1 (en) 2001-02-22 2004-09-16 Sony Corporation And Sony Electronics, Inc. Collaborative computer-based production system including annotation, versioning and remote interaction
US20050125718A1 (en) * 2002-01-08 2005-06-09 Koninklijke Philips Electronics N. V Controlling application devices simultaneously
US20050138656A1 (en) 1999-09-24 2005-06-23 United Video Properties, Inc. Interactive television program guide with enhanced user interface
US20060010240A1 (en) 2003-10-02 2006-01-12 Mei Chuah Intelligent collaborative expression in support of socialization of devices
US20060037037A1 (en) 2004-06-14 2006-02-16 Tony Miranz System and method for providing virtual video on demand
US20060064717A1 (en) 2004-09-14 2006-03-23 Sony Corporation Information processing device, information processing method and program therefor
US7020888B2 (en) 2000-11-27 2006-03-28 Intellocity Usa, Inc. System and method for providing an omnimedia package
US20060074550A1 (en) 2004-09-20 2006-04-06 Freer Carl J System and method for distributing multimedia content via mobile wireless platforms
US20060116965A1 (en) * 2003-07-14 2006-06-01 Takahiro Kudo Content delivery apparatus and content reproduction apparatus
US20060174264A1 (en) 2002-12-13 2006-08-03 Sony Electronics Inc. Content personalization for digital content
US7109993B2 (en) 1995-10-08 2006-09-19 Yissum Research Development Company Of The Hebrew University Of Jerusalem Method and system for the automatic computerized audio visual dubbing of movies
US20070005795A1 (en) * 1999-10-22 2007-01-04 Activesky, Inc. Object oriented video system
US20070099684A1 (en) 2005-11-03 2007-05-03 Evans Butterworth System and method for implementing an interactive storyline
US20070122786A1 (en) * 2005-11-29 2007-05-31 Broadcom Corporation Video karaoke system
US20070155307A1 (en) 2006-01-03 2007-07-05 Apple Computer, Inc. Media data transfer
US20070162951A1 (en) 2000-04-28 2007-07-12 Rashkovskiy Oleg B Providing content interruptions
US20070165022A1 (en) 1998-07-15 2007-07-19 Shmuel Peleg Method and system for the automatic computerized audio visual dubbing of movies
US20070214473A1 (en) 2006-03-01 2007-09-13 Barton James M Customizing DVR functionality
US20070244750A1 (en) 2006-04-18 2007-10-18 Sbc Knowledge Ventures L.P. Method and apparatus for selecting advertising
US20070271580A1 (en) 2006-05-16 2007-11-22 Bellsouth Intellectual Property Corporation Methods, Apparatus and Computer Program Products for Audience-Adaptive Control of Content Presentation Based on Sensed Audience Demographics
US20070288978A1 (en) 2006-06-08 2007-12-13 Ajp Enterprises, Llp Systems and methods of customized television programming over the internet
US20070294740A1 (en) * 2000-08-31 2007-12-20 Eddie Drake Real-time audience monitoring, content rating, and content enhancing
US20080028422A1 (en) 2005-07-01 2008-01-31 Searete Llc, A Limited Liability Corporation Of The State Of Delaware Implementation of media content alteration
US20080065468A1 (en) 2006-09-07 2008-03-13 Charles John Berg Methods for Measuring Emotive Response and Selection Preference
US20080109843A1 (en) * 2006-09-14 2008-05-08 Shah Ullah Methods and systems for securing content played on mobile devices
US20080126193A1 (en) 2006-11-27 2008-05-29 Grocery Shopping Network Ad delivery and implementation system
US20080168489A1 (en) 2007-01-10 2008-07-10 Steven Schraga Customized program insertion system
US20080215436A1 (en) 2006-12-15 2008-09-04 Joseph Roberts System for delivering advertisements to wireless communication devices
US20080250468A1 (en) 2007-04-05 2008-10-09 Sbc Knowledge Ventures. L.P. System and method for scheduling presentation of future video event data
US20080266324A1 (en) 2007-04-30 2008-10-30 Navteq North America, Llc Street level video simulation display system and method
US20080320545A1 (en) 2007-06-22 2008-12-25 Schwartz Richard T System and method for providing audio-visual programming with alternative content
US20090048914A1 (en) * 2007-08-13 2009-02-19 Research In Motion Limited System and method for facilitating targeted mobile advertisement using pre-loaded ad content
US20090083814A1 (en) * 2007-09-25 2009-03-26 Kabushiki Kaisha Toshiba Apparatus and method for outputting video images, and purchasing system
US20090089249A1 (en) 2007-10-01 2009-04-02 Verosub Ellis M Techniques for Correlating Events to Digital Media Assets
US20090119704A1 (en) 2004-04-23 2009-05-07 Koninklijke Philips Electronics, N.V. Method and apparatus to catch up with a running broadcast or stored content
US20090118016A1 (en) 2007-11-01 2009-05-07 Guy Ben-Artzi System and method for mobile games
US20090138805A1 (en) 2007-11-21 2009-05-28 Gesturetek, Inc. Media preferences
US20090138332A1 (en) 2007-11-23 2009-05-28 Dimitri Kanevsky System and method for dynamically adapting a user slide show presentation to audience behavior
US20090144772A1 (en) 2007-11-30 2009-06-04 Google Inc. Video object tag creation and processing
US20090172022A1 (en) * 2007-12-28 2009-07-02 Microsoft Corporation Dynamic storybook
US20090187944A1 (en) * 2008-01-21 2009-07-23 At&T Knowledge Ventures, Lp System and Method of Providing Recommendations Related to a Service System
US20090210902A1 (en) * 2000-02-25 2009-08-20 Malcolm Slaney Targeted television content display
US20090222853A1 (en) 2008-02-29 2009-09-03 At&T Knowledge Ventures, L.P. Advertisement Replacement System
US20090249409A1 (en) 2008-03-25 2009-10-01 International Business Machines Corporation Dynamic rebroadcast scheduling of videos
US20090254931A1 (en) * 2008-04-07 2009-10-08 Pizzurro Alfred J Systems and methods of interactive production marketing
US20090265214A1 (en) 2008-04-18 2009-10-22 Apple Inc. Advertisement in Operating System
US20090282093A1 (en) 2008-05-06 2009-11-12 Microsoft Corporation Media content programming, delivery, and consumption
US7631327B2 (en) 2001-08-08 2009-12-08 Accenture Global Services Gmbh Enhanced custom content television
US20100077314A1 (en) 2007-03-19 2010-03-25 At&T Corp. System and Measured Method for Multilingual Collaborative Network Interaction
US20100088406A1 (en) 2008-10-08 2010-04-08 Samsung Electronics Co., Ltd. Method for providing dynamic contents service by using analysis of user's response and apparatus using same
US20100094841A1 (en) * 2008-10-14 2010-04-15 Disney Enterprises, Inc. Method and system for producing customized content
US20100125544A1 (en) 2008-11-18 2010-05-20 Electronics And Telecommunications Research Institute Method and apparatus for recommending personalized content
US20100188579A1 (en) * 2009-01-29 2010-07-29 At&T Intellectual Property I, L.P. System and Method to Control and Present a Picture-In-Picture (PIP) Window Based on Movement Data
US20100202750A1 (en) * 2005-09-16 2010-08-12 Flixor, Inc., A California Corporation Personalizing a Video
US20100257551A1 (en) 2009-04-01 2010-10-07 Embarq Holdings Company, Llc Dynamic video content
US7865567B1 (en) 1993-12-02 2011-01-04 Discovery Patent Holdings, Llc Virtual on-demand electronic book
US20110010231A1 (en) 2001-12-14 2011-01-13 Price William P Audiovisual system and method for displaying segmented advertisements tailored to the characteristic viewing preferences of a user
US20110029099A1 (en) 2008-04-11 2011-02-03 Thomson Licensing Method for automated television production
US20110064388A1 (en) * 2006-07-11 2011-03-17 Pandoodle Corp. User Customized Animated Video and Method For Making the Same
US20110066730A1 (en) * 2005-01-03 2011-03-17 Luc Julia System and method for delivering content to users on a network
US20110106744A1 (en) * 2009-04-16 2011-05-05 Ralf Becker Content recommendation device, content recommendation system, content recommendation method, program, and integrated circuit
US20110125777A1 (en) * 2009-11-25 2011-05-26 At&T Intellectual Property I, L.P. Sense and Match Advertising Content
US20110122094A1 (en) 2009-11-25 2011-05-26 Coretronic Corporation Optical touch apparatus and optical touch display apparatus
US20110200303A1 (en) 2010-02-12 2011-08-18 Telefonica, S.A. Method of Video Playback
US8016653B2 (en) 2007-02-01 2011-09-13 Sportvision, Inc. Three dimensional virtual rendering of a live event
US20110271301A1 (en) 2008-12-23 2011-11-03 Dish Network L.L.C. Systems and methods for providing viewer-related information on a display based upon wireless identification of a particular viewer
US8059201B2 (en) 2003-10-24 2011-11-15 Koninklijke Philips Electronics N.V. Method and apparatus for providing a video signal
US20110321075A1 (en) 2010-06-29 2011-12-29 International Business Machines Corporation Dynamically modifying media content for presentation to a group audience
US20110321082A1 (en) 2010-06-29 2011-12-29 At&T Intellectual Property I, L.P. User-Defined Modification of Video Content
US20120005595A1 (en) 2010-06-30 2012-01-05 Verizon Patent And Licensing, Inc. Users as actors in content
US20120030699A1 (en) 2010-08-01 2012-02-02 Umesh Amin Systems and methods for storing and rendering atleast an user preference based media content
US20120060176A1 (en) * 2010-09-08 2012-03-08 Chai Crx K Smart media selection based on viewer user presence
US20120072936A1 (en) 2010-09-20 2012-03-22 Microsoft Corporation Automatic Customized Advertisement Generation System
US20120072944A1 (en) * 2010-09-16 2012-03-22 Verizon New Jersey Method and apparatus for providing seamless viewing
US20120072940A1 (en) 2010-09-21 2012-03-22 Brian Shane Fuhrer Methods, apparatus, and systems to collect audience measurement data
US20120089908A1 (en) 2010-10-07 2012-04-12 Sony Computer Entertainment America, LLC. Leveraging geo-ip information to select default avatar
US20120094768A1 (en) * 2010-10-14 2012-04-19 FlixMaster Web-based interactive game utilizing video components
US20120110027A1 (en) * 2008-10-28 2012-05-03 Fernando Falcon Audience measurement system
US20120112877A1 (en) 2010-11-08 2012-05-10 Cox Communications, Inc. Automated Device/System Setup Based On Presence Information
US20120124604A1 (en) 2010-11-12 2012-05-17 Microsoft Corporation Automatic passive and anonymous feedback system
US20120124456A1 (en) 2010-11-12 2012-05-17 Microsoft Corporation Audience-based presentation and customization of content
US20120135684A1 (en) * 2010-11-30 2012-05-31 Cox Communications, Inc. Systems and methods for customizing broadband content based upon passive presence detection of users
US20120157197A1 (en) * 2010-12-17 2012-06-21 XMG Studio Inc. Systems and methods of changing storyline based on player location
US20120159327A1 (en) 2010-12-16 2012-06-21 Microsoft Corporation Real-time interaction with entertainment content
US20120223952A1 (en) * 2011-03-01 2012-09-06 Sony Computer Entertainment Inc. Information Processing Device Capable of Displaying A Character Representing A User, and Information Processing Method Thereof.
US20120246223A1 (en) 2011-03-02 2012-09-27 Benjamin Zeis Newhouse System and method for distributing virtual and augmented reality scenes through a social network
US20120304206A1 (en) 2011-05-26 2012-11-29 Verizon Patent And Licensing, Inc. Methods and Systems for Presenting an Advertisement Associated with an Ambient Action of a User
US20120317593A1 (en) 2011-06-10 2012-12-13 Myslinski Lucas J Fact checking method and system
US20120324493A1 (en) 2011-06-17 2012-12-20 Microsoft Corporation Interest-based video streams
US20120327172A1 (en) 2011-06-22 2012-12-27 Microsoft Corporation Modifying video regions using mobile device input
US20130006754A1 (en) 2011-06-30 2013-01-03 Microsoft Corporation Multi-step impression campaigns
US20130014145A1 (en) 2011-07-06 2013-01-10 Manish Bhatia Mobile content tracking platform methods
US20130024282A1 (en) 2011-07-23 2013-01-24 Microsoft Corporation Automatic purchase history tracking
US20130046637A1 (en) * 2011-08-19 2013-02-21 Firethorn Mobile, Inc. System and method for interactive promotion of products and services
US20130055087A1 (en) * 2011-08-26 2013-02-28 Gary W. Flint Device, Method, and Graphical User Interface for Editing Videos
US20130067052A1 (en) * 2011-09-13 2013-03-14 Jennifer Reynolds User adaptive http stream manager and method for using same
US20130085805A1 (en) 2012-10-02 2013-04-04 Toyota Motor Sales, U.S.A., Inc. Prioritizing marketing leads based on social media postings
US20130091243A1 (en) 2011-10-10 2013-04-11 Eyeview Inc. Using cloud computing for generating personalized dynamic and broadcast quality videos
US20130132999A1 (en) 2011-11-18 2013-05-23 Verizon Patent And Licensing, Inc. Programming based interactive content
US20130145240A1 (en) 2011-12-05 2013-06-06 Thomas G. Anderson Customizable System for Storytelling
US20130160051A1 (en) 2011-12-15 2013-06-20 Microsoft Corporation Dynamic Personalized Program Content
US20130205332A1 (en) 2012-02-02 2013-08-08 Michael Martin Stream Messaging for Program Stream Automation
US20130219417A1 (en) 2012-02-16 2013-08-22 Comcast Cable Communications, Llc Automated Personalization
US20130283162A1 (en) 2012-04-23 2013-10-24 Sony Mobile Communications Ab System and method for dynamic content modification based on user reactions
US20130290233A1 (en) 2010-08-27 2013-10-31 Bran Ferren Techniques to customize a media processing system
US20130298180A1 (en) * 2010-12-10 2013-11-07 Eldon Technology Limited Content recognition and censorship
US20130312018A1 (en) 2012-05-17 2013-11-21 Cable Television Laboratories, Inc. Personalizing services using presence detection
US20140007148A1 (en) 2012-06-28 2014-01-02 Joshua J. Ratliff System and method for adaptive data processing
US20140036152A1 (en) 2012-07-31 2014-02-06 Google Inc. Video alerts
US8650591B2 (en) 2010-03-09 2014-02-11 Yolanda Prieto Video enabled digital devices for embedding user data in interactive applications
US8726312B1 (en) * 2012-06-06 2014-05-13 Google Inc. Method, apparatus, system and computer-readable medium for dynamically editing and displaying television advertisements to include individualized content based on a user's profile
US8725559B1 (en) 2009-05-12 2014-05-13 Amazon Technologies, Inc. Attribute based advertisement categorization
US20140136318A1 (en) 2012-11-09 2014-05-15 Motorola Mobility Llc Systems and Methods for Advertising to a Group of Users
US20140195345A1 (en) 2013-01-09 2014-07-10 Philip Scott Lyren Customizing advertisements to users
US20140223464A1 (en) 2011-08-15 2014-08-07 Comigo Ltd. Methods and systems for creating and managing multi participant sessions
US9965768B1 (en) 2011-05-19 2018-05-08 Amazon Technologies, Inc. Location-based mobile advertising

Patent Citations (137)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4260229A (en) 1978-01-23 1981-04-07 Bloomstein Richard W Creating visual images of lip movements
US4569026A (en) 1979-02-05 1986-02-04 Best Robert M TV Movies that talk back
US4600281A (en) 1985-03-29 1986-07-15 Bloomstein Richard W Altering facial displays in cinematic works
US4827532A (en) 1985-03-29 1989-05-02 Bloomstein Richard W Cinematic works with altered facial displays
US4884972A (en) 1986-11-26 1989-12-05 Bright Star Technology, Inc. Speech synchronized animation
US6054999A (en) 1988-03-22 2000-04-25 Strandberg; Oerjan Method and apparatus for computer supported animation
US5111409A (en) 1989-07-21 1992-05-05 Elon Gasper Authoring and use systems for sound synchronized animation
US5623587A (en) 1993-10-15 1997-04-22 Kideo Productions, Inc. Method and apparatus for producing an electronic image
US7865567B1 (en) 1993-12-02 2011-01-04 Discovery Patent Holdings, Llc Virtual on-demand electronic book
US7109993B2 (en) 1995-10-08 2006-09-19 Yissum Research Development Company Of The Hebrew University Of Jerusalem Method and system for the automatic computerized audio visual dubbing of movies
US5926575A (en) 1995-11-07 1999-07-20 Telecommunications Advancement Organization Of Japan Model-based coding/decoding method and system
US6317593B1 (en) 1996-08-12 2001-11-13 Gateway, Inc. Intelligent cellular telephone function
US20070165022A1 (en) 1998-07-15 2007-07-19 Shmuel Peleg Method and system for the automatic computerized audio visual dubbing of movies
US20050138656A1 (en) 1999-09-24 2005-06-23 United Video Properties, Inc. Interactive television program guide with enhanced user interface
US20070005795A1 (en) * 1999-10-22 2007-01-04 Activesky, Inc. Object oriented video system
US20090210902A1 (en) * 2000-02-25 2009-08-20 Malcolm Slaney Targeted television content display
US20030163371A1 (en) 2000-04-11 2003-08-28 Jeremy Beard System and method for presenting information over time to a user
US20070162951A1 (en) 2000-04-28 2007-07-12 Rashkovskiy Oleg B Providing content interruptions
US20020029384A1 (en) 2000-07-20 2002-03-07 Griggs Theodore L. Mechanism for distributing content data
US20070294740A1 (en) * 2000-08-31 2007-12-20 Eddie Drake Real-time audience monitoring, content rating, and content enhancing
US20020063714A1 (en) 2000-10-04 2002-05-30 Michael Haas Interactive, multimedia advertising systems and methods
US7020888B2 (en) 2000-11-27 2006-03-28 Intellocity Usa, Inc. System and method for providing an omnimedia package
US20020077900A1 (en) 2000-12-14 2002-06-20 Thompson Tiffany A. Internet protocol-based interstitial advertising
US20020075318A1 (en) 2000-12-20 2002-06-20 Hong Yang System and method for providing adaptive scaling of selected features in an integrated receiver decoder
US20020133397A1 (en) 2001-01-16 2002-09-19 Wilkins Christopher M. Distributed ad flight management
US20020120931A1 (en) 2001-02-20 2002-08-29 Thomas Huber Content based video selection
US20040181592A1 (en) 2001-02-22 2004-09-16 Sony Corporation And Sony Electronics, Inc. Collaborative computer-based production system including annotation, versioning and remote interaction
US7631327B2 (en) 2001-08-08 2009-12-08 Accenture Global Services Gmbh Enhanced custom content television
US7945926B2 (en) 2001-08-08 2011-05-17 Accenture Global Services Limited Enhanced custom content television
US20100083306A1 (en) * 2001-08-08 2010-04-01 Accenture Global Services Gmbh Enhanced custom content television
US20030051256A1 (en) 2001-09-07 2003-03-13 Akira Uesaki Video distribution device and a video receiving device
US20110010231A1 (en) 2001-12-14 2011-01-13 Price William P Audiovisual system and method for displaying segmented advertisements tailored to the characteristic viewing preferences of a user
US20050125718A1 (en) * 2002-01-08 2005-06-09 Koninklijke Philips Electronics N. V Controlling application devices simultaneously
US20060174264A1 (en) 2002-12-13 2006-08-03 Sony Electronics Inc. Content personalization for digital content
US20060116965A1 (en) * 2003-07-14 2006-06-01 Takahiro Kudo Content delivery apparatus and content reproduction apparatus
US20060010240A1 (en) 2003-10-02 2006-01-12 Mei Chuah Intelligent collaborative expression in support of socialization of devices
US8059201B2 (en) 2003-10-24 2011-11-15 Koninklijke Philips Electronics N.V. Method and apparatus for providing a video signal
US20090119704A1 (en) 2004-04-23 2009-05-07 Koninklijke Philips Electronics, N.V. Method and apparatus to catch up with a running broadcast or stored content
US20060037037A1 (en) 2004-06-14 2006-02-16 Tony Miranz System and method for providing virtual video on demand
US20060064717A1 (en) 2004-09-14 2006-03-23 Sony Corporation Information processing device, information processing method and program therefor
US20060074550A1 (en) 2004-09-20 2006-04-06 Freer Carl J System and method for distributing multimedia content via mobile wireless platforms
US20110066730A1 (en) * 2005-01-03 2011-03-17 Luc Julia System and method for delivering content to users on a network
US20080028422A1 (en) 2005-07-01 2008-01-31 Searete Llc, A Limited Liability Corporation Of The State Of Delaware Implementation of media content alteration
US20100202750A1 (en) * 2005-09-16 2010-08-12 Flixor, Inc., A California Corporation Personalizing a Video
US20070099684A1 (en) 2005-11-03 2007-05-03 Evans Butterworth System and method for implementing an interactive storyline
US20070122786A1 (en) * 2005-11-29 2007-05-31 Broadcom Corporation Video karaoke system
US20070155307A1 (en) 2006-01-03 2007-07-05 Apple Computer, Inc. Media data transfer
US20070214473A1 (en) 2006-03-01 2007-09-13 Barton James M Customizing DVR functionality
US20070244750A1 (en) 2006-04-18 2007-10-18 Sbc Knowledge Ventures L.P. Method and apparatus for selecting advertising
US20070271580A1 (en) 2006-05-16 2007-11-22 Bellsouth Intellectual Property Corporation Methods, Apparatus and Computer Program Products for Audience-Adaptive Control of Content Presentation Based on Sensed Audience Demographics
US20070288978A1 (en) 2006-06-08 2007-12-13 Ajp Enterprises, Llp Systems and methods of customized television programming over the internet
US20110064388A1 (en) * 2006-07-11 2011-03-17 Pandoodle Corp. User Customized Animated Video and Method For Making the Same
US20080065468A1 (en) 2006-09-07 2008-03-13 Charles John Berg Methods for Measuring Emotive Response and Selection Preference
US20080109843A1 (en) * 2006-09-14 2008-05-08 Shah Ullah Methods and systems for securing content played on mobile devices
US20080126193A1 (en) 2006-11-27 2008-05-29 Grocery Shopping Network Ad delivery and implementation system
US20080215436A1 (en) 2006-12-15 2008-09-04 Joseph Roberts System for delivering advertisements to wireless communication devices
US20080168489A1 (en) 2007-01-10 2008-07-10 Steven Schraga Customized program insertion system
US20110211094A1 (en) 2007-01-10 2011-09-01 Steven Schraga Customized program insertion system
US8016653B2 (en) 2007-02-01 2011-09-13 Sportvision, Inc. Three dimensional virtual rendering of a live event
US20100077314A1 (en) 2007-03-19 2010-03-25 At&T Corp. System and Measured Method for Multilingual Collaborative Network Interaction
US20080250468A1 (en) 2007-04-05 2008-10-09 Sbc Knowledge Ventures. L.P. System and method for scheduling presentation of future video event data
US20080266324A1 (en) 2007-04-30 2008-10-30 Navteq North America, Llc Street level video simulation display system and method
US20080320545A1 (en) 2007-06-22 2008-12-25 Schwartz Richard T System and method for providing audio-visual programming with alternative content
US20090048914A1 (en) * 2007-08-13 2009-02-19 Research In Motion Limited System and method for facilitating targeted mobile advertisement using pre-loaded ad content
US20090083814A1 (en) * 2007-09-25 2009-03-26 Kabushiki Kaisha Toshiba Apparatus and method for outputting video images, and purchasing system
US20090089249A1 (en) 2007-10-01 2009-04-02 Verosub Ellis M Techniques for Correlating Events to Digital Media Assets
US20090118016A1 (en) 2007-11-01 2009-05-07 Guy Ben-Artzi System and method for mobile games
US20090138805A1 (en) 2007-11-21 2009-05-28 Gesturetek, Inc. Media preferences
US20090138332A1 (en) 2007-11-23 2009-05-28 Dimitri Kanevsky System and method for dynamically adapting a user slide show presentation to audience behavior
US20090144772A1 (en) 2007-11-30 2009-06-04 Google Inc. Video object tag creation and processing
US20090172022A1 (en) * 2007-12-28 2009-07-02 Microsoft Corporation Dynamic storybook
US20090187944A1 (en) * 2008-01-21 2009-07-23 At&T Knowledge Ventures, Lp System and Method of Providing Recommendations Related to a Service System
US20090222853A1 (en) 2008-02-29 2009-09-03 At&T Knowledge Ventures, L.P. Advertisement Replacement System
US20090249409A1 (en) 2008-03-25 2009-10-01 International Business Machines Corporation Dynamic rebroadcast scheduling of videos
US20090254931A1 (en) * 2008-04-07 2009-10-08 Pizzurro Alfred J Systems and methods of interactive production marketing
US20110029099A1 (en) 2008-04-11 2011-02-03 Thomson Licensing Method for automated television production
US20090265214A1 (en) 2008-04-18 2009-10-22 Apple Inc. Advertisement in Operating System
US20090282093A1 (en) 2008-05-06 2009-11-12 Microsoft Corporation Media content programming, delivery, and consumption
US20100088406A1 (en) 2008-10-08 2010-04-08 Samsung Electronics Co., Ltd. Method for providing dynamic contents service by using analysis of user's response and apparatus using same
US20100094841A1 (en) * 2008-10-14 2010-04-15 Disney Enterprises, Inc. Method and system for producing customized content
US20120110027A1 (en) * 2008-10-28 2012-05-03 Fernando Falcon Audience measurement system
US20100125544A1 (en) 2008-11-18 2010-05-20 Electronics And Telecommunications Research Institute Method and apparatus for recommending personalized content
US20110271301A1 (en) 2008-12-23 2011-11-03 Dish Network L.L.C. Systems and methods for providing viewer-related information on a display based upon wireless identification of a particular viewer
US20100188579A1 (en) * 2009-01-29 2010-07-29 At&T Intellectual Property I, L.P. System and Method to Control and Present a Picture-In-Picture (PIP) Window Based on Movement Data
US20100257551A1 (en) 2009-04-01 2010-10-07 Embarq Holdings Company, Llc Dynamic video content
US20110106744A1 (en) * 2009-04-16 2011-05-05 Ralf Becker Content recommendation device, content recommendation system, content recommendation method, program, and integrated circuit
US8725559B1 (en) 2009-05-12 2014-05-13 Amazon Technologies, Inc. Attribute based advertisement categorization
US20110125777A1 (en) * 2009-11-25 2011-05-26 At&T Intellectual Property I, L.P. Sense and Match Advertising Content
US20110122094A1 (en) 2009-11-25 2011-05-26 Coretronic Corporation Optical touch apparatus and optical touch display apparatus
US20110200303A1 (en) 2010-02-12 2011-08-18 Telefonica, S.A. Method of Video Playback
US8650591B2 (en) 2010-03-09 2014-02-11 Yolanda Prieto Video enabled digital devices for embedding user data in interactive applications
US20110321082A1 (en) 2010-06-29 2011-12-29 At&T Intellectual Property I, L.P. User-Defined Modification of Video Content
US20110321075A1 (en) 2010-06-29 2011-12-29 International Business Machines Corporation Dynamically modifying media content for presentation to a group audience
US20120005595A1 (en) 2010-06-30 2012-01-05 Verizon Patent And Licensing, Inc. Users as actors in content
US20120030699A1 (en) 2010-08-01 2012-02-02 Umesh Amin Systems and methods for storing and rendering at least a user preference based media content
US20130290233A1 (en) 2010-08-27 2013-10-31 Bran Ferren Techniques to customize a media processing system
US20120060176A1 (en) * 2010-09-08 2012-03-08 Chai Crx K Smart media selection based on viewer user presence
US20120072944A1 (en) * 2010-09-16 2012-03-22 Verizon New Jersey Method and apparatus for providing seamless viewing
US20120072936A1 (en) 2010-09-20 2012-03-22 Microsoft Corporation Automatic Customized Advertisement Generation System
US20120072940A1 (en) 2010-09-21 2012-03-22 Brian Shane Fuhrer Methods, apparatus, and systems to collect audience measurement data
US20120089908A1 (en) 2010-10-07 2012-04-12 Sony Computer Entertainment America, LLC. Leveraging geo-ip information to select default avatar
US20120094768A1 (en) * 2010-10-14 2012-04-19 FlixMaster Web-based interactive game utilizing video components
US20120112877A1 (en) 2010-11-08 2012-05-10 Cox Communications, Inc. Automated Device/System Setup Based On Presence Information
US20120124604A1 (en) 2010-11-12 2012-05-17 Microsoft Corporation Automatic passive and anonymous feedback system
US20120124456A1 (en) 2010-11-12 2012-05-17 Microsoft Corporation Audience-based presentation and customization of content
US20120135684A1 (en) * 2010-11-30 2012-05-31 Cox Communications, Inc. Systems and methods for customizing broadband content based upon passive presence detection of users
US20130298180A1 (en) * 2010-12-10 2013-11-07 Eldon Technology Limited Content recognition and censorship
US20120159327A1 (en) 2010-12-16 2012-06-21 Microsoft Corporation Real-time interaction with entertainment content
US20120157197A1 (en) * 2010-12-17 2012-06-21 XMG Studio Inc. Systems and methods of changing storyline based on player location
US20120223952A1 (en) * 2011-03-01 2012-09-06 Sony Computer Entertainment Inc. Information Processing Device Capable of Displaying A Character Representing A User, and Information Processing Method Thereof.
US20120246223A1 (en) 2011-03-02 2012-09-27 Benjamin Zeis Newhouse System and method for distributing virtual and augmented reality scenes through a social network
US9965768B1 (en) 2011-05-19 2018-05-08 Amazon Technologies, Inc. Location-based mobile advertising
US20120304206A1 (en) 2011-05-26 2012-11-29 Verizon Patent And Licensing, Inc. Methods and Systems for Presenting an Advertisement Associated with an Ambient Action of a User
US20120317593A1 (en) 2011-06-10 2012-12-13 Myslinski Lucas J Fact checking method and system
US20120324493A1 (en) 2011-06-17 2012-12-20 Microsoft Corporation Interest-based video streams
US20120327172A1 (en) 2011-06-22 2012-12-27 Microsoft Corporation Modifying video regions using mobile device input
US20130006754A1 (en) 2011-06-30 2013-01-03 Microsoft Corporation Multi-step impression campaigns
US20130014145A1 (en) 2011-07-06 2013-01-10 Manish Bhatia Mobile content tracking platform methods
US20130024282A1 (en) 2011-07-23 2013-01-24 Microsoft Corporation Automatic purchase history tracking
US20140223464A1 (en) 2011-08-15 2014-08-07 Comigo Ltd. Methods and systems for creating and managing multi participant sessions
US20130046637A1 (en) * 2011-08-19 2013-02-21 Firethorn Mobile, Inc. System and method for interactive promotion of products and services
US20130055087A1 (en) * 2011-08-26 2013-02-28 Gary W. Flint Device, Method, and Graphical User Interface for Editing Videos
US20130067052A1 (en) * 2011-09-13 2013-03-14 Jennifer Reynolds User adaptive http stream manager and method for using same
US20130091243A1 (en) 2011-10-10 2013-04-11 Eyeview Inc. Using cloud computing for generating personalized dynamic and broadcast quality videos
US20130132999A1 (en) 2011-11-18 2013-05-23 Verizon Patent And Licensing, Inc. Programming based interactive content
US20130145240A1 (en) 2011-12-05 2013-06-06 Thomas G. Anderson Customizable System for Storytelling
US20130160051A1 (en) 2011-12-15 2013-06-20 Microsoft Corporation Dynamic Personalized Program Content
US20130205332A1 (en) 2012-02-02 2013-08-08 Michael Martin Stream Messaging for Program Stream Automation
US20130219417A1 (en) 2012-02-16 2013-08-22 Comcast Cable Communications, Llc Automated Personalization
US20130283162A1 (en) 2012-04-23 2013-10-24 Sony Mobile Communications Ab System and method for dynamic content modification based on user reactions
US20130312018A1 (en) 2012-05-17 2013-11-21 Cable Television Laboratories, Inc. Personalizing services using presence detection
US8726312B1 (en) * 2012-06-06 2014-05-13 Google Inc. Method, apparatus, system and computer-readable medium for dynamically editing and displaying television advertisements to include individualized content based on a user's profile
US20140007148A1 (en) 2012-06-28 2014-01-02 Joshua J. Ratliff System and method for adaptive data processing
US20140036152A1 (en) 2012-07-31 2014-02-06 Google Inc. Video alerts
US20130085805A1 (en) 2012-10-02 2013-04-04 Toyota Motor Sales, U.S.A., Inc. Prioritizing marketing leads based on social media postings
US20140136318A1 (en) 2012-11-09 2014-05-15 Motorola Mobility Llc Systems and Methods for Advertising to a Group of Users
US20140195345A1 (en) 2013-01-09 2014-07-10 Philip Scott Lyren Customizing advertisements to users

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
PCT International Search Report; International App. No. PCT/US2013/053444; Jan. 17, 2014; pp. 1-2.
Young, Robert; "Google . . . the OS for Advertising"; GIGAOM; Nov. 9, 2006; http://gigaom.com/2006/11/09/google-the-os-for-advertising/.

Also Published As

Publication number Publication date
US20140068661A1 (en) 2014-03-06

Similar Documents

Publication Publication Date Title
US10455284B2 (en) Dynamic customization and monetization of audio-visual content
US9300994B2 (en) Methods and systems for viewing dynamically customized audio-visual content
US20140040039A1 (en) Methods and systems for viewing dynamically customized advertising content
US20140039991A1 (en) Dynamic customization of advertising content
US20140040946A1 (en) Dynamic customization of audio visual content using personalizing information
US10237613B2 (en) Methods and systems for viewing dynamically customized audio-visual content
US20140040931A1 (en) Dynamic customization and monetization of audio-visual content
US11200028B2 (en) Apparatus, systems and methods for presenting content reviews in a virtual world
US11860936B2 (en) Method and system for producing customized content
US20200014979A1 (en) Methods and systems for providing relevant supplemental content to a user device
Klinger Beyond the multiplex: Cinema, new technologies, and the home
US9471924B2 (en) Control of digital media character replacement using personalized rulesets
US20130268955A1 (en) Highlighting or augmenting a media program
US20140172891A1 (en) Methods and systems for displaying location specific content
JP2015521413A (en) Determining the subsequent part of the current media program
US11343595B2 (en) User interface elements for content selection in media narrative presentation
US9516373B1 (en) Presets of synchronized second screen functions
US9596502B1 (en) Integration of multiple synchronization methodologies
US20140040945A1 (en) Dynamic customization of audio visual content using personalizing information
Orlebar The television handbook
US20150172773A1 (en) Systems and methods for selectively printing three-dimensional objects within media assets
Kyncl et al. Streampunks: How YouTube and the new creators are transforming our lives
Noam The content, impact, and regulation of streaming video: The next generation of media emerges
Johnston ‘Pop-out footballers’, pop concerts and popular films: The past, present and future of three-dimensional television
US20170072302A1 (en) Movie Master Game Method

Legal Events

Date Code Title Description
AS Assignment

Owner name: ELWHA LLC, A LIMITED LIABILITY COMPANY OF THE STAT

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:GATES, WILLIAM H., III;GERRITY, DANIEL A.;HOLMAN, PAUL;AND OTHERS;SIGNING DATES FROM 20121221 TO 20130312;REEL/FRAME:030036/0001

STPP Information on status: patent application and granting procedure in general

Free format text: FINAL REJECTION MAILED

STPP Information on status: patent application and granting procedure in general

Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION

STPP Information on status: patent application and granting procedure in general

Free format text: NOTICE OF ALLOWANCE MAILED -- APPLICATION RECEIVED IN OFFICE OF PUBLICATIONS

STPP Information on status: patent application and granting procedure in general

Free format text: AWAITING TC RESP., ISSUE FEE NOT PAID

STPP Information on status: patent application and granting procedure in general

Free format text: NOTICE OF ALLOWANCE MAILED -- APPLICATION RECEIVED IN OFFICE OF PUBLICATIONS

STCF Information on status: patent grant

Free format text: PATENTED CASE

CC Certificate of correction
FEPP Fee payment procedure

Free format text: MAINTENANCE FEE REMINDER MAILED (ORIGINAL EVENT CODE: REM.); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

LAPS Lapse for failure to pay maintenance fees

Free format text: PATENT EXPIRED FOR FAILURE TO PAY MAINTENANCE FEES (ORIGINAL EVENT CODE: EXP.); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

STCH Information on status: patent discontinuation

Free format text: PATENT EXPIRED DUE TO NONPAYMENT OF MAINTENANCE FEES UNDER 37 CFR 1.362

FP Lapsed due to failure to pay maintenance fee

Effective date: 20231022