WO2021152280A1 - An entertainment system and method of delivery of augmented content - Google Patents

An entertainment system and method of delivery of augmented content

Info

Publication number
WO2021152280A1
Authority
WO
WIPO (PCT)
Prior art keywords
payload
primary
ancillary
content
data
Application number
PCT/GB2020/050181
Other languages
French (fr)
Inventor
Jason Charles Regler
Original Assignee
Jason Charles Regler
Application filed by Jason Charles Regler
Priority to PCT/GB2020/050181
Publication of WO2021152280A1

Classifications

    • H ELECTRICITY
    • H05 ELECTRIC TECHNIQUES NOT OTHERWISE PROVIDED FOR
    • H05B ELECTRIC HEATING; ELECTRIC LIGHT SOURCES NOT OTHERWISE PROVIDED FOR; CIRCUIT ARRANGEMENTS FOR ELECTRIC LIGHT SOURCES, IN GENERAL
    • H05B47/00 Circuit arrangements for operating light sources in general, i.e. where the type of light source is not relevant
    • H05B47/10 Controlling the light source
    • H05B47/165 Controlling the light source following a pre-assigned programmed sequence; Logic control [LC]
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/41 Structure of client; Structure of client peripherals
    • H04N21/4104 Peripherals receiving signals from specially adapted client devices
    • H04N21/4126 The peripheral being portable, e.g. PDAs or mobile phones
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/41 Structure of client; Structure of client peripherals
    • H04N21/4104 Peripherals receiving signals from specially adapted client devices
    • H04N21/4131 Peripherals receiving signals from specially adapted client devices home appliance, e.g. lighting, air conditioning system, metering devices
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43 Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/4302 Content synchronisation processes, e.g. decoder synchronisation
    • H04N21/4307 Synchronising the rendering of multiple content streams or additional data on devices, e.g. synchronisation of audio on a mobile phone with the video output on the TV screen
    • H04N21/43079 Synchronising the rendering of multiple content streams or additional data on devices, e.g. synchronisation of audio on a mobile phone with the video output on the TV screen of additional data with content streams on multiple devices
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43 Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/436 Interfacing a local distribution network, e.g. communicating with another STB or one or more peripheral devices inside the home
    • H04N21/4363 Adapting the video or multiplex stream to a specific local network, e.g. a IEEE 1394 or Bluetooth® network
    • H04N21/43637 Adapting the video or multiplex stream to a specific local network, e.g. a IEEE 1394 or Bluetooth® network involving a wireless protocol, e.g. Bluetooth, RF or wireless LAN [IEEE 802.11]
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/80 Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N21/81 Monomedia components thereof
    • H04N21/8166 Monomedia components thereof involving executable data, e.g. software
    • H04N21/8186 Monomedia components thereof involving executable data, e.g. software specially adapted to be executed by a peripheral of the client device, e.g. by a reprogrammable remote control
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/80 Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N21/81 Monomedia components thereof
    • H04N21/812 Monomedia components thereof involving advertisement data
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/80 Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N21/85 Assembly of content; Generation of multimedia applications
    • H04N21/858 Linking data to content, e.g. by linking an URL to a video object, by creating a hotspot
    • H04N21/8586 Linking data to content, e.g. by linking an URL to a video object, by creating a hotspot by using a URL
    • H ELECTRICITY
    • H05 ELECTRIC TECHNIQUES NOT OTHERWISE PROVIDED FOR
    • H05B ELECTRIC HEATING; ELECTRIC LIGHT SOURCES NOT OTHERWISE PROVIDED FOR; CIRCUIT ARRANGEMENTS FOR ELECTRIC LIGHT SOURCES, IN GENERAL
    • H05B47/00 Circuit arrangements for operating light sources in general, i.e. where the type of light source is not relevant
    • H05B47/10 Controlling the light source
    • H05B47/175 Controlling the light source by remote control
    • H05B47/19 Controlling the light source by remote control via wireless transmission

Definitions

  • This invention relates, in general, to an entertainment system and is particularly, but not exclusively, applicable to a system and associated method in which augmented content, including control information and/or complementary data, is delivered in a multi-media environment.
  • ancillary content to the browser’s user, in the form of product advertisements that, in some instances, are targeted at the user following profiling of the user in a registration-like process.
  • This ancillary information is therefore pushed with access to a particular web-page, and the precise content potentially targeted according to partitioning of the user's profile into a particular characteristic or demographic. Information is therefore provided directly to the device used to undertake web access and browsing activity.
  • a method of delivering augmented content in an interconnected network of a first device wirelessly connected to at least a second device wherein the first device includes a display and the second device includes a transceiver for receiving, from the first device, a signal containing at least two audio channels wherein a first one of the audio channels includes time-varying audio and a second one of the pair of audio channels contains time-aligned control signaling that correlates a sensory-perceivable function with the time-varying audio
  • the method comprising: providing a media source file to the first device, said media source file including at least said at least two audio channels; selectively displaying, when available in the media source file, video content from the media source file on the display of the first device; transmitting the signal to the second device, wherein the signal is derived from an aspect of the media source file; at the second device, processing the signal received by a receiver function at the second device to cause a speaker remote to the first device to output the time-varying audio.
  • the secondary device may be an adaptor and the method further includes coupling the adaptor into an input port of a wireless speaker.
  • the time-aligned control signaling preferably includes sub-audible tones
  • the third device is an LED light whose operation is controlled in response to the instruction.
  • the media source file is one of: a) streamed to the first device over a communications link; b) downloaded to the first device upon request; and c) pre-loaded into memory of the first device.
  • a method of delivering augmented content to a device comprising: at a primary recipient device, receiving a data packet containing a primary payload and secondary payload, the primary payload for delivery by the primary recipient device through a sensory output associated with the primary recipient device; in response to identifying the presence of secondary payload, extracting the secondary payload from the data packet and activating a transmitter in the primary recipient device to send the secondary payload to an ancillary device that is pre-registered with the primary recipient device; at the ancillary device, receiving the secondary payload and then at least one of: storing the secondary payload; playing the secondary payload through an output device of the ancillary device; changing a mode of operation of the ancillary device; and controlling operation of the ancillary device in response to the secondary payload, wherein the primary payload is a music video or a streamed concert and the secondary payload is a scripted sequence of actuation codes that are complementary to the presented sensory output on the primary recipient device.
  • an entertainment system comprising: a television or computer having: an input over which is received, from a remote content provider, data containing a primary payload directly executable by the television or computer and secondary payload having content distinct to the primary payload; a display; a transmitter; and a first controller executing control logic to control display of content, extracted from the primary payload, on the display and selectively to control sending of secondary payload using the transmitter; and an ancillary device wirelessly coupled to the television or computer, the ancillary device including: a receiver arranged to receive the secondary payload sent from the transmitter; a second controller arranged to control operation of the ancillary device, wherein the second controller is further arranged to interpret said content of the secondary payload; and a sensory output from which the secondary payload can be presented, wherein presentation of the secondary payload at the ancillary device is dependent upon interpretation, by one of the first controller and the second controller, of the content of the secondary payload, and wherein the primary payload is a music video or a streamed concert.
  • the preferred embodiments operate automatically to cascade secondary information - whether control data and/or content - into a locally registered ancillary recipient device that is connected to a relay device, such as a television or computer, arranged to deliver primary data that is sent to it.
  • Connection of the ancillary recipient device to the relay device is typically a short-range wireless connection.
  • the primary data is delivered from the relay device through a user interface, such as a screen.
  • Data packets that contain the primary data are augmented with the secondary information that is to be cascaded to any locally registered ancillary device.
  • These received data packets include a header that is interpreted by control logic at the relay device such that a local transmitter at the relay device is activated to push the secondary information to the locally registered recipient device.
  • the secondary data can be used to control the ancillary device to receive and/or deliver related multi-media content pertinent to the data, programme and/or advertisement being received and played at the relay device.
  • the result is an augmented sense of connection with - or immersion in - the data, programme and/or advert being received and viewed, and a system in which a remote content provider can gain access to and deliver secondary data to an ancillary device which is not directly known to the content provider.
  • the option for receipt of this supplementary control information or supplementary data is controlled locally by the user, e.g. by registering the ancillary recipient device with the television or computer such as to receive this supplementary control information or supplementary data.
  • the registration process may, optionally, be associated with a pre-set user profile, such as age, gender and interests, thereby allowing a local controller in the television or computer to decide whether the supplementary control information or supplementary data is pushed onwards and locally to the registered ancillary recipient device or otherwise withheld.
  • This arrangement allows for flexibility and permits the content provider to globally encode all downlink communicated content, whether provided upon user interaction or broadcast.
  • FIG. 1 is an entertainment system embodying the present invention
  • FIG. 2 is a waveform diagram that reflects a tone encoding process according to a preferred embodiment of the present invention
  • FIG. 3 is a waveform diagram showing relative timing and signal qualities between left and right audio channels according to a preferred tone control process.
  • FIG. 1 shows an entertainment system 10 according to various embodiments of the present invention.
  • a content provider 12 assembles content in the form of multi-media content and/or applications (collectively “media content”).
  • This media content may include television programmes, audio files, advertisements (whether on-line or television-based), interactive or player-downloaded games and general web-page content and information.
  • This media content may be streamed live, or otherwise delayed and stored, such as within a database 14, for delayed broadcast or user-based request access, e.g. in the context of catch-up TV or website information.
  • the database is accessible by a controller 16, such as a server, that either directly administers operation of the content provider or otherwise provides the content provider with an ability to regulate, control, release and/or code any media content to which it has access.
  • Media content is encoded, via the controller 16, with metadata and additional payload.
  • the metadata provides instructions that can be interpreted locally by a recipient device to which the media content, containing the metadata, is transmitted, addressed or otherwise broadcast.
  • the metadata might simply be an identifier for a type of device that can make use of supplementary control data.
  • the additional payload can be supplementary control information and/or supplementary data that is designed to be pushed onwards, in a cascaded fashion, to one or more ancillary devices that are locally registered with the recipient device. In this way, the header can be kept to a minimum length, and payload included only when there is an identified ancillary device detailed in the metadata.
  • the metadata may include an indication that the additional payload requires the ancillary device to include an output controllable tri-coloured LED device, an audio speaker and/or memory into which a program or application can be downloaded.
  • the content provider 12 - and thus the media content - is connected via a communications network 18, such as a wide area network that may include wireless and wireline aspects, to user devices at locations, such as houses or offices 20-24.
  • the user devices may be uniquely addressable, or just responsive to a broadcast signal simultaneously receivable by multiple other user devices.
  • the following description equates the broadcast media content with a TV recording of a concert sponsored by a business group, such as Diageo®. Delivery of the media content is generally not relevant to the present invention.
  • each packet 28 (or group of packets) contains associated metadata 30.
  • the metadata, and in fact the entire packet, may be encoded/encrypted and there will usually be some form of header 32 containing, for example, at least one of an address [of a device], data relating to the payload and/or error correcting bits.
  • the packet may therefore contain two distinct payloads: Payload A 34 relating to source-provided media content, e.g. the TV recording of the concert; and
  • Payload B 36 relating to supplementary control information or supplementary data, such as a URL providing a storable link to a complementary website of the business group and/or an instruction to engage with and control through a script a particularly identified form of registered ancillary device.
  • the script may, in fact, be within the payload rather than the metadata per se.
  • the location of the script simply depends upon the nature of and number of bits of information that is being communicated in order for there to be effective local control and/or effective delivery of supplementary content to an ancillary device, so this is a design option.
  • the data packet 28 is received at the house 20 at a network interface 40; this may be a wireline router or a radio interface.
  • the network interface conventionally supports interface and operation between two pieces of equipment or protocol layers used within the entirety of the system 10.
  • the network interface 40 therefore can pass information uplink and downlink, including packets of information to a targeted principal recipient device, such as television 42.
  • Communication between the network interface 40 and the recipient device may be wireless or via a wire, such as an Ethernet cable.
  • the principal recipient device 42 will typically have some form of user interface and a screen to display data recovered from the payload of received packets.
  • the recipient device might have auxiliary audio speakers 44 or the like to enhance the basic functionality.
  • the recipient device will also include some programmable memory 46 and at least one processor 48 to oversee operation thereof.
  • the user interface provides the user with the ability to select functions on the recipient device, e.g. changing channels or updating or downloading software to the principal recipient device 42.
  • Such limited control may be realized by a screen-based graphic-user interface “GUI” accessed and controlled, typically, by a wireless remote controller 50 or a cell phone containing a suitable app, such as the Enado™ interface from Wyrestorm® Limited.
  • the remote controller 50 therefore provides a known way to access control-level functions of the principal recipient device 42.
  • the recipient device may include a pre-installed app that functions to cascade the supplementary control information and/or supplementary data to a registered ancillary device
  • this software may be provided as a downloadable app obtained from the content supplier 12 (or a third party).
  • the software could, of course, be provided by different processes or on a discrete memory stick or CD ROM.
  • the environment, e.g. the house, in which the principal recipient device is located also includes one or more registrable ancillary recipient devices 51-54.
  • these ancillary recipient devices 51-54 are wirelessly connectable to the principal recipient device 42.
  • Connectivity typically makes use of a short-range communications protocol, such as Bluetooth® or the like.
  • Registration may take the form of a simple ‘push-to-link’ function on the ancillary recipient device, or via a user interface that involves confirmation of a dedicated password to establish a long-term association between the principal recipient device 42 and the ancillary recipient devices 51-54.
  • these ancillary recipient devices 51-54 can be realized by one or more of: i) a cell phone or smartphone 51 having a memory, a display 56 and, typically, a light 58 (which may be white or a multi-coloured LED); ii) audio speakers and preferably wireless speakers (herein denoted “aux” to represent a variety of sensory-perceivable functions that can be generated and output); iii) an animated plush toy 52 having motor-controlled limbs 60, eyes 62 and/or a mouth and/or an audio speaker arranged to output audio that is either pre-stored in local memory or streamed for reception by and local broadcast from the plush toy; iv) a specific light box, e.g. a Xyloband LED wristband 54, or light controller connected between the electrical supply and the bulb; and/or v) a motorized device that has controllable motors.
  • the ancillary device may contain a GUI 70 which may, in fact, double up as the auxiliary [sensory] output 82.
  • the ancillary device will include a microcontroller 74 for operation control thereof, which microcontroller (or processing module) is operatively coupled, as will be understood, via a bus 76, to a transceiver 78 and memory 80 arranged to store program code. There may be an additional auxiliary output which, in the context of an animated plush toy 52, may be a motor controller, microphone or audio circuit.
  • the principal recipient device 42 is programmed with logic (executed by its processor 48) that interprets the header and/or metadata communicated downlink, across the communications network 18, from a server (not shown) of the content provider 12.
  • the download may be “pushed” content in that it is pushed independently by the server on a one-to-one (direct) or one-to-many (broadcast) basis, or otherwise may be requested “pulled” content that is delivered following an uplink request (from the client/user side and emanating from the principal recipient device 42).
  • the primary content is supplemented with augmenting secondary content at or with the instruction of the content provider.
  • Upon receipt of a data packet at the principal recipient device 42, its locally-installed control logic is arranged to activate a local transmitter 43 such that, when appropriate metadata is present, the transmitter 43 is selectively activated to communicate the augmenting secondary content onwards to locally registered ancillary devices, thereby automatically distributing this secondary content for immediate use, including immediate local control of the ancillary device or storage of such pushed/communicated data in memory 80 of the ancillary device 51-54.
  • the secondary content (which may be data related to the internet of things) is targeted at the ancillary device, which secondary content may only be indirectly related to the primary content displayed/broadcast on, for example, a screen of the principal recipient device 42.
  • the system and methodology of the preferred embodiments enable the content provider 12 (at the server side of the network) to communicate additional secondary data or secondary content or control data directly to a registered ancillary device 51-54 via an intermediate [relaying] principal recipient device 42 notwithstanding that the content provider 12 remains unaware of the existence of the ancillary device.
  • the principal recipient device 42 therefore acts as a gatekeeper guarding release/access to the secondary content, with such secondary content only released locally to locally registered ancillary devices.
  • the payload includes primary media content as Payload A 34 and augmenting secondary media content as Payload B 36.
  • the presence of the augmenting secondary content is identified by the setting of bits in the header and/or by the metadata.
  • the precise communication protocol used to communicate media content, and therefore also the nature of the advisory header, is a design choice. Suffice it to say that the delivery of primary media content as Payload A 34 and augmenting secondary media content as Payload B 36 simply needs to be identifiable and the presence resolvable at the principal recipient device.
  • the primary media content, delivered as Payload A 34, is basic content, e.g. the television programme or webpage.
  • The augmenting secondary media content, delivered as Payload B 36, is data and/or control that is inserted by (or with the knowledge of) the content provider and relates to links or control that is to be cascaded automatically downwards from the principal recipient device 42 to registered ancillary devices 51-54 in a push operation.
  • the augmenting secondary media content can then either be presented immediately at each registered (and, preferably, uniquely addressed) ancillary device, stored in local memory at the ancillary device for later recall and/or used to execute a local function to generate a sensory effect at the ancillary device.
  • the augmenting secondary media content could take the form of: a downloadable static or moving image from an affiliated sponsor of the primary content, e.g. a TV show, the downloadable image immediately displayable on the ancillary device and/or locally storable for later recall; a redeemable promotional coupon for an advert being presented as the primary content, thus allowing a smartphone to automatically receive and store the redeemable promotional coupon for subsequent use by the user at the point when the primary content is being screened/viewed; a voice or music file with associated cue points that allows for a registered ancillary device, such as a plush toy, to provide an interactive contemporaneous output that is distinct from that presented by or on the principal recipient device 42, i.e. TV.
  • the augmenting secondary media content may be part of the primary content albeit actioned from a spatially distinct point relative to the TV; a scripted sequence of actuation codes that is complementary to contemporaneously presented primary content.
  • the primary content may be a concert and the secondary content is a control sequence that actuates changing patterns of coloured LEDs on a Xyloband® wristband (or actuates a light on a smartphone) in a synchronized fashion in time with the music in the primary content; and interactive content that links at least one local ancillary device into media that is playing through the TV, such that the local media device and TV (and the local environment in which the ancillary device is active) together become part of the “set” albeit that the set is personal to the vicinity/room in which the TV is situated.
  • the ancillary device may be controlled, by the received and selectively and onwardly communicated payload, to (for example) turn on a local motor and/or project speech or sound at a point in time that correlates to a related on-screen event. For example, movement of an actor's hand in a video projected from the television screen would see a motor energised on the ancillary device to reflect sensory perception of the hand’s movement and its touching of an object. In another example, a crack of lightning in the video could lead to a vibration of the ancillary device to generate a local sensation of a shudder.
  • control logic executable by the microprocessor 48 can be downloaded and stored in the memory 46 of the principal recipient device.
  • ancillary devices can be pre-programmed or otherwise programmed from a download with appropriate control logic that permits control instructions, received from the principal recipient device, to be interpreted and actioned in a timely and coordinated fashion. If there is no locally registered ancillary device, then the augmenting secondary media content can be ignored by the principal recipient device and the principal recipient device tasked simply to deliver, in a conventional sense, the primary media content.
  • the terms “module”, “system”, “terminal”, “server”, “user/subscriber equipment/device” and the like are intended to refer to a computer-related entity, either hardware, firmware, a combination of hardware and software, software, or software in execution.
  • a component of the system can be, but is not limited to being, a process running on a processor (or interchangeably a “controller”), an object, an executable, a thread of execution, a program, and/or a computer.
  • an application running on a computing device can be a component.
  • One or more components can reside within a process and/or thread of execution and a component can be localized on one processing board of a computing device, and/or distributed between two or more computing boards in many devices.
  • these components can execute from various computer readable media having various data structures stored thereon.
  • the components can communicate by way of local and/or remote processes such as in accordance with a signal having one or more data packets (e.g., data from one component interacting with another component in a local system, distributed system, and/or across a network such as the Internet with other systems by way of the signal).
  • packet data delivery is but one data delivery option.
  • aspects of that functionality can be dissected and packaged into a separate component(s) which is/are then communicatively linked to the recipient device arranged to receive or inherently contain the source data.
  • the term “recipient device” will therefore be understood [unless the context otherwise requires a different definition] to mean the initial device (but not the final device in an inter-connection of multiple devices) to which the source data is provided or stored.
  • the source data (which can be live or animated image data, audio data or a combination of audio and video data) can be provided:
  • (iii) can be a broadcast or other recorded event that has been requested and then streamed to the smartphone (or the like) from a content provider, such as a TV station, YouTube or social media platform.
  • the source data is still consistent with the other embodiments in that it is supplied with and includes primary media content, i.e. primary payload, and complementary embedded control data, i.e. secondary payload.
  • the control information, i.e. the secondary payload, is fully embedded and synchronized with aspects of the primary payload.
  • embedded control signals realize the secondary payload. These embedded control signals are preferably tone-based but may also be implemented in a different fashion to reflect cueing/trigger points in time.
  • the embedded control signals occupy one of at least two audio channels provided in the source data, with the tones aligned in time with selected musical cue points. These embedded tones can, for example, align with the start of a particular guitar riff and then terminate at the precise time when the riff finishes and a chorus begins. Referring briefly to FIGs. 2 and 3.
  • FIG. 2 is a waveform diagram illustrating how (in accordance with a preferred embodiment of the present invention) synchronized encoded tones (which are preferably sub-audio) are placed relative to time-varying audio segments designated for play as primary payload.
  • the waveform may in fact be continuous in that the primary payload is a continuous audio-visual source, rather than a discrete envelope of speech as illustrated for exemplary purposes in FIG. 2.
  • tones are selectively pre-correlated to a succession (but not necessarily a contiguous succession) of audio segments shown as envelopes in FIG. 2.
  • the source media file may contain pauses where there is silence, although the envelope may actually be continuous and span many seconds, several minutes or hours depending on whether the source is speech, music or video.
  • each audio envelope - or a selected one or more of the audio envelopes or one or more discrete segments within a specific audio envelope - is correlated with a unique identifying code in the form of a tone (such as a sub-audio CTCSS tone).
  • the tone corresponds to a desired functional effect, such as the duty cycle applied to a particular LED having a controllable colour.
  • the tone is typically present for the entirety of the light effect that is to be controlled with the presence thereby defining an on and off state for the [contextually exemplary] LED.
  • the control tone therefore rises at substantially the beginning of the envelope (or its equivalent digital representation) and then ceases at substantially the end of the specific envelope.
  • a first LED will have a first associated tone for addressing purposes, whereas a second LED will have a second, but different associated tone.
  • another, different third tone may stipulate different flash-rate patterns for otherwise independently and differently addressable groups of LEDs that have assigned colours according to a group designation.
  • tones may be for a duration that is less than or equal to the duration of each envelope. Control tones may, with an appropriate coding scheme that can be interpreted as containing distinct functions, also overlap within an envelope.
  • the tones are therefore taken from a tone/code library that correlates to a pre-orchestrated effect. Mixing of the tones into each audio segment is through conventional signal-processing techniques known to the skilled addressee (an illustrative mixing sketch is given after this list).
  • the tones therefore act to control and synchronize operation of interactive devices, such as LED Xyloband wristbands and other lights, located remotely from a central media player, such as a smartphone which receives the source content and which plays the video content aspect from the source data/content.
  • the tone library is available to other components of the system so that those components can effect the desired function defined by a decoded instruction representative of the tone/code.
  • CTCSS is an acronym for “Continuous Tone Coded Squelch System”.
  • CTCSS is a sub-audible tone in the range of 67Hz to 254Hz.
  • any one or more of about fifty tones (sometimes referred to as “sub-channels”) can be used to gain access to a repeater in a two-way radiotelephone system.
  • Each CTCSS is therefore essentially a sine wave having a specific frequency.
  • other forms of tone coding are possible.
  • FIG. 3 is a waveform diagram showing relative timing between an audio signal and the control tones respectively presented on right and left audio channels.
  • the audio channel is shown as a simple undulating wave, rather than the underlying and more complex amplitude-varying envelope shown in FIG. 2.
  • audio for remote generation at a remote speaker is consolidated (mixed down) into a composite signal envelope that is assigned for transmission on the left channel of a stereo audio circuit
  • Each envelope has been mixed with its assigned control tone or code; this is represented by the overlaying of the small amplitude control tone and the instantaneous audio output.
  • the control tone or code preferably has a relatively low power level compared to the magnitude of the audio components in the envelope; this reduces the likelihood of introducing distortion, such as harmonics, into any audio signal recovered from the composite signal envelope for output.
  • time buffering 36 may also be included, if necessary, to time separate adjacent audio outputs.
  • time buffering can take the form of a background media channel output earmarked for reproduction on a remote speaker.
  • a right channel of the audio circuit is assigned to communicate desired speaker output that, together with encoded specific audio, produces the distributed effect and complete media sound stage having sensory components distributed across multiple devices.
  • the audio desired for output is placed entirely on one audio channel.
  • a sub-audible tone relevant to activating an effect is then placed on a different channel and on an audio content timeline for the duration of additional sensory effect, e.g. motion and/or light, that is being produced for reasons of enhancing the user-experience.
  • a single effect-producing remote LED can be programmed to respond to several different control tones that define different hues. For example, a tone of 67Hz can be assigned to activate a red hue in colour at an on-off frequency of 1 Hz or fractions of a Hertz (Hz), whereas a tone of 71.9Hz can make the same LED change to a pulsating blue colour effect that grows and diminishes in light intensity over several seconds.
  • Other effects are possible as will be understood having regard to the Xyloband® interactive wristband used widely at concerts.
  • Information in the audio channels is therefore split in the sense that one of its audio channels is assigned to contain the control tone/queue signaling scheme whereas at least one other distinct audio channel will deliver audio content (which could in fact be the primary payload). In a stereo environment, this would mean production of a pseudo-stereo effect achieved by replicating the distinct audio channel as identical mono-outputs from each of the two speakers of the stereo system.
  • the other audio channel that carries the tone control is thus a carrier or control channel and contributes nothing to the audio content, e.g. a song or piece of instrumental music, per se.
  • the smartphone/recipient device/video player operates to deliver a local video output of the primary payload from the local display on the smartphone, video player, etc.
  • a further auxiliary device such as a Bluetooth-connected speaker
  • establishment of an active Bluetooth connection preferably acts to suppress local reproduction of audio content at the smartphone/recipient device/video player. This suppression avoids the potential for control tones to be generated and heard from a local speaker of the smartphone/recipient device/video player.
  • where the source data is multi-media, this means that the audio is separated from the video, with the smartphone/recipient device/video player only generating the video whilst the audio data is appropriately modulated and addressed so that it is transmitted onwards for reproduction/generation elsewhere in the multi-component system.
  • connectivity to the exemplary Bluetooth speaker may itself be achieved via an independently supplied dongle/adaptor which connects via a suitable multi-pin connector into the circuitry of the Bluetooth connected speaker.
  • the dongle/adaptor is therefore an intermediate, independently merchandisable component.
  • the use of a further dongle/adaptor allows for backwards integration of the control technology with existing Bluetooth speakers.
  • the dongle/adaptor (or its equivalent circuitry if located in a related device) can thus be understood to be a gateway for the audio channels and therefore it either acts to present the audio channel [carrying the audio content] to the Bluetooth speaker or forwards (by further transmission) the control data embedded in the other audio channel to yet another responsive device, e.g. a wirelessly controlled LED wristband or controllable disco lights either in or remote from the cabinet of the Bluetooth speaker.
  • the dongle/adaptor (or an updated Bluetooth speaker incorporating the dongle/adaptor’s functionality as now explained) includes a receiver chain, a transmitter, control logic (such as a PIC) and memory.
  • the receiver chain processes, i.e. demodulates, the received signal from (for example) the video player and converts from the digital to analog domain.
  • This incident received (exemplary) Bluetooth signal contains two audio components, namely the left and right audio channels.
  • the dongle/adaptor circuitry passes the audio data (essentially as a line input) to the speaker circuitry that operates to convert the processed signal into an amplified output.
  • the recovered audio signal is duplicated and used to generate a dual mono audio output as the audible output. It will be understood that using a complementary external speaker provides better audio response since, invariably, a dedicated speaker has better quality components and higher fidelity relative to the generally smaller, lower-cost speakers provided within the smartphone, recipient device and/or video player.
  • Control circuitry in the dongle/adaptor is arranged to interpret the demodulated carrier to recover the control tones on the other audio channel (an illustrative decoding sketch is given after this list). More particularly, typically using a PIC (programmable IC) or a microcontroller, the demodulated embedded control signal is interpreted relative to frequency/tone codes that are pre-stored in local memory. These codes correlate to, for example, on-off duration times for a particular colour of LED in a Xyloband® interactive LED wristband (see http://xylobands.com/).
  • the PIC is arranged to instantiate the transmitter function and to cause an appropriate control instruction to be modulated onto a carrier (or otherwise communicated) for transmission from the dongle/adaptor’s transmitter to the Xyloband LED wristband or the like.
  • the wristband then itself interprets the received instructions, following appropriate decoding, to coordinate/synchronize the lighting effects with the audio generated at the speaker and/or smartphone, etc. and also to be coordinated with the video (if appropriate and available) from a display of the video player/smartphone.
  • control functionality routes the audio channel containing the audio signals to the speaker or transmits instructions to control lighting effects in a way that is time-synchronized with the video and/or audio content being played by the components (e.g. the smartphone or video player and separate Bluetooth speaker).
  • this system arrangement permits, for example, an augmented coded album of an artist to be played by a fan in a home environment, with the coding of the audio permitting the fan to experience, on a smaller but personal scale, the light-show experienced at the gig.
  • the music may be played through a distinct speaker that is wirelessly linked to the smartphone on which the coded album resides or is streamed, whilst video is viewed on the smartphone.
  • the lighting effect is controlled by the tones being interpreted at the secondary device (not the smartphone), which tones are then transmitted onwards (generally as specific cross-referenceable instructions) to, for example, controllable LED wristbands that are linked to the secondary device.
  • controllable LED wristbands cross-reference received instructions and then execute the corresponding lighting effect.
  • the system is considered to be more robust and the exemplary LED devices can then be multi-platform, provided that the secondary device includes a reference and preferably updateable database that correlates tones with desired functions.
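By way of illustration of the tone-placement scheme described in the entries above (and sketched in FIGs. 2 and 3), the following Python fragment mixes a low-amplitude, sub-audible control tone into a dedicated control channel for the duration of each audio envelope, leaving the programme audio on the other channel. It is a minimal sketch of one of the arrangements described (audio on one channel, tone on the other); the sample rate, tone amplitude and cue times are illustrative assumptions rather than values taken from the patent.

```python
import numpy as np

SAMPLE_RATE = 44_100  # Hz; an assumed, CD-quality sample rate


def mix_control_tones(audio_mono, cues, tone_level=0.05, rate=SAMPLE_RATE):
    """Build a two-channel signal: programme audio on one channel and
    sub-audible control tones on the other (control) channel.

    `cues` is a list of (start_s, end_s, tone_hz) tuples marking the span
    of each envelope/effect, e.g. a CTCSS-style 67.0 Hz tone held for the
    length of a guitar riff.  All values here are purely illustrative.
    """
    t = np.arange(len(audio_mono)) / rate
    control = np.zeros_like(audio_mono)
    for start_s, end_s, tone_hz in cues:
        lo, hi = int(start_s * rate), int(end_s * rate)
        # The tone rises at (substantially) the start of the envelope and
        # ceases at its end, so its presence alone defines the on/off state.
        control[lo:hi] = tone_level * np.sin(2 * np.pi * tone_hz * t[lo:hi])
    # Channel 0: audio content; channel 1: control signalling only.
    return np.stack([audio_mono, control], axis=1)


if __name__ == "__main__":
    # A 10 s test clip with two illustrative cue points.
    clip = 0.3 * np.sin(2 * np.pi * 440.0 * np.arange(10 * SAMPLE_RATE) / SAMPLE_RATE)
    stereo = mix_control_tones(clip, cues=[(1.0, 3.5, 67.0), (5.0, 8.0, 71.9)])
    print(stereo.shape)  # (441000, 2)
```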
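On the receiving side, the dongle/adaptor behaviour described above can be sketched as follows: the control channel is scanned for sub-audible tones (here with a Goertzel filter, a common single-frequency detector, although the text above does not mandate any particular detection method), a detected tone is cross-referenced against a locally stored tone/effect library, and an instruction is then sent onwards while the audio channel is routed to the speaker as dual mono. The 67.0 Hz and 71.9 Hz mappings mirror the examples given above; the detection threshold and the `speaker`/`wristband_tx` interfaces are assumptions made only for this sketch.

```python
import math

SAMPLE_RATE = 44_100  # Hz; assumed to match the encoding side

# Illustrative tone/effect library, mirroring the examples given above:
# 67.0 Hz -> red hue flashing at 1 Hz; 71.9 Hz -> slowly pulsating blue.
TONE_LIBRARY = {
    67.0: {"colour": "red", "pattern": "flash", "rate_hz": 1.0},
    71.9: {"colour": "blue", "pattern": "pulse", "period_s": 4.0},
}


def goertzel_power(samples, tone_hz, rate=SAMPLE_RATE):
    """Relative power of `tone_hz` in `samples` via the Goertzel algorithm
    (a single-bin DFT evaluation)."""
    n = len(samples)
    k = int(0.5 + n * tone_hz / rate)
    coeff = 2.0 * math.cos(2.0 * math.pi * k / n)
    s_prev = s_prev2 = 0.0
    for x in samples:
        s = x + coeff * s_prev - s_prev2
        s_prev2, s_prev = s_prev, s
    return s_prev ** 2 + s_prev2 ** 2 - coeff * s_prev * s_prev2


def decode_control_channel(control_samples, threshold=1e3):
    """Pick the strongest library tone in a block of the control channel,
    or return None if no tone is confidently present (threshold assumed)."""
    powers = {f: goertzel_power(control_samples, f) for f in TONE_LIBRARY}
    best = max(powers, key=powers.get)
    return TONE_LIBRARY[best] if powers[best] > threshold else None


def handle_block(audio_channel, control_channel, speaker, wristband_tx):
    """Per-block behaviour of the adaptor: pass audio to the speaker as a
    dual-mono output and forward any decoded effect to the wristband."""
    speaker.play(audio_channel, audio_channel)  # assumed speaker interface
    effect = decode_control_channel(control_channel)
    if effect is not None:
        wristband_tx.send(effect)               # assumed transmitter interface
```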

Abstract

FIG. 1 schematically represents a system that automatically cascades secondary information (36) - whether control data and/or content - into a locally registered ancillary recipient device (51-54) that is connected to a relay device, such as a television (42) or computer, arranged to deliver primary data that is sent to it. Connection of the ancillary recipient device to the relay device is typically a short-range wireless connection. The primary data is delivered from the relay device through a user interface, such as a screen. Data packets (28) that contain the primary data (34) are augmented with the secondary information (36) that is to be cascaded to any locally registered ancillary device (51-54). These received data packets include a header (32) that is interpreted by control logic at the relay device such that a local transmitter (43) at the relay device is activated to push the secondary information to the locally registered recipient device (51-54). The secondary data can be used to control the ancillary device to receive and/or deliver related multi-media content pertinent to the data, programme and/or advertisement being received and played at the relay device. The result is an augmented sense of connection with - or immersion in - the data, programme and/or advert being received and viewed, and a system in which a remote content provider (12) can gain access to and deliver secondary data to an ancillary device (53-54) which is not directly known to the content provider (12).

Description

AN ENTERTAINMENT SYSTEM AND METHOD OF DELIVERY OF AUGMENTED CONTENT
Field of the Invention
This invention relates, in general, to an entertainment system and is particularly, but not exclusively, applicable to a system and associated method in which augmented content, including control information and/or complementary data, is delivered in a multi-media environment.
Summary of the Prior Art
In a browser-based environment, it is known to provide ancillary content to the browser’s user in the form of product advertisements that, in some instances, are targeted at the user following profiling of the user in a registration-like process. This ancillary information is therefore pushed with access to a particular web-page, and the precise content potentially targeted according to partitioning of the user's profile into a particular characteristic or demographic. Information is therefore provided directly to the device used to undertake web access and browsing activity.
Summary of the Invention
According to a first aspect of the present invention there is provided a method of delivering augmented content in an interconnected network of a first device wirelessly connected to at least a second device, wherein the first device includes a display and the second device includes a transceiver for receiving, from the first device, a signal containing at least two audio channels wherein a first one of the audio channels includes time-varying audio and a second one of the pair of audio channels contains time-aligned control signaling that correlates a sensory-perceivable function with the time-varying audio, the method comprising: providing a media source file to the first device, said media source file including at least said at least two audio channels; selectively displaying, when available in the media source file, video content from the media source file on the display of the first device; transmitting the signal to the second device, wherein the signal is derived from an aspect of the media source file; at the second device, processing the signal received by a receiver function at the second device to cause a speaker remote to the first device to output the time-varying audio; at the second device, decoding the time-aligned control signaling to identify the sensory-perceivable function; and transmitting, using a transmit function of the transceiver in the second device, an instruction to a third remote device, the instruction representing the sensory-perceivable function decoded from the time-aligned control signaling and wherein the instruction is configured to cause the third device to synchronize performance of the sensory-perceivable function with output of the time-varying audio at the second device.
The secondary device may be an adaptor and the method further includes coupling the adaptor into an input port of a wireless speaker.
The time-aligned control signaling preferably includes sub-audible tones, and the third device is an LED light whose operation is controlled in response to the instruction. The media source file is one of: a) streamed to the first device over a communications link; b) downloaded to the first device upon request; and c) pre-loaded into memory of the first device.
In a second aspect of the invention there is provided a method of delivering augmented content to a device, the method comprising: at a primary recipient device, receiving a data packet containing a primary payload and secondary payload, the primary payload for delivery by the primary recipient device through a sensory output associated with the primary recipient device; in response to identifying the presence of secondary payload, extracting the secondary payload from the data packet and activating a transmitter in the primary recipient device to send the secondary payload to an ancillary device that is pre-registered with the primary recipient device; at the ancillary device, receiving the secondary payload and then at least one of: storing the secondary payload; playing the secondary payload through an output device of the ancillary device; changing a mode of operation of the ancillary device; and controlling operation of the ancillary device in response to the secondary payload, wherein the primary payload is a music video or a streamed concert and the secondary payload is a scripted sequence of actuation codes that are complementary to the presented sensory output on the primary recipient device and the scripted sequence of actuation codes change, in a synchronized fashion in time with music in the primary payload, illumination patterns of coloured LEDs associated with the ancillary device.
In another aspect of the invention there is provided an entertainment system comprising: a television or computer having: an input over which is received, from a remote content provider, data containing a primary payload directly executable by the television or computer and secondary payload having content distinct to the primary payload; a display; a transmitter; and a first controller executing control logic to control display of content, extracted from the primary payload, on the display and selectively to control sending of secondary payload using the transmitter; and an ancillary device wirelessly coupled to the television or computer, the ancillary device including: a receiver arranged to receive the secondary payload sent from the transmitter; a second controller arranged to control operation of the ancillary device, wherein the second controller is further arranged to interpret said content of the secondary payload; and a sensory output from which the secondary payload can be presented, wherein presentation of the secondary payload at the ancillary device is dependent upon interpretation, by one of the first controller and the second controller, of the content of the secondary payload, and wherein the primary payload is a music video or a streamed concert and the secondary payload is a scripted sequence of actuation codes that are complementary to the presented sensory output on the television or computer and the scripted sequence of actuation codes change, in a synchronized fashion in time with music in the primary payload, illumination patterns of coloured LEDs associated with the ancillary device.
Advantageously, the preferred embodiments operate automatically to cascade secondary information - whether control data and/or content - into a locally registered ancillary recipient device that is connected to a relay device, such as a television or computer, arranged to deliver primary data that is sent to it. Connection of the ancillary recipient device to the relay device is typically a short-range wireless connection.
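The “scripted sequence of actuation codes” recited in the aspects above can be pictured as a simple time-stamped script that is carried as the secondary payload and replayed against the timeline of the music video or streamed concert. The Python sketch below is only an illustration of that idea under assumed names and data shapes; it is not the claimed encoding, and the particular codes are invented for the example.

```python
import time

# Hypothetical secondary payload: actuation codes keyed to the timeline of
# the primary payload (offsets in seconds from the start of the track).
ACTUATION_SCRIPT = [
    (0.0,  {"leds": "all",    "colour": "blue",  "pattern": "pulse"}),
    (12.5, {"leds": "group1", "colour": "red",   "pattern": "flash", "rate_hz": 1.0}),
    (31.0, {"leds": "all",    "colour": "white", "pattern": "strobe"}),
]


def run_script(script, playback_start, send):
    """Replay a scripted actuation sequence in step with the primary payload.

    `playback_start` is the monotonic time at which playback began on the
    television/computer; `send` is an assumed callable that pushes a single
    actuation code to the registered ancillary device.
    """
    for offset_s, code in script:
        delay = playback_start + offset_s - time.monotonic()
        if delay > 0:
            time.sleep(delay)
        send(code)

# Usage sketch: run_script(ACTUATION_SCRIPT, time.monotonic(), wristband.send)
```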
The primary data is delivered from the relay device through a user interface, such as a screen. Data packets that contain the primary data are augmented with the secondary information that is to be cascaded to any locally registered ancillary device. These received data packets include a header that is interpreted by control logic at the relay device such that a local transmitter at the relay device is activated to push the secondary information to the locally registered recipient device. The secondary data can be used to control the ancillary device to receive and/or deliver related multi-media content pertinent to the data, programme and/or advertisement being received and played at the relay device. The result is an augmented sense of connection with - or immersion in - the data, programme and/or advert being received and viewed, and a system in which a remote content provider can gain access to and deliver secondary data to an ancillary device which is not directly known to the content provider. By embedding the programme or data with such supplementary control information or supplementary data at the source, the option for receipt of this supplementary control information or supplementary data is controlled locally by the user, e.g. by registering the ancillary recipient device with the television or computer such as to receive this supplementary control information or supplementary data. In fact, the registration process may, optionally, be associated with a pre-set user profile, such as age, gender and interests, thereby allowing a local controller in the television or computer to decide whether the supplementary control information or supplementary data is pushed onwards and locally to the registered ancillary recipient device or otherwise withheld. This arrangement allows for flexibility and permits the content provider to globally encode all downlink communicated content, whether provided upon user interaction or broadcast.
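As a rough illustration of the gatekeeper behaviour and profile-based withholding described in the preceding paragraph, the sketch below shows a relay device (television or computer) deciding, per locally registered ancillary device and per locally held user profile, whether a received secondary payload is pushed onwards or withheld. All class names, field names and the `min_age`/`required_capabilities` metadata fields are assumptions made for the example; the text leaves the exact representation open.

```python
from dataclasses import dataclass, field


def display(payload):
    """Placeholder for rendering the primary payload on the relay's screen."""
    pass


@dataclass
class RegisteredDevice:
    """An ancillary device locally registered (e.g. via a push-to-link step)."""
    address: str
    capabilities: set  # e.g. {"tri-colour-led", "speaker", "memory"}


@dataclass
class RelayGatekeeper:
    """Control logic at the television/computer acting as local gatekeeper."""
    profile: dict                     # e.g. {"age": 34, "interests": [...]}
    registry: list = field(default_factory=list)

    def register(self, device: RegisteredDevice):
        self.registry.append(device)

    def on_packet(self, packet, transmitter):
        """Deliver the primary payload locally; cascade the secondary payload
        only to registered devices that match the metadata, and only if the
        locally held profile permits it."""
        display(packet["payload_a"])
        metadata = packet.get("metadata", {})
        payload_b = packet.get("payload_b")
        if payload_b is None or not self.registry:
            return                                   # nothing to cascade
        if self.profile.get("age", 0) < metadata.get("min_age", 0):
            return                                   # withheld on profile grounds
        wanted = set(metadata.get("required_capabilities", []))
        for device in self.registry:
            if wanted <= device.capabilities:
                transmitter.send(device.address, payload_b)  # assumed transmitter API
```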
Brief Description of the Drawings:
Exemplary embodiments of the present invention will now be described with reference to the accompanying drawings, in which:
FIG. 1 is an entertainment system embodying the present invention;
FIG. 2 is a waveform diagram that reflects a tone encoding process according to a preferred embodiment of the present invention;
FIG. 3 is a waveform diagram showing relative timing and signal qualities between left and right audio channels according to a preferred tone control process.
Detailed Description of a Preferred Embodiment
FIG. 1 shows an entertainment system 10 according to various embodiments of the present invention.
From a server or broadcaster side, a content provider 12 assembles content in the form of multi-media content and/or applications (collectively “media content”). This media content may include television programmes, audio files, advertisements (whether on-line or television-based), interactive or player-downloaded games and general web-page content and information. This media content may be streamed live, or otherwise delayed and stored, such as within a database 14, for delayed broadcast or user-based request access, e.g. in the context of catch-up TV or website information. The database is accessible by a controller 16, such as a server, that either directly administers operation of the content provider or otherwise provides the content provider with an ability to regulate, control, release and/or code any media content to which it has access. Media content is encoded, via the controller 16, with metadata and additional payload. The metadata provides instructions that can be interpreted locally by a recipient device to which the media content, containing the metadata, is transmitted, addressed or otherwise broadcast. The metadata might simply be an identifier for a type of device that can make use of supplementary control data. The additional payload can be supplementary control information and/or supplementary data that is designed to be pushed onwards, in a cascaded fashion, to one or more ancillary devices that are locally registered with the recipient device. In this way, the header can be kept to a minimum length, and payload included only when there is an identified ancillary device detailed in the metadata. For example, the metadata may include an indication that the additional payload requires the ancillary device to include an output controllable tri-coloured LED device, an audio speaker and/or memory into which a program or application can be downloaded.
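A minimal sketch of how a controller such as controller 16 might assemble such a packet is given below. The field names, the capability vocabulary and the use of a simple dataclass are assumptions made purely for illustration, since the passage above deliberately leaves the exact protocol and encoding open; the point illustrated is that the secondary payload and its metadata are only attached when a suitable ancillary-device type has actually been identified, keeping the header and metadata minimal otherwise.

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class MediaPacket:
    """Illustrative packet shape: header 32, metadata 30, Payload A 34 and
    an optional Payload B 36, mirroring packet 28 described in the text."""
    header: dict
    metadata: dict
    payload_a: bytes
    payload_b: Optional[bytes] = None


def assemble_packet(primary: bytes,
                    secondary: Optional[bytes] = None,
                    target_capabilities: Optional[list] = None,
                    address: Optional[str] = None) -> MediaPacket:
    """Attach supplementary content only when an ancillary-device type has
    been identified, so the header/metadata stay as short as possible."""
    include_b = secondary is not None and bool(target_capabilities)
    header = {"address": address, "has_payload_b": include_b}
    metadata = {"required_capabilities": list(target_capabilities)} if include_b else {}
    return MediaPacket(header, metadata, primary, secondary if include_b else None)


# Usage sketch: a concert recording with a cascaded LED control script.
pkt = assemble_packet(primary=b"<tv-recording>",
                      secondary=b"<led-control-script>",
                      target_capabilities=["tri-colour-led"])
```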
The content provider 12 - and thus the media content - is connected via a communications network 18, such as a wide area network that may include wireless and wireline aspects, to user devices at locations, such as houses or offices 20-24. The user devices may be uniquely addressable, or just responsive to a broadcast signal simultaneously receivable by multiple other user devices. For the sake of explanation only, the following description treats the broadcast media content as a TV recording of a concert sponsored by a business group, such as Diageo®. The precise mechanism for delivery of the media content is generally not relevant to the present invention. Assuming, for the sake of explanation, that the media content is provided in a packet-based system (although other delivery formats are clearly possible and contemplated, as will be readily appreciated), each packet 28 (or group of packets) contains associated metadata 30. The metadata, and in fact the entire packet, may be encoded/encrypted and there will usually be some form of header 32 containing, for example, at least one of an address [of a device], data relating to the payload and/or error correcting bits. The packet may therefore contain two distinct payloads: Payload A 34 relating to source-provided media content, e.g. the TV recording of the concert; and Payload B 36 relating to supplementary control information or supplementary data, such as a URL providing a storable link to a complementary website of the business group and/or an instruction to engage with and control through a script a particularly identified form of registered ancillary device. The script may, in fact, be within the payload rather than the metadata per se.
The location of the script, in many respects, simply depends upon the nature of, and number of bits of, the information that is being communicated in order for there to be effective local control and/or effective delivery of supplementary content to an ancillary device, so this is a design option.
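By way of non-limiting illustration only, the following minimal sketch (in Python) shows one way such a dual-payload packet might be represented in software. The field names, capability identifiers and header flag are assumptions introduced for this example; the disclosure does not prescribe any concrete encoding or programming language.

```python
from dataclasses import dataclass
from typing import Optional

# Illustrative capability identifiers that the metadata might carry;
# these names are assumptions, not part of the disclosure.
CAP_TRICOLOUR_LED = "tricolour_led"
CAP_SPEAKER = "speaker"
CAP_DOWNLOAD_MEMORY = "download_memory"


@dataclass
class MediaPacket:
    header: dict                        # e.g. device address, payload descriptors, error-correction bits
    metadata: dict                      # e.g. {"requires": [CAP_TRICOLOUR_LED], "script_in_payload": True}
    payload_a: bytes                    # primary media content, e.g. the TV recording of the concert
    payload_b: Optional[bytes] = None   # supplementary control information/data, present only when needed

    def has_secondary(self) -> bool:
        # The presence of Payload B is signalled via the header/metadata rather than by
        # inspecting the payload itself, which keeps the header to a minimum length.
        return bool(self.header.get("payload_b_present")) and self.payload_b is not None
```

In this sketch, a packet carrying only the television programme simply omits payload_b and clears the hypothetical payload_b_present flag.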
Returning to FIG. 1, reference is now made to the local configuration of hardware and software components in, for example, a first house 20. The configurations between houses may actually vary based upon the recipient device and the ancillary device(s).
Again, assuming a packet transmission of data, the data packet 28 is received at the house 20 at a network interface 40; this may be a wireline router or a radio interface. The network interface conventionally supports interfacing and operation between two pieces of equipment or protocol layers used within the entirety of the system 10. The network interface 40 therefore can pass information uplink and downlink, including packets of information to a targeted principal recipient device, such as television 42. Communication between the network interface 40 and the recipient device may be wireless or via a wire, such as an Ethernet cable.
The principal recipient device 42 will typically have some form of user interface and a screen to display data recovered from the payload of received packets. The recipient device might have auxiliary audio speakers 44 or the like to enhance the basic functionality. The recipient device will also include some programmable memory 46 and at least one processor 48 to oversee operation thereof. For example, the user interface provides the user with the ability to select functions on the recipient device, e.g. changing channels or updating or downloading software to the principal recipient device 42. Such limited control may be realized by a screen-based graphical user interface (“GUI”) accessed and controlled, typically, by a wireless remote controller 50 or a cell phone containing a suitable app, such as the Enado™ interface from Wyrestorm® Limited. The remote controller 50 therefore provides a known way to access control-level functions of the principal recipient device 42.
Although the recipient device may include a pre-installed app that functions to cascade the supplementary control information and/or supplementary data to a registered ancillary device, this software may be provided as a downloadable app obtained from the content provider 12 (or a third party). The software could, of course, be provided by different processes or on a discrete memory stick or CD ROM.
Returning to FIG. 1, the environment, e.g. the house, in which the principal recipient device is located also includes one or more registrable ancillary recipient devices 51-54. Typically, these ancillary recipient devices 51-54 are wirelessly connectable to the principal recipient device 42. Connectivity typically makes use of a short-range communications protocol, such as Bluetooth® or the like. Registration may take the form of a simple ‘push-to-link’ function on the ancillary recipient device, or via a user interface that involves confirmation of a dedicated password to establish a long-term association between the principal recipient device 42 and the ancillary recipient devices 51-54. As to the nature of these ancillary recipient devices 51-54, these can be realized by one or more of: i) a cell phone or smartphone 51 having a memory, a display 56 and, typically, a light 58 (which may be white or a multi-coloured LED); ii) audio speakers, and preferably wireless speakers (herein denoted “aux” to represent a variety of sensory-perceivable functions that can be generated and output); iii) an animated plush toy 52 having motor-controlled limbs 60, eyes 62 and/or a mouth and/or an audio speaker arranged to output audio that is either pre-stored in local memory or streamed for reception by and local broadcast from the plush toy; iv) a specific light box, e.g. a Xyloband® LED wristband 54, or a light controller connected between the electrical supply and the bulb; and/or v) a motorized device that has controllable motors.
Turning now to the functional configuration that may be adopted in each of the ancillary devices 51-54, as shown in FIG. 1, the ancillary device may contain a GUI 70 which may, in fact, double up as the auxiliary [sensory] output 82. The ancillary device will include a microcontroller 74 for operational control thereof, which microcontroller (or processing module) is operatively coupled, as will be understood, via a bus 76, to a transceiver 78 and memory 80 arranged to store program code. There may be an additional auxiliary output which, in the context of an animated plush toy 52, may be a motor controller, microphone or audio circuit.
In terms of system operation, the principal recipient device 42 is programmed with logic (executed by its processor 48) that interprets the header and/or metadata communicated downlink, across the communications network 18, from a server (not shown) of the content provider 12. The download may be “pushed” content in that it is pushed independently by the server on a one-to-one (direct) or one-to-many (broadcast) basis, or otherwise may be requested “pulled” content that is delivered following an uplink request (from the client/user side and emanating from the principal recipient device 42). As such, the primary content is supplemented with augmenting secondary content at or with the instruction of the content provider.
Upon receipt of a data packet at the principal recipient device 42, its locally-installed control logic is arranged to activate a local transmitter 43 such that, when appropriate metadata is present, the transmitter 43 is selectively activated to communicate the augmenting secondary content onwards to locally registered ancillary devices, thereby automatically distributing this secondary content for immediate use, including immediate local control of the ancillary device or storage of such pushed/communicated data in memory 80 of the ancillary device 51-54. In this fashion, the secondary content (which may be data related to the internet of things) is targeted at the ancillary device, which secondary content may only be indirectly related to the primary content displayed/broadcast on, for example, a screen of the principal recipient device 42. As such, the system and methodology of the preferred embodiments enable the content provider 12 (at the server side of the network) to communicate additional secondary data or secondary content or control data directly to a registered ancillary device 51-54 via an intermediate [relaying] principal recipient device 42 notwithstanding that the content provider 12 remains unaware of the existence of the ancillary device. The principal recipient device 42 therefore acts as a gatekeeper guarding release/access to the secondary content, with such secondary content only released locally to locally registered ancillary devices.
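Continuing the earlier sketch, and purely as an illustrative assumption about how this gatekeeper role might be coded, the control logic at the principal recipient device 42 could play the primary payload locally and forward the secondary payload only to registered ancillary devices whose declared capabilities match the metadata. The transmitter interface and capability matching shown here are hypothetical details, not a prescribed implementation.

```python
from dataclasses import dataclass


@dataclass
class AncillaryDevice:
    device_id: str
    capabilities: set       # e.g. {"tricolour_led", "speaker"}; declared at registration time


class PrincipalRecipient:
    """Gatekeeper sketch: render the primary payload and selectively cascade the
    secondary payload to locally registered ancillary devices."""

    def __init__(self, transmitter):
        self.transmitter = transmitter   # local short-range transmitter 43 (e.g. Bluetooth), assumed interface
        self.registered = []             # ancillary devices registered locally; unknown to the content provider

    def register(self, device: AncillaryDevice) -> None:
        # Registration is purely local, e.g. following a push-to-link or password exchange.
        self.registered.append(device)

    def handle_packet(self, packet: "MediaPacket") -> None:
        self.render_primary(packet.payload_a)             # conventional display of the programme
        if not packet.has_secondary():
            return                                        # nothing to cascade; behave as a normal TV
        required = set(packet.metadata.get("requires", []))
        for device in self.registered:
            # Cascade only to devices whose capabilities satisfy the metadata requirements.
            if required <= device.capabilities:
                self.transmitter.send(device.device_id, packet.payload_b)

    def render_primary(self, payload_a: bytes) -> None:
        pass  # display/play the primary content; outside the scope of this sketch
```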
The payload includes primary media content as Payload A 34 and augmenting secondary media content as Payload B 36. The presence of the augmenting secondary content is identified by the setting of bits in the header and/or by the metadata. The precise communication protocol used to communicate media content, and therefore also the nature of the advisory header, is a design choice. It suffices to say that the delivery of primary media content as Payload A 34 and augmenting secondary media content as Payload B 36 simply needs to be identifiable, with their presence resolvable at the principal recipient device.
By linking the primary and secondary media content into a single packet, timing is maintained between the complementary media contents thereof. Of course, an alternative approach is to send the primary and secondary media content as separate transmissions and then to buffer and align these two media contents at the principal recipient device 42. However, this increases the processing burden at the principal recipient device 42, requires setting of an appropriate delay flag in the header as well as requiring the establishment of a link between potentially two disparate transmissions, although this alternative may allow for secondary content to be sourced from a separate server-side entity rather than for the content provider to pre-compile all the media contents/information in advance. In this latter alternative, logic at the server side may make use of demographic information and registered user identification to locate, source and then cause the sending of secondary media content in near real-time to an identified principal recipient device 42.
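For this latter, two-transmission alternative, one non-authoritative way to buffer and align the separately delivered secondary content is to hold it against a target playback time and release it when the primary content reaches that point. The timestamp-based scheme below is an assumption made for illustration; the description only requires a delay flag and a link between the two transmissions.

```python
import heapq
import itertools


class SecondaryAligner:
    """Buffers separately delivered secondary payloads until primary playback
    reaches each payload's target time (an assumed field for this sketch)."""

    def __init__(self):
        self._buffer = []                   # min-heap ordered by target playback time
        self._order = itertools.count()     # tiebreaker so payload bodies are never compared

    def buffer_secondary(self, target_time_s: float, payload_b: bytes) -> None:
        heapq.heappush(self._buffer, (target_time_s, next(self._order), payload_b))

    def release_due(self, playback_position_s: float) -> list:
        """Return, in order, the secondary payloads whose target time has now been reached."""
        due = []
        while self._buffer and self._buffer[0][0] <= playback_position_s:
            due.append(heapq.heappop(self._buffer)[2])
        return due
```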
The primary media content, delivered as Payload A 34, is basic content, e.g. the television programme or webpage. The augmenting secondary media content, delivered as Payload B 36, is data and/or control that is inserted by (or with the knowledge of) the content provider and relates to links or control that is to be cascaded automatically downwards from the principal recipient device 42 to registered ancillary devices 51-54 in a push operation. The augmenting secondary media content can then either be presented immediately at each registered (and, preferably, uniquely addressed) ancillary device, stored in local memory at the ancillary device for later recall and/or used to execute a local function to generate a sensory effect at the ancillary device. By way of non-limiting examples, the augmenting secondary media content could take the form of: a downloadable static or moving image from an affiliated sponsor of the primary content, e.g. a TV show, the downloadable image immediately displayable on the ancillary device and/or locally storable for later recall; a redeemable promotional coupon for an advert being presented as the primary content, thus allowing a smartphone to automatically receive and store the redeemable promotional coupon for subsequent use by the user at the point when the primary content is being screened/viewed; a voice or music file with associated cue points that allows for a registered ancillary device, such as a plush toy, to provide an interactive contemporaneous output that is distinct from that presented by or on the principal recipient device 42, i.e. the TV. In this way, the augmenting secondary media content may be part of the primary content albeit actioned from a spatially distinct point relative to the TV; a scripted sequence of actuation codes that is complementary to contemporaneously presented primary content. For example, the primary content may be a concert and the secondary content is a control sequence that actuates changing patterns of coloured LEDs on a Xyloband® wristband (or actuates a light on a smartphone) in a synchronized fashion in time with the music in the primary content; and interactive content that links at least one local ancillary device into media that is playing through the TV, such that the local media device and TV (and the local environment in which the ancillary device is active) together become part of the “set”, albeit that the set is personal to the vicinity/room in which the TV is situated. In this way, the ancillary device may be controlled, by the received and selectively and onwardly communicated payload, to (for example) turn on a local motor and/or project speech or sound at a point in time that correlates to a related on-screen event. For example, movement of an actor's hand in a video projected from the television screen would see a motor energised on the ancillary device to reflect sensory perception of the hand’s movement and its touching of an object. In another example, a crack of lightning in the video could lead to a vibration of the ancillary device to generate a local sensation of a shudder.
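Purely as a hypothetical illustration of such a scripted sequence of actuation codes, the cue list below pairs playback offsets with lighting effects to be pushed to addressed groups of ancillary devices. The group names, effect fields and timings are invented for this example and are not taken from the disclosure.

```python
# Hypothetical scripted sequence of actuation codes for a streamed concert.
# Each entry: (offset into the primary content in seconds, addressed group, effect parameters).
LIGHT_SCRIPT = [
    (12.0, "group_red",  {"colour": "red",   "pattern": "flash",  "rate_hz": 1.0}),
    (45.5, "group_red",  {"colour": "off"}),
    (45.5, "group_blue", {"colour": "blue",  "pattern": "pulse",  "period_s": 4.0}),
    (90.0, "all",        {"colour": "white", "pattern": "strobe", "rate_hz": 4.0}),
]


def due_actions(script, playback_position_s, window_s=0.1):
    """Return the actuation codes whose cue time falls within the current playback window,
    so that lighting changes stay synchronized with the music in the primary payload."""
    return [(group, effect)
            for cue_time_s, group, effect in script
            if playback_position_s <= cue_time_s < playback_position_s + window_s]
```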
Unique addressing is preferable since some augmenting secondary media content may not be applicable to certain types of device, e.g. a webpage could not be displayed on a plush toy, so the metadata is used to indicate the ancillary devices for which augmenting secondary media content is relevant. This addressing function can therefore cut down unnecessary local wireless transmissions.
Once received at the principal recipient device, the header and/or metadata are interpreted by control logic executable by the microprocessor 48. The control logic can be downloaded and stored in the memory 46 of the principal recipient device.
Similarly, with the ancillary devices, these can be pre-programmed or otherwise programmed from a download with appropriate control logic which permits control instructions, received from the principal recipient device, to be interpreted and actioned in a timely and coordinated fashion. If there is no locally registered ancillary device, then the augmenting secondary media content can be ignored by the principal recipient device and the principal recipient device tasked simply to deliver, in a conventional sense, the primary media content.

As used in this application, the terms “module”, “system”, “terminal”, “server”, “user/subscriber equipment/device” and the like are intended to refer to a computer-related entity, either hardware, firmware, a combination of hardware and software, software, or software in execution. For example, a component of the system (and as the context requires) can be, but is not limited to being, a process running on a processor (or interchangeably a “controller”), an object, an executable, a thread of execution, a program, and/or a computer. By way of illustration, both an application running on a computing device and the computing device can be a component. One or more components can reside within a process and/or thread of execution and a component can be localized on one processing board of a computing device, and/or distributed between two or more computing boards in many devices. In addition, these components can execute from various computer readable media having various data structures stored thereon. The components can communicate by way of local and/or remote processes such as in accordance with a signal having one or more data packets (e.g., data from one component interacting with another component in a local system, distributed system, and/or across a network such as the Internet with other systems by way of the signal).
It is understood that the specific order or hierarchy of steps in the processes disclosed herein is an example of exemplary approaches. Based upon design preferences, it is understood that the specific order or hierarchy of steps in the processes may be rearranged while remaining within the scope of the present disclosure. The accompanying method claims present elements of the various steps in sample order, and are not meant to be limited to the specific order or hierarchy presented unless a specific order is expressly described or is logically required.

Those skilled in the art will further appreciate that the various illustrative logical blocks, modules, circuits, methods and algorithms described in connection with the examples disclosed herein may be implemented as electronic hardware, computer software, or combinations of both. To clearly illustrate this interchangeability of hardware and software, various illustrative components, blocks, modules, circuits, methods and algorithms have been described above generally in terms of their functionality. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the overall system. Skilled artisans may implement the described functionality in varying ways for each particular application while remaining, either literally or equivalently, within the scope of the accompanying claims.

Unless specific arrangements are mutually exclusive with one another, the various embodiments described herein can be combined to enhance system functionality and/or to produce complementary functions in the effective automatic pushed delivery of augmenting secondary content to ancillary devices. Such combinations will be readily appreciated by the skilled addressee given the totality of the foregoing description. Likewise, aspects of the preferred embodiments may be implemented in standalone arrangements where more limited and thus specific component functionality is provided within each of the interconnected - and therefore interacting - system components (albeit that, in sum, they together support, realize and produce the described real-world effects). Indeed, it will be understood that unless features in the particular preferred embodiments are expressly identified as incompatible with one another or the surrounding context implies that they are mutually exclusive and not readily combinable in a complementary and/or supportive sense, the totality of this disclosure contemplates and envisions that specific features of those complementary embodiments can be selectively combined to provide one or more comprehensive, but slightly different, technical solutions.
It will, therefore, be appreciated that the above description has been given by way of example only and that modification in detail may be made within the scope of the present invention. For example, packet data delivery is but one data delivery option. For example, in another arrangement, rather than to integrate the functionality [that controls the auxiliary device] directly into the TV or computer, aspects of that functionality can be dissected and packaged into a separate component(s) which is/are then communicatively linked to the recipient device arranged to receive or inherently contain the source data. In the following description of arrangements, the term “recipient device” will therefore be understood [unless the context otherwise requires a different definition] to mean the initial device (but not the final device in an inter-connection of multiple devices) to which the source data is provided or stored.
In this respect, the source data (which can be live or animated image data, audio data or a combination of audio and video data) can be provided:
(i) as a file on a data carrier such as on a USB memory stick, or
(ii) preloaded into memory of a portable and commercially-purchasable video player (which means that, in this arrangement, the portable video player itself would act as the source, with the video player potentially supplied as merchandise from a gig), or
(iii) can be a broadcast or other recorded event that has been requested and then streamed to the smartphone (or the like) from a content provider, such as a TV station, YouTube or social media platform.
The source data is still consistent with the other embodiments in that it is supplied with and includes primary media content, i.e. primary payload, and complementary embedded control data, i.e. secondary payload. However, in contrast with having to interrogate a header and to decide how the control information is onwardly communicated, the control information (i.e. the secondary payload) is fully embedded and synchronized with aspects of the primary payload.
In certain embodiments, embedded control signals realize the secondary payload. These embedded control signals are preferably tone-based but may also be implemented in a different fashion to reflect cueing/trigger points in time. The embedded control signals occupy one of at least two audio channels provided in the source data, with the tones aligned in time with selected musical cue points. These embedded tones can, for example, align with the start of a particular guitar riff and then terminate at the precise time when the riff finishes and a chorus begins. This scheme is illustrated in FIGs. 2 and 3.
FIG. 2 is a waveform diagram illustrating how (in accordance with a preferred embodiment of the present invention) synchronized encoded tones (which are preferably sub-audio) are placed relative to time-varying audio segments designated for play as primary payload. Of course, the waveform may in fact be continuous in that the primary payload is a continuous audio-visual source, rather than the discrete envelopes of speech illustrated for exemplary purposes in FIG. 2. To achieve specific functional control of a sensory-perceivable effect, tones are selectively pre-correlated to a succession (but not necessarily a contiguous succession) of audio segments shown as envelopes in FIG. 2. Therefore, the source media file may contain pauses where there is silence, although the envelope may actually be continuous and span many seconds, several minutes or hours depending on whether the source is speech, music or video. Potentially each audio envelope - or a selected one or more of the audio envelopes or one or more discrete segments within a specific audio envelope - is correlated with a unique identifying code in the form of a tone (such as a sub-audio CTCSS tone). The tone corresponds to a desired functional effect, such as the duty cycle applied to a particular LED having a controllable colour. The tone is typically present for the entirety of the light effect that is to be controlled, with the presence thereby defining an on and off state for the [contextually exemplary] LED. The control tone therefore rises at substantially the beginning of the envelope (or its equivalent digital representation) and then ceases at substantially the end of the specific envelope. Following the same scheme, a first LED will have a first associated tone for addressing purposes, whereas a second LED will have a second, but different, associated tone. At points in time when the audio segment is itself a composition of multiple audio outputs, another different third tone (“tone x”) may be applied so as to control an effect collectively across multiple otherwise independently-addressable devices. For example, the third tone may stipulate different flash-rate patterns for otherwise independently and differently addressable groups of LEDs that have assigned colours according to a group designation. As indicated, tones may be for a duration that is less than or equal to the duration of each envelope. Control tones may, with an appropriate coding scheme that can be interpreted as containing distinct functions, also overlap within an envelope.
The tones (or codes, as the case may be) are therefore taken from a tone/code library that correlates to a pre-orchestrated effect. Mixing of the tones into each audio segment is through conventional signal-processing techniques known to the skilled addressee. The tones therefore act to control and synchronize operation of interactive devices, such as LED Xyloband® wristbands and other lights, located remotely from a central media player, such as a smartphone which receives the source content and which plays the video content aspect from the source data/content. The tone library is available to other components of the system so that those components can effect the desired function defined by a decoded instruction representative of the tone/code.
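A minimal sketch of this tone-placement and mixing step is given below, assuming a 44.1 kHz sample rate and a simple (start, end, tone) cue representation; in practice the mixing would be performed with conventional audio production tools rather than code of this kind.

```python
import numpy as np

SAMPLE_RATE = 44_100   # Hz; an assumed value for this sketch


def control_channel(duration_s, cues, tone_level=0.05):
    """Build a control-channel waveform in which each sub-audible tone is present only for
    the span of the audio envelope (and hence the effect) that it drives.

    `cues` is a list of (start_s, end_s, tone_hz) triples - an assumed representation;
    the description only requires that each tone rises and falls with its envelope."""
    n = int(duration_s * SAMPLE_RATE)
    t = np.arange(n) / SAMPLE_RATE
    channel = np.zeros(n)
    for start_s, end_s, tone_hz in cues:
        mask = (t >= start_s) & (t < end_s)
        # A low amplitude keeps the tone well below the audio components it is mixed with.
        channel[mask] += tone_level * np.sin(2 * np.pi * tone_hz * t[mask])
    return channel


# Example: a 67 Hz tone spanning a guitar riff, then a 71.9 Hz tone over the chorus.
ctrl = control_channel(180.0, [(10.0, 25.0, 67.0), (25.0, 55.0, 71.9)])
```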
CTCSS is an acronym for “Continuous Tone Coded Squelch System”. CTCSS is a sub-audible tone in the range of 67 Hz to 254 Hz. Conventionally, any one or more of about fifty tones (sometimes referred to as “sub-channels”) can be used to gain access to a repeater in a two-way radiotelephone system. Each CTCSS tone is therefore essentially a sine wave having a specific frequency. Of course, other forms of tone coding are possible.
To appreciate further the nature of these embodiments, reference is made to FIG. 3 which is a waveform diagram showing relative timing between an audio signal and the control tones respectively presented on right and left audio channels. For ease of reproduction only, the audio channel is shown as a simple undulating wave, rather than the underlying and more complex amplitude-varying envelope shown in FIG. 2.
In FIG. 3, audio for remote generation at a remote speaker is consolidated (mixed down) into a composite signal envelope that is assigned for transmission on the left channel of a stereo audio circuit. Each envelope has been mixed with its assigned control tone or code; this is represented by the overlaying of the small-amplitude control tone and the instantaneous audio output. The control tone or code preferably has a relatively low power level compared to the magnitude of the audio components in the envelope; this reduces the likelihood of introducing distortion, such as harmonics, into any audio signal recovered from the composite signal envelope for output. Of course, it will be understood that only the tones have any value and the audio on this channel is unnecessary other than to provide a base reference to the usable audio on the other channel.
To provide an effective reference in time for delivery of the audio signal, some form of “fill” or buffering 36 may also be included, if necessary, to time-separate adjacent audio outputs. For example, time buffering can take the form of a background media channel output earmarked for reproduction on a remote speaker.
A right channel of the audio circuit is assigned to communicate desired speaker output that, together with encoded specific audio, produces the distributed effect and complete media sound stage having sensory components distributed across multiple devices.
In other words, to begin with, the audio desired for output is placed entirely on one audio channel. A sub-audible tone relevant to activating an effect is then placed on a different channel and on the audio content timeline for the duration of the additional sensory effect, e.g. motion and/or light, that is being produced for reasons of enhancing the user-experience. One advantage of this is that the decoder in the remote device becomes significantly less likely to miss a tone as it constantly receives input.
As an example, a single effect-producing remote LED (or the like) can be programmed to respond to several different control tones that define different hues. For example, a tone of 67 Hz can be assigned to activate a red hue at an on-off frequency of 1 Hz or fractions of a Hertz (Hz), whereas a tone of 71.9 Hz can make the same LED change to a pulsating blue colour effect that grows and diminishes in light intensity over several seconds. Other effects are possible, as will be understood having regard to the Xyloband® interactive wristbands used widely at concerts.
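By way of example only, the tone library held at the remote device might be modelled as a small lookup table such as the one below. Apart from the 67 Hz and 71.9 Hz behaviours described above, the entries, tolerance and field names are assumptions introduced for illustration.

```python
# Hypothetical tone-to-effect table pre-stored in the remote device's memory.
TONE_EFFECTS = {
    67.0: {"colour": "red",   "pattern": "blink", "rate_hz": 1.0},
    71.9: {"colour": "blue",  "pattern": "pulse", "period_s": 4.0},   # slow grow/diminish effect
    74.4: {"colour": "green", "pattern": "solid"},                    # assumed additional entry
}


def effect_for_tone(detected_hz, tolerance_hz=0.5):
    """Resolve a detected sub-audible frequency to its pre-programmed lighting effect."""
    for tone_hz, effect in TONE_EFFECTS.items():
        if abs(detected_hz - tone_hz) <= tolerance_hz:
            return effect
    return None   # unrecognized tone: no change of effect
```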
Information in the audio channels is therefore split in the sense that one of its audio channels is assigned to contain the control tone/cue signaling scheme whereas at least one other distinct audio channel will deliver audio content (which could in fact be the primary payload). In a stereo environment, this would mean production of a pseudo-stereo effect achieved by replicating the distinct audio channel as identical mono outputs from each of the two speakers of the stereo system. The other audio channel that carries the tone control is thus a carrier or control channel and contributes nothing to the audio content, e.g. a song or piece of instrumental music, per se.
If the primary payload is video, the smartphone/recipient device/video player operates to deliver a local video output of the primary payload from the local display on the smartphone, video player, etc. In instances where the smartphone/recipient device/video player is further deliberately communicatively linked to a further auxiliary device, such as a Bluetooth-connected speaker, establishment of an active Bluetooth connection preferably acts to suppress local reproduction of audio content at the smartphone/recipient device/video player. This suppression avoids the potential for control tones to be generated and heard from a local speaker of the smartphone/recipient device/video player. If the source data is multi-media, this means that the audio is separated from the video, with the smartphone/recipient device/video player only generating the video whilst the audio data is appropriately modulated and addressed so that it is transmitted onwards for reproduction/generation elsewhere in the multi-component system.
Of course, rather than a Bluetooth connection, other forms of local link between the [erstwhile Bluetooth] connected speaker and auxiliary device [that has received or has the source data] can be implemented, with these well-known and readily appreciated by the skilled person, e.g. a wireless optical connection or other wireless connection.
Moreover, connectivity to the exemplary Bluetooth speaker may itself be achieved via an independently supplied dongle/adaptor which connects via a suitable multi-pin connector into the circuitry of the Bluetooth connected speaker. The dongle/adaptor is therefore an intermediate, independently merchandisable component. The use of a further dongle/adaptor allows for backwards integration of the control technology with existing Bluetooth speakers. The dongle/adaptor (or its equivalent circuitry if located in a related device) can thus be understood to be a gateway for the audio channels and therefore it either acts to present the audio channel [carrying the audio content] to the Bluetooth speaker or forwards (by further transmission) the control data embedded in the other audio channel to yet another responsive device, e.g. a wirelessly controlled LED wristband or controllable disco lights either in or remote from the cabinet of the Bluetooth speaker.
The dongle/adaptor (or an updated Bluetooth speaker incorporating the dongle/adaptor’s functionality as now explained) includes a receiver chain, a transmitter, control logic (such as a PIC) and memory.
The receiver chain processes, i.e. demodulates, the received signal from (for example) the video player and converts it from the digital to the analog domain. This incident received (exemplary) Bluetooth signal, as explained, contains two audio components, namely the left and right audio channels. However, since only one of the left or right audio channels includes audio data (i.e. information relating to the instrumental and the vocals of a digitized musical track) whereas the other audio channel includes tone controls that define functions, the dongle/adaptor circuitry passes the audio data (essentially as a line input) to the speaker circuitry that operates to convert the processed signal into an amplified output. Moreover, the recovered audio signal is duplicated and used to generate a dual-mono audio output as the audible output. It will be understood that using a complementary external speaker provides better audio response since, invariably, a dedicated speaker has better quality components and higher fidelity relative to the generally smaller, lower-cost speakers provided within the smartphone, recipient device and/or video player.
Control circuitry in the dongle/adaptor is arranged to interpret the demodulated carrier to recover the control tones on the other audio channel. More particularly, typically using a PIC (programmable IC) or a microcontroller, the demodulated embedded control signal is interpreted relative to frequency/tone codes that are pre-stored in local memory. These codes correlate to, for example, on-off duration times for a particular colour of LED in a Xyloband® interactive LED wristband (see http://xylobands.com/). Once the control code/function has been identified, the PIC is arranged to instantiate the transmitter function and to cause an appropriate control instruction to be modulated onto a carrier (or otherwise communicated) for transmission from the dongle/adaptor’s transmitter to the Xyloband LED wristband or the like. The wristband then itself interprets the received instructions, following appropriate decoding, to coordinate/synchronize the lighting effects with the audio generated at the speaker and/or smartphone, etc. and also to be coordinated with the video (if appropriate and available) from a display of the video player/smartphone.
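A simplified, non-authoritative sketch of this decode-and-forward behaviour is shown below. It uses the Goertzel algorithm to test for library tones on the control channel while routing the audio channel to the speaker; the speaker and transmitter interfaces, block size and detection threshold are all assumptions made for illustration, and a real PIC/microcontroller implementation would differ.

```python
import math

CTCSS_TONES = [67.0, 71.9, 74.4, 77.0]   # subset of the tone library; assumed values


def goertzel_power(samples, sample_rate, target_hz):
    """Goertzel algorithm: relative power of a single target frequency in a block of samples."""
    n = len(samples)
    k = int(0.5 + n * target_hz / sample_rate)
    w = 2.0 * math.pi * k / n
    coeff = 2.0 * math.cos(w)
    s_prev = s_prev2 = 0.0
    for x in samples:
        s = x + coeff * s_prev - s_prev2
        s_prev2, s_prev = s_prev, s
    return s_prev2 * s_prev2 + s_prev * s_prev - coeff * s_prev * s_prev2


def detect_tone(control_samples, sample_rate, threshold=1e3):
    """Pick the library tone with the strongest response, if any exceeds the (assumed) threshold."""
    powers = {t: goertzel_power(control_samples, sample_rate, t) for t in CTCSS_TONES}
    tone, power = max(powers.items(), key=lambda kv: kv[1])
    return tone if power > threshold else None


def dongle_loop(control_channel, audio_channel, sample_rate, speaker, transmitter, block=16384):
    """Route the audio channel (as dual mono) to the speaker and translate any detected tone
    into an onward instruction for the wristband, block by block.

    The block must be long enough to resolve closely spaced CTCSS tones (about 0.4 s here)."""
    for i in range(0, len(audio_channel), block):
        speaker.play(audio_channel[i:i + block])                      # assumed speaker interface
        tone = detect_tone(control_channel[i:i + block], sample_rate)
        if tone is not None:
            transmitter.send({"tone": tone})                          # wristband cross-references the tone
```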
Regardless of whether there is a dongle/adaptor or whether control logic and circuitry is integrated on a board in the speaker, control functionality routes the audio channel containing the audio signals to the speaker or transmits instructions to control lighting effects in a way that is time-synchronized with the video and/or audio content being played by the components (e.g. the smartphone or video player and separate Bluetooth speaker).
To provide a context, this system arrangement permits, for example, an augmented coded album of an artist to be played by a fan in a home environment, with the coding of the audio permitting the fan to experience, on a smaller but personal scale, the light-show experienced at the gig. The music may be played through a distinct speaker that is wirelessly linked to the smartphone on which the coded album resides or is streamed, whilst video is viewed on the smartphone. The lighting effect is controlled by the tones being interpreted at the secondary device (not the smartphone), which tones are then transmitted onwards (generally as specific cross-referenceable instructions) to, for example, controllable LED wristbands that are linked to the secondary device. The controllable LED wristbands cross-reference received instructions and then execute the corresponding lighting effect. By using instructions in the final hop, the system is considered to be more robust, and the exemplary LED devices can then operate across multiple platforms provided that the secondary device includes a reference, and preferably updateable, database that correlates tones with desired functions.

Claims

1. A method of delivering augmented content in an interconnected network of a first device wirelessly connected to at least a second device, wherein the first device includes a display and the second device includes a transceiver for receiving, from the first device, a signal containing at least two audio channels wherein a first one of the audio channels includes time-varying audio and a second one of the pair of audio channels contains time-aligned control signaling that correlates a sensory-perceivable function with the time-varying audio, the method comprising: providing a media source file to the first device, said media source file including at least said at least two audio channels; selectively displaying, when available in the media source file, video content from the media source file on the display of the first device; transmitting the signal to the second device, wherein the signal is derived from an aspect of the media source file; at the second device, processing the signal received by a receiver function at the second device to cause a speaker remote to the first device to output the time-varying audio; at the second device, decoding the time-aligned control signaling to identify the sensory-perceivable function; and transmitting, using a transmit function of the transceiver in the second device, an instruction to a third remote device, the instruction representing the sensory-perceivable function decoded from the time-aligned control signaling and wherein the instruction is configured to cause the third device to synchronize performance of the sensory-perceivable function with output of the time-varying audio at the second device.
2. The method of claim 1, wherein the second device is an adaptor and the method further includes coupling the adaptor into an input port of a wireless speaker.
3. The method of claim 1 or 2, wherein the time-aligned control signaling includes tones, and the third device is an LED light whose operation is controlled in response to the instruction.
4. The method of any preceding claim, wherein the media source file is one of: a) streamed to the first device over a communications link; b) downloaded to the first device upon request; and c) pre-loaded into memory of the first device.
5. A method of delivering augmented content to a device, the method comprising: at a primary recipient device, receiving a data packet containing a primary payload and a secondary payload, the primary payload for delivery by the primary recipient device through a sensory output associated with the primary recipient device; in response to identifying the presence of the secondary payload, extracting the secondary payload from the data packet and activating a transmitter in the primary recipient device to send the secondary payload to an ancillary device that is preregistered with the primary recipient device; at the ancillary device, receiving the secondary payload and then at least one of: storing the secondary payload; playing the secondary payload through an output device of the ancillary device; changing a mode of operation of the ancillary device; and controlling operation of the ancillary device in response to the secondary payload, wherein the primary payload is a music video or a streamed concert and the secondary payload is a scripted sequence of actuation codes that are complementary to the presented sensory output on the primary recipient device and the scripted sequence of actuation codes change, in a synchronized fashion in time with music in the primary payload, illumination patterns of coloured LEDs associated with the ancillary device.
6. The method of claim 5, wherein the primary recipient device is one of a television and a computer.
7. The method of claim 5 or 6, wherein the data packet includes a header that addresses the primary recipient device.
8. The method of claim 7, wherein the header includes a data field identifying the presence of the secondary payload.
9. The method of any of claims 5 to 8, wherein playing of content in the primary payload on the primary recipient device is coordinated in time with the playing, on the ancillary device, of content in the secondary payload.
10. The method of any of claims 5 to 9, wherein any change in operation of the ancillary device is coordinated in time with the playing of content in the primary payload on the primary recipient device.
11. The method of any of claims 5 to 8, wherein the secondary payload is actioned at the ancillary device independently of sensory presentation of the primary payload by the primary recipient device.
12. The method of any of claims 5 to 11, wherein the secondary payload further includes at least one of: a downloadable static or moving image from an affiliated sponsor of the primary payload; a redeemable promotional coupon for an advert being presented as the primary payload; and a voice or music file with associated cue points that allows for the registered ancillary device to provide an interactive contemporaneous output that is distinct from the primary payload presented by or on the primary recipient device.
13. An entertainment system comprising: a television or computer (42) having: an input over which is received, from a remote content provider, data containing a primary payload directly executable by the television or computer and a secondary payload having content distinct from the primary payload; a display; a transmitter; and a first controller executing control logic to control display of content, extracted from the primary payload, on the display and selectively to control sending of the secondary payload using the transmitter; and an ancillary device (51-54) wirelessly coupled to the television or computer, the ancillary device including: a receiver arranged to receive the secondary payload sent from the transmitter; a second controller arranged to control operation of the ancillary device, wherein the second controller is further arranged to interpret said content of the secondary payload; and a sensory output from which the secondary payload can be presented, wherein presentation of the secondary payload at the ancillary device is dependent upon interpretation, by one of the first controller and the second controller, of the content of the secondary payload, and wherein the primary payload is a music video or a streamed concert and the secondary payload is a scripted sequence of actuation codes that are complementary to the presented sensory output on the television or computer (42) and the scripted sequence of actuation codes change, in a synchronized fashion in time with music in the primary payload, illumination patterns of coloured LEDs associated with the ancillary device (51-54).
14. The entertainment system of claim 13, wherein the ancillary device includes a transmitter for communicating a registration request to a second receiver of the television or computer, wherein the registration request authorizes the receipt of secondary payload by the ancillary device.
15. The entertainment system of claim 13 or 14, wherein any change in operation of the ancillary device is coordinated in time with the playing of the primary payload on the television or computer.
PCT/GB2020/050181 2020-01-28 2020-01-28 T entertainment system and method of delivery augmented content WO2021152280A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/GB2020/050181 WO2021152280A1 (en) 2020-01-28 2020-01-28 T entertainment system and method of delivery augmented content

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/GB2020/050181 WO2021152280A1 (en) 2020-01-28 2020-01-28 T entertainment system and method of delivery augmented content

Publications (1)

Publication Number Publication Date
WO2021152280A1 true WO2021152280A1 (en) 2021-08-05

Family

ID=69593726

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/GB2020/050181 WO2021152280A1 (en) 2020-01-28 2020-01-28 T entertainment system and method of delivery augmented content

Country Status (1)

Country Link
WO (1) WO2021152280A1 (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5191615A (en) * 1990-01-17 1993-03-02 The Drummer Group Interrelational audio kinetic entertainment system
US5636994A (en) * 1995-11-09 1997-06-10 Tong; Vincent M. K. Interactive computer controlled doll
US5655945A (en) * 1992-10-19 1997-08-12 Microsoft Corporation Video and radio controlled moving and talking device
US20110025912A1 (en) * 2008-04-02 2011-02-03 Jason Regler Audio or Audio/Visual Interactive Entertainment System and Switching Device Therefor
US20130143482A1 (en) * 2010-06-10 2013-06-06 Jason Regler Media Delivery System and a Portable Communications Module for Audio and Remote Control of Interactive Toys or Devices
US20140184386A1 (en) * 2011-08-11 2014-07-03 Regler Limited (a UK LLC No. 8556611) Interactive lighting effect wristband & integrated antenna

Similar Documents

Publication Publication Date Title
US20210203708A1 (en) Internet streaming of dynamic content from a file
JP4334470B2 (en) Ambient light control
US20130198786A1 (en) Immersive Environment User Experience
CN1976431B (en) Control device and method for interacting between media source, amusement system and the same
KR100989079B1 (en) System and method for orchestral media service
US20130147396A1 (en) Dynamic Ambient Lighting
JP2004514162A (en) Method and apparatus for transmitting commands
CN103024454B (en) Method and system for transmitting interaction entry information to audiences in broadcasting and TV programs
US10521178B2 (en) Method of controlling mobile devices in concert during a mass spectators event
KR20200050449A (en) Performance directing system
US20240057234A1 (en) Adjusting light effects based on adjustments made by users of other systems
JP2005006037A (en) Medium synchronization system and service providing method used for the same
WO2021152280A1 (en) T entertainment system and method of delivery augmented content
KR20200050448A (en) Performance directing system
KR20070080381A (en) Method for playing multimedia data in wireless terminal
KR20180064010A (en) System for controlling lighting using broadcasting supplement service
CN113077799A (en) Decoder arrangement with two audio links
GB2577238A (en) Entertainment system and method of delivery of augmented content
US10863274B2 (en) Themed ornaments with internet radio receiver
US10999424B2 (en) Method of controlling mobile devices in concert during a mass spectators event
US10536496B2 (en) Themed ornaments with internet radio receiver
KR20190108758A (en) Apparatus of Karaoke Platform
US20230336916A1 (en) Themed ornaments with internet radio receiver
US9936316B2 (en) Themed ornaments with internet radio receiver
US9693140B2 (en) Themed ornaments with internet radio receiver

Legal Events

Date Code Title Description
NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20705781

Country of ref document: EP

Kind code of ref document: A1