GB2617804A - Home concert system, method therefor and audio system - Google Patents

Home concert system, method therefor and audio system

Info

Publication number
GB2617804A
GB2617804A
Authority
GB
United Kingdom
Prior art keywords
audio
time
varying
remote
channels
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
GB2100788.5A
Other versions
GB202100788D0 (en)
Inventor
Charles Regler Jason
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Individual
Original Assignee
Individual
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Individual filed Critical Individual
Priority to GB2100788.5A priority Critical patent/GB2617804A/en
Publication of GB202100788D0 publication Critical patent/GB202100788D0/en
Publication of GB2617804A publication Critical patent/GB2617804A/en
Pending legal-status Critical Current

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/4302Content synchronisation processes, e.g. decoder synchronisation
    • H04N21/4307Synchronising the rendering of multiple content streams or additional data on devices, e.g. synchronisation of audio on a mobile phone with the video output on the TV screen
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/431Generation of visual interfaces for content selection or interaction; Content or additional data rendering
    • GPHYSICS
    • G08SIGNALLING
    • G08BSIGNALLING OR CALLING SYSTEMS; ORDER TELEGRAPHS; ALARM SYSTEMS
    • G08B5/00Visible signalling systems, e.g. personal calling systems, remote indication of seats occupied
    • G08B5/22Visible signalling systems, e.g. personal calling systems, remote indication of seats occupied using electric transmission; using electromagnetic transmission
    • G08B5/222Personal calling arrangements or devices, i.e. paging systems
    • G08B5/223Personal calling arrangements or devices, i.e. paging systems using wireless transmission
    • G08B5/224Paging receivers with visible signalling details
    • G08B5/228Paging receivers with visible signalling details combined with other devices having a different main function, e.g. watches
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04HBROADCAST COMMUNICATION
    • H04H20/00Arrangements for broadcast or for distribution combined with broadcast
    • H04H20/28Arrangements for simultaneous broadcast of plural pieces of information
    • H04H20/30Arrangements for simultaneous broadcast of plural pieces of information by a single channel
    • H04H20/31Arrangements for simultaneous broadcast of plural pieces of information by a single channel using in-band signals, e.g. subsonic or cue signal
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/41Structure of client; Structure of client peripherals
    • H04N21/4104Peripherals receiving signals from specially adapted client devices
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/41Structure of client; Structure of client peripherals
    • H04N21/414Specialised client platforms, e.g. receiver in car or embedded in a mobile appliance
    • H04N21/41415Specialised client platforms, e.g. receiver in car or embedded in a mobile appliance involving a public display, viewable by several users in a public space outside their home, e.g. movie theatre, information kiosk
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/436Interfacing a local distribution network, e.g. communicating with another STB or one or more peripheral devices inside the home
    • HELECTRICITY
    • H05ELECTRIC TECHNIQUES NOT OTHERWISE PROVIDED FOR
    • H05BELECTRIC HEATING; ELECTRIC LIGHT SOURCES NOT OTHERWISE PROVIDED FOR; CIRCUIT ARRANGEMENTS FOR ELECTRIC LIGHT SOURCES, IN GENERAL
    • H05B47/00Circuit arrangements for operating light sources in general, i.e. where the type of light source is not relevant
    • H05B47/10Controlling the light source
    • H05B47/165Controlling the light source following a pre-assigned programmed sequence; Logic control [LC]
    • HELECTRICITY
    • H05ELECTRIC TECHNIQUES NOT OTHERWISE PROVIDED FOR
    • H05BELECTRIC HEATING; ELECTRIC LIGHT SOURCES NOT OTHERWISE PROVIDED FOR; CIRCUIT ARRANGEMENTS FOR ELECTRIC LIGHT SOURCES, IN GENERAL
    • H05B47/00Circuit arrangements for operating light sources in general, i.e. where the type of light source is not relevant
    • H05B47/10Controlling the light source
    • H05B47/175Controlling the light source by remote control
    • H05B47/19Controlling the light source by remote control via wireless transmission

Abstract

External speakers (70, 72) are addressed via the intermediately located splitter (50) which provides inter-operability between different vendor technology platforms, namely a media player (22) and the speakers (70, 72). Separating audio generation, via the splitter (50), and assigning processing to a remote speaker rather than at an effective source of a media file (10) permits content-synchronized audio triggers (12a-12j), such as tones, to be embedded on one of the channels of a ubiquitous stereo pair, whilst a full record of, for example, a music track (60) is supported on the other channel of the stereo pair. The audio triggers are extracted by the splitter (50), converted into control data that is onwardly communicated to control local visual effects on, for example, RF-controlled LED wristbands or discrete lighting units. In response to the control data, these lighting devices present a pre-programmed and coordinated lighting effect synchronized to instantaneous audio output from the speaker(s) and video images generated on the media player (22). The splitter (50) replicates the music track on a plurality of audio channels and communicates this information so that it can be broadcast from the speaker(s) that each receive the same music content but over different channels. The effect is a fully immersive interactive sensory concert experience at a remote location, with the splitter (50) making use of a ubiquitous and universally-accepted audio channel whilst ensuring that synchronized control tones do not bleed into the generated audio because the audio processing is separated at the splitter.

Description

HOME CONCERT SYSTEM, METHOD THEREFOR
AND AUDIO SYSTEM
Field of the Invention
This invention relates, in general, to an entertainment system and is particularly, but not exclusively, applicable to a home concert system and associated method in which stage and/or arena lighting effects are brought (i.e. replicated in potentially near-real time) into a local (e.g. home) environment and coordinated with audio content that is played in the local environment. Aspects of the invention also relate to an audio system.
Summary of the Prior Art
Over the past decade, audience members (such as concert-goers) have become part of a lighting show at a concert or gig. The interactive effect has been achieved through a distribution, to those audience members, of radio frequency-controlled LED wristband technologies, such as the XYLOBAND® concert wristband.
Whilst there are also optical solutions that target selective LED activation in a wristband containing one or more coloured LEDs, radio frequency (RF) control is considered optimum because it is near instantaneous and isn't generally affected by atmospheric conditions, such as smoke. In terms of operation, an encrypted control code is sent over one or more radio channels in an unlicensed radio band. Each wristband, which may be uniquely identifiable or may be addressed as one of a batch of multiple wristbands, then decodes the control code and uses the code to look up a locally-stored activation sequence. The wristband then controls LED activation, including colour presentation and/or flash rates, using the activation sequence.
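The decode-then-look-up flow described above can be sketched as follows. This is a minimal illustrative sketch only: the XOR "decryption", the code values and the sequence format are all assumptions and not taken from the patent or from any real wristband protocol.

```python
# Hypothetical wristband control flow: a received, encrypted control code is
# decoded and used to look up a locally-stored activation sequence, which the
# wristband then runs on its LEDs.

ACTIVATION_SEQUENCES = {
    # code -> list of (colour, flash_rate_hz, duration_s) steps (illustrative)
    0x01: [("red", 2.0, 4.0), ("off", 0.0, 0.5)],
    0x02: [("blue", 8.0, 2.0)],
}

def decode_control_code(encrypted: bytes, key: int = 0x5A) -> int:
    """Toy single-byte XOR standing in for the real encrypted control code."""
    return encrypted[0] ^ key

def run_wristband(encrypted: bytes) -> list:
    """Decode the code, look up the stored sequence and 'execute' it."""
    code = decode_control_code(encrypted)
    sequence = ACTIVATION_SEQUENCES.get(code, [])
    executed = []
    for colour, flash_rate, duration in sequence:
        # A real wristband would drive its LEDs here for `duration` seconds.
        executed.append((colour, flash_rate, duration))
    return executed
```

The point of the local look-up table is that only a short code travels over the air; the (potentially elaborate) activation sequence is already stored on the wristband.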
Once the concert is over, the wristbands can be recycled, but otherwise they potentially become obsolete.
Each audience member therefore has a time-limited, one-off interactive experience between the light show and the delivered (typically) musical content, with the artist then moving on to another venue to deliver a set-list to a new audience.
Unfortunately, with all live events, audience numbers are restricted to the capacity of the concert venue, e.g. a stadium or theatre or outside gig, such as occurs at Glastonbury.
Besides the costs associated with attending a live event, there is also a degree of luck associated with obtaining tickets. An avid fan may therefore, for one reason or the other, simply never be in a position to experience a live interactive gig, and an artist presently may not really reach out into their entire fan-base to provide those fans with an immersive audio-visual experience.
The technical issue is therefore how an interactive light show, related to musical delivery, can be provided to a remotely located fan or customer at a time convenient to either the artist or the fan/customer.
Summary of the Invention
According to a first aspect of the present invention there is provided an interconnected sound and light system comprising: i) a first device receptive to or containing a source file containing at least audio content including a time-varying audio track and related synchronized functional triggers; and ii) a splitter connected to the first device, the splitter containing: a signal splitting function coupled to receive the audio content and arranged to separate the time-varying audio track from the related synchronized functional triggers; an audio channel duplicator function, responsive to the signal splitting function, arranged to replicate the time-varying audio track on at least two channels; at least one output coupled to receive the at least two channels generated by the audio channel duplicator function, wherein the time-varying audio track or a digital representation thereof is provided as an output therefrom; a control signal generator coupled to receive the related synchronized functional triggers and arranged to convert those functional triggers into control data defining programs that control generation of sensory outputs of at least one sensory augmentation device remote to the splitter; a transmitter chain arranged to transmit the control data; iii) a speaker system having at least two channels and wherein the speaker system is coupled to the at least one output and generates commonly on its at least two channels the time-varying audio track; and iv) at least one sensory augmentation device each having a wireless receiver and at least one processor-controlled light source, wherein each sensory augmentation device is responsive to the control data and is arranged, in response thereto, to control light emissions from said at least one light source in synchronicity with generation of the time-varying audio track from the speaker system whereby a time relationship between the time-varying audio track and related synchronized functional triggers of the source file is substantially maintained.
The first device may include a display and the source file includes visual content that is time-aligned with the audio content and wherein the system is arranged to coordinate display of the visual content in synchronicity with generation of the time-varying audio track from the speaker system and controlled light emissions from said at least one light source.
The source file may be streamed data received by the first device.
The first device can include a local speaker, wherein the local speaker is selectively disabled in response to the synchronized functional triggers.
The synchronized functional triggers are, in one embodiment, audio tones embedded on a first audio channel of a pair of audio channels that convey the audio content.
The sensory augmentation device can include a haptic generator operationally responsive to said received control data, as well as a plurality of LEDs (such as incorporated into a wearable item of attire or an ornamental housing).
The system typically includes a plurality of the sensory augmentation devices, selected to include at least one of a wristband, a lighting unit and a smartphone.
The transmitter chain may be further arranged to transmit the digital representation of the time-varying audio track to an addressable wirelessly-connected speaker.
In a second aspect of the invention there is provided a signal splitter device containing: a signal splitting function arranged to receive audio content having a first audio signal component and a second audio signal component, wherein the first audio signal component is a time-varying audio track and the second audio signal component are functional audio triggers synchronized to events in the time-varying audio track; an audio channel duplicator function, responsive to the signal splitting function, arranged to replicate the time-varying audio track on at least two channels; at least one output coupled to receive the at least two channels generated by the audio channel duplicator function, wherein the time-varying audio track or a digital representation thereof is provided as an output therefrom; a control signal generator coupled to receive the related synchronized functional audio triggers and arranged to convert those functional audio triggers into control data defining programs that control generation of sensory-perceivable outputs in at least one sensory augmentation device remote to the splitter; a first transmitter chain arranged to transmit the control data; and a second transmitter chain arranged to transmit the replicated time-varying audio track commonly over at least two channels.
In a further aspect of the invention there is provided a method of processing, in a device, media content containing audio content having a time-varying audio track component and an audio trigger component, wherein the audio trigger component contains a plurality of audio triggers each synchronized to an event in the time-varying audio track and wherein the device generates a remote light show that complements a light show produced at a music event associated with the media content but remote from the device, the method comprising: at the device, in response to reading a media file or receiving a data stream containing the media content related to the music event, splitting the time-varying audio track component from the audio trigger component, the audio triggers being time synchronized both to start and end with events in the time-varying audio track component; duplicating the time-varying audio track component on at least two channels and outputting those at least two channels for audio reproduction of the time-varying audio track component from a stereo speaker system external to, remote from but connected to the device through a wireless connection or wired output port; with reference to a look-up table in the device, cross-referencing identified audio triggers with pre-stored control data that define programs that control generation of sensory-perceivable outputs in at least one sensory augmentation device remote to the device; and addressing and selectively wirelessly transmitting the control data to at least one address-identifiable sensory augmentation device to control light emissions from said at least one address-identifiable sensory augmentation device, thereby augmenting sensory-perceivable outputs by coordinating the remote light show with the light show produced at the music event.
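The core data flow of this method (separate trigger channel from music, duplicate the music onto two output channels, translate triggers into pre-stored control data via a look-up table) can be sketched in a few lines. The channel layout, the trigger identifiers and the table contents below are illustrative assumptions, not values from the patent.

```python
# Sketch of the splitter-side method: triggers arrive on one channel, music
# on the other; the music is duplicated onto two channels while each
# recognised trigger is cross-referenced with pre-stored control data.

CONTROL_TABLE = {
    # trigger id -> control data identifying a pre-programmed lighting program
    "tone_440": b"\x01",   # e.g. chorus lighting program (assumed)
    "tone_880": b"\x02",   # e.g. strobe program (assumed)
}

def process_media(left_triggers, right_audio):
    """left_triggers: iterable of detected trigger ids;
    right_audio: samples of the time-varying audio track."""
    # Duplicate the time-varying audio track on at least two channels.
    channel_a = list(right_audio)
    channel_b = list(right_audio)
    # Cross-reference identified triggers against the look-up table;
    # unrecognised triggers are silently ignored in this sketch.
    control_data = [CONTROL_TABLE[t] for t in left_triggers if t in CONTROL_TABLE]
    return (channel_a, channel_b), control_data
```

The returned channel pair would feed the speaker system, while the control data would be addressed and wirelessly transmitted to the sensory augmentation devices.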
The media content may further comprise video content time-aligned with the audio content, and the method includes: on a display device remote to the music event, generating the video content substantially in time synchronicity with the remote light show and the audio reproduction of the time-varying audio track component from the stereo speaker system. In an embodiment, reproduction of the time-varying audio track component is audibly isolated from speakers co-located in a media player containing the display device.
In yet another aspect of the invention there is provided a method of processing, in a device, media content containing audio content having a time-varying audio track component and an audio trigger component, wherein the audio trigger component contains a plurality of audio triggers each synchronized to an event in the time-varying audio track and wherein the device generates a remote light show that complements a light show associated with an album containing music coded onto a data carrier, the method comprising: at the device, in response to reading a media file or receiving or downloading data containing the media content related to the music event, splitting the time-varying audio track component from the audio trigger component, the audio triggers being time synchronized both to start and end with events in the time-varying audio track component; duplicating the time-varying audio track component on at least two channels and outputting those at least two channels for audio reproduction of the time-varying audio track component from a speaker system external to, remote from but connected to the device through a wireless connection or wired output port; with reference to a look-up table in the device, cross-referencing identified audio triggers with pre-stored control data that define programs that control generation of sensory-perceivable outputs in at least one sensory augmentation device remote to the device; and addressing and selectively wirelessly transmitting the control data to at least one address-identifiable sensory augmentation device to control light emissions from said at least one address-identifiable sensory augmentation device, thereby augmenting sensory-perceivable outputs by coordinating the remote light show with musical content.
The data carrier may be a CD, DVD or portable memory device.
Advantageously, the system and method of the present invention permits, firstly, a universal interactive experience to be coordinated between a source event (whether "live" or recorded, and regardless of whether it is streamed, buffered or stored) and a local home environment to produce local generation of an immersive sensory experience from wristbands (and the like) supporting controllable LED lights and/or haptic generators, e.g. vibrators.
In fact, the event could be a studio session where an artist is remotely isolated from a multitude of connected fans distributed at one or more remote locations, or in fact the event could be pre-recorded and made available in a source format, such as on a DVD, in memory of a purchased audio-visual device (such as a smartphone) or downloaded or streamed from a pay-to-view subscription on-line service.
An artist, in effect, is able to self-isolate (if desired) or otherwise play to a crowd at a gig and yet also instantaneously and universally reach more fans than could be accommodated at, for example, a stadium event. At times when social gatherings are restricted, such as a global pandemic, the present invention provides a mechanism by which concert experiences can be mimicked at a home environment and ensure that the "show must go on".
Aspects of the invention permit remotely-located fans to be able, in real-time or later, to subscribe to and watch, for example, a live-streamed concert [noting that live streaming usually includes some limited time delay for broadcast manipulation and control reasons] on a television or smartphone (generically a "media player"), whilst having audio content produced on speakers local to the remotely-located fan. The speakers are addressed via an intermediate decoding device (or "splitter") which provides inter-operability between different vendor technology platforms, namely the media player and speakers. Separating audio generation, via the splitter, to a remote speaker permits audio triggers to be included on one of the channels of a ubiquitous stereo pair, whilst a full record of the audio content is supported on the other channel of the stereo pair. Synchronized control information, in the preferred form of audio tones (generated using frequency modulation), is extracted by the splitter and then onwardly communicated to control local visual effects on, for example, RF-controlled Xyloband wristbands or discrete lighting units. These lighting devices can then present a pre-programmed and coordinated lighting effect synchronized to instantaneous audio output from the speaker(s) and, in some embodiments, synchronized with video images generated on the media player. The effect is a fully immersive interactive concert experience at a remote location, with the splitter making use of a ubiquitous and universally-accepted audio channel whilst ensuring that synchronized control tones do not bleed into the generated audio. The splitter therefore provides, at the remote location, a component integration function that avoids having to adapt either the physical or software architectures of the media player or the physical or software functionality in the speaker(s), such as commonplace speakers supporting Bluetooth® connectivity and other wireless protocols.
The invention furthermore means that, from a suitably formatted source containing audio content, the gig can be delayed but replicated (and indeed repeatedly replicated) in a local home environment.
In yet another aspect of the invention there is provided an audio system comprising: a master device receptive to or containing a source audio file having a time-varying audio track separated across a pair of channels, the master device including: a splitter arranged to process the source audio file to separate a first channel of the pair from a second channel of the pair; an audio output for outputting, as an audio signal, the first channel; and at least one transmitter chain arranged to pair the master device with a remote device and to transmit, when paired with the remote device, the second channel.
The audio system may further comprise a slave device including: at least one receiver chain configured to receive transmissions from the master device; processing intelligence coupled to the at least one receiver chain, the processing intelligence arranged to establish pairing with the master device and to recover the second channel of the time-varying audio track from received transmissions; a speaker for interactively outputting audio of the second channel in time synchronicity with audio output of the first channel from and at the master device.
The slave device preferably includes at least one motor and the processing intelligence is further configured to control operation of said at least one motor in response to audio levels determined from analysis of the second audio channel.
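The motor control described above (a haptic motor driven in response to audio levels derived from the second channel) can be sketched as an RMS-level-to-duty-cycle mapping. The RMS measure, the 0..255 duty range and the full-scale value are illustrative assumptions; the patent only states that the motor responds to audio levels determined from analysis of the second audio channel.

```python
import math

# Hedged sketch: map the RMS level of a block of second-channel audio
# samples to a motor drive value, so louder passages produce stronger
# haptic output on the slave device.

def motor_intensity(samples, full_scale=1.0):
    """Map the RMS level of an audio block to a 0..255 motor duty value."""
    if not samples:
        return 0
    rms = math.sqrt(sum(s * s for s in samples) / len(samples))
    return min(255, int(255 * rms / full_scale))
```

In practice a real slave device would recompute this per audio block (e.g. every few tens of milliseconds) and feed the result to a PWM output driving the motor.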
The master device may be a smartphone. In an embodiment, the master device includes at least one receiver arranged to receive the source file from a remote component selected from the group consisting of: a television; a smartphone; and an internet connection. The source file can be communicated wirelessly to the master device from the remote component.
Brief Description of the Drawings
Exemplary embodiments of the present invention will now be described with reference to the accompanying drawings, in which:
FIG. 1 illustrates a preferred process by which a media file is created from audio triggers and audio and video content;
FIG. 2 is a block diagram showing a preferred system that processes and presents the media file of FIG. 1; and
FIG. 3 is a block diagram showing processing functions of the splitter of FIG. 2.
Detailed Description of a Preferred Embodiment
FIG. 1 illustrates a preferred process by which a media file 10 is created from audio triggers 12 (shown as waveforms 12a-12j) in combination with at least audio content 14 and preferably both audio content 14 and video content 16. Unless the content requires a more specific interpretation, the term "media content" will be used as a generic description for audio content and/or video content.
Contextual applications of the invention can, for example, be musical albums but also films and the like.
The audio triggers 12 and the media content are synchronized together and consolidated, i.e. packaged using conventional and well-known techniques, into a media file, such as an MP3 or MP4 file format. Synchronization is a time-alignment. Audio triggers 12 are preferably audio tones or otherwise some other appropriate form of coding. Preferably, the audio triggers 12 are frequency modulated such that different triggers are represented uniquely by different tones. These audio triggers are synchronized with an audio track (and/or the media content) such that a start of the audio trigger correlates with a start point [e.g. in a piece of music or video] for which a sensory output is required. In the exemplary context of music, this start point - and thus the start point for the audio trigger - might coincide with the first beat of a chorus and, for a video, it might correspond to an explosion or a piece of rapid action.
The nature of the sensory output will be discussed below in relation to FIG. 2.
The tones of the audio triggers 12 are embedded on a first one of a stereo pair of audio channels, e.g. the left channel. Typically, there will be multiple discrete tones associated with the media content, e.g. at repeats of a chorus. The audio content 14 is coded onto a second channel of the stereo pair, which second channel is independent of the first channel. The coding may be 16-bit or better.
Control coding may make use of existing signaling formats and, for example, redundant or unassigned bits in 24-bit audio.
Of course, for a multichannel system, there may exist other channels that can be used or spare bits. The common thread is that the triggers 12a-12j are, in a preferred embodiment, audio in nature and are discretely coded from any other audio content in the media file.
As will be understood, in a multimedia (AV) file, the video content and the audio content are time-aligned but discrete given that their eventual processing paths are discrete.
Ending of each tone also corresponds to the ending of the sensory output and a specified point, e.g. the end of a musical segment of interest such as the end of a chorus, in the media file, e.g. an album containing many songs. The duration of the tone provides a control mechanism indicative of a desired duration for the sensory output. The tones/audio triggers are thereby tied into specific audio events in the audio content.
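The channel layout just described (trigger tones on the left channel, the full music track on the right, with each tone's start and end bracketing a sensory effect) can be sketched as a simple stereo-buffer builder. The sample rate, amplitude and frequency values are illustrative assumptions.

```python
import math

# Minimal sketch: embed frequency-coded trigger tones on the left channel
# of a stereo pair while the music occupies the right channel. The tone's
# duration (end_s - start_s) itself encodes the desired effect duration.

SAMPLE_RATE = 44_100  # assumed CD-quality rate

def make_stereo(music, triggers):
    """music: list of mono samples; triggers: list of (freq_hz, start_s, end_s)."""
    left = [0.0] * len(music)
    for freq, start, end in triggers:
        first = int(start * SAMPLE_RATE)
        last = min(int(end * SAMPLE_RATE), len(music))
        for n in range(first, last):
            # A pure sine burst at the trigger's unique frequency.
            left[n] = 0.5 * math.sin(2 * math.pi * freq * n / SAMPLE_RATE)
    right = list(music)            # full record of the music track
    return list(zip(left, right))  # interleaved (left, right) stereo frames
```

A splitter at the receiving end would run tone detection (e.g. a Goertzel filter per trigger frequency) over the left channel while passing the right channel on for duplication and playback.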
FIG. 2 is a block diagram showing a typical system 20 that processes and presents the media file 10 of FIG. 1.
The system includes a media player 22, such as (but not limited to) a tablet, TV or smartphone or, in one embodiment, a foldable CD-package that includes a display screen and operational and control electronics. The media player 22 is loaded with (e.g. via a memory stick or a disc) or receives, such as via a download or streaming event, a source file (i.e. the media file 10). The media file 10 may be stored or buffered locally in the media player 22.
The media player includes at least one processor 24 that executes control programs 26 that control functionality and operation of the media player 22. The media player will generally include a display 28 responsive to the processor, and also an optional speaker 30 which is selectively muteable via a switch function 32. The media player 22 further includes at least one of: (i) a transmitter chain 34, such as an exemplary Bluetooth transmitter, and (ii) an output port 36.
The processor 24 is arranged to control the processing of media content, transmitter functions (including modulation and coding) and overall media output. Audio output, derived from the media file 10, is passed through an audio output interface 38 that selects how the audio is presented locally on the media player or onwardly communicated to a splitter 50. The audio output interface 38 may therefore be considered as a soft function rather than a discrete component or module.
The splitter may be an independent, i.e. discrete, device that is purchased independently of the media player and linked thereto, such as via a cable or its functional equivalent.
Alternatively, the splitter may be fully integrated into the functionality of the media player.
Under processor control, video content 40 (if there is any) is extracted from the media file, e.g. source file 10 or streamed incoming data, and processed such that it can be output on the display 28. Audio content (both channels of the stereo pair) is also processed. In the event that there are embedded audio triggers in the source file, as determined by examination of the media file 10 (e.g. as recorded and notified to the processor by, for example, analysis of header information (such as specific bits) or user-advice or any other suitable mechanism), then the processor 24 isolates the local speaker 30 in the media player 22. The audio content (reference numeral 14 of FIG. 1) for both channels of the exemplary stereo pair is then, under processor control, passed to or otherwise made available at the transmitter 34 or output port 36. These alternative or complementary output paths allow for either onward hardwired communication of the audio content 42 to the splitter and/or onward wireless transmission of the audio content 42. At this point, no audio reproduction in the media player 22 has taken place other than that the audio content 42 has been stripped out from the video content (if video content is present). The video content is, however, processed by and can be played on the media player (although it could, in some embodiments, be further communicated to a larger screen).
Given that the audio content 42 is now being treated in a separate device (namely, initially, the splitter 50), some buffering of the video content may be necessary to compensate for finite transmission and processing delays in the longer audio path across multiple components to the end devices responsible for audio output and lighting effects.
This buffering is, however, optional if processing and transmission paths are respectively quick and short, but buffering is preferable to maintain tight synchronization between audio content and video content, e.g. so-called "lip sync". Buffering is particularly relevant if an artist is providing the source material from a remote site and the end-listener/consumer is receiving streamed information at a home environment. To this end, lip-sync is achieved for a gig or play set that is contemporaneously watched by many people in many countries and potentially on a worldwide basis.
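By way of a purely illustrative sketch (the latency figure and frame interval below are assumptions for the example, not values taken from this disclosure), such buffering can be modelled as a frame FIFO sized to cover the audio path delay:

```python
from collections import deque

class VideoDelayBuffer:
    """Hold video frames back by a fixed latency so that locally rendered
    video keeps "lip sync" with audio that takes a longer path through the
    splitter and remote speakers.  Latency and frame interval values are
    hypothetical examples."""

    def __init__(self, audio_path_latency_ms, frame_interval_ms=40):
        # Number of whole frames to hold back to cover the audio path delay.
        self.depth = max(1, round(audio_path_latency_ms / frame_interval_ms))
        self.fifo = deque()

    def push(self, frame):
        """Queue a decoded frame; return the frame now due for display, or
        None while the buffer is still filling."""
        self.fifo.append(frame)
        if len(self.fifo) > self.depth:
            return self.fifo.popleft()
        return None
```

In use, a 120 ms audio-path latency at 25 frames per second holds three frames back, after which frames emerge in order, delayed by the configured amount.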
The splitter 50 can, in certain commercial implementations, be directly incorporated into the media player 22 or into a speaker. For example, the splitter may be implemented within the packaging of a CD carton that includes a low-cost video display as well as the processing functionality as described above. This integration has the effect of potentially simplifying the system architecture and rendering certain replicated functionality unnecessary and redundant. This will become apparent and will be understood by the skilled addressee having regard to the technical effect that the system achieves. The advantage of implementing the splitter as a discrete component, however, is that the splitter 50 provides an inter-operability function between disparate equipment, and the splitter 50 can therefore be retailed as a separate commodity or an after-sales product that can make use (or re-use following appropriate re-programming) of Xyloband® wristbands acquired from an actual gig/event.
Referring to the splitter 50, the splitter may include a receiver 52 to receive modulated wireless communication of the audio content 42, and/or a port interface that allows hardwired connectivity to the output port 36 of the media player 22. The port interface of the splitter may simply be L and R audio connectors for a conventional stereo pair.
The splitter 50 includes a processor and appropriate memory and program code. The received audio content is processed by the splitter 50 to separate the exemplary stereo pair of channels "CH1" and "CH2", which respectively each contain one of the synchronized audio triggers or the specific track information (as represented by the continuous time-varying waveform 60), into different processible signal components having different functions.
The now locally isolated audio triggers are interpreted (as represented by functional block 62 "converts to control data") with reference to a look-up table that translates tones to control functions. These control functions are linked to addressing of lighting devices that are tasked with generating lighting effects that are coordinated with audio and/or video events in the media file 10. The control functions are output on a control channel. The output is a modulated RF control signal that can address, in time and concurrently, one or more LED lighting devices that process the control signals and accordingly generate a lighting effect reflecting any relevant decoded control function. The splitter 50 therefore converts audio triggers into specific control functions.
Additionally, the splitter 50 functions, as necessary and having regard to the nature of the received input signals, to demodulate, decode and/or generally signal process the received audio content to recover the track information 60 on the single channel and then to replicate this track information to produce a dual-mono output for a stereo pair of left and right channels. The dual-mono output is, depending upon the structural configuration of the splitter (which is a design option), provided to an output port 64 that provides a wired connectivity capability for the dual-mono audio to an external speaker output 66, or otherwise to a transmitter chain 68 in the splitter 50. The splitter's transmitter chain 68 processes the now dual-mono audio content of the track 60 for wireless communication to the external speaker output 66, which processing will usually include at least one of modulation, coding and error correction mitigation techniques, all known to the skilled addressee. The nature of the connection to physical speakers 70, 72 and, indeed, the coding of the now dual-mono track content presented on left and right channels of the exemplary stereo pair of channels is entirely a design option.
Assuming, for explanation reasons only, that the final pair of speakers 70, 72 support a Bluetooth® connection (or the like), the speaker(s) will include appropriate hardware and software to implement a receiver 74 and an audio signal processing function. Following processing, the speaker reproduces the audio content of the track 60 as a dual-mono broadcast in which the same audio content is replicated through each available channel.
Returning now to the control channel 65, the splitter 50 includes a control transmitter chain 80 which, as will be understood and similar to other transmitters in FIG. 2, will typically include a modulator, an amplifier and filters. The control transmitter chain 80 may support Bluetooth® communication or another transmission scheme that, preferably, is in licence-free spectrum. Of course, since the splitter potentially has to transmit both dual-mono audio content and also control signals, a common transmitter chain may be used with appropriate signal and transmission interleaving.
The control channel supports the sending of converted control codes/control functions 90 to specifically identified devices to actuate those devices and coordinate sensory output from those addressed devices with the broadcast of the audio track 60. This is shown in FIG. 2 by the linked association of a device with a time-displaced part of the spectrum of the audio track. The devices can be, for example, Xyloband LED wristbands 92, 96 that have assigned unique and/or batch names and, accordingly, can be addressed individually, in batches, or collectively all together. Other addressable devices include LED beachballs or discrete lighting units 94 and the like that can have their respective sensory outputs controlled in terms of colours, flash rates and/or movement or even heat.
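A minimal sketch of how such individual, batch and collective addressing might be encoded follows; the one-byte address layout and the reserved broadcast value are illustrative assumptions only and do not represent the actual over-air format:

```python
def build_control_frame(address, program_code):
    """Sketch of a control frame that lets the splitter address one device,
    a named batch, or every device at once.  The field layout and reserved
    broadcast address (0xFF) are hypothetical, for illustration only."""
    BROADCAST = 0xFF
    if address == "all":
        addr_byte = BROADCAST                          # collective address
    elif isinstance(address, str) and address.startswith("batch:"):
        addr_byte = 0x80 | (int(address.split(":")[1]) & 0x3F)  # batch space
    else:
        addr_byte = int(address) & 0x7F                # unique device ID
    return bytes([addr_byte, program_code & 0xFF])
```

A receiver would then accept a frame if the address byte matches its own ID, its batch, or the broadcast value.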
The devices may therefore include optional vibrators driven by haptic circuitry and components.
In FIG. 2, lighting device 94 has been actuated earlier in time relative to LED wristband 96, as shown by the relative position on the time arrow, T. In summary, external speakers 70, 72 are addressed via the intermediately-located splitter 50, which provides inter-operability between different vendor technology platforms, namely the media player 22 and speakers 70, 72. Separating audio generation, via the splitter 50, to a remote speaker (relative to a speaker of the media player 22 and the effective source point of the media file 10) permits audio triggers to be included/embedded on one of the channels of a ubiquitous stereo pair, whilst a full record of the audio (e.g. track 60) content is supported on the other channel of the stereo pair. Synchronized control information in the preferred form of audio tones (generated using frequency modulation) is extracted by the splitter from the appropriate one of the two stereo channels, and then onwardly communicated to control local visual effects on, for example, RF-controlled Xyloband wristbands or discrete lighting units (including an LED lightbulb into which is incorporated the functionality described herein with respect to concert mode operation). These lighting devices can then present a pre-programmed and coordinated lighting effect synchronized to instantaneous audio output from the speaker(s) and, in some embodiments, synchronized with video images generated on the media player 22. The effect is a fully immersive interactive concert experience at a remote location, with the splitter making use of a ubiquitous and universally-accepted audio channel whilst ensuring that synchronized control tones do not bleed into the generated audio because the audio processing is separated between different devices.
The splitter 50 therefore provides, at the remote location, a component integration function that avoids having to adapt either the physical or software architectures of the media player or the physical or software functionality in the speaker(s), such as commonplace speakers supporting Bluetooth ® connectivity and other wireless protocols.
FIG. 3 is a block diagram showing processing functions of the splitter 50 of FIG. 2.
Considering the path of a received wireless signal carrying all the audio content 42, namely track information and time-aligned trigger points on respective independent channels, following demodulation and known receiver chain processing, the recovered baseband signals are processed in a digital to analog (D/A) converter to produce analog representations for a stereo pair CH1 and CH2. One of these channels contains the track information 60 whereas the other contains the audio triggers 12a-12j.
The alternative hardwired connection (i.e. an audio input port 104) for the audio content is generally already in the analog domain for the stereo pair.
The splitter 50 includes a channel splitting function 102 responsive to the analog signals from either the audio input port 104 or the D/A converter 100. As indicated above, the channel splitting function 102 separates the track audio from the audio triggers and provides the track information to a processor-controlled audio channel duplicator 106 that replicates the track to produce two identical (dual mono) signals for output either to (i) the output port 64 for delivery to the system's hardwired speakers 70, 72, or otherwise via an A/D converter 108 that provides a digital representation to the transmitter chain 68 for transmission to a wirelessly connected speaker 70, 72.
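The channel splitting function 102 and the duplicator 106 can be sketched as follows, assuming purely for illustration that CH1 carries the track and CH2 the triggers (the disclosure leaves the channel assignment open):

```python
def split_and_duplicate(stereo_samples):
    """Illustrative model of channel splitting function 102 and audio
    channel duplicator 106.  Input is a sequence of (ch1, ch2) sample
    pairs; ch1 is assumed to hold the time-varying track 60 and ch2 the
    audio triggers 12a-12j.  The track is replicated as a dual-mono pair,
    while the trigger channel is handed to the control path instead of
    being audibly reproduced."""
    track = [ch1 for ch1, _ch2 in stereo_samples]     # track information 60
    triggers = [ch2 for _ch1, ch2 in stereo_samples]  # trigger tones
    dual_mono = list(zip(track, track))               # identical L and R
    return dual_mono, triggers
```

The dual-mono list would then feed output port 64 or, via the A/D converter 108, the transmitter chain 68, while the trigger samples go to the tone-decoding logic.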
Concerning processing of the now reconstructed analog tones of the audio triggers 12a-12j, the system of a preferred embodiment is arranged such that a local processor 110, implementing control logic 112, is arranged to count zero-crossings in the sine wave over a time base to determine 114 a frequency associated with particular audio triggers. Once a frequency is determined, it is cross-referenced into a look-up table 116 that correlates differing audio trigger frequencies 120 to a code for a selected one of multiple alternative program instructions 122 that stipulate flash rate and colour for LEDs (and the like) and/or vibration activations in and at the remote devices 92, 94 responsible for generating coordinated sensory output. The look-up table is programmed with device-specific activation codes so that the final control data (sent by the splitter over a wireless connection 130) appropriately addresses and correctly instructs operation of the remote devices 92, 94 (e.g. a Xyloband wristband or a smartphone having a controllable LED) that generate the augmented and immersive sensory experiences, e.g. light and movement. This means that the splitter 50, particularly if provided as a discrete unit, can be tailored/supplied so as to convert control information received as time-aligned embedded audio trigger data on a channel of a ubiquitous stereo pair into specific control information to permit inter-operability between different component manufacturers supplying different parts to the arrangement of FIG. 2.
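As a non-limiting sketch of the zero-crossing count 114 and look-up table 116, with wholly hypothetical trigger frequencies, tolerance and program codes:

```python
def estimate_tone_frequency(samples, sample_rate):
    """Estimate a trigger tone's frequency by counting zero-crossings over
    the capture window, as done by control logic 112/114.  A full sine
    cycle produces two zero-crossings."""
    crossings = sum(
        1 for a, b in zip(samples, samples[1:]) if (a < 0) != (b < 0)
    )
    duration_s = len(samples) / sample_rate
    return crossings / (2 * duration_s)

# Hypothetical look-up table 116: trigger frequency (Hz) -> program code 122.
TRIGGER_PROGRAMS = {1000: "RED_FLASH_2HZ", 2000: "BLUE_STEADY"}

def trigger_to_program(samples, sample_rate, tolerance_hz=50):
    """Cross-reference a measured tone frequency into the table, allowing
    an (assumed) tolerance band around each nominal frequency."""
    freq = estimate_tone_frequency(samples, sample_rate)
    for nominal, program in TRIGGER_PROGRAMS.items():
        if abs(freq - nominal) <= tolerance_hz:
            return program
    return None
```

Feeding the function 0.1 s of a 1 kHz sine sampled at 48 kHz yields an estimate close to 1000 Hz and thus the first hypothetical program code.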
In essence, the time-varying audio component is processed locally in the splitter, whereas the tonal triggers are treated separately and so are not processed in the same path or to the same extent. By separating and sending on the music track in a preferred duplicated dual-mono form - and by appropriately muting any local speaker in the component chain before the splitter - the system configuration and arrangements of the present invention result in the tone components in the media file not being audibly noticeable during broadcast, i.e. reproduction, of the audio track.
As used in this application, the terms "component", "module", "system" and "processor" and the like are intended to refer to a computer-related entity, either hardware, firmware, a combination of hardware and software, software, or software in execution. For example, a component can be, but is not limited to being, a process running on a processor (or interchangeably a "controller"), a processor, an object, an executable, a thread of execution, a program, and/or a computer. The term "processing intelligence" or the like is correspondingly intended to reflect a programmed component tasked with implementing suitable code to achieve a function, with it being noted that the various aspects of the invention are generally implemented using appropriately programmed ASICs/chipsets. By way of illustration, both an application running on a computing device and the computing device can be a component. One or more components can reside within a process and/or thread of execution and a component can be localized on one computer and/or distributed between two or more computers. In addition, these components can execute from various computer-readable media having various data structures stored thereon. The components can communicate by way of local and/or remote processes such as in accordance with a signal having one or more data packets (e.g., data from one component interacting with another component in a local system, distributed system, and/or across a network such as the Internet with other systems by way of the signal).
It is understood that the specific order or hierarchy of steps in the processes disclosed herein is an example of an exemplary approach. Based upon design preferences, it is understood that the specific order or hierarchy of steps in the processes may be rearranged while remaining within the scope of the present disclosure. Any accompanying method claims present elements of the various steps in an exemplary order, and are not meant to be limited to the specific order or presented hierarchy, unless a specific order is expressly identified above as critical or is logically required to achieve an intermediate pre-processed condition for properties that are necessarily achieved as a pre-requisite in advance of further manipulation in a subsequent processing step.
Moreover, various aspects or features described herein can be implemented as a method, apparatus, or article of manufacture using standard programming and/or engineering techniques. The term "article of manufacture" or any similar term as used herein is intended to encompass a computer program accessible from any computer-readable device or media. For example, computer-readable media can include but are not limited to magnetic storage devices (e.g., hard disk, floppy disk, magnetic strips, etc.), optical disks (e.g., compact disk (CD), digital versatile disk (DVD), etc.), smart cards, and flash memory devices (e.g., Erasable Programmable Read Only Memory (EPROM), card, stick, key drive, etc.). Additionally, various storage media, such as databases and memories, described herein can represent one or more devices and/or other computer-readable media for storing information. The term "computer-readable medium" may include, without being limited to, optical, magnetic, electronic, electro-magnetic and various other tangible media capable of storing, containing, and/or carrying instruction(s) and/or data.
Those skilled in the art will thus appreciate that the various illustrative logical blocks, modules, circuits, methods and algorithms described in connection with the examples disclosed herein may be implemented as electronic hardware, computer software, or combinations of both. To clearly illustrate this interchangeability of hardware and software, various illustrative components, blocks, modules, circuits, methods and algorithms have been described above generally in terms of their functionality. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the overall system. Skilled artisans may implement the described functionality in varying ways for each particular application while remaining, either literally or equivalently, within the scope of the accompanying claims.
Whilst the above description has made use of a two-channel system, the underlying concepts may, of course, use a 3.1, 5.1 or 7.1 multi-speaker system in which one of the specific channels is isolated for use as a control channel. The use of control, e.g. audio, tones permits multiple different devices to be selectively addressed, as has been explained.
Stereo in a Master-Slave Configuration

If, however, the application is in the context of just an audio system and there is a desire to separate a stereo audio source file between a source and a recipient device, such as a plush toy, to allow for an interactive audio effect, then a simpler arrangement is possible.
Particularly, for a 2-channel audio configuration in which there is a desire for a stereo pair to be controllably separated, the source file [in this configuration] does not need channel coding. Rather, the source can be a conventional stereo audio file provided or acquired by a smartphone or the like, or otherwise the source is a conventional audio file delivered to (e.g. wirelessly received or streamed) or stored (permanently or temporarily) in a master device. The master device, sharing a configuration not dissimilar to that shown in FIG. 2, contains amongst other functional components (a) at least one speaker and/or an audio output port 64, (b) a splitter processing function arranged to separate the left and right channels of the stereo pair, and (c) a wireless transmitter 68, e.g. a Bluetooth transmitter or the like. In operation, the source stereo audio is partially processed in that one, i.e. a "second", audio channel is separated for modulation and onward communication to a linked/associated speaker in a remote toy or the like, whereas the other, i.e. "first", audio channel is recovered from the source and then played locally (potentially as dual mono) from one or more speakers 70, 72 in the master device. This local generation of sound may be via an aux port or the like, e.g. audio port 64, if higher-quality speakers can be wired into the master device, or else it can be through a speaker commonly housed with the processing circuitry and within the master device itself.
The master unit may replicate the first audio channel and output this as a dual mono form to make use of all available speaker components in the master unit.
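A minimal sketch of this master-side separation follows; the function name and frame representation are illustrative, not taken from the disclosure:

```python
def master_split(stereo_frames):
    """Sketch of the master device's handling of a plain stereo file: the
    first channel is kept for local playback, replicated as dual mono to
    use all available master-unit speakers, while the second channel is
    queued for wireless transmission to the paired slave device."""
    local, remote = [], []
    for first, second in stereo_frames:
        local.append((first, first))  # dual-mono for the master's speakers
        remote.append(second)         # sent on to the slave's speaker
    return local, remote
```

The `remote` samples would then pass through the master's transmitter chain to the paired slave, while `local` feeds the master's own speaker or aux port.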
Again, in a similar fashion to the coding environment described herein, a remote slave device, which incorporates a speaker, is paired to the master device through a pre-use setup regime. Such wireless pairing processes are well-known and will usually include some form of handshaking to establish the association and for validation of the connection, as will be readily understood and so needs no additional explanation.
In view of the different processing legs for the stereo source, some signal buffering may be applied in the master device and/or the remote slave device so that time synchronization between the stereo channels is maintained, especially given the different processing paths and the delay introduced by onward transmission. However, buffering is only a preferred option because overall processing and transmission speeds may be such that there is no discernible delay in audio presentation between respective channels from the speaker associated with the master and its paired slave device(s).
At the remote slave device, a received modulated signal containing the complementary [other second] channel of the stereo pair is recovered and then audibly output. In the exemplary context where the remote device is a plush toy (such as a cartoon character), the remote toy thus includes a receiver chain and an audio processing path that includes a speaker. In some embodiments, a processor-controlled motor circuit is included in the toy and is configured to actuate a motor that controls a specific local movement within the toy. For example, the motor may control opening or closing of the toy's mouth to enhance interactivity. The motor is thus made responsive to a signal level or determined value indicating sound intensity/strength, so the mouth is made to open or close based on a detection circuit resolving instantaneously changing, time-varying intensity levels within signal components of the second channel. Detection of levels can be in a discrete circuit or using functions performed by either a dedicated processor, such as a PIC or the like, or a multi-purpose processor in the remote device.
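An illustrative model of such level detection follows; the window size and threshold are arbitrary assumptions chosen only to make the sketch concrete:

```python
def mouth_positions(samples, window=4, threshold=0.3):
    """Sketch of the toy's detection circuit: a short-window mean of the
    absolute sample level approximates instantaneous sound intensity, and
    the motor opens the mouth whenever that intensity exceeds a threshold.
    Window length and threshold are hypothetical values."""
    positions = []
    for start in range(0, len(samples), window):
        chunk = samples[start:start + window]
        level = sum(abs(s) for s in chunk) / len(chunk)
        positions.append("open" if level > threshold else "closed")
    return positions
```

Each entry in the returned list corresponds to one motor decision per analysis window, i.e. the mouth tracks the loud and quiet passages of the second channel.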
The complexity of code recovery, as reflected in FIG. 3, is therefore not required in the simplified audio stereo embodiment.
It is however noted that, without tone control, any paired remote device will output the same audio at the same time. However, the separation of the stereo audio at the master unit (such as a smartphone or an online PC) and the wireless communication of one audio channel to a remote slave speaker allows for audio interactivity and, in some embodiments, both audio and visual interaction between the master speaker and any paired slave device.
Unless specific arrangements are mutually exclusive with one another, the various embodiments described herein can be combined to enhance system functionality and/or to produce complementary functions in the effective delivery of a sensory-relevant audio sound stage. Such combinations will be readily appreciated by the skilled addressee given the totality of the foregoing description. Likewise, aspects of the preferred embodiments may be implemented in standalone arrangements where more limited and thus specific component functionality is provided within each of any interconnected - and therefore interacting - system components albeit that, in sum, they together support, realize and produce the described real-world effect(s). Indeed, it will be understood that unless features in the particular preferred embodiments are expressly identified as incompatible with one another or the surrounding context implies that they are mutually exclusive and not readily combinable in a complementary and/or supportive sense, the totality of this disclosure contemplates and envisions that specific features of those complementary embodiments can be selectively combined to provide one or more comprehensive, but slightly different, technical solutions.

Claims (21)

  1. An interconnected sound and light system comprising: i) a first device (22) receptive to or containing a source file (10) containing at least audio content (14) including a time-varying audio track (60) and related synchronized functional triggers (12a-12j); and ii) a splitter (50) connected to the first device (22), the splitter containing: a signal splitting function (102) coupled to receive the audio content (14) and arranged to separate the time-varying audio track (60) from the related synchronized functional triggers; an audio channel duplicator function (106), responsive to the signal splitting function (102), arranged to replicate the time-varying audio track (60) on at least two channels (CH1, CH2); at least one output (64, 68) coupled to receive the at least two channels (CH1, CH2) generated by the audio channel duplicator function (106), wherein the time-varying audio track or a digital representation thereof is provided as an output (66) therefrom; a control signal generator (62) coupled to receive the related synchronized functional triggers and arranged to convert those functional triggers into control data defining programs (122) that control generation of sensory outputs of at least one sensory augmentation device (92, 94) remote to the splitter (50); a transmitter chain (80) arranged to transmit the control data; iii) a speaker system having at least two channels (70, 72) and wherein the speaker system is coupled to the at least one output (64, 68) and generates commonly on its at least two channels the time-varying audio track; and iv) at least one sensory augmentation device each having a wireless receiver and at least one processor-controlled light source, wherein each sensory augmentation device is responsive to the control data and is arranged, in response thereto, to control light emissions from said at least one light source in synchronicity with generation of the time-varying audio track from the speaker system whereby a time relationship between the time-varying audio track (60) and related synchronized functional triggers (12a-12j) of the source file is substantially maintained.
  2. The interconnected sound and light system of claim 1, wherein the first device includes a display and the source file (10) includes visual content (16) that is time-aligned with the audio content (14) and wherein the system is arranged to coordinate display of the visual content (16) in synchronicity with generation of the time-varying audio track from the speaker system and controlled light emissions from said at least one light source.
  3. The interconnected sound and light system of claim 1 or 2, wherein the source file is streamed data received by the first device.
  4. The interconnected sound and light system of claim 1, 2 or 3, wherein the first device includes a local speaker (30) and wherein the local speaker is selectively disabled in response to the synchronized functional triggers (12a-12j).
  5. The interconnected sound and light system of any preceding claim, wherein the synchronized functional triggers (12a-12j) are audio tones embedded on a first audio channel of a pair of audio channels that convey the audio content (14).
  6. The interconnected sound and light system of any preceding claim, wherein the sensory augmentation device includes a haptic generator operationally responsive to said received control data.
  7. The interconnected sound and light system of any preceding claim, wherein the sensory augmentation device includes a plurality of LEDs.
  8. The interconnected sound and light system of any preceding claim, wherein the system includes a plurality of the sensory augmentation device selected from a group that includes at least one of: a wristband, a lighting unit, a light bulb, a smartwatch, and a smartphone.
  9. The interconnected sound and light system of any preceding claim, wherein the transmitter chain (80) is further arranged to transmit the digital representation of the time-varying audio track to an addressable wirelessly-connected speaker (70, 72).
  10. A signal splitter device (50) containing: a signal splitting function (102) arranged to receive audio content (14) having a first audio signal component and a second audio signal component, wherein the first audio signal component is a time-varying audio track (60) and the second audio signal component comprises functional audio triggers synchronized to events in the time-varying audio track (60); an audio channel duplicator function (106), responsive to the signal splitting function (102), arranged to replicate the time-varying audio track (60) on at least two channels (CH1, CH2); at least one output (64, 68) coupled to receive the at least two channels (CH1, CH2) generated by the audio channel duplicator function (106), wherein the time-varying audio track or a digital representation thereof is provided as an output (66) therefrom; a control signal generator (62) coupled to receive the related synchronized functional audio triggers and arranged to convert those functional audio triggers into control data defining programs (122) that control generation of sensory-perceivable outputs in at least one sensory augmentation device (92, 94) remote to the splitter (50); a first transmitter chain (80) arranged to transmit the control data; and a second transmitter chain (68) arranged to transmit the replicated time-varying audio track (60) commonly over at least two channels (CH1, CH2).
  11. A method of processing, in a device, media content containing audio content having a time-varying audio track component (60) and an audio trigger component, wherein the audio trigger component contains a plurality of audio triggers each synchronized to an event in the time-varying audio track (60) and wherein the device generates a remote light show that complements a light show produced at a music event associated with the media content, wherein the music event is remote from the device and the method comprises: at the device, in response to reading a media file or receiving a data stream containing the media content related to the music event, splitting the time-varying audio track component (60) from the audio trigger component, the audio triggers being time synchronized both to start and end with events in the time-varying audio track component (60); duplicating the time-varying audio track component (60) on at least two channels (CH1, CH2) and outputting those at least two channels for audio reproduction of the time-varying audio track component (60) from a speaker system external to, remote from but connected to the device through a wireless connection or wired output port; with reference to a look-up table in the device, cross-referencing identified audio triggers with pre-stored control data that define programs (122) that control generation of sensory-perceivable outputs in at least one sensory augmentation device (92, 94) remote to the device (50); and addressing and selectively wirelessly transmitting the control data to at least one address-identifiable sensory augmentation device to control light emissions from said at least one address-identifiable sensory augmentation device, thereby augmenting sensory-perceivable outputs by coordinating the remote light show with the light show produced at the music event.
  12. The method of claim 11, wherein the media content further comprises video content (16) time-aligned with the audio content (14), and the method includes: on a display device remote to the music event, generating the video content substantially in time synchronicity with the remote light show and the audio reproduction of the time-varying audio track component (60) from the speaker system.
  13. The method of claim 12, wherein reproduction of the time-varying audio track component (60) is audibly isolated from speakers co-located in a media player containing the display device.
  14. A method of processing, in a device, media content containing audio content having a time-varying audio track component (60) and an audio trigger component, wherein the audio trigger component contains a plurality of audio triggers each synchronized to an event in the time-varying audio track (60) and wherein the device generates a remote light show that complements a light show associated with an album containing music coded onto a data carrier, the method comprising: at the device, in response to reading a media file or receiving or downloading data containing the media content, splitting the time-varying audio track component (60) from the audio trigger component, the audio triggers being time synchronized both to start and end with events in the time-varying audio track component (60); duplicating the time-varying audio track component (60) on at least two channels (CH1, CH2) and outputting those at least two channels for audio reproduction of the time-varying audio track component (60) from a speaker system external to, remote from but connected to the device through a wireless connection or wired output port; with reference to a look-up table in the device, cross-referencing identified audio triggers with pre-stored control data that define programs (122) that control generation of sensory-perceivable outputs in at least one sensory augmentation device (92, 94) remote to the device (50); and addressing and selectively wirelessly transmitting the control data to at least one address-identifiable sensory augmentation device to control light emissions from said at least one address-identifiable sensory augmentation device, thereby augmenting sensory-perceivable outputs by coordinating the remote light show with the musical content.
	15. The method of claim 14, wherein the data carrier is a CD, DVD or portable memory device.
	16. An audio system comprising: a master device (22) receptive to or containing a source audio file (10) having a time-varying audio track (60) separated across a pair of channels, the master device including: a splitter (50) arranged to process the source audio file to separate a first channel of the pair from a second channel of the pair; an audio output for outputting, as an audio signal, the first channel; and at least one transmitter chain (80) arranged to pair the master device with a remote device and to transmit, when paired with the remote device, the second channel.
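The splitter (50) of claim 16 separates the two channels of the source audio file so that the first channel feeds the master device's own audio output while the second is handed to the transmitter chain. A minimal sketch, assuming interleaved stereo samples as the input representation (the application does not specify a sample layout):

```python
def split_channels(stereo_frames):
    """Separate interleaved stereo samples into the two claimed channels.

    Even-indexed samples form the first channel (played locally at the
    master device); odd-indexed samples form the second channel
    (transmitted to the paired remote device).
    """
    first_channel = stereo_frames[0::2]
    second_channel = stereo_frames[1::2]
    return first_channel, second_channel
```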
	17. The audio system of claim 16, further comprising: a slave device including: at least one receiver chain configured to receive transmissions from the master device; processing intelligence coupled to the at least one receiver chain, the processing intelligence arranged to establish pairing with the master device and to recover the second channel of the time-varying audio track (60) from received transmissions; a speaker for interactively outputting audio of the second channel in time synchronicity with audio output of the first channel from and at the master device.
	18. The audio system of claim 16 or 17, wherein the slave device includes at least one motor and the processing intelligence is further configured to control operation of said at least one motor in response to audio levels determined from analysis of the second audio channel.
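Claim 18's motor control "in response to audio levels" can be sketched as an RMS level detector driving a duty cycle. RMS is one common choice of level measure; the claim does not specify the analysis, so the following is an illustrative assumption (16-bit full scale, names invented):

```python
import math

def rms(samples):
    """Root-mean-square level of one block of second-channel samples."""
    return math.sqrt(sum(s * s for s in samples) / len(samples))

def motor_duty(samples, full_scale=32768.0):
    """Map the audio level of the second channel to a 0..1 motor duty cycle,
    clamped so loud passages cannot over-drive the motor."""
    return min(rms(samples) / full_scale, 1.0)
```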
	19. The audio system of any of claims 16 to 18, wherein the master device is a smartphone.
	20. The audio system of any of claims 16 to 19, wherein the master device includes at least one receiver arranged to receive the source file from a remote component selected from the group consisting of: a television; a smartphone; an internet connection.
	21. The audio system of claim 20, wherein the source file is communicated wirelessly to the master device from the remote component.
GB2100788.5A 2021-01-21 2021-01-21 Home concert system, method therefor and audio system Pending GB2617804A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
GB2100788.5A GB2617804A (en) 2021-01-21 2021-01-21 Home concert system, method therefor and audio system


Publications (2)

Publication Number Publication Date
GB202100788D0 GB202100788D0 (en) 2021-03-10
GB2617804A true GB2617804A (en) 2023-10-25

Family

ID=74859042

Family Applications (1)

Application Number Title Priority Date Filing Date
GB2100788.5A Pending GB2617804A (en) 2021-01-21 2021-01-21 Home concert system, method therefor and audio system

Country Status (1)

Country Link
GB (1) GB2617804A (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2011042731A1 (en) * 2009-10-05 2011-04-14 Jason Regler Interactive toys and a method of synchronizing operation thereof
WO2011154746A1 (en) * 2010-06-10 2011-12-15 Rb Concepts Limited Media delivery system and a portable communications module for audio and remote control of interactive toys or devices
WO2013021209A1 (en) * 2011-08-11 2013-02-14 Rb Concepts Limited Interactive lighting effect wristband & integrated antenna


