CA2947275C - Multi-channel pairing in a media system - Google Patents

Multi-channel pairing in a media system

Info

Publication number
CA2947275C
Authority
CA
Canada
Prior art keywords
audio
playback device
channel
user
playback
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CA2947275A
Other languages
French (fr)
Other versions
CA2947275A1 (en)
Inventor
Christopher Kallai
Michael Darrell Andrew Ericson
Robert A. Lambourne
Robert Reimann
Mark TRIPLETT
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Sonos Inc
Original Assignee
Sonos Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Sonos Inc filed Critical Sonos Inc
Priority to CA3032479A priority Critical patent/CA3032479C/en
Priority to CA2947275A priority patent/CA2947275C/en
Priority claimed from PCT/IB2012/052071 external-priority patent/WO2012137190A1/en
Publication of CA2947275A1 publication Critical patent/CA2947275A1/en
Application granted granted Critical
Publication of CA2947275C publication Critical patent/CA2947275C/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • H04R27/00 Public address systems
    • G06F3/165 Management of the audio stream, e.g. setting of volume, audio stream path
    • H04R3/12 Circuits for transducers, loudspeakers or microphones for distributing signals to two or more loudspeakers
    • H04R5/04 Stereophonic circuit arrangements, e.g. for selective connection of amplifier inputs/outputs to loudspeakers, for loudspeaker detection, or for adaptation of settings to personal preferences or hearing impairments
    • H04S7/30 Control circuits for electronic adaptation of the sound field
    • H04R2227/003 Digital PA systems using, e.g. LAN or internet
    • H04R2227/005 Audio distribution systems for home, i.e. multi-room use
    • H04R2420/07 Applications of wireless loudspeakers or wireless microphones

Abstract

A method comprising: receiving, at a first playback device from a source device, multi-channel audio data comprising at least first and second audio channels; detecting a command to form a multi-channel pair in which the first playback device is designated to play at least the first audio channel and the second playback device is designated to play at least the second audio channel; and after detecting the command: sending, from the first playback device to the second playback device, at least the second audio channel; and playing, by the first playback device, the first audio channel.

Description

Multi-Channel Pairing in a Media System
Christopher Kallai, Michael Darrell Andrew Ericson, Robert A. Lambourne, Robert Reimann, Mark Triplett
CROSS-REFERENCE TO RELATED APPLICATION
BACKGROUND
[0001] The presently described technology is directed towards technology for use in the area of consumer electronics. In particular, certain embodiments are directed to multi-channel pairing in a media system.
[0002] Music is very much a part of our everyday lives. And thanks to the advancement of technology, music content is now more accessible than ever. The same can be said of other types of media, such as television, movies, and other audio and video content. In fact, now a user can even access the content over the Internet through an online store, an Internet radio station, online music service, online movie service, and the like, in addition to the more traditional means of accessing audio and video content.
[0003] The demand for such audio and video content continues to surge. Given the high demand over the years, technology used to access and play such content has likewise improved. Even still, technology used in accessing the content and the playback of such content can be significantly improved or developed in ways that the market or end users may not anticipate.
SUMMARY
[0004] The embodiments described herein include, but are not limited to, various devices, systems, methods, and computer program products. This section is for the purpose of summarizing some aspects of certain embodiments. Simplifications or omissions in this section as well as in the abstract or the title of this description may be made to avoid obscuring the purpose of this section, the abstract and the title. Such simplifications or omissions are not intended to limit the scope of the various inventions described herein.
[0005] In brief summary, the embodiments described herein provide technology for grouping, consolidating, and pairing individual playback devices to create or enhance a multi-channel listening environment. Particularly, the embodiments described herein enable two or more playback devices to be paired, such that multi-channel audio is achieved or enhanced. Such embodiments may be used to produce stereo sound or other audio environments suitable for audio content encoded with more than two channels, such as for certain kinds of television, movies, and music.
[0006] For example, an apparatus according to an embodiment comprises a network interface, a plurality of speaker drivers, an amplifier, and a processor. The network interface receives audio data over a network. The amplifier powers the plurality of speaker drivers. The processor processes the audio data to be output through the plurality of speaker drivers. The processor further configures a first equalization of the output from the plurality of speaker drivers in accordance with a first type of pairing and configures a second equalization of the output from the plurality of speaker drivers in accordance with a second type of pairing.
[0007] In another example, a method according to an embodiment comprises receiving audio data over a network and processing the audio data to be output through a plurality of speaker drivers. The method further includes configuring a first equalization of the output from the plurality of speaker drivers in accordance with a first type of pairing and configuring a second equalization of the output from the plurality of speaker drivers in accordance with a second type of pairing.
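To make the equalization-by-pairing-type idea concrete, the following is a minimal, hypothetical Python sketch. The pairing types, driver names, and gain values are illustrative assumptions, not values taken from the patent.

```python
from enum import Enum

class PairingType(Enum):
    NONE = "none"                # standalone playback (e.g., full stereo from one device)
    STEREO_PAIR = "stereo_pair"  # device reproduces only left or right
    CONSOLIDATED = "consolidated"
    THEATER = "theater"          # device reproduces e.g. a rear or center channel

# Hypothetical per-driver gain offsets (dB) applied for each type of pairing.
EQ_PROFILES = {
    PairingType.NONE:         {"tweeters": 0.0, "mid_range": 0.0, "subwoofer": 0.0},
    PairingType.STEREO_PAIR:  {"tweeters": 1.5, "mid_range": 1.0, "subwoofer": -2.0},
    PairingType.CONSOLIDATED: {"tweeters": -1.0, "mid_range": 0.0, "subwoofer": 1.0},
    PairingType.THEATER:      {"tweeters": 0.0, "mid_range": -1.5, "subwoofer": 0.0},
}

def configure_equalization(pairing: PairingType) -> dict:
    """Return the driver equalization to apply for the given type of pairing."""
    return EQ_PROFILES[pairing]

# Example: a device being placed into a stereo pair switches its equalization.
eq = configure_equalization(PairingType.STEREO_PAIR)
```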

[0007a] In another example, a method for a networked playback system comprising a plurality of playback devices configured to wirelessly communicate with and to be controlled by at least one controller device according to an embodiment comprises: receiving, at a first playback device from a source device, multi-channel audio data comprising at least first and second audio channels; detecting, at the controller device, a selection by a user of an option displayed on a graphic interface of the controller device to form a multi-channel pair in which one or more of the playback devices are designated to play one or more audio channels; prompting, via the graphic interface of the controller device, the user to press a button on a playback device to be configured to play the first audio channel;
responsively detecting, by the first playback device, a first manual command input by a user to a first control interface that exists as a part of the first playback device for forming the multi-channel pair with a second playback device, wherein the first control interface comprises a button; prompting, via the graphic interface of the controller device, the user to press a button on a playback device to be configured to play the second audio channel;
responsively detecting, by the second playback device, a second manual command input by a user to a second control interface that exists as a part of the second playback device to form the multi-channel pair with the first playback device, wherein the second control interface comprises a button; and after forming the multi-channel pair: sending, from the first playback device to the second playback device, at least the second audio channel; and playing, by the first playback device, the first audio channel.
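A minimal sketch of the pairing flow in [0007a] follows. The class names, prompts, and the dict-of-frames audio format are assumptions made for illustration; a real playback device would stream audio frames over the network rather than pass Python lists.

```python
class PlaybackDevice:
    """Hypothetical stand-in for a networked playback device with a pairing button."""

    def __init__(self, name: str):
        self.name = name
        self.assigned_channel = None

    def assign_channel(self, channel: str):
        # In practice this happens when the user presses the device's button
        # in response to the controller's prompt.
        self.assigned_channel = channel

    def send_channel(self, peer: "PlaybackDevice", frames: list):
        # Stand-in for streaming the peer's designated channel over the network.
        peer.play(frames)

    def play(self, frames: list):
        print(f"{self.name} playing '{self.assigned_channel}' channel: {len(frames)} frames")


def form_multi_channel_pair(prompt, first: PlaybackDevice, second: PlaybackDevice, audio: dict):
    """audio is assumed to be {'first': [...], 'second': [...]} lists of frames."""
    prompt("Press the button on the device that should play the first channel")
    first.assign_channel("first")    # user presses the button on `first`
    prompt("Press the button on the device that should play the second channel")
    second.assign_channel("second")  # user presses the button on `second`

    # After the pair is formed, the first device forwards the second channel
    # to its peer and plays the first channel itself.
    first.send_channel(second, audio["second"])
    first.play(audio["first"])


form_multi_channel_pair(print, PlaybackDevice("Player A"), PlaybackDevice("Player B"),
                        {"first": [0.0] * 4, "second": [0.0] * 4})
```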
[0008] One of the objects, features, and advantages of the present invention is to achieve or enhance a multi-channel listening environment. Many other objects, features, and advantages of the present invention will become apparent upon examining the following detailed description of an embodiment thereof, taken in conjunction with the attached drawings.

BRIEF DESCRIPTION OF THE DRAWINGS
[0010] These and other features, aspects, and advantages of the presently described technology will become better understood by a person skilled in the art with regard to the following description, appended claims, and accompanying drawings where:
[0011] FIG. 1 shows an illustrative configuration in which certain embodiments may be practiced;
[0012] FIG. 2A shows an illustrative functional block diagram of a player in accordance with certain embodiments;
[0013] FIG. 2B shows an example of a controller that may be used to remotely control one or more players of FIG. 2A;
[0014] FIG. 2C shows an example of a controller that may be used to remotely control one or more players of FIG. 2A;
[0015] FIG. 2D shows an example internal functional block diagram of a controller in accordance with certain embodiments;
[0016] FIG. 3A provides an illustration of a zone scene configuration;
[0017] FIG. 3B shows that a user defines multiple groups to be gathered at the same time;
[0018] FIG. 4 shows an example user interface that may be displayed on a controller or a computer of FIG. 1;
[0019] FIG. 5A shows an example user interface to allow a user to form a scene;
[0020] FIG. 5B shows another example user interface to allow a user to form a scene;

[0021] FIG. 5C shows an example user interface to allow a user to adjust a volume level of the zone players in a zone scene individually or collectively;
[0022] FIG. 6 shows a flowchart or process of providing a player theme or a zone scene for a plurality of players, where one or more of the players are placed in a zone;
[0023] FIG. 7 shows an illustrative configuration in which an audio source is played back on two players in accordance with an embodiment;
[0024] FIG. 8 shows an illustrative configuration of a pairing amongst multiple players in accordance with an embodiment;
[0025] FIG. 9 shows a flowchart or process of grouping a plurality of audio products to play separated sound tracks in synchronization to simulate a multi-channel listening environment; and
[0026] FIGS. 10A to 10F show example snapshots of a controller used in certain embodiments.
[0027] In addition, the drawings are for the purpose of illustrating certain embodiments, but it is understood that the inventions are not limited to the arrangements and instrumentality shown in the drawings.

DETAILED DESCRIPTION
I. Overview
[0028] The embodiments described herein relate to multi-channel pairing in a media system.
The embodiments are particularly useful in pairing two or more playback devices together to create or enhance multi-channel audio reproduction, like stereo, surround sound, or some other multi-channel environment. The embodiments will also find utility in connection with any system for which multi-channel pairing is desired.
[0029] In an embodiment, two playback devices that are each configured to output a plurality of audio channels independent of each other are selectively paired, such that subsequent to pairing, one playback device is configured to output a first subset of the plurality of audio channels and the other playback device is configured to output a second subset of the plurality of audio channels. The first and second subsets are different. For instance, each playback device is configured to operate in a two-channel mode or stereo mode prior to being paired (e.g., each playback device is configured to play both right and left channel audio). Subsequent to pairing, one playback device is reconfigured to output a first channel (e.g., right channel audio and not left channel audio) and the other playback device is reconfigured to output a second channel (e.g., left channel audio and not right channel audio), which is different from the first channel.
[0030] In another embodiment, a collection of three or more playback devices that are each configured to output a plurality of audio channels independent of another playback device in the collection are selectively paired, such that subsequent to pairing, each of the playback devices is configured to output a different audio channel from the collection. This embodiment is particularly useful in a television or movie theater type setting where a particular playback device of the multiple playback devices is configured to output in two-channel or stereo mode at one time (e.g., when playing a song), and subsequent to pairing, is configured to output as a front-right channel, a front-center channel, a front-left channel, a rear-right channel, a rear-left channel, a subwoofer channel, and so on (e.g., when watching television or a movie or listening to music that contains more than two channels).
[0031] In another embodiment, one of the paired playback devices processes the data of an audio item to separate the data into channels, each of the channels representing a single-sound track, for example. The playback device sends the separated channel(s) to the other, respective playback device(s). The playback devices play their distinctive channels in synchrony, thus creating a multi-channel listening environment. Alternatively, each of the paired playback devices processes the data of an audio item, or a portion of the data, and plays only those one or more channels designated for the respective playback device.
[0032] In another embodiment, two or more playback devices may be grouped into a single or consolidated playback device and the consolidated playback device may be paired with one or more playback devices. For instance, two playback devices may be grouped into a first consolidated playback device and two additional playback devices may be grouped into a second consolidated playback device. The first and second consolidated playback devices may be paired, for example. In certain embodiments, each playback device of a consolidated playback device is put into consolidated mode, which may result in a changed equalization for one or more speaker drivers of any particular playback device. Further, one or more additional playback devices may be added to a consolidated playback device.
[0033] In certain embodiments, a playback device that is configured to output an audio channel is paired with one or more additional playback devices, such that the playback device is reconfigured to output a different audio channel. For instance, the playback device might be configured to output a right channel for stereo mode, but subsequent to being paired with one or more additional playback devices, might be reconfigured to output a rear, right channel for theater mode. The playback device may be paired to one or more other playback devices.

[0034] In certain embodiments, a playback device that is configured to output a plurality of audio channels is paired with one or more additional playback devices, such that the playback device is configured to output a subset of the plurality of audio channels relative to the one or more additional playback devices. For instance, the playback device might be configured to output in two-channel or stereo mode, but subsequent to being paired with one or more playback devices might be configured to output a right or left channel. The playback device may be paired to one or more other playback devices.
[0035] In certain embodiments, a playback device comprises a network interface, one or more speaker drivers, an amplifier, and a processor. The network interface receives audio data over a network. The amplifier powers the speaker drivers. The processor processes the audio data to be output through the speaker drivers. The processor further configures a first equalization of the output from the speaker drivers in accordance with a first type of pairing and configures a second equalization of the output from the speaker drivers in accordance with a second type of pairing. The playback device may operate in any of: non-paired mode, paired mode, consolidated mode, and grouped mode.
[0036] In certain embodiments, a controller is configured to, among other things, pair two or more playback devices to establish a multi-channel audio environment. That is, through the controller, a user can select which playback devices to pair. Once programmed, the playback devices may operate in paired mode until disengaged, for example. In some embodiments, the controller is wirelessly coupled to the one or more playback devices. In other embodiments, the controller is wired to the one or more playback devices.
[0037] According to certain embodiments, the action of pairing two or more playback devices is triggered based on a command from a user via a control interface (e.g., a manual command creates a pair) or responsive to an event (e.g., an automatic command creates a pair).
Example events include a detection in the change of audio content (e.g., the audio content goes from having two-channel content to three or more channel content, and vice-versa), a detection of a certain time, a detection of a certain kind of entertainment (e.g., detecting that the user is watching television versus just listening to music), or any other event that is programmed to create a pair amongst playback devices. The event detection might occur by a controller, one of the playback devices, or some other device, for example.
[0038] According to certain embodiments, in an attempt to optimize the multi-channel pairing, the configuration of a playback device may include any of: changing the equalization of the playback device by changing the equalization of one or more specific speaker drivers and optimizing the synchronization between paired devices. Examples of changing the equalization are described more below.
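As a rough illustration of the event-triggered pairing described in [0037], the sketch below switches pairing mode when the channel count of the incoming audio changes. The function signature and the `pair`/`unpair` callables are hypothetical, not part of the patent.

```python
def on_audio_content_change(channel_count: int, devices: list, pair, unpair):
    """Choose a pairing configuration based on the detected channel count."""
    if channel_count > 2:
        # e.g. television or movie content: configure a theater-style pairing
        pair(devices, mode="theater")
    elif channel_count == 2:
        # stereo music: pair two devices as left/right
        pair(devices[:2], mode="stereo_pair")
    else:
        # mono or unknown content: leave the devices unpaired
        unpair(devices)
```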
[0039] These embodiments and many additional embodiments are described more below. Further, the detailed description is presented largely in terms of illustrative environments, systems, procedures, steps, logic blocks, processing, and other symbolic representations that directly or indirectly resemble the operations of data processing devices coupled to networks. These process descriptions and representations are typically used by those skilled in the art to most effectively convey the substance of their work to others skilled in the art. Numerous specific details are set forth in order to provide a thorough understanding of the present invention. However, it is understood to those skilled in the art that certain embodiments of the present invention may be practiced without certain, specific details. In other instances, well-known methods, procedures, components, and circuitry have not been described in detail to avoid unnecessarily obscuring aspects of the embodiments.
[0040] Reference herein to "embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment can be included in at least one embodiment of the invention. The appearances of this phrase in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or
alternative embodiments mutually exclusive of other embodiments. The embodiments described herein, explicitly and implicitly understood by one skilled in the art, may be combined with other embodiments.
II. Example Environment
[0041] Referring now to the drawings, in which like numerals may refer to like parts throughout the several views. FIG. 1 shows an exemplary configuration 100 in which certain embodiments may be practiced. The configuration 100 may represent, but not be limited to, a part of a residential home, a business building, or a complex with multiple zones. There are a number of multimedia players of which three examples 102, 104 and 106 are shown as audio devices. Each of the audio devices may be installed or provided in one particular area or zone and hence referred to as a zone player herein. It is understood that a zone can comprise more than one zone player.
[0042] As used herein, unless explicitly stated otherwise, an audio source or audio sources are generally in digital format and can be transported or streamed over a data network. To facilitate the understanding of the example environment of FIG. 1, it is assumed that the configuration 100 represents a home. Though, it is understood that this technology is not limited to its place of application. Referring back to FIG. 1, the zone players 102 and 104 may be located in one or two of the bedrooms while the zone player 106 may be installed or positioned in a living room. All of the zone players 102, 104, and 106 are coupled directly or indirectly to a data network 108. In addition, a computing device 110 is shown to be coupled on the network 108. In reality, any other device such as a home gateway device, a storage device, or an MP3 player may be coupled to the network 108 as well.
[0043] The network 108 may be a wired network, a wireless network or a combination of both. In one example, all devices including the zone players 102, 104, and 106 are coupled to the network 108 by wireless means based on an industry standard such as IEEE
802.11. In yet another example, all devices including the zone players 102, 104, and 106 are part of a local area network that communicates with a wide area network (e.g., the Internet). In still another example, all devices including the zone players 102, 104 and 106 and a controller 142 form an ad-hoc network and may be specifically named, e.g., a household identifier:
Smith Family, to be differentiated from a similar neighboring setup with a household identifier, e.g., Kallai Family.
[0044] Many devices on the network 108 are configured to download and store audio sources. For example, the computing device 110 can download audio sources, such as music or audio associated with videos, from the Internet (e.g., the "cloud") or some other source and store the downloaded audio sources locally for sharing with other devices on the Internet or the network 108. The computing device 110 or any of the zone players 102, 104, and 106 can also be configured to receive streaming audio. Shown as a stereo system, the device 112 is configured to receive an analog audio source (e.g., from broadcasting) or retrieve a digital audio source (e.g., from a compact disk). The analog audio sources can be converted to digital audio sources. In accordance with certain embodiments, the various audio sources may be shared among the devices on the network 108.
[0045] Two or more zone players (e.g., any two or more of the zone players 102, 104, and 106) may be grouped together to form a new zone group. Any combinations of zone players and an existing zone group may be grouped together. In one instance, a new zone group is formed by adding one zone player to another zone player or an existing zone group.
[0046] In certain embodiments, there are two or more zone players in one environment (e.g., a living room in a house). Instead of grouping these two zone players to play back the same audio source in synchrony, these two zone players may be configured to play two separate sounds in left and right channels. In other words, the stereo effects of a sound are reproduced or enhanced through these two zone players, one for the left sound and the other for the right sound.
Likewise, for a 3-channel (or 2.1 sound effects) sound, three such zone players may be reconfigured as if there are three speakers: left and right speakers and a subwoofer to form a stereo sound. The details of reconfiguring the zone players and operating these audio products are described more below. Similar configurations with multiple channels (greater than 3, such as 4, 5, 6, 7, 9 channels, and so on) also apply. For example, configurations that use more than two channels may be useful in television and theater type settings, where video content such as in the form of television and movies is played together with audio content that contains more than two channels. Further, certain music might similarly be encoded with more than two channel sound.
[0047] In certain embodiments, two or more zone players may be consolidated to form a single, consolidated zone player. The consolidated zone player may further be paired with a single zone player or yet another consolidated zone player. A consolidated zone player may comprise one or more individual playback devices. Each playback device of a consolidated playback device is preferably set in a consolidated mode.
[0048] According to some embodiments, one can continue to do any of: group, consolidate, and pair until a desired configuration is complete. The actions of grouping, consolidation, and pairing are preferably performed through a control interface and not by physically connecting and re-connecting speaker wire, for example, to individual, discrete speakers to create different configurations. As such, certain embodiments described herein provide a more flexible and dynamic platform through which sound reproduction can be offered to the end-user.
[0049] It is understood that the technology described herein is not limited to its place of application. For example, it is understood that zones and zone players, and the embodiments described herein, may also be used in vehicles, on water craft, airplanes, amphitheaters, outdoors, along the streets in a village or city, and so on, in addition to homes, offices, gyms, schools, hospitals, hotels, movie theaters, malls, stores, casinos, museums, entertainment parks, or any other place where audio content is played. As such, it will be appreciated that the embodiments described herein may be used in connection with any system or application for which multi-channel pairing is desired.
III. Example Playback Devices
[0050] Referring now to FIG. 2A, there is shown an exemplary functional block diagram of a zone player 200 in accordance with an embodiment. The zone player 200 includes a network interface 202, a processor 204, a memory 206, an audio processing circuit 210, a module 212, optionally, an audio amplifier 214 that may be internal or external, and optionally, a speaker unit 218 connected to the audio amplifier 214. The network interface 202 facilitates a data flow between a data network (i.e., the data network 108 of FIG. 1) and the zone player 200 and typically executes a special set of rules (i.e., a protocol) to send data back and forth. One of the common protocols used in the Internet is TCP/IP (Transmission Control Protocol/Internet Protocol). In general, a network interface 202 manages the assembling of an audio source or file into smaller packets that are to be transmitted over the data network or reassembles received packets into the original source or file. In addition, the network interface 202 handles the address part of each packet so that it gets to the right destination or intercepts packets destined for the zone player 200. Accordingly, in certain embodiments, each of the packets includes an IP-based source address as well as an IP-based destination address.
[0051] The network interface 202 may include one or both of a wireless interface 216 and a wired interface 217. The wireless interface 216, also referred to as an RF interface, provides network interface functions by a wireless means for the zone player 200 to communicate with other devices in accordance with a communication protocol (such as the wireless standard IEEE 802.11a, 802.11b, 802.11g, 802.11n, or 802.15.1). The wired interface 217 provides network interface functions by a wired means (e.g., an Ethernet cable). In one embodiment, a zone player includes both of the interfaces 216 and 217, and other zone players include only an RF or wired interface. Thus these other zone players communicate with other devices on a network or retrieve audio sources via the zone player. The processor 204 is configured to control the operation of other parts in the zone player 200. The memory 206 may be loaded with one or more software modules that can be executed by the processor 204 to achieve desired tasks.
According to one embodiment, when a software module implementing an embodiment, such as described herein, is executed, the processor 204 operates in accordance with the software module in reference to a saved zone group configuration characterizing a zone group created by a user, and the zone player 200 is caused to retrieve an audio source from another zone player or a device on the network and synchronize the players in the zone group to play back the audio source as desired.
According to another embodiment, a software module implementing an embodiment described herein creates a pair between two or more zone players to create a desired multi-channel audio environment.
[0052] According to one embodiment, the memory 206 is used to save one or more saved zone configuration files that may be retrieved for modification at any time.
Typically, a saved zone group configuration file is transmitted to a controller (e.g., the controlling device 140 or 142 of FIG. 1, a computer, a portable device, or a TV) when a user operates the controlling device. The zone group configuration provides an interactive user interface so that various manipulations or control of the zone players may be performed.
[0053] In certain embodiments, the audio processing circuit 210 resembles the circuitry in an audio playback device and includes one or more digital-to-analog converters (DAC), an audio preprocessing part, an audio enhancement part or a digital signal processor, and others. In operation, when an audio source is retrieved via the network interface 202, the audio source is processed in the audio processing circuit 210 to produce analog audio signals.
The processed analog audio signals are then provided to the audio amplifier 214 for playback on speakers. In addition, the audio processing circuit 210 may include necessary circuitry to process analog signals as inputs to produce digital signals for sharing with other devices on a network.

[0054] Depending on an exact implementation, the module 212 may be implemented as a combination of hardware and software. In one embodiment, the module 212 is used to save a scene. The audio amplifier 214 is typically an analog circuit that powers the provided analog audio signals to drive one or more speakers.
[0055] It is understood that zone player 200 is an example of a playback device. Examples of playback devices include those zone players that are commercially offered for sale by Sonos, Inc.
of Santa Barbara, California. They currently include a ZonePlayer 90, ZonePlayer 120, and Sonos S5. The ZonePlayer 90 is an example zone player without a built-in amplifier, whereas the ZonePlayer 120 is an example zone player with a built-in amplifier. The S5 is an example zone player with a built-in amplifier and speakers. In particular, the S5 is a five-driver speaker system that includes two tweeters, two mid-range drivers, and one subwoofer. When playing audio content via the S5, the left audio data of a track is sent out of the left tweeter and left mid-range driver, the right audio data of a track is sent out of the right tweeter and the right mid-range driver, and mono bass is sent out of the subwoofer. Further, both mid-range drivers and both tweeters have the same equalization (or substantially the same equalization).
That is, they are both sent the same frequencies, just from different channels of audio. While the S5 is an example of a zone player with speakers, it is understood that a zone player with speakers is not limited to one with a certain number of speakers (e.g., five speakers as in the S5), but rather can contain one or more speakers. Further, a zone player may be part of another device, which might even serve a primary purpose different than audio.
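The driver routing described for the S5 example can be illustrated with the sketch below; the function name and the flat per-sample routing (band-splitting and crossover filtering are omitted) are simplifying assumptions rather than the product's actual signal path.

```python
def route_to_drivers(left: float, right: float) -> dict:
    """Route one stereo sample pair to the five drivers of an S5-like device."""
    mono_bass = 0.5 * (left + right)  # simple mono sum for the subwoofer
    return {
        "left_tweeter": left,
        "left_mid_range": left,
        "right_tweeter": right,
        "right_mid_range": right,
        "subwoofer": mono_bass,
    }

# Example: both left drivers receive the left channel, both right drivers the
# right channel, and the subwoofer receives the mono bass sum.
outputs = route_to_drivers(0.2, -0.1)
```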
IV. Example Controller
[0056] Referring now to FIG. 2B, there is shown an exemplary controller 240, which may correspond to the controlling device 140 or 142 of FIG. 1. The controller 240 may be used to facilitate the control of multi-media applications, automation and others in a complex. In particular, the controller 240 is configured to facilitate a selection of a plurality of audio sources available on the network, controlling operations of one or more zone players (e.g., the zone player 200) through an RF interface corresponding to the wireless interface 216 of FIG. 2A. According to one embodiment, the wireless means is based on an industry standard (e.g., infrared, radio, wireless standard IEEE 802.11a, 802.11b, 802.11g, 802.11n, or 802.15.1). When a particular audio source is being played in the zone player 200, a picture, if there is any, associated with the audio source may be transmitted from the zone player 200 to the controller 240 for display. In one embodiment, the controller 240 is used to synchronize audio playback of more than one zone player by grouping the zone players in a group. In another embodiment, the controller 240 is used to control the volume of each of the zone players in a zone group individually or together.
[0057] In an embodiment, the controller 240 is used to create a pairing between two or more playback devices to create or enhance a multi-channel listening environment.
For example, the controller 240 may be used to select and pair two or more playback devices. In addition, the controller 240 may be used to turn pairing on or off. The controller 240 may also be used to consolidate playback devices, and further to set a particular playback device in consolidated mode. Accordingly, in some embodiments, the controller 240 provides a flexible mechanism for dynamically configuring a multi-channel audio environment. In some instances, the pairing creates a multi-channel listening environment. In some instances, the pairing enhances a multi-channel listening environment by increasing the separation between devices.
For example, two individual playback devices, which are positioned at a distance from each other, may provide more channel separation to the listener than the audio coming from only a single device.
[0058] The user interface for the controller 240 includes a screen 242 (e.g., an LCD screen) and a set of functional buttons as follows: a "zones" button 244, a "back" button 246, a "music" button 248, a scroll wheel 250, an "OK" button 252, a set of transport control buttons 254, a mute button 262, a volume up/down button 264, and a set of soft buttons 266 corresponding to the labels 268 displayed on the screen 242.

[0059] The screen 242 displays various screen menus in response to a user's selection. In one embodiment, the "zones" button 244 activates a zone management screen or "Zone Menu", which is described in more detail below. The "back" button 246 may lead to different actions depending on the current screen. In one embodiment, the "back" button triggers the current screen display to go back to a previous one. In another embodiment, the "back" button negates the user's erroneous selection. The "music" button 248 activates a music menu, which allows the selection of an audio source (e.g., a song) to be added to a zone player's music queue for playback.
[0060] The scroll wheel 250 is used for selecting an item within a list, whenever a list is presented on the screen 242. When the items in the list are too many to be accommodated in one screen display, a scroll indicator such as a scroll bar or a scroll arrow is displayed beside the list. When the scroll indicator is displayed, a user may rotate the scroll wheel 250 to either choose a displayed item or display a hidden item in the list. The "OK" button 252 is used to confirm the user selection on the screen 242.
[0061] There are three transport buttons 254, which are used to control the effect of the currently playing song. For example, the functions of the transport buttons may include play/pause and forward/rewind a song, move forward to a next song track, or move backward to a previous track. According to one embodiment, pressing one of the volume control buttons such as the mute button 262 or the volume up/down button 264 activates a volume panel. In addition, there are three soft buttons 266 that can be activated in accordance with the labels 268 on the screen 242. It is understood that, in a multi-zone system, there may be multiple audio sources being played respectively in more than one zone player. The music transport functions described herein shall apply selectively to one of the sources when a corresponding one of the zone players or zone groups is selected.

[0062] FIG. 2C shows an exemplary controller 260 which may correspond to the controlling device 140 or 142 of FIG. 1. The controller 260 is provided with a touch screen that allows a user to interact with the controller, for example, to navigate a playlist of many items, to control operations of one or more players. In one embodiment, as it will be further shown in FIGS. 10A to 10F, a user may interact with the controller to make a multi-channel audio environment, such as create a stereo pair for example, and may even be used to separate the multi-channel audio environment, such as disengage a stereo pair. It should be noted that other network-enabled portable devices such as an iPhone, iPad or any other smart phone or network-enabled device may be used as a controller to interact or control multiple zone players in an environment (e.g., a networked computer such as a PC or Mac may also be used as a controller).
According to one embodiment, an application may be downloaded into a network enabled device.
Such an application may implement most of the functions discussed above for the controller 240 using a navigating mechanism or touch screen in the device. Those skilled in the art will appreciate the flexibility of such an application and its ability to be ported to a new type of portable device given the detailed description herein.
[0063] FIG. 2D illustrates an internal functional block diagram of an exemplary controller 270, which may correspond to the controller 240 of FIG. 2B, a computing device, smart phone, or any other communicative device. The screen 272 on the controller 270 may be an LCD screen. The screen 272 communicates with and is commanded by a screen driver 274 that is controlled by a microcontroller (e.g., a processor) 276. The memory 282 may be loaded with one or more application modules 284 that can be executed by the microcontroller 276 with or without a user input via the user interface 278 to achieve desired tasks. In one embodiment, an application module is configured to facilitate grouping a number of selected zone players into a zone group and synchronizing the zone players for one audio source. In another embodiment, an application module is configured to control together the audio sounds (e.g., volume) of the zone players in a zone group. In operation, when the microcontroller 276 executes one or more of the application modules 284, the screen driver 274 generates control signals to drive the screen 272 to display an application-specific user interface accordingly, more of which will be described below.
[0064] The controller 270 includes a network interface 280, referred to as an RF interface 280, that facilitates wireless communication with a zone player via a corresponding RF interface thereof. In one embodiment, the commands such as volume control and audio playback synchronization are sent via the RF interfaces. In another embodiment, a saved zone group configuration is transmitted between a zone player and a controller via the RF interfaces. The controller 270 may control one or more zone players, such as 102, 104 and 106 of FIG. 1. Nevertheless, there may be more than one controller, each preferably in a zone (e.g., a room or rooms nearby each other) and configured to control any one and all of the zone players.
[0065] In one embodiment, a user creates a zone group including at least two zone players from the controller 240 that sends signals or data to one of the zone players.
As all the zone players are coupled on a network, the received signals in one zone player can cause other zone players in the group to be synchronized so that all the zone players in the group play back an identical audio source or a list of identical audio sources in a timely synchronized manner such that no (or substantially no) audible delays or hiccups could be heard.
Similarly, when a user increases the audio volume of the group from the controller, the signals or data of increasing the audio volume for the group are sent to one of the zone players and causes other zone players in the group to be increased together in volume and in scale.
[0066] According to one implementation, an application module is loaded in memory 282 for zone group management. When a predetermined key (e.g., the "zones" button 244) is activated on the controller 240, the application module is executed in the microcontroller 276. The input interface 278 coupled to and controlled by the microcontroller 276 receives inputs from a user. A "Zone Menu" is then displayed on the screen 272. The user may start grouping zone players into a zone group by activating a "Link Zones" or "Add Zone" soft button, or de-grouping a zone group by activating an "Unlink Zones" or "Drop Zone" button. The detail of the zone group manipulation will be further discussed below.
[0067] As described above, the input interface 278 includes a number of function buttons as well as a screen graphical user interface. It should be pointed out that the controller 240 in FIG.
2B is not the only controlling device that may practice the embodiments. Other devices that provide the equivalent control functions (e.g., a computing device, a hand-held device) may also be configured to practice the present invention. In the above description, unless otherwise specifically described, it is clear that keys or buttons are generally referred to as either the physical buttons or soft buttons, enabling a user to enter a command or data.
[0068] One mechanism for "joining" zone players together for music playback is to link a number of zone players together to form a group. To link a number of zone players together, a user may manually link each zone player or room one after the other. For example, there is a multi-zone system that includes the following zones:
Bathroom
Bedroom
Den
Dining Room
Family Room
Foyer
[0069] If a user wishes to link five of the six zone players using the current mechanism, the user may start with a single zone and then manually link each zone to that zone. This mechanism may be sometimes quite time consuming. According to one embodiment, a set of zones can be dynamically linked together using one command. Using what is referred to herein as a theme or a zone scene, zones can be configured in a particular scene (e.g., morning, afternoon, or garden), where a predefined zone grouping and setting of attributes for the grouping are automatically effectuated.
[0070] For instance, a "Morning" zone scene configuration command would link the Bedroom, Den and Dining Room together in one action. Without this single command, the user would need to manually and individually link each zone. FIG. 3A provides an illustration of one zone scene, where the left column shows the starting zone grouping, in which all zones are separate, and the column on the right shows the effect of grouping the zones to make a group of 3 zones named after "Morning".
[0071] Expanding this idea further, a Zone Scene can be set to create multiple sets of linked zones. For example, a scene creates 3 separate groups of zones: the downstairs zones would be linked together, the upstairs zones would be linked together in their own group, and the outside zones (in this case the patio) would move into a group of its own.
[0072] In one embodiment as shown in FIG. 3B, a user defines multiple groups to be gathered at the same time. For example, an "Evening Scene" is desired to link the following zones:
Group 1
o Bedroom
o Den
o Dining Room
Group 2
o Garage
o Garden
where Bathroom, Family Room and Foyer should be separated from any group if they were part of a group before the Zone Scene was invoked.

[0073] A feature of certain embodiments is that zones do not need to be separated before a zone scene is invoked. In one embodiment, a command is provided and links all zones in one step, if invoked. The command is in a form of a zone scene. After linking the appropriate zones, a zone scene command could apply the following attributes (a code sketch applying these attributes follows the list):
Set volume levels in each zone (each zone can have a different volume).
Mute/Unmute zones.
Select and play specific music in the zones.
Set the play mode of the music (Shuffle, Repeat, Shuffle-repeat).
Set the music playback equalization of each zone (e.g., bass, treble).
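As a hypothetical illustration only, a zone scene carrying the attributes above might be represented and applied roughly as follows; the Zone class, field names, and values are assumptions for the sketch rather than the patent's implementation.

```python
from dataclasses import dataclass, field

@dataclass
class Zone:
    name: str
    volume: int = 0
    muted: bool = False
    eq: dict = field(default_factory=dict)

    def play(self, playlist: str, mode: str):
        print(f"{self.name}: '{playlist}' ({mode}) at volume {self.volume}")

example_scene = {
    "zones": ["Bedroom", "Den", "Dining Room"],
    "volumes": {"Bedroom": 25, "Den": 40, "Dining Room": 30},
    "muted": [],
    "playlist": "Morning Mix",
    "play_mode": "shuffle",                    # shuffle / repeat / shuffle-repeat
    "equalization": {"bass": 2, "treble": -1},
}

def invoke_zone_scene(scene: dict, zones: dict):
    linked = [zones[name] for name in scene["zones"]]   # link the named zones in one step
    for z in linked:
        z.volume = scene["volumes"][z.name]
        z.muted = z.name in scene["muted"]
        z.eq = dict(scene["equalization"])
    for z in linked:                                    # linked zones play in synchrony
        z.play(scene["playlist"], scene["play_mode"])
```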
[0074] A further extension of this embodiment is to trigger a zone scene command as an alarm clock function. For instance, the zone scene is set to apply at 8:00 am.
It could link appropriate zones automatically, set specific music to play and then stop the music after a defined duration. Although a single zone may be assigned to an alarm, a scene set as an alarm clock provides a synchronized alarm, allowing any zones linked in the scene to play a predefined audio (e.g., a favorable song, a predefined playlist) at a specific time or for a specific duration. If, for any reason, the scheduled music failed to be played (e.g., an empty playlist, no connection to a share, failed UPnP, no Internet connection for an Internet Radio station), a backup buzzer will sound. This buzzer will be a sound file that is stored in a zone player.
[0075] FIG. 4 shows an exemplary user interface 400 that may be displayed on a controller 142 or a computer 110 of FIG. 1. The interface 400 shows a list of items that may be set up by a user to cause a scene to function at a specific time. In the embodiment shown in FIG. 4, the list of items includes "Alarm", "Time", "Zone", "Music", "Frequency" and "Alarm length". "Alarm" can be set on or off. When "Alarm" is set on, "Time" is a specific time to set off the alarm. "Zone" shows which zone players are being set to play a specified audio at the specific time. "Music" shows what is to be played when the specific time arrives. "Frequency" allows the user to define a frequency of the alarm. "Alarm length" defines how long the audio is to be played. It should be noted that the user interface 400 is provided herein to show some of the functions associated with setting up an alarm. Depending on an exact implementation, other functions, such as time zone, daylight savings, time synchronization, and time/date format for display may also be provided.
[0076] According to one embodiment, each zone player in a scene may be set up for different alarms. For example, a "Morning" scene includes three zone players, each in a bedroom, a den, and a dining room. After selecting the scene, the user may set up an alarm for the scene as a whole.
As a result, each of the zone players will be activated at a specific time.
[0077] FIG. 5A shows a user interface 500 to allow a user to form a scene.
The panel on the left shows the available zones in a household. The panel on the right shows the zones that have been selected to be grouped as part of this scene. Depending on an exact implementation of a user interface, Add/Remove buttons may be provided to move zones between the panels, or zones may be dragged between panels.
[0078] FIG. 5B shows another user interface 520 to allow a user to form a scene. The user interface 520, which may be displayed on a controller or a computing device, lists available zones in a system. A checkbox is provided next to each of the zones so that a user may check the zones to be associated with the scene.
[0079] FIG. 5C shows a user interface 510 to allow a user to adjust a volume level of the zone players in a zone scene individually or collectively. As shown in the user interface 510, the "Volumes..." button (shown as sliders; other forms are possible) allows the user to affect the volumes of the associated zone players when a zone scene is invoked. In one embodiment, the zone players can be set to retain whatever volume they currently have when the scene is invoked. Additionally, the user can decide if the volumes should be unmuted or muted when the scene is invoked.

Providing Example Player Themes or Zone Scenes
[0080] FIG. 6 shows a flowchart or process 600 of providing a player theme or a zone scene for a plurality of players, where one or more of the players are placed in a zone. The process 600 is presented in accordance with one embodiment of the present invention and may be implemented in a module to be located in the memory 282 of FIG. 2C.
[0081] The process 600 is initiated when a user decides to proceed with a zone scene at 602.
The process 600 then moves to 604 where it allows a user to decide which zone players to be associated with the scene. For example, there are ten players in a household, and the scene is named after "Morning". The user may be given an interface to select four of the ten players to be associated with the scene. At 606, the scene is saved. The scene may be saved in any one of the members in the scene. In the example of FIG. 1, the scene is saved in one of the zone players and displayed on the controller 142. In operation, a set of data pertaining to the scene includes a plurality of parameters. In one embodiment, the parameters include, but may not be limited to, identifiers (e.g., IP address) of the associated players and a playlist. The parameters may also include volume/tone settings for the associated players in the scene. The user may go back to 602 to configure another scene if desired.
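For illustration, a saved scene of the kind described in [0081] might be serialized roughly as follows; the JSON layout, the addresses, and the field names are assumptions made for this sketch.

```python
import json

morning_scene = {
    "name": "Morning",
    "players": [                                 # identifiers (e.g., IP addresses) of members
        {"id": "192.168.1.21", "volume": 20},
        {"id": "192.168.1.22", "volume": 25},
        {"id": "192.168.1.23", "volume": 15},
        {"id": "192.168.1.24", "volume": 30},
    ],
    "playlist": "Morning Mix",
    "tone": {"bass": 0, "treble": 0},            # optional volume/tone settings
}

# The scene could be saved on any member of the scene (e.g., one of the zone
# players) and retrieved later for modification or activation.
saved = json.dumps(morning_scene)
restored = json.loads(saved)
assert restored["name"] == "Morning"
```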
[0082] Given a saved scene, a user may activate the scene at any time or set up a timer to activate the scene at 610. The process 600 can continue when a saved scene is activated at 610.
At 612, upon the activation of a saved scene, the process 600 checks the status of the players associated with the scene. The status of the players means that each of the players shall be in condition to react in a synchronized manner. In one embodiment, the interconnections of the players are checked to make sure that the players communicate among themselves and/or with a controller if there is such a controller in the scene.
[0083] It is assumed that all players associated with the scene are in good condition. At 614, commands are executed with the parameters (e.g., pertaining to a playlist and volumes). In one
embodiment, data including the parameters is transported from a member (e.g., a controller) to other members in the scene so that the players are caused to synchronize an operation configured in the scene. The operation may cause all players to play back a song in identical or different volumes or to play back a pre-stored file.
Example Multi-channel Environments
[0084] FIG. 7 shows an example configuration in which an audio source is played back on two players 702 and 704, according to an example embodiment. These two players 702 and 704 may be located in and around one place (e.g., a hall, room, or nearby rooms) and are designated to play two sound tracks respectively. For example, an audio source may have left and right sound channels or tracks (e.g., stereo sound). Instead of grouping the players 702 and 704 to play back the audio source together in synchrony, where each player 702 and 704 plays the same audio content at substantially the same time, the players 702 and 704 can be paired to play different channels of the audio source in synchrony. As a result of pairing, the stereo sound effects can be simulated or enhanced via two players 702 and 704 versus one player or none of the players, for example.
[0085] In certain embodiments, each player of players 702 and 704 includes a network interface, one or more speaker drivers (two or more speaker drivers in some instances, such as when the player can play in stereo mode absent pairing), an amplifier, and a processor, such as shown in FIG. 2A. The network interface receives audio data over a network.
One or more amplifiers power the speaker drivers. The processor processes the audio data to be output through the speaker drivers. The processor may further configure a first equalization of the output from the speaker drivers in accordance with a first type of pairing and configure a second equalization of the output from the speaker drivers in accordance with a second type of pairing.

[0086] In an embodiment, the two players 702 and 704 are configured to output a plurality of audio channels independent of each other. For example, each player 702 and 704 may be configured to output audio content in stereo independently from each other.
Subsequent to pairing, one playback device (e.g., player 702) is configured to output a first subset of the plurality of audio channels and the other playback device (e.g., player 704) is configured to output a second subset of the plurality of audio channels. The first and second subsets are different. In this example, subsequent to pairing players 702 and 704, player 702 might play the right channel and player 704 might play the left channel. In another example, player 702 might play the right channel plus a center channel (e.g., in television or theater mode) and player 704 might play the left channel plus the center channel. Even in the latter example, the first and second subsets are different in that player 702 is playing channels Right + Center and player 704 is playing channels Left + Center. In yet another embodiment, subsequent to pairing, player 702 might play all channels except certain bass frequencies, which may be played via player 704, thereby using player 704 as a subwoofer.
[0087] In another embodiment, a collection of three or more playback devices (e.g., players 702,704, and one or more additional players) are each configured to output a plurality of audio channels independent of another playback device in the collection. Subsequent to pairing, each of the playback devices is configured to output a generally different audio channel(s) from the collection. This embodiment is particularly useful in a television or movie theater setting where a particular playback device of the multiple playback devices is configured to output in two-channel or stereo mode at one time (e.g., when playing a song), and subsequent to pairing, is configured to output as a front-right channel, a front-center channel, a front-left channel, a rear-right channel, a rear-left channel, and so on (e.g., when watching a movie or television).
[0088] In another embodiment, one of the paired playback devices (e.g., player 702 or player 704) processes the data of the audio item, essentially separating the data into channels, each of the channels representing a single sound track, for example, and being played back in one of the playback devices, thus creating or enhancing a multi-channel listening environment. In an alternative embodiment, both playback devices (e.g., players 702 and 704) may receive and process the data of the audio item and each playback device may output only the audio content designated for the respective player. For example, player 702 might receive both left and right channel audio, but only play the left channel, whereas player 704 might also receive both left and right channel audio, but only play the right channel.
[0089] In another embodiment, two or more playback devices (e.g., players 702 or 704) may be grouped into a single or consolidated playback device, and the consolidated playback device (e.g., consolidated player 702 + 704) may be paired with one or more playback devices. For instance, two playback devices may be grouped into a first consolidated playback device and two additional playback devices may be grouped into a second consolidated playback device. Then, the first and second consolidated playback devices may be paired to create or enhance a multi-channel listening environment.
[0090] In certain embodiments, a playback device (e.g., either player 702 or 704) that is configured to output an audio channel is paired with one or more additional playback devices, such that the playback device is configured to output a different audio channel than previously configured. For instance, the playback device might be configured to output a right channel in stereo mode, but subsequent to being paired with one or more additional playback devices, might be configured to output a rear, right channel in theater mode. The playback device may be paired to one or more other playback devices.
[0091] In certain embodiments, a playback device (e.g., either player 702 or 704) that is configured to output a plurality of audio channels is paired with one or more additional playback devices, such that the playback device is configured to output a subset of the plurality of audio channels relative to the one or more additional playback devices. For instance, the playback device might be configured to output in two-channel or stereo mode, but subsequent to being paired with one or more playback devices might be configured to output a right or left channel. The playback device may be paired to one or more other playback devices.
[0092] According to certain embodiments, the action of pairing two or more playback devices is triggered based on a command from a user via a control interface (e.g., a manual command) or responsive to an event (e.g., an automatic command). For example, using a controller, a user can create a pairing between two or more playback devices or disengage the pairing between two or more playback devices. In another example, pairing may be triggered by the audio content itself, a signal received from a source device, or some other predefined event, such that pairing occurs when the event is detected by the controller or playback device, for example. In addition, another device might be programmed to detect the event and provide a pairing signal to the controller and/or playback devices.
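As a rough illustration of the two trigger paths just described (a manual command from a controller versus an automatic, event-driven command), the following sketch routes both to the same pairing action. The function names, state representation, and event labels are hypothetical.

    def apply_pairing(system_state, players, pairing_type):
        """Record that the listed players are paired in the given mode."""
        system_state[tuple(sorted(players))] = pairing_type

    def on_user_command(system_state, players, pairing_type):
        # Manual trigger: the user requested a pairing via the controller UI.
        apply_pairing(system_state, players, pairing_type)

    def on_event(system_state, event, players):
        # Automatic trigger: a predefined event detected by a controller or player.
        if event == "movie_audio_detected":
            apply_pairing(system_state, players, "theater")
        elif event == "music_audio_detected":
            apply_pairing(system_state, players, "stereo")

    state = {}
    on_user_command(state, ["702", "704"], "stereo")          # e.g., from the UI
    on_event(state, "movie_audio_detected", ["702", "704"])   # e.g., from content
    print(state)   # {('702', '704'): 'theater'}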
[0093] Further, it is understood that going from a configuration of no pairing (unpaired or non-paired) to a configuration of pairing, or from one kind of pairing (e.g., a pairing used in a type of stereo mode or theater mode) to a different kind of pairing (e.g., another pairing used in a type of stereo mode or theater mode), are all various types of "pairing" that can occur according to certain embodiments. In addition, disengaging a pairing between multiple playback devices might go from pairing to no pairing or from pairing of a first kind back to pairing of a previous kind, for example.
[0094] In one example, a first type of pairing might include "no pairing" with another playback device and a second type of pairing might include pairing with one or more additional playback devices. In a second example, a first type of pairing might include pairing with a second playback device and a second type of pairing might include pairing with a plurality of playback devices. In a third example, a first type of pairing might include reproducing two-channel sound via the speaker drivers and a second type of pairing comprises reproducing no more than one channel of the two-channel sound via the speaker drivers. In a fourth example, a first type of pairing might comprise reproducing a first audio channel via the speaker drivers and the second type of pairing might include reproducing a second audio channel via the speaker drivers. In a fifth example, a first type of pairing might include reproducing the audio content via the speaker drivers in stereo mode and a second type of pairing might include reproducing the audio content via the speaker drivers in theater mode. In a sixth example, a first type of pairing might include reproducing the audio content via the speaker drivers and a second type of pairing comprises reproducing the audio content via the speaker drivers when in consolidated mode. It is understood that various variations and modifications may be made to the examples described just above with the attainment of some or all of the advantages of the technology described herein.
[0095] According to certain embodiments, the configuration of a playback device may include any of: changing the equalization of the playback device by changing the equalization of one or more specific speaker drivers, and optimizing the synchronization between paired devices. Changing the equalization of the playback device might include any of: turning on or off (or effectively muting) one or more specific speaker drivers, changing the channel output of one or more speaker drivers, changing the frequency response of one or more specific speaker drivers, changing the amplifier gain of any particular speaker driver, and/or changing the amplifier gain of the playback device as a whole.
[0096] In certain embodiments, changing the equalization of a playback device (e.g., changing the equalization of one or more speaker drivers of the playback device) may affect frequency-dependent parameters. Examples might include the adjustment of the strength of frequencies within the audio data, a phase adjustment, and a time-delay adjustment. In addition, a particular equalization may use a first type of pass filter, such as one that attenuates high, middle, or low frequencies, for example, while allowing other frequencies to pass unfiltered (or substantially unfiltered). Filters might also be of different kinds or of a different order (e.g., first order filter, second order filter, third order filter, fourth order filter, and so on). For example, a first equalization of a playback device might include using a first type of pass filter to modify the output based on a first type of pairing and a second equalization of the playback device might include using a second type of pass filter to modify the output based on the second type of pairing. In this example, the first and second types of pass filters have one or more different properties and/or behaviors, thus changing the equalization and sonic behavior of the device.
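The following sketch, which assumes NumPy and SciPy are installed, illustrates the kind of pairing-dependent equalization described above: a low-pass filter for a bass-only (subwoofer-like) pairing, a high-pass filter plus a gain change for a stereo pairing, and no extra filtering when unpaired. The cutoff frequency, filter order, and gain values are arbitrary examples, not values taken from the patent.

    import numpy as np
    from scipy.signal import butter, lfilter

    def equalize(samples, sample_rate, pairing_type):
        """Filter a mono sample block according to the active pairing type."""
        if pairing_type == "subwoofer":
            # Second-order low pass: keep only low frequencies for the bass player.
            b, a = butter(2, 120, btype="low", fs=sample_rate)
            gain = 1.0
        elif pairing_type == "stereo_pair":
            # Second-order high pass: hand the lowest octaves to a paired subwoofer.
            b, a = butter(2, 120, btype="high", fs=sample_rate)
            gain = 0.9
        else:
            return samples                      # unpaired: full-range output
        return gain * lfilter(b, a, samples)

    fs = 44100
    t = np.arange(fs) / fs                      # one second of audio
    signal = np.sin(2 * np.pi * 60 * t) + np.sin(2 * np.pi * 1000 * t)
    print(np.std(equalize(signal, fs, "subwoofer")))     # mostly the 60 Hz tone
    print(np.std(equalize(signal, fs, "stereo_pair")))   # mostly the 1 kHz tone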
[0097] By way of illustration, when two S5 devices are paired to create a stereo pair, for example, one S5 device may be configured as the "left" and the other S5 device may be configured as the "right." In one embodiment, the user may determine which is left or right. In this configuration, for example, the left and right audio data may be sent to both S5 devices, but the left audio data of the track is played out of the S5 device configured as left and the right audio data of a track is played out of the S5 device configured as right. In addition, the equalization of each S5 device is changed in an attempt to reduce or eliminate certain constructive or destructive interference. For example, one tweeter on each S5 device may be turned off or substantially muted. In certain embodiments, the crossover frequency to each driver may even be changed from a previous configuration so that two or more drivers are not necessarily outputting the exact same audio data; otherwise, constructive and/or destructive interference may occur. In certain embodiments, the amplifier gain is adjusted for a particular speaker driver and/or for the playback device as a whole.
[0098] In operation, according to certain embodiments, a controller 706 (e.g., a controller 142 of FIG. 1 or 240 of FIG. 2B, or a portable device) is used to initiate the operation. Through a user interface, the controller 706 causes a player 702 to retrieve the audio source, provided the audio source is on a network 708 (e.g., the Internet or a local area network).
Similarly, the controller 706 may also cause a designated device (e.g., another networked device) to establish a communication session with the player 702 to deliver the requested audio source. In any case, either one or both of the players 702 and 704 may have access to the data representing the audio source.

[0099] In certain embodiments, a module in the player 702 is activated to process the data.
According to one embodiment, the right and left sound tracks are separated.
One sound track is retained locally in one player and the other sound track is pushed or uploaded to the other device (e.g., via an ad-hoc network). When the right and left sound tracks are played back simultaneously or substantially simultaneously, the stereo sound effect can be appreciated.
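A minimal sketch of this separation step, assuming the audio arrives as interleaved 16-bit stereo PCM; the actual transport of the second track to the paired device is omitted, and the function name is illustrative.

    import numpy as np

    def split_stereo(interleaved_pcm):
        """Split interleaved L/R int16 samples into separate left and right tracks."""
        frames = np.asarray(interleaved_pcm, dtype=np.int16).reshape(-1, 2)
        return frames[:, 0].copy(), frames[:, 1].copy()

    # Four stereo frames of dummy data: left samples positive, right negative.
    pcm = np.array([100, -100, 200, -200, 300, -300, 400, -400], dtype=np.int16)
    left, right = split_stereo(pcm)
    print(left.tolist())    # [100, 200, 300, 400]      -> retained locally
    print(right.tolist())   # [-100, -200, -300, -400]  -> pushed to the paired player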
[00100] In another embodiment, several tracks are separated, such as in television or theater mode. For example, the tracks may be separated into a center channel, right front channel, left front channel, right rear channel, left rear channel, and so on. Accordingly, one or more sound tracks may be retained locally in one player and the other sound tracks are pushed or uploaded to the other devices.
[00101] In yet another embodiment, one player might process the data and retain one or more tracks locally, while the remaining data is sent onto another player. The receiving player may then process the data and retain one or more tracks locally and send any remaining data onto another player. This process, or one like it, may continue until all of the tracks are retained locally by corresponding player devices.
[00102] In yet another embodiment, each player might receive and process the data and play only the channel or channels that are designated for that player.
[00103] In certain embodiments, it is important to maintain good synchronization, especially when pairing two or more independently clocked playback devices, so that the multi-channel audio content is played back as it was originally intended. According to an embodiment, a message may be initiated from one device to another that is also activated to send back an acknowledgement. Upon receiving the acknowledgement, the time delay in transporting data from one device to another can be measured. The time delay will be considered when synchronizing the two players to play back the two separated sound tracks. In certain embodiments, if sending a packet (e.g., a packet in accordance with the SNTP protocol) to a playback device and receiving a response takes more than fifteen milliseconds, for example, the timing information contained within that packet, such as clock information, is discarded. If sending and receiving a packet takes less than fifteen milliseconds, then the information from the packet is used to adjust playback, if so necessary.
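The sketch below illustrates the round-trip check described above: a timing probe is sent to the peer, and the clock reading in the reply is used only if the exchange completes within fifteen milliseconds. A loopback UDP echo thread stands in for the paired playback device; this is not an SNTP implementation or Sonos's actual protocol.

    import socket
    import struct
    import threading
    import time

    RTT_BUDGET_S = 0.015     # fifteen milliseconds, per the example above

    def peer_clock_service(sock):
        # Stand-in for the paired device: answer each probe with its clock reading.
        while True:
            msg, addr = sock.recvfrom(64)
            if msg == b"stop":
                return
            sock.sendto(struct.pack("!d", time.monotonic()), addr)

    def sample_peer_clock(sock, peer_addr):
        """Return (estimated peer time, rtt), or None if the round trip is too slow."""
        t0 = time.monotonic()
        sock.sendto(b"clock?", peer_addr)
        data, _ = sock.recvfrom(64)
        rtt = time.monotonic() - t0
        if rtt > RTT_BUDGET_S:
            return None                       # discard stale timing information
        peer_time = struct.unpack("!d", data)[0]
        return peer_time + rtt / 2.0, rtt     # approximate one-way delay as rtt/2

    peer = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    peer.bind(("127.0.0.1", 0))
    threading.Thread(target=peer_clock_service, args=(peer,), daemon=True).start()

    local = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    local.bind(("127.0.0.1", 0))
    print(sample_peer_clock(local, peer.getsockname()))
    local.sendto(b"stop", peer.getsockname())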
[00104] Additional details of synchronizing operations of two or more independently clocked players are provided in commonly assigned U.S. application No. 10/816,217, filed Apr. 1, 2004, entitled "System and Method For Synchronizing Operations Among A Plurality Of Independently Clocked Digital Data Processing Devices".
[00105] FIG. 8 shows an example configuration of a pairing amongst multiple players 802, 804, 806, 808, 810, and 812 in a theater-like environment, in accordance with an embodiment.
Player 802 may operate as a front-left channel, player 804 may operate as a center channel, player 806 may operate as a front-right channel, player 808 may operate as a subwoofer, player 810 may operate as a rear-left channel, and player 812 may operate as a rear-right channel. In this example, the players 802, 804, 806, 808, 810, and 812 are wirelessly coupled over network 815 so as to receive and transmit data over a wireless network, and obtain power from power outlets in the wall or through some other power source (e.g., a battery).
Players 802, 804, 806, 808, 810, and 812 may be wired, if so configured in an alternate embodiment.
Controller 814 may be a network-enabled device, examples of which include a smart phone, tablet computer, laptop computer, desktop computer, or a television.
[00106] In one embodiment, a designated player, such as player 804, receives multi-channel audio content from a source 816. Source 816 might include audio and/or video content downloaded or streamed from the Internet, a DVD or Blu-Ray player, or from some other source of audio and/or video content. Player 804 separates the multi-channel audio and sends respective audio channels to its playback owner. For example, if a particular audio channel is designated for the front-right speaker, then that content is wirelessly directed from player 804 to player 802, and so on. Players 802, 804, 806, 808, 810, and 812 play the audio content synchronously, so as to create a multi-channel listening environment. Moreover, if source 816 provides video content along with audio content, then the audio content is preferably played in synchrony with the video content.
[00107] In another embodiment, each player of players 802, 804, 806, 808, 810, and 812 may separate out its own one or more channels for playback. That is, either all audio content, or a portion thereof, is sent to each player (e.g., from source 816 or another playback device) and the player itself obtains its own data for playback.
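As a rough sketch of this alternative, assume every player receives the same six-channel frames and keeps only the column(s) assigned to it; the channel order and player-to-channel map below are illustrative and not taken from a real surround-sound format or the patent.

    import numpy as np

    CHANNEL_ORDER = ["front_left", "center", "front_right",
                     "subwoofer", "rear_left", "rear_right"]
    ASSIGNMENT = {
        "player_802": ["front_left"],  "player_804": ["center"],
        "player_806": ["front_right"], "player_808": ["subwoofer"],
        "player_810": ["rear_left"],   "player_812": ["rear_right"],
    }

    def extract_own_channels(player, frames):
        """frames is an (n_frames, 6) array; keep only this player's column(s)."""
        cols = [CHANNEL_ORDER.index(name) for name in ASSIGNMENT[player]]
        return frames[:, cols]

    frames = np.arange(18).reshape(3, 6)          # three dummy 6-channel frames
    print(extract_own_channels("player_806", frames).ravel().tolist())  # [2, 8, 14]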
[00108] In addition, players 802, 804, 806, 808, 810, and 812 may be reconfigured to operate in many different configurations, such as described above. For example, players 802 and 806 may be paired to operate in stereo mode, while the other players remain in sleep mode or turned off (player 808 may remain on in any particular configuration, if so desired and configured, because it is operating as a subwoofer). In another example, players 802 and 810 may be consolidated and output left channel audio, while players 806 and 812 may be consolidated and output right channel audio. In yet another example, some of players 802, 804, 806, 808, 810, and 812 are consolidated into a single player and paired with additional playback devices, such as in an adjacent room. In a further example, players 802, 804, 806, 808, 810, and 812 are grouped and not paired, when the audio content is music (versus movie content, for example). These are just some configuration examples. Many other configurations are possible using the teachings described herein.
[00109] FIG. 9 shows a flowchart or process 900 of grouping a plurality of audio products to play separated sound tracks in synchronization to simulate a multi-channel listening environment. The process 900 is presented in accordance with certain embodiments and may be implemented in a module to be located in the memory 282 of FIG. 2D. To facilitate the description of process 900, a listening environment of stereo sound with left and right channels is described. Those skilled in the art can appreciate that the description can be equally applied to other forms of multi-channel listening environments (e.g., three, five, or seven channel environments).
[00110] Typically, there is a plurality of players being controlled by one or more controllers, where these players are disposed in various locations. For example, there are five players in a house; three of them are respectively disposed in three rooms while two players are disposed in a larger room. Accordingly, these two players would be candidates to be paired to simulate a stereo listening environment, instead of just playing synchronized audio from both in a grouped fashion. In another example, there are four players in a large space or adjacent spaces; two pairs of the players may be paired to simulate a stereo listening environment, in which two players in one consolidated pair can be grouped to play back one (left) sound track and the other two in the other consolidated pair can be grouped to play back one (right) sound track.
[00111] In any case, two groups of players or two players are decided to be paired at 902. If no players are paired, the process 900 will not be activated. It is assumed that two players from a group of players being controlled by a controller are selected to be paired at 902. The process 900 proceeds.
[00112] At 904, a user may decide which player is to play back which sound track. Depending on the location of the user or listener(s) with respect to the selected players, it is assumed that a player or unit A is chosen to play back a left sound track and another player or unit B is chosen to play back a right sound track. In an alternative embodiment, the players themselves (or the controller) may automatically determine which unit is configured to play the right channel and which unit is configured to play the left channel without input from the user.
[00113] According to one embodiment, a time delay in transporting data between the two units A and B is measured at 906. This time delay may facilitate sound synchronization between the two units, as one of the units will receive a processed sound track from the other. The user may continue to operate on a controller to select a title (e.g., an audio source or an item from a playlist) for playback on the two units at 910.
[00114] Once the title is determined at 912, the data for the title is accessed. Depending on where the data is located, the controller may be configured to cause one of the two units to obtain or stream in the data. In one embodiment, the controller or unit A initiates a request to a remotely-networked device providing or storing the data. Assuming an authentication procedure, if any, completes successfully, the remote device starts to upload the data to the unit A.
Likewise, if the data is locally stored in the unit A, the data can be accessed locally without requesting the same from the network. As the data is being received or accessed in the unit A, a processing module is activated in the unit A to process the data, essentially separating the data into two streams of sound tracks at 914. In an alternative embodiment, each unit may receive and process the data, essentially separating the data into a stream to be played by the respective unit.
[00115] At 916, one of the streams is uploaded from the unit A to unit B via a local network (e.g., the ad-hoc network formed by all the players being controlled by the controller). As the streams are being distributed, the two units are configured to play back the streams respectively, each reproducing the sound of a single sound track at 918. Together, in synchrony, the two units create a stereo sound listening environment.
[00116] It should be noted that the delay time, if noticeable, may be incorporated into the unit A to delay the consumption of the stream by the delay time, to synchronize with the unit B. Alternatively, a non-selected player may be used to process the streaming data of the title and configured to supply two streams to the pair of players, thus equalizing the delay time that would otherwise be experienced by the unit B.
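Pulling the steps of process 900 together, the following sketch uses hypothetical helpers to show the shape of the flow: measure the transport delay (906), separate the title into two tracks (914), push one track to unit B (916), and delay unit A's local playback by the measured amount before both units play (918, and the delay handling of paragraph [00116]). It is an outline of the described flow under those assumptions, not an implementation of the patented process.

    import time

    def measure_delay(push_fn):
        """Step 906: time a round trip to unit B via the supplied transport."""
        t0 = time.monotonic()
        push_fn(b"ping")
        return time.monotonic() - t0

    def pair_and_play(title_frames, push_to_b, play_local, delay_s):
        left  = [frame[0] for frame in title_frames]   # step 914: separate tracks
        right = [frame[1] for frame in title_frames]
        push_to_b(right)                               # step 916: upload to unit B
        time.sleep(delay_s)                            # [00116]: absorb the delay
        play_local(left)                               # step 918: unit A plays left

    unit_b_queue = []                                  # loopback stand-in for unit B
    delay = measure_delay(unit_b_queue.append)
    pair_and_play([(1, -1), (2, -2), (3, -3)], unit_b_queue.append, print, delay)
    print(unit_b_queue)       # [b'ping', [-1, -2, -3]]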
[00117] FIGS. 10A-10F show illustrative screenshots of a controller for creating a stereo pair in accordance with certain embodiments. The screenshots are from a computing device (e.g., a tablet computer, laptop, or desktop) used as a controller. Those skilled in the art can appreciate that FIGS. 10A-10F may be readily modified to be used in a portable device with network capability, such as, for example, an iPhone or iTouch or other smart phone or other network-enabled devices. Additionally, the controller might exist as part of a player itself or be directly/indirectly coupled to the player, and therefore such screenshots may be modified accordingly; such a controller need not have network capability, as the player will have network connectivity.
[00118] FIG. 10A shows a graphic interface 1000 that may be displayed on a controller when a user desires to create a stereo pair with two players in a system. It is understood that the system may include two or more players. If a stereo pair is desired, such as discussed with respect to the example of FIGS. 10A-10F, then any two players (one or both of which may be a consolidated player) in the system may be paired. However, if pairing more than two players is desired, such as creating an environment which is capable of playing more than two-channel audio data, then the graphic interface 1000 may include an additional option or options. For example, an option might include "Make a Movie Surround Sound Pairing," "Make a Music Surround Sound Pairing," or "Make a Dolby Pro Logic Pairing." Any descriptive language may be used to appropriately indicate to the user the type of pairing that can be created.
Upon selecting an option, a setup wizard on the controller may help the user appropriately configure the system such that multi-channel discrete audio may be effectively realized by the system.
[00119] Turning back to FIG. 10A, the interface 1000 allows a user to initiate a stereo pair with a zone player named "ZPS5-Black." In certain embodiments, the system recognizes that ZPS5-Black is part of a particular zone (e.g., kitchen, family room, bedroom, and so on). The system may allow the user to pair ZPS5-Black with another player in the same zone only, or alternatively, the system may allow the user to pair ZPS5-Black with another player in a different zone (such as an adjacent zone). Pairing players in different zones may be particularly useful when an open space is divided into two or more zones (e.g., an open space might include a kitchen and family room, for example).
[00120] Additionally, the system may be programmed such that pairing players from different zones creates another zone to reflect the players in paired mode (e.g., a single kitchen-family room zone during paired operation might originate from a kitchen zone and a family room zone during non-paired operation). In such an embodiment, a user may be able to switch between zones or dynamically create new zones.
[00121] In certain embodiments, if another similar player is available to be paired, then the screenshot of FIG. 10B may be displayed. If the user wishes to continue with creating a pair, then the user may select "OK." If not, then the user may select "Cancel." In another embodiment, a different player (e.g., a player that is not an S5) may be paired together.
That is, different types of players may be paired, if the players are so designed to be paired. To accommodate the differences in player type, the equalization of one or more players may be adjusted accordingly to compensate for things like the number and size of speaker drivers used in one player versus the other player. In yet another embodiment, a list of the players in the system may be displayed (not shown), from which the user selects two or more players to make the stereo pair. The list of players may be automatically determined by the system based on a player's particular location within a home, room, or configuration with other players within a room, for example.
[00122] Turning now to FIG. 10C, in this example, it is assumed that the user may select a zone player named "ZPS5-White" to be paired with "ZPS5-Black" to create a stereo pair. If so desired, the user may select "OK" to proceed with the pairing. Otherwise, the user may select "Cancel." In certain embodiments, ZPS5-White may be in the same zone as ZPS5-Black. In other embodiments, ZPS5-White may be in a different zone than ZPS5-Black.
[00123] Upon selecting "OK" in FIG. 10C, a screenshot like that of FIG. IOD
may be displayed to the user, thereby instructing the user to press the mute button (or some other designated button) on the "LEFT" player of the stereo pair. Further, a light on the players may flash to further indicate that each of the players is a possibility for left channel pairing. Upon selection of the left player, FIG. IOE may be displayed to inform the user that a pair has been created along with a name for the pair, if so desired. Responsively, the system will play the left channel audio from the user designated player and will automatically play the right channel audio from the other player. FIG. I OF provides an example screenshot to allow the user to separate the stereo pair if so desired.
[00124] In an alternative embodiment, the creation of a stereo pair may be an option for a particular zone or a number of zones (e.g., a household of zones). For example, an option like "Create a Stereo Pair" may exist such that upon selection, a setup wizard may launch asking the user to press a flashing mute button (or some other designated button) on whichever speaker the user wanted to be the left speaker in the zone, a portion of zones, or all of the zones. In one embodiment, flashing would occur for all of the same speaker types. In another embodiment, flashing would occur for all speaker types that are capable of being paired.
After choosing the left speaker, the wizard screen would ask the user to do the same for the right speaker.
Preferably, only the speakers that are capable of being paired as the right speaker are flashing so as to appropriately narrow the choices for the user.
[00125] Additionally, in one embodiment and as shown in FIG. 3A or 3B, a graphic display is provided to show to the user all the players in a system and how they are grouped or named. A nickname for the stereo pair in the display 1040 may be highlighted and would be further displayed in FIG. 3A if FIG. 3A is modified after the stereo pair is complete.
[00126] A similar graphic interface may be used to create a pair in an environment having more than two channels. For example, in a home theater environment, the system may list more than two separate players from which the user can create a pairing by selecting which player is to operate as the front right, center, front left, rear right, and rear left. A subwoofer may also be added to the list, so that it can be integrated into the multi-channel pairing by the user.
[00127] As an example, similar to what is described in the various embodiments above with respect to creating a stereo pair, the system may flash an indicator light on all relevant players and a setup wizard may ask the user to select the "front-left," then the "front-right," then the "front-center," then the "rear-left," then the "rear-right," and so on until all of the players are appropriately paired. Preferably, only the speakers that are capable of being paired as the next speaker are flashing so as to appropriately narrow the choices for the user.
VII. Conclusion

[00128] The components, elements, and/or functionality of the systems discussed above may be implemented alone or in combination in various forms in hardware, firmware, and/or as a set of instructions in software, for example. Certain embodiments may be provided as a set of instructions residing on a computer-readable medium, such as a memory, hard disk, CD-ROM, DVD, and/or EPROM, for execution on a processing device, such as a controller and/or playback device.

[00129] The scope of the claims should not be limited by the preferred embodiments set forth above, but should be given the broadest interpretation consistent with the description as a whole.

Claims (11)

What is claimed is:
1. A method for a networked playback system comprising a plurality of playback devices configured to wirelessly communicate with and to be controlled by at least one controller device, the method comprising:
receiving, at a first playback device from a source device, multi-channel audio data comprising at least first and second audio channels;
detecting, at the controller device, a selection by a user of an option displayed on a graphic interface of the controller device to form a multi-channel pair in which one or more of the playback devices are designated to play one or more audio channels;
prompting, via the graphic interface of the controller device, the user to press a button on a playback device to be configured to play the first audio channel;
responsively detecting, by the first playback device, a first manual command input by a user to a first control interface that exists as a part of the first playback device for forming the multi-channel pair with a second playback device, wherein the first control interface comprises a button;
prompting, via the graphic interface of the controller device, the user to press a button on a playback device to be configured to play the second audio channel;
responsively detecting, by the second playback device, a second manual command input by a user to a second control interface that exists as a part of the second playback device to form the multi-channel pair with the first playback device, wherein the second control interface comprises a button; and after forming the multi-channel pair:
sending, from the first playback device to the second playback device, at least the second audio channel; and playing, by the first playback device, the first audio channel.
2. The method of claim 1, wherein:
the step of prompting the user to press a button on a playback device to be configured to play the first audio channel comprises displaying on the graphic interface of the controller, a prompt to press a button on whichever playback device the user wants to play the first audio channel; and/or the step of prompting the user to press a button on a playback device to be configured to play the second audio channel comprises displaying on the graphic interface of the controller, a prompt to press a button on whichever playback device the user wants to play the second audio channel.
3. The method of claim 1 or 2, wherein:
the step of prompting the user to press a button on a playback device to be configured to play the first audio channel includes causing a designated button to flash on playback devices that are capable of being paired; and/or the step of prompting the user to press a button on a playback device to be configured to play the second audio channel includes causing a designated button to flash on playback devices that are capable of being paired.
4. The method of any one of claims 1 to 3, wherein:
the step of prompting the user to press a button on a playback device to be configured to play the first audio channel includes causing a designated button to flash on playback devices that are a same speaker type; and/or the step of prompting the user to press a button on a playback device to be configured to play the second audio channel includes causing a designated button to flash on playback devices that are a same speaker type.
5. The method of any one of claims 1 to 4, wherein receiving the audio data comprises wirelessly receiving the audio data, and wherein sending the second audio channel comprises wirelessly sending the audio data.
6. The method of any one of claims 1 to 5, wherein detecting the selection by the user of the option to form the multi-channel pair comprises detecting a command to form a stereo pair in which the first playback device is designated as one of a left speaker and a right speaker and the second playback device is designated as the other of the left speaker and the right speaker.
7. The method of any one of claims 1 to 6, wherein the multi-channel audio data comprises left and right audio channels.
8. The method of claim 7, wherein playing the first audio channel of the multi-channel audio data comprises playing one of the left and right audio channels.
9. The method of claim 8, wherein sending at least the second audio channel of the multi-channel audio data comprises sending:
both of the left and right audio channels; or one of the left and right audio channels.
10. A system comprising first and second playback devices and a controller device, and configured for performing the method of any one of claims 1 to 9, wherein:
the first playback device comprises:
the first control interface;
a first network interface;
one or more first speaker drivers; and a first processor; and the second playback device comprises:
the second control interface;
a second network interface;
one or more second speaker drivers; and a second processor, wherein the first and second processors are configured for performing the steps of detecting the first and second manual commands input to the first and second control interfaces respectively, wherein the first processor is configured for performing the steps of:
sending, from the first playback device to the second playback device, at least the second audio channel; and playing the first audio channel through the one or more first speaker drivers, and wherein the second processor is configured for performing the steps of:
wirelessly receiving, from the first playback device via the network interface, at least the second audio channel; and playing, through the one or more second speaker drivers, the second audio channel.
11. A computer readable medium having instructions stored therein that, when executed by a processor, cause a first playback device to perform the method of one of claims 1 to 9.
CA2947275A 2012-04-26 2012-04-26 Multi-channel pairing in a media system Active CA2947275C (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CA3032479A CA3032479C (en) 2012-04-26 2012-04-26 Multi-channel pairing in a media system
CA2947275A CA2947275C (en) 2012-04-26 2012-04-26 Multi-channel pairing in a media system

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
PCT/IB2012/052071 WO2012137190A1 (en) 2011-04-08 2012-04-26 Multi-channel pairing in a media system
CA2832542A CA2832542C (en) 2012-04-26 2012-04-26 Multi-channel pairing in a media system
CA2947275A CA2947275C (en) 2012-04-26 2012-04-26 Multi-channel pairing in a media system

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
CA2832542A Division CA2832542C (en) 2012-04-26 2012-04-26 Multi-channel pairing in a media system

Related Child Applications (1)

Application Number Title Priority Date Filing Date
CA3032479A Division CA3032479C (en) 2012-04-26 2012-04-26 Multi-channel pairing in a media system

Publications (2)

Publication Number Publication Date
CA2947275A1 CA2947275A1 (en) 2012-10-11
CA2947275C true CA2947275C (en) 2019-03-12

Family

ID=49628141

Family Applications (4)

Application Number Title Priority Date Filing Date
CA2832542A Active CA2832542C (en) 2012-04-26 2012-04-26 Multi-channel pairing in a media system
CA3184770A Pending CA3184770A1 (en) 2012-04-26 2012-04-26 Multi-channel pairing in a media system
CA2947275A Active CA2947275C (en) 2012-04-26 2012-04-26 Multi-channel pairing in a media system
CA3032479A Active CA3032479C (en) 2012-04-26 2012-04-26 Multi-channel pairing in a media system

Family Applications Before (2)

Application Number Title Priority Date Filing Date
CA2832542A Active CA2832542C (en) 2012-04-26 2012-04-26 Multi-channel pairing in a media system
CA3184770A Pending CA3184770A1 (en) 2012-04-26 2012-04-26 Multi-channel pairing in a media system

Family Applications After (1)

Application Number Title Priority Date Filing Date
CA3032479A Active CA3032479C (en) 2012-04-26 2012-04-26 Multi-channel pairing in a media system

Country Status (3)

Country Link
JP (1) JP5953366B2 (en)
CN (2) CN103597858B (en)
CA (4) CA2832542C (en)

Families Citing this family (36)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9654821B2 (en) 2011-12-30 2017-05-16 Sonos, Inc. Systems and methods for networked music playback
US9674587B2 (en) 2012-06-26 2017-06-06 Sonos, Inc. Systems and methods for networked music playback including remote add to queue
US9501533B2 (en) 2013-04-16 2016-11-22 Sonos, Inc. Private queue for a media playback system
US9361371B2 (en) 2013-04-16 2016-06-07 Sonos, Inc. Playlist update in a media playback system
US9247363B2 (en) 2013-04-16 2016-01-26 Sonos, Inc. Playback queue transfer in a media playback system
US10715973B2 (en) 2013-05-29 2020-07-14 Sonos, Inc. Playback queue control transition
US9495076B2 (en) 2013-05-29 2016-11-15 Sonos, Inc. Playlist modification
US9684484B2 (en) 2013-05-29 2017-06-20 Sonos, Inc. Playback zone silent connect
US9735978B2 (en) 2013-05-29 2017-08-15 Sonos, Inc. Playback queue control via a playlist on a mobile device
EP3826237A1 (en) * 2013-05-29 2021-05-26 Sonos Inc. Playback queue control via a playlist on a mobile device
US9703521B2 (en) 2013-05-29 2017-07-11 Sonos, Inc. Moving a playback queue to a new zone
US9798510B2 (en) 2013-05-29 2017-10-24 Sonos, Inc. Connected state indicator
US9953179B2 (en) 2013-05-29 2018-04-24 Sonos, Inc. Private queue indicator
US9348824B2 (en) 2014-06-18 2016-05-24 Sonos, Inc. Device group identification
JP6459379B2 (en) * 2014-10-17 2019-01-30 ヤマハ株式会社 Acoustic system
KR102221680B1 (en) 2014-12-11 2021-03-02 삼성전자주식회사 Sound output apparatus, sound output system and the controlling method
WO2016118314A1 (en) * 2015-01-21 2016-07-28 Qualcomm Incorporated System and method for changing a channel configuration of a set of audio output devices
CN105070302B (en) * 2015-07-09 2017-07-04 广东欧珀移动通信有限公司 A kind of method and terminal of playback equipment control
JP6602086B2 (en) * 2015-08-04 2019-11-06 株式会社ディーアンドエムホールディングス Wireless audio system
JP2017041756A (en) * 2015-08-19 2017-02-23 ヤマハ株式会社 Audio system and audio apparatus
JP6668636B2 (en) * 2015-08-19 2020-03-18 ヤマハ株式会社 Audio systems and equipment
JP6451596B2 (en) 2015-10-30 2019-01-16 ヤマハ株式会社 Audio apparatus and audio equipment
JP6572737B2 (en) * 2015-10-30 2019-09-11 ヤマハ株式会社 Audio system control program and control terminal device
CN105430570B (en) * 2015-11-27 2018-03-23 北京小鸟听听科技有限公司 Player method and playing device
CN106935251B (en) * 2015-12-30 2019-09-17 瑞轩科技股份有限公司 Audio playing apparatus and method
CN105898655B (en) * 2016-06-08 2017-05-17 维沃移动通信有限公司 Signal output circuit and mobile terminal
ES2913204T3 (en) * 2017-02-06 2022-06-01 Savant Systems Inc A/V interconnect architecture that includes an audio downmix transmitter A/V endpoint and distributed channel amplification
CN106878915B (en) * 2017-02-17 2019-09-03 Oppo广东移动通信有限公司 Control method, device and the playback equipment and mobile terminal of playback equipment
CN106686519B (en) * 2017-03-09 2019-04-09 Oppo广东移动通信有限公司 The method, apparatus and terminal of the stereo pairing of audio-frequence player device
CN107071655B (en) * 2017-03-20 2020-03-27 Oppo广东移动通信有限公司 Method and device for configuring stereo output, audio playing equipment and mobile terminal
CN107340990B (en) * 2017-06-30 2021-01-01 北京小米移动软件有限公司 Playing method and device
WO2019049245A1 (en) * 2017-09-06 2019-03-14 ヤマハ株式会社 Audio system, audio device, and method for controlling audio device
CN109343822A (en) * 2018-10-31 2019-02-15 广州虎牙科技有限公司 A kind of determination method, apparatus, equipment and the storage medium of audio frequency apparatus
US20200183640A1 (en) * 2018-12-06 2020-06-11 Sonos, Inc. Selection of Playback Devices
CN112073890B (en) * 2020-09-11 2022-08-02 成都极米科技股份有限公司 Audio data processing method and device and terminal equipment
CN114501296A (en) * 2022-01-28 2022-05-13 联想(北京)有限公司 Audio processing method and vehicle-mounted multimedia equipment

Family Cites Families (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5005201A (en) * 1989-02-14 1991-04-02 Rca Licensing Corporation Apparatus and method thereof for improvement of stereophonic sound
US20020124097A1 (en) * 2000-12-29 2002-09-05 Isely Larson J. Methods, systems and computer program products for zone based distribution of audio signals
US7571014B1 (en) * 2004-04-01 2009-08-04 Sonos, Inc. Method and apparatus for controlling multimedia players in a multi-zone system
US8234395B2 (en) * 2003-07-28 2012-07-31 Sonos, Inc. System and method for synchronizing operations among a plurality of independently clocked digital data processing devices
KR100512473B1 (en) * 2004-11-30 2005-09-02 이종성 Network audio speaker system
US20080077261A1 (en) * 2006-08-29 2008-03-27 Motorola, Inc. Method and system for sharing an audio experience
CN101785182A (en) * 2007-08-16 2010-07-21 汤姆逊许可公司 network audio processor
JP5332243B2 (en) * 2008-03-11 2013-11-06 ヤマハ株式会社 Sound emission system
JP2011176581A (en) * 2010-02-24 2011-09-08 Sanyo Electric Co Ltd Speaker device, speaker system, and acoustic system

Also Published As

Publication number Publication date
CA3184770A1 (en) 2012-10-11
CN106375921A (en) 2017-02-01
CA3032479A1 (en) 2012-10-11
CA3032479C (en) 2023-02-21
CA2947275A1 (en) 2012-10-11
CN106375921B (en) 2020-07-14
JP5953366B2 (en) 2016-07-20
CN103597858B (en) 2016-10-05
CN103597858A (en) 2014-02-19
CA2832542A1 (en) 2012-10-11
CA2832542C (en) 2016-12-13
JP2014519726A (en) 2014-08-14

Similar Documents

Publication Publication Date Title
US11540050B2 (en) Playback device pairing
US11531517B2 (en) Networked playback device
US11758327B2 (en) Playback device pairing
CA2947275C (en) Multi-channel pairing in a media system
US20230283953A1 (en) Playback Device Pairing

Legal Events

Date Code Title Description
EEER Examination request

Effective date: 20161102