WO2016118314A1 - System and method for changing a channel configuration of a set of audio output devices


Info

Publication number
WO2016118314A1
Authority
WO
WIPO (PCT)
Prior art keywords
audio output
audio
output devices
network
channel
Prior art date
Application number
PCT/US2016/012088
Other languages
French (fr)
Inventor
Johan Le Nerriec
Judah John MENTER
Daniel Tai
Matthew Daniel SMITH
Original Assignee
Qualcomm Incorporated
Priority date
Filing date
Publication date
Priority claimed from US14/601,510 external-priority patent/US9723406B2/en
Priority claimed from US14/601,585 external-priority patent/US9578418B2/en
Application filed by Qualcomm Incorporated filed Critical Qualcomm Incorporated
Priority to EP16705322.2A priority Critical patent/EP3248398A1/en
Priority to CN201680006508.XA priority patent/CN107211211A/en
Publication of WO2016118314A1 publication Critical patent/WO2016118314A1/en

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04S STEREOPHONIC SYSTEMS
    • H04S 7/00 Indicating arrangements; Control arrangements, e.g. balance control
    • H04S 7/30 Control circuits for electronic adaptation of the sound field
    • H04S 7/301 Automatic calibration of stereophonic sound system, e.g. with test microphone
    • H04S 7/302 Electronic adaptation of stereophonic sound system to listener position or orientation
    • H04S 7/303 Tracking of listener position or orientation
    • H04S 2400/00 Details of stereophonic systems covered by H04S but not provided for in its groups
    • H04S 2400/13 Aspects of volume control, not necessarily automatic, in stereophonic sound systems
    • H04R LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R 3/00 Circuits for transducers, loudspeakers or microphones
    • H04R 3/12 Circuits for transducers, loudspeakers or microphones for distributing signals to two or more loudspeakers
    • H04R 5/00 Stereophonic arrangements
    • H04R 5/04 Circuit arrangements, e.g. for selective connection of amplifier inputs/outputs to loudspeakers, for loudspeaker detection, or for adaptation of settings to personal preferences or hearing impairments
    • H04R 27/00 Public address systems
    • H04R 2227/00 Details of public address [PA] systems covered by H04R27/00 but not provided for in any of its subgroups
    • H04R 2227/003 Digital PA systems using, e.g. LAN or internet
    • H04R 2227/005 Audio distribution systems for home, i.e. multi-room use
    • H04R 2420/00 Details of connection covered by H04R, not provided for in its groups
    • H04R 2420/07 Applications of wireless loudspeakers or wireless microphones
    • H04R 2430/00 Signal processing covered by H04R, not provided for in its groups
    • H04R 2430/01 Aspects of volume control, not necessarily automatic, in sound systems

Definitions

  • Audio systems exist that utilize network connected audio output devices (e.g., speakers). In such systems, multiple connected speakers may be used to output the same content.
  • FIG. 1 illustrates a network-based audio output system that is capable of dynamic configuration and/or calibration, according to various embodiments.
  • FIG. 2 illustrates an audio output device that is capable of being selected and operated as a leader device according to various embodiments.
  • FIG. 3 illustrates an example of a controller device for use with various embodiments.
  • FIG. 4 illustrates a mobile computing device on which various embodiments may be implemented.
  • FIG. 5 illustrates an audio output device on which various embodiments may be implemented.
  • FIG. 6 illustrates a method for dynamically determining and implementing channel configurations for a network-based audio system, according to various embodiments.
  • FIG. 7 illustrates a method for operating an audio output device as a leader device when distributing audio content to other audio output devices on a network, according to various embodiments.
  • FIG. 8 illustrates a method for calibrating an output of multiple audio output components on a network based on a relative position of a user, according to various embodiments.
  • FIG. 9 illustrates a method for calibrating an audio output device based on a position of a user, in accordance with various embodiments.
  • FIG. 10 illustrates a method for implementing a user interface to initiate dynamic configuration of a network-based audio system, according to various embodiments.
  • FIG. 11 illustrates a user interface for enabling speaker selection and
  • a set of audio output devices may be established and configured to output channel specific audio. Once established, the channel configuration may be changed and updated in response to events such as changes to user preference, or the addition or subtraction of audio output devices on the network. In some embodiments, the reconfiguration may be performed on the fly while audio content is being outputted by the devices.
  • the audio output devices may be controlled so that the output of the device is calibrated to the position of the user.
  • the arrival time and/or volume of the audio may be calibrated so that the user experiences the output from the perspective of being equally separated from each audio output device, with each audio output device providing a uniform audio output.
  • Embodiments described herein provide for a system, method, and device for outputting audio content over a network.
  • multiple audio output devices that are connected on a network to form an audio output set for receiving and outputting at least a portion of an audio content originating from a source.
  • a controller device may determine a channel configuration for the audio output set.
  • the channel configuration may include a channel assignment for each audio output device that is connected on the network to form the audio output set.
  • the controller device may respond to an event or condition by changing the channel configuration.
  • a controller device determines a channel configuration for the audio output set.
  • the channel configuration may include a channel assignment for each audio output device that is connected on the network to form the audio output set.
  • the controller device receives audio content from a source, and outputs a channel portion of the audio content based on a channel assignment of the given audio output device.
  • the controller device communicates at least another portion of the audio content to the other audio output device. Additionally, the controller responds to an event or condition by changing the channel configuration and then outputting the channel portion of the audio content based on the new channel assignments.
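  • The distribution step described above can be sketched roughly as follows: the device outputs its own assigned channel portion and forwards the remaining portions to peers. The frame layout, the `send`/`play` callables, and the device names are illustrative assumptions, not structures prescribed by the disclosure.

```python
# Hedged sketch: a device plays its own assigned channel portion and
# forwards the remaining channel portions to other output devices.
# Transport and playback are stand-in callables, not APIs from the patent.

def distribute(frame, assignments, local_device, send, play):
    """frame: dict mapping channel name -> samples for one audio frame.
    assignments: dict mapping device name -> its assigned channel name."""
    for device, channel in assignments.items():
        portion = frame[channel]
        if device == local_device:
            play(portion)           # output the local channel portion
        else:
            send(device, portion)   # forward the peer's channel portion

played, sent = [], []
distribute(
    {"left": [1, 2], "right": [3, 4]},
    {"leader": "left", "peer": "right"},
    "leader",
    send=lambda dev, p: sent.append((dev, p)),
    play=played.append,
)
```

On a configuration change, only the `assignments` mapping needs to be replaced; the same loop then routes portions according to the new channel assignments.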
  • each of multiple audio output devices is triggered to generate an acoustic identification signal.
  • a controller device may perform a comparison of the acoustic identification signal from each of the multiple audio output devices. The output from one or multiple audio output devices is controlled based on the comparison.
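  • One way such a comparison could work, purely as an illustrative sketch: if the controller records the arrival time of each device's acoustic identification signal, the differences indicate how much each device's output should be adjusted. The device names and timing values are hypothetical.

```python
# Hedged sketch: comparing acoustic identification signals by their
# measured arrival times at the controller. The patent leaves the exact
# comparison open; arrival-time offsets are one illustrative metric.

def compare_signals(arrival_times_s):
    """Return each device's extra propagation delay relative to the
    earliest-arriving identification signal."""
    earliest = min(arrival_times_s.values())
    return {dev: t - earliest for dev, t in arrival_times_s.items()}

offsets = compare_signals({"spk_a": 0.010, "spk_b": 0.013})
```

The resulting offsets could then drive per-device output control, e.g. delaying the nearer device so both signals reach the listener together.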
  • a speaker is intended to mean an audio output device, such as a network-connected audio output device.
  • a speaker includes a dedicated device that outputs audio such as music.
  • a speaker includes a multifunctional device, such as a mobile device or tablet, which may output video, capture and store audio content, enable user interaction and/or perform numerous other actions.
  • Various embodiments described herein provide that methods, techniques, and actions performed by a computing device are performed programmatically, or as a computer-implemented method. Programmatically means through the use of code, or computer-executable instructions. A programmatically performed step may or may not be automatic.
  • a programmatic module or component may include a program, a subroutine, a portion of a program, or software or a hardware component capable of performing one or more stated tasks or functions.
  • a module or component may exist on a hardware component independently of other modules or components.
  • a module or component may be a shared element or process of other modules, programs, or machines.
  • various embodiments described herein may be implemented through instructions that are executable by one or more processors. These instructions may be carried on a computer-readable medium.
  • Machines shown or described with figures below provide examples of processing resources and computer-readable mediums on which instructions for implementing embodiments of the invention may be carried and/or executed.
  • the numerous machines shown with embodiments of the invention include processor(s) and various forms of memory for holding data and instructions.
  • Examples of computer-readable mediums include permanent memory storage devices, such as hard drives on personal computers or servers.
  • Other examples of computer storage mediums include portable storage units, such as CD or DVD units, flash or solid state memory (such as carried on many cell phones and consumer electronic devices), and magnetic memory.
  • Computers, terminals, and network-enabled devices are all examples of machines and devices that utilize processors, memory, and instructions stored on computer-readable mediums. Additionally, embodiments may be implemented in the form of computer programs, or a computer usable carrier medium capable of carrying such a program.
  • FIG. 1 illustrates a network-based audio output system 100 that is capable of dynamic configuration and/or calibration, according to various embodiments.
  • the audio output system 100 may be implemented in a local or closed network 101, such as provided by a home or local area network.
  • the network 101 may include multiple connected devices, including a controller device 110 and multiple network enabled audio output devices 120, 122, 124, and 126.
  • the network 101 includes an access point 102 for providing a wireless connectivity medium.
  • each of the controller device 110 and the audio output devices 120, 122, 124, 126 may operate under IEEE Specifications of 802.11(a), 802.11(b), 802.11(g), 802.11(n), 802.11(ac), or the like (collectively “Wi-Fi,” “Wi-Fi network,” or “802.11 protocol”). Still further, in some implementations, the controller device 110 and/or some or all of the audio output devices 120, 122, 124, 126 are capable of wireless peer-to-peer communications, such as provided by Wi-Fi Direct. Still further, some or all of the audio output devices 120, 122, 124, and 126 may be able to communicate directly with other devices on the network as peers.
  • the individual audio output devices 120, 122, 124, and 126 may communicate using a direct, wireless peer-to-peer communication protocol, such as provided by Wi-Fi Direct. Still further, in some variations, one or more of the audio output devices 120, 122, 124, and 126 may utilize a connectivity medium such as provided through an Ethernet connection or other network-based wired connection.
  • the audio output devices 120, 122, 124, and 126 may be connected and positioned in a physical region of the network 101, based on preference of a user.
  • a physical region of the network 101 may correspond to a dwelling, or alternatively, to a room or space within the dwelling.
  • an environment of the network 101 may correspond to a home network in which multiple speakers or other audio output devices are provided with network connectivity for purposes of outputting audio content selected by the user.
  • the user may selectively position individual connected speakers about a room to enhance the user's enjoyment of rendered audio content.
  • the audio output devices 120, 122, 124, and 126 may be heterogeneous in nature, meaning that the audio output devices 120, 122, 124, and 126 may have different manufacturers, capabilities, resources and/or purposes.
  • one or more of the audio output devices 120, 122, 124, and 126 may correspond to a television, for which audio output is not a primary purpose.
  • One or more of the audio output devices 120, 122, 124, and 126 may also include programming or other logic to enable that audio output device to communicate with other devices on the network.
  • An example of such programming or logic includes the ALLPLAY platform, manufactured by QUALCOMM CONNECTED EXPERIENCES, which may be installed or otherwise provided through firmware on wireless speakers.
  • While in some examples the audio output devices 120, 122, 124, and 126 are speakers (or dedicated audio output devices), other variations provide for audio output devices 120, 122, 124, and 126 which serve multiple purposes, including televisions, desktop computers, or other multifunction audio output devices.
  • the controller device 110 operates to execute an application, software platform, or other programming logic in order to communicate with and control the audio output devices 120, 122, 124, and 126.
  • the controller device 110 may correspond to a mobile computing device, such as a multifunction cellular telephony/messaging device, tablet, hybrid device (so called "phablet"), or wearable computing device.
  • the controller device 110 may operate to control and configure the output of audio using the audio output devices 120, 122, 124, and 126. Any one of multiple audio distribution configurations may be used for purposes of outputting the audio content on multiple audio output devices 120, 122, 124, and 126 in accordance with a dynamically selected channel configuration. In some embodiments, the controller device 110 may be operated modally in order to select from multiple possible audio distribution configurations.
  • the controller device 110 distributes audio content ("AC") 113 directly or indirectly to each of the multiple audio output devices 120, 122, 124, or 126.
  • the controller device 110 is the source of the audio content 113 being distributed.
  • the audio content 113 may correspond to media files ("MF") 103 that are accessed from a media library 105 of the user.
  • the media library 105 may be local to the controller device 110, distributed amongst multiple devices on the network 101, or remote to the controller device 110.
  • the media library 105 may be stored on other devices (including one or more of the audio output devices 120, 122, 124, or 126) or resources of the network 101, and the controller device 110 may communicate with another device on the network 101 (e.g., home computer, cable box, etc.) in order to retrieve media files 103 from the media library 105.
  • the controller device 110 may access network services ("NS") 107 for the audio content 113, such as online media sites (e.g., PANDORA, SPOTIFY, GOOGLE PLUS, etc.).
  • the controller device 110 may also generate audio content 113 from other content sources ("CS") 109, such as cable, satellite or over-the-air broadcasts.
  • the controller device 110 may distribute the audio content 113 originating from multimedia content that is rendered on the device.
  • the controller device 110 may execute different applications which generate multimedia content (e.g., games), and audio from these active applications may be distributed as the audio content 113.
  • the controller device 110 may access another device or resource on the network 101, such as a device that communicates with one or more of the audio output devices 120, 122, 124, or 126 through the access point 102.
  • the controller device 110 may use peer-to-peer wireless communications (e.g., Wi-Fi Direct) in order to directly transmit the audio content 113 to each of the desired audio output devices 120, 122, 124, and 126 on the network 101.
  • the controller device 110 distributes the audio content 113 through one of the audio output devices 120, 122, 124, 126 that implements functionality for operating as the leader of the active output devices on the network 101.
  • the controller device 110 may select one of the audio output devices 120, 122, 124, 126 to serve as the leader device.
  • the audio output device 120 that is selected as the leader may receive the audio content 113 from the controller device 110 (which may access the media library 105, network service 107 or content source 109) for distribution to the other audio output devices 122, 124, 126.
  • the audio output device 120 may receive the audio content 113 from another source (e.g., another device of network 101), under direction or control of the controller device 110, for distribution to the other audio output devices 122, 124, 126.
  • either the controller device 110 or the audio output device 120 that operates as the leader may channel-filter or augment the audio content 113 for transmission to the respective audio output devices.
  • the audio content 113 may be delineated into multiple channel portions 121, and each channel portion 121 of the audio content 113 is communicated to an assigned audio output device 120, 122, 124, and 126.
  • the audio content 113 may be pre-structured into channeled components, and the augmented audio ("aug. audio") 133 may be transmitted to the other audio output devices 122, 124, 126 where the augmented audio 133 is filtered into a corresponding channel portion 121.
  • the controller device 110 includes an audio distribution logic 112, a dynamic selection logic 114, a channel configuration logic 116, and a calibration logic 118. Furthermore, in an example of FIG. 1, one or more of the audio output devices 120, 122, 124, and 126 may be selected to implement the functionality of the leader, which may include components and functionality (e.g., as described with an example of FIG. 2). The functionality described with either the controller device 110 or the audio output device 120 that is selected as the leader may be interchangeable amongst the two devices (or amongst another device that may be substituted as the leader for the audio output device 120).
  • the controller device 110 may include functionality for implementing channel filtering or channel augmentation (e.g., as shown in FIG. 2).
  • the audio output device 120 may operate as the leader and also include one or more of the components of the controller device 110, such as one or more of the dynamic selection logic 114, channel configuration logic 116, or calibration logic 118.
  • the controller device 110 includes the channel configuration logic 116 for performing operations to determine a channel configuration 115 of the set of audio output devices 120, 122, 124, and 126.
  • the channel configuration 115 may be determined by (i) a number of available audio output devices 120, 122, 124, and 126, (ii) a configuration scheme 117 or layout that is based on preference and/or the number of available audio output devices 120, 122, 124, and 126, and/or (iii) the relative positioning of each audio output device 120, 122, 124, and 126 within the space or environment of the network 101.
  • the channel configuration 115 may specify channel assignments 123 for each audio output device 120, 122, 124, and 126, given a desired configuration scheme 117 and the relative positioning of the audio output devices. Once determined, channel assignments 123 may be communicated to the audio output devices 122, 124, 126 as control or command data. Depending on implementation or mode of operation, the channel assignments 123 may be communicated directly from the controller device 110 or from the audio output device 120 that is acting as the leader. As described with various examples, the channel configuration logic 116 may dynamically re-determine and implement the channel configuration 115 based on the occurrence of conditions and events that affect usage of the audio output devices 120, 122, 124, and 126 on the network 101.
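  • As a minimal sketch of deriving channel assignments from relative positioning: devices could be ordered by their coordinates relative to the listening position and matched to the channels of the desired scheme. The (x, y) coordinates and the left-to-right mapping rule are illustrative simplifications, not mandated by the disclosure.

```python
# Hedged sketch: assign channels from device positions. Positions are
# (x, y) metres relative to the listening position; sorting by x and
# assigning channels left to right is an illustrative rule.

def assign_by_position(positions, channels):
    """Assign channels left-to-right by each device's x coordinate."""
    ordered = sorted(positions, key=lambda dev: positions[dev][0])
    return dict(zip(ordered, channels))

cfg = assign_by_position(
    {"spk_a": (1.5, 2.0), "spk_b": (-1.5, 2.0)},
    ["front_left", "front_right"],
)
```

The resulting mapping is what would be communicated to the devices as control or command data.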
  • the controller device 110 may have different modes of operation in order to implement an audio distribution configuration in which the audio distribution logic 112 directly distributes the audio content 113 to each of the audio output devices 120, 122, 124, and 126.
  • the audio distribution logic 112 of the controller device 110 may communicate either a full or partial stream to multiple audio output devices.
  • the controller device 110 may use the dynamic selection logic 114 to select one of the multiple audio output devices 120, 122, 124, 126 as a leader.
  • the determination to use the particular audio output device 120 as the leader may be made programmatically, based on, for example, available resources of the controller device 110 and/or preferences of the user. Various criteria may be used to select one audio output device 120 as the leader for the other audio output devices 122, 124, or 126 of the network 101.
  • the audio output device 120, 122, 124, and 126 that is selected to be the leader may be required to have a minimum set of resources, such as a minimum processing capability and/or the ability to establish multiple simultaneous peer-to-peer connections with other devices on the network 101.
  • the audio output device 120 that is selected as the leader may have the most or best of a desired resource or capability.
  • the audio output device 120 may be selected as the leader because the audio output device 120 satisfies a criterion of containing a digital signal processor ("DSP"), or because the audio output device 120 is deemed to have the greatest amount of available bandwidth as compared to the other audio output devices.
  • the controller device 110 may communicate a leader selection 111 to the selected audio output device 120, 122, 124, or 126.
  • the controller device 110 makes the leader selection 111 programmatically using, for example, the dynamic selection logic 114.
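  • The programmatic leader selection described above might be sketched as follows, combining the two criteria mentioned (presence of a DSP, greatest available bandwidth). The record fields and device names are illustrative assumptions.

```python
# Hedged sketch of programmatic leader selection: filter devices by a
# minimum-capability criterion (here, having a DSP), then prefer the
# device with the most available bandwidth. Field names are illustrative.

def select_leader(devices):
    """Pick the eligible device (has a DSP) with the most bandwidth."""
    eligible = [d for d in devices if d["has_dsp"]]
    return max(eligible, key=lambda d: d["bandwidth_mbps"])["name"]

leader = select_leader([
    {"name": "spk_a", "has_dsp": True, "bandwidth_mbps": 40},
    {"name": "spk_b", "has_dsp": False, "bandwidth_mbps": 90},
    {"name": "spk_c", "has_dsp": True, "bandwidth_mbps": 60},
])
```

Note that spk_b has the most bandwidth but fails the minimum-resource criterion, so spk_c is selected; this mirrors the two-stage criteria described above.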
  • the audio output device 120 receives the audio content 113 from a content source (CS) 109, and then distributes the audio content 113 as the channel portions 121 to each of the other audio output devices 122, 124, 126 of the network 101.
  • the source of the audio content 113 may, for example, correspond to controller device 110.
  • controller device 110 may generate the audio content 113 (e.g., gaming content) and/or store portions of the media library 105, such as a library of songs or albums, and the audio content 113 may correspond to a media file 103 from the media library 105.
  • controller device 110 may also serve as a source for audio content retrieved from both local network and remote sources.
  • the controller device 110 may access other media resource devices (e.g., home computer, cable box, etc.) on the network 101 in order to retrieve the media files 103 of the user's media library. Still further, the controller device 110 may access commercially available third party network services 107 for the audio content 113 (e.g., PANDORA, SPOTIFY, GOOGLE PLUS, etc.).
  • the content source 109 for the audio content 113 may be another device on the network 101, such as a device that communicates with the controller device 110 and/or output device 120 through the wireless access point 102. Still further, in other variations, the source of the audio content 113 may be another content source 109 (e.g., cable or over-the-air broadcast) available through the network 101.
  • the audio output device 120 processes the audio content 113 (audio data) to delineate the channel portions 121 from the audio content 113. Each channel portion 121 may then be communicated to the corresponding audio output device 122, 124, 126. The channel portion 121 for the audio output device 120 may be played using a local audio output resource, in concert with the playback of the channel portions 121 of the other audio output devices 122, 124, 126.
  • the channel configuration 115 may be dynamically determined on the fly, based on conditions or events detected on the network 101.
  • the controller device 110 may detect a particular network condition (e.g., limited bandwidth) and then output the channel configuration 115 to include an alternative set of channel assignments 123 for the respective audio output devices 120, 122, 124, and 126. Still further, the controller device 110 may receive input, or otherwise detect the addition or subtraction of an audio output device 122, 124, or 126, so as to affect a number of audio output devices 120, 122, 124, and 126 that are in use.
  • a change in the number of audio output devices 120, 122, 124, and 126 that are in use may also change the configuration scheme 117 (e.g., from 7.1 to 5.1) and/or require further changes to the channel assignment 123, in order to accommodate a different number of audio output devices 120, 122, 124, and 126 that are in use (or available for use) on the network 101.
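  • One hedged way to picture the scheme change driven by device count: keep a table of layouts and pick the largest layout the available devices can populate. The scheme table below is illustrative and far simpler than a real 7.1/5.1 layout catalogue.

```python
# Hedged sketch: pick a configuration scheme 117 from the number of
# devices in use, so adding or removing a device can upgrade or
# downgrade the layout. The scheme table is an illustrative assumption.

SCHEMES = {
    2: ["front_left", "front_right"],
    3: ["front_left", "center", "front_right"],
    6: ["front_left", "center", "front_right",
        "rear_left", "rear_right", "subwoofer"],
}

def pick_scheme(device_count):
    """Return the largest scheme the available devices can populate."""
    fitting = [n for n in SCHEMES if n <= device_count]
    return SCHEMES[max(fitting)] if fitting else []

scheme = pick_scheme(5)  # a 5-device set falls back to the 3-channel layout
```

With such a table, the removal of one device from a six-device set simply re-runs `pick_scheme`, and the new channel assignments 123 follow from the smaller layout.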
  • the ability of the controller device 110 to dynamically determine and implement channel configurations may enable, for example, playback of the audio content from some or all of the audio output devices 120, 122, 124, and 126 to continue substantially uninterrupted while one or more changes to the channel assignments 123 take place.
  • the controller device 110 may dynamically select the audio output device 120 that is the leader.
  • the determination of which audio output device 120 serves as the leader may be based on, for example, the available bandwidth for each audio output device 120, 122, 124, or 126 that satisfies one or more criteria for being the leader.
  • the modal operation of the controller device 110 in distributing the audio content 113 may also be dynamically changed.
  • the controller device 110 may switch from using one audio output device 120 as the leader to directly transmitting the audio content 113 (or channel portions 121 thereof) to each audio output device 120, 122, 124, and 126.
  • the selection of which audio output device 120, 122, 124, 126 serves as the leader may also be dynamic, based on factors such as the available bandwidth to the respective audio output devices 120, 122, 124, 126.
  • the controller device 110 includes the calibration logic 118.
  • the calibration logic 118 may operate to adjust output of the audio output devices 120, 122, 124, 126 to accommodate a relative position of the user in the physical space of the environment of the network 101.
  • the calibration logic 118 may operate to accommodate the proximity of the user to one or more of the audio output devices 120, 122, 124, and 126.
  • the calibration logic 118 may implement operations so that the audio experienced by the user at a given location is uniform from all directions.
  • the calibration logic 118 may implement adjustments 119 in the form of delays in individual audio output devices 120, 122, 124, and 126 so that the arrival time of audio transmissions from each of the respective audio output devices 120, 122, 124, 126 is near simultaneous with respect to the user, even though the user may be closer to one audio output device 120, 122, 124, 126 as compared to another. Still further, the calibration logic 118 may implement adjustments 119 in the form of volume adjustment for the individual audio output devices 120, 122, 124, 126 so that the volume experienced by the user from each of the audio output devices 120, 122, 124, 126 is the same, even when the user is closer to one audio output device as compared to another.
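  • The delay and volume adjustments described above reduce to simple geometry, sketched below. The speed-of-sound constant, the inverse-distance gain model, and the device distances are illustrative assumptions; the patent does not specify a particular model.

```python
# Hedged sketch: per-device delay/gain adjustments 119 computed from each
# device's distance to the user. Nearer devices are delayed so all audio
# arrives together, and attenuated so perceived volume is uniform.

SPEED_OF_SOUND_M_S = 343.0  # approximate speed of sound in air at 20 degC

def calibrate(distances_m):
    """Return (delay_s, gain) per device for simultaneous, uniform
    arrival at the user's position."""
    farthest = max(distances_m.values())
    calibration = {}
    for device, d in distances_m.items():
        delay_s = (farthest - d) / SPEED_OF_SOUND_M_S  # delay nearer devices
        gain = d / farthest                            # attenuate nearer devices
        calibration[device] = (delay_s, gain)
    return calibration

cal = calibrate({"front_left": 2.0, "front_right": 3.0, "rear": 4.0})
```

The farthest device gets zero delay and unity gain; every other device is delayed and attenuated in proportion to how much closer it sits to the user.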
  • FIG. 2 illustrates an audio output device that is capable of being selected and operated as a leader, according to various embodiments.
  • An audio output device 200 such as shown and described with an example of FIG. 2 may operate as the audio output device 120 in the example of FIG. 1.
  • the audio output device 200 includes an audio receiver 210, control logic 220, an audio output resource 230, and a device interface 240.
  • the control logic 220 may be coupled with, or include, channel filter 222 and/or channel augmentation 226.
  • the audio receiver 210 may receive audio content 201 from the controller device 110. Alternatively, the audio receiver 210 may receive the audio content 201 from another source, such as from an online source or from another device. The audio content 201 may be received either directly or indirectly (e.g., via an access point 102 or from the controller device 110).
  • the audio output device 200 may also receive channel configuration data 221 from the controller device 110 (shown via the device interface 240).
  • the audio output device 200 includes channel configuration logic 244 for determining channel configuration data 221 independently of any communication from another device.
  • the channel configuration logic 244 may determine channel configuration data 221 from, for example, user input 243, such as provided through the user's interaction with a user interface of the audio output device 200.
  • the channel configuration logic 244 may also determine channel configuration data 221 based on settings 245 or preferences of the user or device.
  • the audio receiver 210 may communicate the full stream of audio content ("full stream AC") 212 to the channel filter 222 of control logic 220.
  • the channel filter 222 filters the full stream of audio content 212 into channeled portions based on channel assignments defined by the channel configuration data 221.
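  • A minimal sketch of such channel filtering, assuming sample-interleaved frames (an illustrative convention, not one stated in the disclosure): each assigned channel index selects every n-th sample of the full stream.

```python
# Hedged sketch of a channel filter: split an interleaved stream into
# per-channel portions keyed by the configured channel assignments.
# Assumes frames of n interleaved samples, one per assigned channel.

def channel_filter(interleaved, assignments):
    """assignments: device name -> channel index into each frame."""
    n = len(assignments)
    return {
        device: interleaved[idx::n]
        for device, idx in assignments.items()
    }

portions = channel_filter([1, 2, 3, 4, 5, 6], {"left_spk": 0, "right_spk": 1})
```

Each resulting portion is then routed either to the local audio output resource or out through the device interface, per the assignments.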
  • audio output resource 230 receives the channel portion 215 for the channel assigned to the audio output device 200.
  • the portion of the outgoing audio content (AC) 217 for the channels assigned to the other audio output devices 122, 124, 126 may be transmitted to the other audio output devices via the device interface 240.
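The channel filtering described above can be pictured with a minimal sketch. This is not from the patent: the sample layout (interleaved PCM samples) and the function names are assumptions chosen for illustration of how a channel filter 222 might separate a full stream into a local channel portion 215 and outgoing portions 217 for other devices.

```python
# Hypothetical sketch of channel filtering (elements 222, 215, 217 of the
# patent). Assumes the full stream arrives as interleaved per-channel samples;
# real systems would operate on encoded frames, not raw Python lists.

def split_channels(interleaved, num_channels):
    """Return one sample sequence per channel from an interleaved stream."""
    return [interleaved[ch::num_channels] for ch in range(num_channels)]

def portion_for_assignment(interleaved, num_channels, assigned_channel):
    """Keep the portion for this device's assigned channel (channel portion
    215); the remaining channels would be forwarded to the other audio
    output devices (outgoing AC 217)."""
    channels = split_channels(interleaved, num_channels)
    local = channels[assigned_channel]
    remote = {ch: channels[ch] for ch in range(num_channels) if ch != assigned_channel}
    return local, remote
```

For a two-channel stream `[0, 1, 2, 3, 4, 5]`, a device assigned channel 1 would keep `[1, 3, 5]` and forward `{0: [0, 2, 4]}`.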
  • the audio output device 200 may implement channel augmentation 226.
  • Channel augmentation 226 may structure the audio content 212 into an augmented stream 219 that may be transmitted to the other audio output devices 122, 124, 126 via the device interface 240.
  • the augmented stream 219 may be filtered for an appropriate channel at the corresponding audio output device 122, 124, 126, which coincides with the point of output for the particular channel output.
  • the device interface 240 may communicate augmented stream 219, which may be filtered for a given channel. In this way, the channel augmentation 226 may provide an alternative to filtering the audio content in advance of transmission.
  • the device interface 240 may include programming or logic to enable audio output device 200 to be interconnected and operable with multiple other devices of different kinds on the network 101.
  • the device interface 240 includes an application program interface provided through, for example, ALLPLAY, manufactured by QUALCOMM CONNECTED EXPERIENCES.
  • the audio output device 200 includes functionality for triggering or implementing calibration control 250.
  • the calibration control 250 receives calibration input 249 from another device, such as from controller device 110.
  • controller device 110 includes resources and logic for receiving input that is indicative of calibration variations, and further includes resources and logic to determine calibration actions that may be taken on one or more of the audio output devices 120, 122, 124, 126 in order to calibrate the audio output for the location of the user.
  • the calibration actions serve to affect an audio output experienced by the user, with specific consideration for a relative proximity of the user to individual audio output devices 120, 122, 124, 126 of the network 101.
  • the calibration actions of the calibration control 250 may include delay control 251.
  • the control logic 220 may process and communicate the delay control 251 to other audio output devices 122, 124, 126 via the device interface 240.
  • Another example of calibration actions of calibration control 250 includes volume control 253.
  • the control logic 220 may communicate the volume control 253 to the other audio output devices via the device interface 240.
  • FIG. 3 illustrates an example of a controller device 300, according to various embodiments.
  • the controller device 300 (which may correspond to the controller device 110) may be implemented using software that executes on a mobile computing device, such as a device that may be carried by a person within the space or physical region of the network 101.
  • the controller device 300 may correspond to a device such as a cellular telephony/messaging device (e.g., feature phone), tablet or hybrid device, wearable computing device, or laptop.
  • the controller device 300 operates to receive input information 301 for determining (i) a number of audio output devices 120, 122, 124, 126 that are connected on the network 101, and (ii) the location of each audio output device 120, 122, 124, 126 with respect to a given space of coverage within the network 101.
  • the software that is implemented on the controller device 300 may correspond to, for example, an application, a suite of applications, or alternatively to an operating system level functionality.
  • the controller device 300 may share an application framework or interface with other devices of the network.
  • each of the controller device 300 and the various audio output devices 120, 122, 124, 126 that are employed on the network 101 may implement a media platform, such as provided by ALLPLAY.
  • the controller device 300 operates to detect and process transmissions for purpose of estimating a proximity of the controller device to individual audio output devices 120, 122, 124, 126 that are operating on the network 101. With such proximity information, the controller device 300 may operate to calibrate an output of one or more of the audio output devices 120, 122, 124, 126 on the network 101.
  • the controller device 300 includes a user interface 310, audio output device control logic ("AOD control logic") 320, device position logic 330, and an audio output interface 340.
  • the user interface 310 may display prompts that guide the user into providing input that identifies basic input information 301 about the audio output devices 120, 122, 124, 126 employed on the network 101.
  • the user interface 310 may display a virtualized room or space within the dwelling, and provide features that enable the user to indicate, among other information, (i) a number of audio output devices 120, 122, 124, 126 employed on the network 101, and (ii) a general location for a given audio output device 120, 122, 124, 126, which may be labeled.
  • the user interface 310 may also execute to prompt the user to provide input information 301 that identifies additional information about the audio output devices, such as a manufacturer, capability, or connectivity status.
  • the user interface 310 may output device position information 311, which may identify the number of audio output devices and their relative position in a space represented through the user interface 310.
  • the device position logic 330 may receive the position information 311, and optionally generate one or more response queries 313 that may configure content on the user interface 310 to, for example, prompt the user to provide additional input information 301.
  • the response queries 313 may prompt the user to provide additional input information 301 that may approximate the length or total distance between the audio output devices 120, 122, 124, 126 on the network 101, so as to provide dimensionality to the virtualized representation of the space within the network. Still further, the response query 313 may prompt the user to specify audio output devices 120, 122, 124, 126 for different rooms of a dwelling of the network 101. More generally, the response query 313 may prompt the user interface 310 to display content for enabling the user to define different rooms or spaces of the dwelling covered by the network 101.
  • the input information 301 may prompt the user into entering information corresponding to (i) group size information 309, corresponding to a number of audio output devices on the network 101, and (ii) device position information 311, which identifies a general or relative location of audio output devices 120, 122, 124, 126 within the space of the network 101 (e.g., within the individual rooms). Still further, while some embodiments provide for the user interface 310 to prompt the user for input information 301, other embodiments provide for the user interface 310 to guide the user into selecting one or more configurations affecting the audio output devices 120, 122, 124, 126, including input for selecting channel configuration 333.
  • the device position logic 330 may operate to determine a set of the channel configurations 333 based at least in part on the group size information 309 and the device position information 311 of the individual audio output devices 120, 122, 124, 126.
  • the channel configuration 333 may specify a speaker configuration layout ("C. Lay") 337, such as a 3-, 5-, or 7-speaker (or more) surround sound layout, or a Dolby 5.1 or 7.1 speaker layout.
  • the channel configurations 333 for the audio output devices 120, 122, 124, 126 may include channel assignments 339 ("Chan. Ass. 339") for individual audio output devices.
  • the configuration layout 337 may be based on one or more criterion, such as the number of audio output devices 120, 122, 124, 126 (e.g., provided with group size information 309) and/or the positioning of the audio output devices 120, 122, 124, 126 (e.g., as specified from device position information 311).
  • configuration layout 337 may be selected by default.
  • the user may be provided a selection feature via the user interface 310 in order to select a particular configuration layout 337.
  • a configuration library 329 may retain information about different possible configuration layouts 337, and provide a mechanism for selecting one or more configuration layouts 337 based on the group size information 309 and/or the device position information 311 of each audio output device 120, 122, 124, 126.
  • the device position information 311 of each audio output device 120, 122, 124, 126 may also be indicated by input information 301 received via the user interface 310, as well as by other input from the user (e.g., input that is indicative of a preference of the user).
  • the channel assignments 339 may be made programmatically, based on, for example, the configuration layout 337, the group size information 309, and/or device position information 311 of the audio output devices 120, 122, 124, 126 in the space of the dwelling.
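A programmatic channel assignment of the kind just described might be sketched as follows. This is an illustrative assumption, not the patent's algorithm: role names, ideal coordinates, and the greedy nearest-device matching are all hypothetical choices standing in for channel assignments 339 derived from a configuration layout 337 and device position information 311.

```python
import math

# Hypothetical sketch: match each role of a selected configuration layout
# (configuration layout 337) to the nearest reported device position
# (device position information 311). Coordinates are illustrative.

STEREO_LAYOUT = {"front-left": (-1.0, 1.0), "front-right": (1.0, 1.0)}

def assign_channels(layout, device_positions):
    """Greedily assign each layout role to the closest unassigned device."""
    assignments, free = {}, dict(device_positions)
    for role, ideal in layout.items():
        device = min(free, key=lambda d: math.dist(free[d], ideal))
        assignments[role] = device
        del free[device]
    return assignments
```

With positions `{"A": (-0.9, 1.1), "B": (1.2, 0.8)}`, the sketch assigns "A" to front-left and "B" to front-right.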
  • the channel configuration 333 may be communicated to the audio output interface 340.
  • the audio output interface 340 may provide an application programming interface that enables the controller device 300 to communicate with other connected devices of the network 101.
  • the audio output interface 340 may be used for wireless peer-to-peer communications, such as provided through a Wi-Fi Direct medium.
  • the audio output interface 340 communicates the channel configurations 333 to the audio output device 120, 200 that is selected to be the leader for a particular session on the network.
  • the controller device 300 includes functionality for calibrating an output of the audio output devices 120, 122, 124, 126 on the network 101 based on a location of the user at a given moment. As the location of the user changes, the controller device 300 may implement functionality to dynamically control an output of individual audio output devices 120, 122, 124, 126 on the network 101, so that the audio experience of the user equally reflects the output from individual audio output devices.
  • the controller device 300 includes an acoustic input interface 306, a timing analysis component 312, and the audio output device control logic 320.
  • the audio output device control logic 320 may include a delay (or latency) control 322 and volume control 324.
  • the acoustic input interface 306 may include a programming component that interfaces with a microphone of a mobile computing device on which controller device 300 is implemented.
  • the acoustic input interface 306 may be configured to detect acoustic reference transmissions ("AREFTR") 361 from each of the active audio output devices 120, 122, 124, 126 on the network 101.
  • the acoustic input interface 306 may include logic that recognizes, for example, a predetermined characteristic of the acoustic reference transmissions 361, such as a signal pattern.
  • each audio output device 120, 122, 124, 126 transmits a locally unique acoustic reference transmission 361, signaling an identifier for the transmitting device.
  • the acoustic reference transmission 361 of each audio output device 120, 122, 124, 126 may be in the audible or inaudible range.
  • the acoustic reference transmission 361 of each audio output device 120, 122, 124, 126 is communicated at a frequency range that is detectable to a microphone of the mobile computing device on which the controller device 300 is provided. Additionally, each of the audio output devices 120, 122, 124, 126 communicates a corresponding acoustic reference transmission 361, representing a portion (e.g., a frame or series of frames) of an audio content (e.g., song) that is outputted from each of the respective audio output devices.
  • the acoustic input interface 306 may include logic to detect the acoustic reference transmission 361 from each of the audio output devices 120, 122, 124, 126. The acoustic input interface 306 may also compare the arrival time 363 of each of the acoustic reference transmissions 361 in order to determine a delay or other difference between the arrival times of the acoustic reference transmissions from different audio output devices 120, 122, 124, 126 on the network 101.
  • embodiments recognize that it takes sound slightly less than 1 millisecond to travel 1 foot, and that if the user moves by relatively small amounts (e.g., one foot), a detectable delay may result that affects the quality of the user experience in listening to the collective audio output from the audio output system 100.
  • the timing analysis component 312 may analyze the arrival time 363 of each of the acoustic reference transmissions 361 in order to detect sufficiently significant variations amongst the arrival times 363 that are attributed to the individual audio output devices 120, 122, 124, 126.
  • the difference in arrival times 363 may be indicative of user location, and more specifically, of a relative location or proximity of the user to individual audio output devices 120, 122, 124, 126 of the system.
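The timing analysis just described might be sketched as below. This is a hedged illustration, not the patent's implementation: the threshold value and function name are assumptions, standing in for the timing analysis component 312 detecting "sufficiently significant" variations among arrival times 363.

```python
# Hypothetical sketch of the timing analysis component 312: compare arrival
# times 363 (milliseconds) of the acoustic reference transmissions 361 and
# keep only differences large enough to matter. Sound covers roughly one
# foot in just under one millisecond, so sub-millisecond gaps track
# foot-scale changes in user position.

def timing_parameters(arrival_times_ms, threshold_ms=0.5):
    """Return per-device delay (ms) relative to the earliest arrival,
    discarding differences below the significance threshold."""
    earliest = min(arrival_times_ms.values())
    return {dev: t - earliest
            for dev, t in arrival_times_ms.items()
            if t - earliest >= threshold_ms}
```

For arrivals `{"A": 10.0, "B": 10.2, "C": 13.0}` the sketch reports only device "C" as significantly late (by 3 ms, i.e., roughly three feet more distant).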
  • a contextual analysis component 314 may also be provided.
  • the contextual analysis component 314 may determine contextual information from timing differentials (as identified by arrival times 363) of the acoustic reference transmissions 361 from the different audio output devices 120, 122, 124, 126. In some variations, the contextual analysis component 314 may detect a trend or event from the movement of the user within a network space or region. For example, the contextual analysis component 314 may reference known information about the location of individual audio output devices 120, 122, 124, 126 (which may be approximated from input information 301 and/or from location detection technology) in order to determine that the user has switched rooms.
  • one determination that may be made from the contextual analysis component 314 includes the determination to power down or up selected audio output devices 120, 122, 124, 126 based on the determined location of the user.
  • the contextual analysis component 314 may signal a contextual determination ("CD") 315 to the audio output device control logic 320, which in turn may send control commands ("CC") 321 to select audio output devices 120, 122, 124, 126 for purpose of powering those audio output devices up or down based on contextual determinations 315.
  • the contextual determinations 315 may include information that locates a particular audio output device in one room or floor and the user in another room or floor of the dwelling.
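One hedged way to picture the room-based power determination above is the sketch below. The room labels and command strings are illustrative assumptions; the patent does not specify how a contextual determination 315 is encoded into control commands 321.

```python
# Hypothetical sketch of a contextual determination ("CD") 315 turned into
# control commands ("CC") 321: devices in the user's current room are
# powered up, devices located in another room (or floor) are powered down.

def power_commands(user_room, device_rooms):
    """Map each device to a power control command based on room match."""
    return {dev: ("power_up" if room == user_room else "power_down")
            for dev, room in device_rooms.items()}
```

For example, with the user in the kitchen and devices `{"A": "kitchen", "B": "den"}`, device "A" would be powered up and device "B" powered down.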
  • timing analysis component 312 may generate a timing parameter ("TP") 317 which is indicative of a difference in the arrival times 363 of one or more acoustic reference transmissions 361.
  • the delay control 322 of the audio output device control logic 320 may utilize the timing parameter 317 to generate a delay command ("DC") 323 for one or more of the audio output devices 120, 122, 124, 126.
  • the proximate audio output device may be provided the delay command 323.
  • the delay command 323 may serve to slow down or delay the output of the proximate audio output device 120, 122, 124, 126.
  • the delay caused to the proximate audio output device 120, 122, 124, 126 may be based on the detected difference in the arrival times 363 of the acoustic reference transmissions 361 from the distal and proximate audio output devices 120, 122, 124, 126.
  • the delay command 323 may generate a delay that substantially equalizes the arrival times 363 of the proximate and distal audio output devices 120, 122, 124, 126.
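The equalizing delay described in the preceding bullets can be sketched minimally. This is an assumption-laden illustration, not the patent's method: it assumes arrival times are already measured in milliseconds and that the most distal (latest-arriving) device receives no delay.

```python
# Hypothetical sketch of delay commands 323: delay each more proximate
# device by the gap between its arrival time 363 and the latest arrival,
# so that outputs from all devices reach the user near simultaneously.

def delay_commands(arrival_times_ms):
    """Delay each device by its lead over the most distal device (ms)."""
    latest = max(arrival_times_ms.values())
    return {dev: latest - t for dev, t in arrival_times_ms.items()}
```

For arrivals `{"near": 10.0, "far": 12.5}`, the proximate device is delayed 2.5 ms and the distal device not at all, substantially equalizing arrival times at the user.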
  • the volume control 324 of the audio output device control logic 320 may use the timing parameter 317 to determine an adjustment to the volume of one or more of the audio output devices 120, 122, 124, 126, so that the user experiences the same volume from all of the audio output devices 120, 122, 124, 126 even as the user moves closer to one or more of the audio output devices and away from another.
  • the volume control 324 may generate a volume command ("VC") 325 to cause one of (i) a decreasing adjustment to the volume of a proximate audio output device 120, 122, 124, 126 in response to user movement, and (ii) an increasing adjustment to the volume of a distal audio output device 120, 122, 124, 126 in response to the user movement, or (iii) a combination of increasing and decreasing volume of the distal and proximate audio output device 120, 122, 124, 126 respectively, in response to user movement.
  • the particular volume command 325 that is selected may be based on, for example, a default setting or a user preference.
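The three volume-command options (i)–(iii) above, selected by a default setting or preference, might be sketched as follows. The mode names and the symmetric split are illustrative assumptions, not from the patent.

```python
# Hypothetical sketch of volume command ("VC") 325 selection: lower the
# proximate device, raise the distal device, or split the adjustment
# between them, depending on a default setting or user preference.

def volume_commands(proximate, distal, adjustment_db, mode="lower_proximate"):
    if mode == "lower_proximate":      # option (i): decrease proximate device
        return {proximate: -adjustment_db}
    if mode == "raise_distal":         # option (ii): increase distal device
        return {distal: +adjustment_db}
    # option (iii): combine a decrease and an increase, split evenly
    return {proximate: -adjustment_db / 2, distal: +adjustment_db / 2}
```

For example, `volume_commands("A", "B", 4.0)` lowers device "A" by 4 dB, while mode `"split"` would lower "A" by 2 dB and raise "B" by 2 dB.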
  • the audio output interface 340 may communicate one or more of the control command 321, delay command 323, and/or volume command 325 to the connected audio output devices 120, 122, 124, 126 of the network 101.
  • the delay command 323 and/or volume command 325 may be generated in response to continued polling or checking of user location as determined from the mobile computing device of controller device 300. In this way, the delay commands 323 and/or volume commands 325 may provide a mechanism to calibrate output characteristics of individual audio output devices 120, 122, 124, 126 on the network 101.
  • the calibration functionality enables the user to experience audio content as equal contributions from multiple audio output devices 120, 122, 124, 126 of the network 101 that are spaced non-equidistantly.
  • the calibration functionality also enables the user to experience audio content from multiple contributing audio output devices 120, 122, 124, 126 equally even when the user is in motion, or when the user is measurably closer to one audio output device over another.
  • the calibration functionality such as described may also enable the collective audio output to be equalized in contributions from the different audio output devices 120, 122, 124, 126 that are generating output on the network 101, despite differences existing in the manufacturing, quality, or capability of the individual audio output devices.
  • FIG. 4 illustrates a mobile computing device on which various embodiments may be implemented.
  • a mobile computing device 400 of FIG. 4 may be used to implement controller device 110, 300, such as described with an example of FIG. 1 and FIG. 3.
  • the mobile computing device 400 may include a microphone 410, a processor 420, a display 430, a memory 440, and a network interface 450.
  • the memory 440 may store instructions for implementing various functionality described with, for example, controller device 110, 300.
  • the memory 440 stores device control instructions ("Device Control Instruct.") 441, which may be executed by the processor 420 in connection with control and calibration functionality (e.g., as described with an example of FIG. 3).
  • the microphone 410 of the mobile computing device 400 receives the acoustic reference transmissions ("AREFTR") 361 from the individual audio output devices 120, 122, 124, 126.
  • the acoustic reference transmissions 361 may be received as encoded signals ("Enc. Sig.") 467.
  • the processor 420 may execute the device control instructions 441 in order to (i) collect the acoustic reference transmissions 361 from the different audio output devices 120, 122, 124, 126 for a given point in time, and (ii) implement timing analysis component 312 to determine timing parameters 317 reflecting differences in the arrival times 363 of the acoustic reference transmissions 361.
  • the processor 420 may execute the device control instructions 441 in order to determine calibration commands based at least in part on the determined timing parameters 317. Furthermore, the processor 420 may use the network interface 450 to communicate calibration commands to one or more audio output devices 120, 122, 124, 126 on the network 101 of the mobile computing device 400.
  • the calibration commands may include, for example, delay commands ("DC") 323, which cause specific audio output devices 120, 122, 124, 126 to selectively delay or otherwise adjust timing of their respective outputs in order to calibrate the arrival time of a given segment of audio content to the user.
  • the calibration commands may include volume commands ("VC") 325, which adjust the output volume of specific audio output devices 120, 122, 124, 126.
  • the processor 420 may also execute the device control instructions 441 in order to implement contextual analysis component 314 (as described with an example of FIG. 3) and make contextual determinations 315.
  • the contextual analysis component 314 may make the contextual determinations 315 based on contextual information, such as, for example, information defining the spacing, leveling, or segmentation (e.g., rooms) of the dwelling of network 101.
  • the memory 440 may also store user interface instructions ("UI Instruct.") 443.
  • the processor 420 may execute the user interface instructions 443 in order to generate a user interface ("UI") 431 on the display 430.
  • the user interface 431 may provide the user with prompts and other interfaces to facilitate the user in providing input information 301 about the audio output devices 120, 122, 124, 126 that are in use on the network 101.
  • the input information 301 received through the user interface 431 may include configuration input ("Config. Input") 433, including (i) the group size information 309 (FIG. 3), and (ii) the device position information 311.
  • the mobile computing device 400 determines the channel configurations 453 based at least in part on a configuration input of the user.
  • the configuration input may be determined through user interaction with the user interface 431 provided on the display 430.
  • the memory 440 may include position logic instructions ("Position Logic Instruct.") 445, which when executed by the processor 420, result in the processor 420 generating channel configurations 453.
  • channel configurations 453 may include one or more of the following: (i) an audio output device layout or scheme, and/or (ii) a channel assignment for each audio output device 120, 122, 124, 126 on the network 101, based on the selected device layout.
  • the position logic instructions 445 may determine channel configurations 453 based on additional information, such as input information 301 provided from the user, and/or information known about a particular type or model of one or more of the audio output devices 120, 122, 124, 126. For example, a user may enter information about a specific audio output device using the user interface 431, and the capability known for the particular audio output device may favor use of that device for a particular location or channel assignment.
  • FIG. 5 illustrates an audio output device on which various embodiments may be implemented.
  • an example of FIG. 5 illustrates an audio output device 500 that may also optionally operate as a leader device (e.g., 120), such as described in the example of FIG. 1.
  • the audio output device 500 includes a buffer 508, a processor 510, an audio output component 530, a network interface 540, and a memory 550.
  • the audio output device 500 includes a digital signal processor (DSP) 512.
  • the memory 550 may store instructions for execution by the processor 510, including interface instructions 551 and/or leader device instructions 553.
  • the processor 510 may execute interface instructions 551 in order to receive an incoming audio stream 505 at the buffer 508 via the network interface 540.
  • the processor 510 may (i) provide the audio stream 505 to the audio output component 530, which generates an audio content output ("ACO") 535, and (ii) transmit at least portions of the audio stream 505 to other audio output devices 120, 122, 124, 126.
  • the DSP 512 processes the audio stream 505 into audio output data 515, which may, for example, structure the audio stream 505 into delineable channeled portions that may be readily filtered at the playback location.
  • the audio output component 530 may receive audio output data 515 from the DSP 512. In variations, the audio output component 530 receives the audio stream 505 from the buffer 508.
  • the audio output component 530 may receive a channel portion 573 of the audio stream 505, based on the channel assignment as determined by the processor 510.
  • the audio output component 530 may transform the audio output data 515 (or audio stream 505) into sound which is emitted from the audio output device 500 onto the physical space of the network 101.
  • the processor 510 of the audio output device 500 may execute leader device instructions 553 in order to (i) determine and communicate channel assignments 555 to other audio output devices 120, 122, 124, 126 on the network 101, (ii) distribute the audio stream 505 (or portions thereof) to the other audio output devices 120, 122, 124, 126, and/or (iii) implement or otherwise communicate calibration actions 557 that affect the generation of audio output on the other audio output devices 120, 122, 124, 126.
  • the processor 510 may execute the leader device instructions 553 to utilize and distribute the enhanced form of the audio stream 505 from the DSP 512, shown as the audio output data 515.
  • the audio output device 500 may also execute the leader device instructions 553 to receive input information 501 from the controller device 110, 300.
  • the input information 501 may include group size information ("GS") 509, channel layout information ("CL") 517 (e.g., positioning of the individual audio output devices about a dwelling in accordance with Dolby 5.1/7.1 etc.), and configuration input ("CI") 559.
  • the input information 501 may be received by, for example, user input provided through an interaction with the user interface 310.
  • the channel assignments 555 may be determined by the controller device 110, 300 and received by the audio output device 500 through the network interface 540. In some variations, the channel assignments 555 may be determined by channel selection instructions 561 executing on the audio output device 500.
  • the channel selection instructions 561 may utilize input information 501, including (i) group size information 509, corresponding to a number of audio output devices 120, 122, 124, 126, (ii) the channel layout information 517, and (iii) a general configuration of the audio output devices 120, 122, 124, 126, provided as configuration input 559.
  • the channel selection instructions 561 utilize the various inputs in order to determine the channel assignments 555 for individual audio output devices 120, 122, 124, 126.
  • the inputs for the channel selection instructions 561 may be received over the network interface 540 from, for example, the mobile computing device 400 as the controller device 110, 300.
  • the audio output device 500 may distribute, as the leader, audio transmission data ("ATD") 525 to other audio output devices 120, 122, 124, 126 using the network interface 540.
  • the audio transmission data 525 may correspond to (i) the full audio stream 505, which may be filtered by the other audio output devices 120, 122, 124, 126 which receive the audio stream 505; (ii) the audio output data 515, which structures the full audio stream 505 into pre-determined and delineable channeled portions that may be readily filtered at the playback location; and/or (iii) separated channel portions 573, which may be individually transmitted to specific audio output devices based on the channel assignment of the audio output devices 120, 122, 124, 126.
  • the selection of a leader amongst the audio output devices 120, 122, 124, 126 may be a modal implementation, which may be dynamically implemented by the controller device 110, 300.
  • the audio output device 120, 122, 124, 126 that is the leader may be replaced by, for example, the source of the audio stream, the access point 102, the mobile computing device 400 acting as the controller device 110, 300 (which may also act as the source of the content), or another one of the audio output devices 120, 122, 124, 126.
  • the designation of one audio output device 120, 122, 124, 126 as the leader may be subject to change based on selection logic on the controller device 110, 300.
  • the controller device 110, 300 may execute selection logic to change the leader in response to an event or condition, such as presence of low bandwidth at the originally selected leader device.
  • the audio stream 505 may be received over the network interface 540, then buffered at buffer 508 and processed.
  • the input audio stream 505 may represent a full stream, without any delineation or segmentation of channels from the greater content.
  • the processor 510 (or DSP 512 if used) may execute filtering logic ("filter") 571 in order to create multiple channel portions 573 of the audio stream 505.
  • Each of the channel portions 573 may correspond to one of the channels of the determined channel configuration.
  • the audio stream 505 may be filtered into multiple channel portions 573, with each channel portion 573 being designated for a particular channel that is assigned to one of the audio output devices 120, 122, 124, 126 on the network 101.
  • the channel portions 573 of the audio stream 505 may then be transmitted to the other audio output devices 122, 124, 126 using the network interface 540.
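The leader's distribution options, including the three forms of audio transmission data described earlier (full stream, structured stream, or separated channel portions), might be sketched as below. The mode names and byte-string payloads are illustrative assumptions, not the patent's wire format.

```python
# Hypothetical sketch of how a leader might package audio transmission
# data ("ATD") 525 for a peer device, in one of three described forms:
#   "full"       - the peer filters the full audio stream 505 itself
#   "structured" - pre-delineated channeled portions (audio output data 515)
#   "separated"  - only the peer's assigned channel portion 573

def package_for_peer(full_stream, channel_portions, mode, peer_channel=None):
    if mode == "full":
        return full_stream
    if mode == "structured":
        return {"channels": channel_portions}
    if mode == "separated":
        return channel_portions[peer_channel]
    raise ValueError(mode)
```

For instance, in "separated" mode a peer assigned the right channel receives only that channel's portion; in "full" mode it receives the entire stream and filters locally.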
  • the audio output device 500 may receive calibration commands ("Cal. Comm.") 552 from the mobile computing device 400, and then implement the calibration commands 552 as calibration actions 557.
  • the calibration actions 557 may correspond to or be based on the calibration commands 552.
  • the calibration actions 557 may be implemented directly through distribution of the audio transmission data 525 or through communication with the other audio output devices 120, 122, 124, 126 via the network interface 540.
  • the audio output device 500 receives calibration related measurements and data from the mobile computing device 400, such as the timing parameter 317.
  • the audio output device 500 may also include logic to determine calibration actions 557 that include or correspond to calibration commands 552 (delay, volume, etc.), based on the measurements and data of the mobile computing device (e.g., difference in arrival times for a common audio segment, timing parameters, etc.).
  • FIG. 6 illustrates a method 600 for dynamically determining and implementing channel configurations for a network-based audio system, according to various embodiments.
  • FIG. 7 illustrates a method 700 for operating an audio output device as a leader device when distributing audio content to other audio output devices on a network, according to various embodiments.
  • FIG. 8 illustrates a method 800 for calibrating an output of multiple audio output components on a network based on a relative position of a user, according to various embodiments.
  • FIG. 9 illustrates a method 900 for calibrating an audio output device based on a position of a user, in accordance with various embodiments.
  • FIG. 10 illustrates a method 1000 for implementing a user interface to initiate dynamic configuration of a network-based audio system, according to various embodiments.
  • Example methods such as provided by FIG. 6 through FIG. 10 may be performed using components such as described with examples of FIG. 1 through FIG. 5. Accordingly, reference may be made to elements of FIG. 1 through FIG. 5 for purpose of describing suitable components for performing a step or sub-step being described.
  • a set of audio output devices 120, 122, 124, 126 for a given network 101 may be identified by a controller device 110, 300 (610).
  • the audio output devices 120, 122, 124, 126 may be identified by input information from a user.
  • input information 301 may be provided through the user interface 310 of the controller device 110, which may be provided on a mobile computing device 400.
  • the audio output devices 120, 122, 124, 126 that are connected on the network 101 may be identified programmatically, using, for example, object tracking and detection technology.
  • the audio output devices 120, 122, 124, 126 of the network 101 may be equipped with a receiver for receiving transmissions of ultrasonic acoustic waves.
  • the controller device 110, 300 may transmit the ultrasonic acoustic waves to the individual audio output devices 120, 122, 124, 126, and the audio output devices 120, 122, 124, 126 may include programming or logic to detect the ultrasonic acoustic waves.
  • the ultrasonic acoustic waves may provide for use of a dimensional parameter based on the received transmission.
  • Additional configuration information may also be determined for the identified audio output devices 120, 122, 124, 126, 200, 500 of the network 101 (612).
  • the additional configuration information may include a selected device layout (e.g., 5.1 arrangement, 7.1 arrangement, etc.), as well as a relative location of the individual audio output devices 120, 122, 124, 126, 200, 500 about a physical region of the network 101.
  • a user may specify the approximate location of individual audio output devices 120, 122, 124, 126, 200, 500 using a virtual interface of a generic room, provided through the user interface 310 of the controller device 110, 300.
  • the channel configuration for the audio output devices 120, 122, 124, 126 may be determined (620). As described with other examples, the channel configuration may specify channel assignment for identified audio output devices 120, 122, 124, 126. In some examples, the channel configuration may be determined from, for example, the mobile computing device 400 on which the controller device 110, 300 is implemented. In a variation, the channel configuration may be determined from the audio output device 120, 122, 124 or 126 that is selected as the leader by the user and/or controller device 110, 300. Still further, in another variation, the channel configuration may be determined from multiple components, including the controller device 110, 300 or audio output device 120, 122, 124 or 126 that operates as the leader.
  • an event or condition may be detected requiring a dynamic or on-the-fly change to the configuration of the audio output devices (630).
  • the occurrence of the condition or event may correspond to a new audio output device being introduced to the network 101 (632).
  • the condition or event may correspond to one of the existing audio output devices 120, 122, 124, 126 being removed or taken down from the network 101 (634).
  • there may be a change in a network bandwidth (636), resulting in some audio output devices 120, 122, 124, 126 having their bandwidth changed for better or worse as compared to other audio output devices 120, 122, 124, 126.
  • the audio content being played by the various audio output devices 120, 122, 124, 126 may change.
  • the channel configuration may merit change if the audio content shifts from having a relatively normal or low bit count to having a relatively high bit count.
  • the network condition or event may correspond to the user moving about a region where the audio output devices 120, 122, 124, 126 are in use and present (638).
  • some embodiments provide that when the user moves about, the movement of the user is detected, and one or more calibration actions may take place to equalize the experience of audio generated by the audio output devices 120, 122, 124, 126 on the network 101.
  • one response to the user moving in the physical region of the audio output devices 120, 122, 124, 126 may be that the channel configuration is altered to accommodate the movement of the user.
  • the controller device 110, 300 and/or audio output device 120, 122, 124 or 126 that is the leader may respond by changing the channel configuration (640). More specifically, in some implementations, the channel configuration may be changed by altering the various channel assignments (642) to accommodate more or fewer audio output devices 120, 122, 124, 126 (in the event that an audio output device is added to or subtracted from the network 101). Additionally, the channel configuration may be changed by altering a layout so as to favor the change to, for example, the number of the audio output devices 120, 122, 124, 126 (644). Still further, the change in channel configuration may be responsive to the addition or deletion of a channel assignment (646).
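The reassignment logic of steps 640-646 can be sketched as follows. This is a minimal illustration only: the layout tables, device names, and the policy of doubling extra devices onto existing channels are assumptions for the example, not details from the description.

```python
# Hypothetical sketch of dynamic channel reassignment (steps 640-646).
# Layout tables and device names are illustrative assumptions.

LAYOUTS = {
    2: ["front-left", "front-right"],                        # stereo
    4: ["front-left", "front-right", "rear-left", "rear-right"],
    6: ["front-left", "front-right", "center", "subwoofer",
        "rear-left", "rear-right"],                          # 5.1 arrangement
}

def assign_channels(devices):
    """Map each device to a channel from the largest layout the set can fill."""
    usable = [n for n in sorted(LAYOUTS) if n <= len(devices)]
    layout = LAYOUTS[usable[-1]] if usable else ["mono"]
    # Any extra devices double up on existing channels rather than go silent.
    return {dev: layout[i % len(layout)] for i, dev in enumerate(devices)}

# Initial configuration with three devices on the network.
config = assign_channels(["spk-120", "spk-122", "spk-124"])
# A device leaves the network; the configuration is recomputed on the fly.
config = assign_channels(["spk-120", "spk-122"])
```

A device joining or leaving simply triggers recomputation of the full assignment map, mirroring the idea that channel assignments may be created, eliminated, or altered as the device count changes.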
  • a leader of the audio output devices 120, 122, 124 or 126 is selected (710).
  • the selection of the audio output device 120, 122, 124 or 126 that is the leader may also be dynamic, in that some variations provide that the audio output device that is the leader may be selected and/or changed by the controller device 110, 300.
  • the audio output device 120, 122, 124 or 126 that is selected as the leader may change as a result of variations to the bandwidth available to that device (712), particularly as compared to the other audio output devices 120, 122, 124, 126 on the network 101 .
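As a sketch of the bandwidth-driven reselection in (712), the leader could simply be the device reporting the most available bandwidth. The function name and bandwidth figures below are hypothetical, not taken from the description.

```python
# Illustrative leader selection by available bandwidth (step 712).
# Bandwidth values are hypothetical measurements.

def select_leader(bandwidth_mbps):
    """Return the device id with the highest reported bandwidth."""
    return max(bandwidth_mbps, key=bandwidth_mbps.get)

leader = select_leader({"spk-120": 40, "spk-122": 90, "spk-124": 55})
# leader -> "spk-122"; if that device's bandwidth later degrades,
# re-running the selection dynamically picks a new leader.
```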
  • some, or all, of the channel configurations may be implemented through the audio output device 120, 122, 124 or 126 that is the leader (720). Still further, the audio output device 120, 122, 124 or 126 that is the leader and/or controller device 110, 300 may combine to implement the various channel configurations for all of the audio output devices 120, 122, 124, 126. The channel configurations may also be determined from the controller device 110, 300 and then communicated to the audio output device 120, 122, 124 or 126 that operates as the leader. As described with other examples, the channel configurations may include channel assignments for each of the audio output devices 120, 122, 124, 126. In some variations, the channel configurations may also include other information, such as a presumed layout for the audio output devices 120, 122, 124, 126.
  • audio content may be received on the audio output device 120, 122, 124 or 126 that is the leader for distribution to other audio output devices 120, 122, 124, 126 of the network 101 (730). While receiving and distributing the audio content, the leader audio output device 120, 122, 124 or 126 may also output a portion of the audio content that is assigned to its own channel (732).
  • the audio content is received on the audio output device 120, 122, 124, 126 and then sent to the other audio output devices 120, 122, 124, 126 that are on the network 101 in accordance with the determined channel configuration (740).
  • the audio output device 120, 122, 124 or 126 that acts as the leader operates to filter the audio content for individual channels, and then sends the portion of the filtered audio to each of the other audio output devices 120, 122, 124, 126 based on the channel assignment (742).
  • the full audio content may be sent from the audio output device 120, 122, 124, 126 to other audio output devices 120, 122, 124, 126 of the network 101.
  • the audio output devices 120, 122, 124, 126 which receive the full audio content from the leader perform the filtering at the point of output, and further at the time just preceding output (744). Along the same lines, some variations provide for the audio content to be augmented, and more specifically, processed on either the controller device 110, 300 or the audio output device 120, 122, 124 or 126 that is the leader for purposes of generating structure in the audio content (746). The added structure may facilitate the other audio output devices 120, 122, 124, 126 in performing filtering operations on the full audio content.
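The two distribution modes (742 and 744) can be sketched as below, under the assumption of interleaved PCM frames; the frame layout, function names, and channel-index mapping are illustrative, not the patent's actual implementation.

```python
# Sketch of the two distribution modes: the leader filters per-channel
# portions before sending (742), or forwards the full frame and each
# receiver filters at the point of output (744).

def filter_channel(frame, channel_index, num_channels):
    """Extract one channel's samples from an interleaved PCM frame."""
    return frame[channel_index::num_channels]

def distribute(frame, assignments, num_channels, leader_filters=True):
    """assignments maps a device id to its channel index."""
    if leader_filters:
        # Mode 742: each device receives only its own channel portion.
        return {dev: filter_channel(frame, ch, num_channels)
                for dev, ch in assignments.items()}
    # Mode 744: each device receives the full frame and filters locally.
    return {dev: list(frame) for dev in assignments}

frame = [0, 1, 2, 3, 4, 5]  # interleaved stereo samples: L, R, L, R, L, R
out = distribute(frame, {"spk-120": 0, "spk-122": 1}, num_channels=2)
# out["spk-120"] -> [0, 2, 4] (left); out["spk-122"] -> [1, 3, 5] (right)
```

The trade-off sketched here is the one described above: per-channel filtering at the leader reduces what each receiver must process, while full-content forwarding shifts filtering work to the point of output.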
  • an event or condition is detected which initiates a change in the channel configuration and/or other selections (e.g., selection of the particular leader device, mode of implementation, etc.) (750).
  • the event or condition may correspond to a change in the bandwidth of some or all of the audio output devices 120, 122, 124, 126, a change in the content being outputted (e.g., the bit value of the content), the addition or subtraction of an audio output device from the network 101, and/or movement by the user sufficient to trigger calibration actions.
  • one or more processes may be triggered to dynamically adjust the channel configurations and other selections made by either the controller device 110, 300 or audio output device 120, 122, 124 or 126 operating as the leader (760).
  • the controller device 110, 300 and/or audio output device 120, 122, 124 or 126 that is the leader may respond by adjusting the channel configurations of the respective audio output devices while the output continues on the network (762).
  • the change in the channel configurations may include (i) changing the channel assignment of a given output device 120, 122, 124, 126, (ii) creating or eliminating a channel assignment based on the addition or subtraction of an audio output device 120, 122, 124, 126 to the network 101, and/or (iii) changing a selected layout for the audio output device 120, 122, 124, 126 based on any one or more of user input, a change in the number of audio output devices 120, 122, 124, 126, or other criteria.
  • the channel configurations may be changed dynamically, so that the change to the channel configurations is relatively seamless and not interruptive to the listening experience of the user. For example, one or more changes may be made to the channel configurations while at least one or more of the audio output devices 120, 122, 124, 126 continue to output audio content.
  • Other changes that may be implemented dynamically include the selection of the audio output device 120, 122, 124 or 126 that is to operate as the leader (764).
  • the audio output device 120, 122, 124 or 126 that operates as the leader may implement a mode change so that the other audio output devices 120, 122, 124, 126 receive the audio content from the controller device 110, 300 or source, and not from the leader audio output device.
  • another mode change may be made to select a new audio output device 120, 122, 124 or 126 as the leader, based on criteria such as the amount of bandwidth available to the selected audio output device.
  • the selection of the audio output device 120, 122, 124 or 126 that acts as the leader may be dynamic and made on the fly.
  • selections that may be made dynamically include: (i) the selection of the mode of operation, such as whether any one of the audio output devices 120, 122, 124, 126 may be used as leader after having been leader in the same session, (ii) whether the audio content is filtered or structured (e.g., with or without a leader device), and/or (iii) whether the audio content is to be filtered or augmented for the other audio output devices 120, 122, 124, 126 before distribution.
  • a location of a user may be tracked within the network environment based on measurements made by a mobile computing device 400 of the user when audio is being outputted by the audio output devices 120, 122, 124, 126 (810). More specifically, a relative proximity of the mobile computing device 400 (which presumably is carried by the user) to one or more audio output devices 120, 122, 124, 126 on the network 101 may be approximated (812). Based on the determined relative position of the user, as indicated by the user's mobile computing device, one or more output characteristics of the audio content may be calibrated to accommodate the presumed relative proximity of the user to the audio output devices 120, 122, 124, 126 of the network 101 (820).
  • the calibration may include controlling or otherwise adjusting the volume of one or more audio output devices 120, 122, 124, 126 (822).
  • the calibration may include adjusting or inserting delays into the output of audio content from one or more audio output devices 120, 122, 124, 126 (824). The insertion of delays may be based on, for example, a proximity determination as between select audio output devices 120, 122, 124, 126 and the user as compared to other devices connected to the same network 101 .
  • each audio output device 120, 122, 124, 126 is triggered to send an acoustic identification signal to the controller device 110, 300 (e.g., mobile computing device 400) (910).
  • the acoustic identification signal may be an audible and encoded transmission that identifies the source of the acoustic transmission (912).
  • the acoustic identification signal may be an inaudible and encoded transmission that is detectable to resources (e.g., microphone) of the mobile computing device on which the controller device 110, 300 is implemented (914).
  • the mobile computing device 400 may perform a comparison of arrival times for the acoustic identification signal transmitted from each audio output device 120, 122, 124, 126 (920).
  • Each acoustic identification signal may include a particular segment of the audio content being played back.
  • each acoustic identification signal may represent one or two frames of the audio content.
  • Each audio output device 120, 122, 124, 126 may transmit an acoustic identification signal for a common portion of the audio content being outputted on that device.
  • the acoustic identification signal may provide a mechanism for the mobile computing device 400 of the user to make measurements that are indicative of a relative position of the mobile computing device to one or more other audio output devices 120, 122, 124, 126.
  • the mobile computing device 400 includes software or other programmatic functionality to time stamp the incoming audio signal, extract the encoded identifier, and store the time stamp and identifier of the incoming audio signal for subsequent analysis.
  • Each audio transmission may be encoded to coincide with a particular instance in time in the audio content. For example, a particular audio frame in a song may be selected for encoding by each audio output device 120, 122, 124, 126, and each audio output device 120, 122, 124, 126 may then output its portion of the audio frame when the song is being played.
  • the microphone on the mobile computing device 400 may detect the encoded audio signals from each audio output device 120, 122, 124, 126 and then record the arrival times and the identifier for each signal.
  • a comparison of arrival times may be performed.
  • the comparison may identify variation in the audio output devices' arrival times, with the assumption that sound travels about 1 foot in 1 millisecond. If the arrival times reflect a discrepancy of more than 1 millisecond, then the arrival times indicate the mobile computing device 400 has moved a correlated amount. More specifically, the comparison of arrival times may indicate a proximity of the mobile computing device 400 of the user (on which the controller device 110, 300 is implemented) relative to one or more of the audio output devices 120, 122, 124, 126 that are connected to the network 101.
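The arrival-time comparison can be sketched numerically using the 1-foot-per-millisecond rule of thumb stated above; the timestamps and device ids are illustrative assumptions.

```python
# Sketch of the arrival-time comparison (920), using the approximation
# that sound travels about 1 foot per millisecond.

SPEED_FT_PER_MS = 1.0  # rule of thumb from the description above

def relative_distances(arrival_ms):
    """Convert per-device arrival times of a common audio segment into
    distance differences (in feet) relative to the nearest device."""
    earliest = min(arrival_ms.values())
    return {dev: (t - earliest) * SPEED_FT_PER_MS
            for dev, t in arrival_ms.items()}

# The same encoded segment arrives 3 ms later from spk-124, implying the
# user is roughly 3 feet closer to spk-120 than to spk-124.
deltas = relative_distances({"spk-120": 0.0, "spk-124": 3.0})
```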
  • An output from one or more of the audio output devices 120, 122, 124, 126 may be controlled in order to calibrate the audio output from all of the audio output devices, as well as to harmonize the user's experience (930).
  • some embodiments provide for the calibration actions to include (i) adjusting the timing for individual audio output devices 120, 122, 124, 126 so that the arrival time of multiple audio output devices is substantially the same, at least from the perspective of the user (932); and (ii) adjusting the volume of an individual audio output device 120, 122, 124, 126 so that the user experiences each of the devices as being equal in volume, regardless of the distance between the user and the particular audio output device 120, 122, 124, 126 (934).
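Assuming per-device distance estimates are available, the two calibration actions (932) and (934) could be sketched as below. The inverse-square attenuation model and the device distances are assumptions for illustration, not details from the description.

```python
import math

# Sketch of calibration actions (932) and (934): delay nearer devices so
# outputs arrive together, and attenuate them so perceived volume is equal.
# Distances and the free-field attenuation model are illustrative assumptions.

def calibration_actions(distances_ft):
    """distances_ft maps a device id to its estimated distance to the user."""
    farthest = max(distances_ft.values())
    actions = {}
    for dev, dist in distances_ft.items():
        # (932): ~1 ms of added delay per foot closer than the farthest device.
        delay_ms = farthest - dist
        # (934): inverse-square law, roughly -6 dB per doubling of distance.
        gain_db = round(-20 * math.log10(farthest / dist), 1)
        actions[dev] = {"delay_ms": delay_ms, "gain_db": gain_db}
    return actions

actions = calibration_actions({"spk-120": 4.0, "spk-122": 8.0})
# spk-120 is 4 ft closer: delayed ~4 ms and attenuated ~6 dB.
```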
  • a user interface 310 may be generated on a mobile computing device 400 on which the controller device 110, 300 is implemented, in order to enable the user to provide some or all of the configuration inputs for determining the channel configurations, as well as various other dynamic determinations (e.g., mode of operation, selection of the leader device, etc.).
  • the audio output devices 120, 122, 124, 126 of the network 101 may be located and linked (1010).
  • each audio output device 120, 122, 124, 126 may be capable of network communications, such as wireless communication (e.g., peer-to-peer wireless communications such as provided by Wi-Fi Direct).
  • the audio output devices 120, 122, 124, 126 may be linked, regardless of manufacturer or primary purpose.
  • the audio output devices 120, 122, 124, 126 may be heterogeneous, in terms of manufacturer, functionality, programmatic resources, and/or primary purpose.
  • the user interface 310 may be generated to prompt or otherwise guide the user into providing information about the audio output devices 120, 122, 124, 126 that are connected on the network 101 (1020). For example, a number of audio output devices 120, 122, 124, 126 that are connected to the network 101 may be specified by user input provided through the user interface 310. Furthermore, the user may identify each audio output device 120, 122, 124, 126, and further identify a relative location of each audio output device 120, 122, 124, 126 in the user's dwelling or network space. For example, the user may be provided with the user interface 310 that depicts a general outline of a room (e.g., FIG. 11).
  • the outline may be generic or include user-specified features (e.g., extra wall, rounded walls, etc.)
  • the user may identify specific audio output devices 120, 122, 124, 126 in the user's set, and then further indicate a location in the space or dwelling where the specific audio output devices are positioned.
  • functionality provided by the audio output devices 120, 122, 124, 126 may trigger determination of the channel assignments (1030). As described with other examples, the device information and selected configuration may serve as inputs for determining the channel assignments.
  • the calibration may be performed based on the relative location of the user (1040).
  • An initial calibration may, for example, calibrate the arrival time and volume level of the media content output from each audio output device 120, 122, 124, 126 based on an initial location of the user relative to the audio output devices.
  • the user may elect to have calibration performed periodically or repeatedly so as to track the steps of the user in the dwelling or space.
  • FIG. 11 illustrates a user interface 1100 for enabling speaker selection and assignment according to various embodiments.
  • the user interface 1100 may be generated from an application or programming component executing on the mobile computing device 400.
  • the user interface 1100 may, for example, include input functionality, including (i) a number select feature 1106 for enabling the user to specify a number of audio output devices 120, 122, 124, 126 that are to be in use, and (ii) a layout selection feature 1109 to enable the user to select a preferred layout. Additionally, the user may be provided with placement functionality.
  • the room representation 1112 may be a graphic representation of a room.
  • the user may, for example, click and drag device representations 1111 onto the room representation 1112 to approximate the general location and orientation of the audio output devices 120, 122, 124, 126.
  • the user may select the calibration feature 1120 to initiate a calibration process such as described with the method 1000.
  • the calibration feature 1120 may be triggered once to locate the user relative to the audio output devices 120, 122, 124, 126.
  • the calibration feature 1120 may correct any imprecision or error by the user in specifying the location of individual audio output devices 120, 122, 124, 126.
  • the calibration feature may be implemented in a track mode, where the calibration is performed repeatedly to track whether the user moves.

Abstract

A set of audio output devices may be established and configured to output channel specific audio. Once established, the channel configuration may be changed and updated in response to events such as changes to user preference, or the addition or subtraction of audio output devices to the network. In some embodiments, the reconfiguration may be performed on the fly while audio content is being outputted by the audio output devices.

Description

SYSTEM AND METHOD FOR CHANGING A CHANNEL CONFIGURATION OF A SET OF
AUDIO OUTPUT DEVICES
BACKGROUND
[0001] Audio systems exist that utilize network connected audio output devices (e.g., speakers). In such systems, multiple connected speakers may be used to output the same content.
BRIEF DESCRIPTION OF THE DRAWINGS
[0002] FIG. 1 illustrates a network-based audio output system that is capable of dynamic configuration and/or calibration, according to various embodiments.
[0003] FIG. 2 illustrates an audio output device that is capable of being selected and operated as a leader device according to various embodiments.
[0004] FIG. 3 illustrates an example of a controller device for use with various embodiments.
[0005] FIG. 4 illustrates a mobile computing device on which various embodiments may be implemented.
[0006] FIG. 5 illustrates an audio output device on which various embodiments may be implemented.
[0007] FIG. 6 illustrates a method for dynamically determining and implementing channel configurations for a network-based audio system, according to various embodiments.
[0008] FIG. 7 illustrates a method for operating an audio output device as a leader device when distributing audio content to other audio output devices on a network, according to various embodiments.
[0009] FIG. 8 illustrates a method for calibrating an output of multiple audio output components on a network based on a relative position of a user, according to various embodiments.
[0010] FIG. 9 illustrates a method for calibrating an audio output device based on a position of a user, in accordance with various embodiments.
[0011] FIG. 10 illustrates a method for implementing a user interface to initiate dynamic configuration of a network-based audio system, according to various embodiments.
[0012] FIG. 11 illustrates a user interface for enabling speaker selection and assignment, according to various embodiments.
DETAILED DESCRIPTION
[0013] According to some embodiments, a set of audio output devices may be established and configured to output channel specific audio. Once established, the channel configuration may be changed and updated in response to events such as changes to user preference, or the addition or subtraction of audio output devices to the network. In some embodiments, the reconfiguration may be performed on the fly while audio content is being outputted by the devices.
[0014] In some embodiments, the audio output devices may be controlled so that the output of the device is calibrated to the position of the user. In particular, the arrival time and/or volume of the audio may be calibrated so that the user experiences the output from the perspective of being equally separated from each audio output device, with each audio output device providing a uniform audio output.
[0015] Embodiments described herein provide for a system, method, and device for outputting audio content over a network. In some embodiments, multiple audio output devices are connected on a network to form an audio output set for receiving and outputting at least a portion of an audio content originating from a source. A controller device may determine a channel configuration for the audio output set. The channel configuration may include a channel assignment for each audio output device that is connected on the network to form the audio output set. When the audio content is being outputted, the controller device may respond to an event or condition by changing the channel configuration.
[0016] In some embodiments, a controller device determines a channel configuration for the audio output set. The channel configuration may include a channel assignment for each audio output device that is connected on the network to form the audio output set. The controller device receives audio content from a source, and outputs a channel portion of the audio content based on a channel assignment of the given audio output device. For each of the other audio output devices, the controller device communicates at least another portion of the audio content to the other audio output device. Additionally, the controller responds to an event or condition by changing the channel configuration and then outputting the channel portion of the audio content based on the new channel assignments.
[0017] In some embodiments, each of multiple audio output devices is triggered to generate an acoustic identification signal. A controller device may perform a comparison of the acoustic identification signal from each of the multiple audio output devices. The output from one or multiple audio output devices is controlled based on the comparison.
[0018] As used herein, a speaker is intended to mean an audio output device, such as a network-connected audio output device. One example of a speaker includes a dedicated device that outputs audio such as music. Another non-limiting example of a speaker includes a multifunctional device, such as a mobile device or tablet, which may output video, capture and store audio content, enable user interaction and/or perform numerous other actions.
[0019] Various embodiments described herein provide that methods, techniques, and actions performed by a computing device are performed programmatically, or as a computer-implemented method. Programmatically means through the use of code, or computer-executable instructions. A programmatically performed step may or may not be automatic.
[0020] Various embodiments described herein may be implemented using
programmatic modules or components. A programmatic module or component may include a program, a subroutine, a portion of a program, or software or a hardware component capable of performing one or more stated tasks or functions. As used herein, a module or component may exist on a hardware component independently of other modules or components.
Alternatively, a module or component may be a shared element or process of other modules, programs, or machines.
[0021] Furthermore, various embodiments described herein may be implemented through instructions that are executable by one or more processors. These instructions may be carried on a computer-readable medium. Machines shown or described with figures below provide examples of processing resources and computer-readable mediums on which instructions for implementing embodiments of the invention may be carried and/or executed. In particular, the numerous machines shown with embodiments of the invention include processor(s) and various forms of memory for holding data and instructions. Examples of computer-readable mediums include permanent memory storage devices, such as hard drives on personal computers or servers. Other examples of computer storage mediums include portable storage units, such as CD or DVD units, flash or solid state memory (such as carried on many cell phones and consumer electronic devices), and magnetic memory. Computers, terminals, network enabled devices (e.g., mobile devices such as cell phones) are all examples of machines and devices that utilize processors, memory, and instructions stored on computer-readable mediums. Additionally, embodiments may be implemented in the form of computer-programs, or a computer usable carrier medium capable of carrying such a program.
SYSTEM DESCRIPTION
[0022] FIG. 1 illustrates a network-based audio output system 100 that is capable of dynamic configuration and/or calibration, according to various embodiments. The audio output system 100 may be implemented in a local or closed network 101, such as provided by a home or local area network. The network 101 may include multiple connected devices, including a controller device 110 and multiple network enabled audio output devices 120, 122, 124, and 126. In some variations, the network 101 includes an access point 102 for providing a wireless connectivity medium. By way of example, each of the controller device 110 and the audio output devices 120, 122, 124, 126 may operate under IEEE Specifications of 802.11(a), 802.11(b), 802.11(g), 802.11(n), 802.11(ac), or the like (collectively "Wi-Fi," "Wi-Fi network," or "802.11 protocol"). Still further, in some implementations, the controller device 110 and/or some or all of the audio output devices 120, 122, 124, 126 are capable of wireless peer-to-peer communications, such as provided by Wi-Fi Direct. Still further, some or all of the audio output devices 120, 122, 124, and 126 may be able to communicate directly with other devices on the network as peers. By way of example, the individual audio output devices 120, 122, 124, and 126 may communicate using a direct, wireless peer-to-peer communication protocol, such as provided by Wi-Fi Direct. Still further, in some variations, one or more of the audio output devices 120, 122, 124, and 126 may utilize a connectivity medium such as provided through an Ethernet connection or other network-based wired connection.
[0023] The audio output devices 120, 122, 124, and 126 may be connected and positioned in a physical region of the network 101, based on preference of a user. A physical region of the network 101 may correspond to a dwelling, or alternatively, to a room or space within the dwelling. By way of example, an environment of the network 101 may correspond to a home network in which multiple speakers or other audio output devices are provided with network connectivity for purposes of outputting audio content selected by the user. In this context, the user may selectively position individual connected speakers about a room to enhance the user's enjoyment of rendered audio content.
[0024] In some embodiments, the audio output devices 120, 122, 124, and 126 may be heterogeneous in nature, meaning that the audio output devices 120, 122, 124, and 126 may have different manufacturers, capabilities, resources and/or purposes. For example, one or more of the audio output devices 120, 122, 124, and 126 may correspond to a television, for which audio output is not a primary purpose. One or more of the audio output devices 120, 122, 124, and 126 may also include programming or other logic to enable that audio output device to communicate with other devices on the network. An example of such programming or logic includes the ALLPLAY platform, manufactured by QUALCOMM CONNECTED EXPERIENCES, which may be installed or otherwise provided through firmware on wireless speakers. While some examples describe audio output devices 120, 122, 124, and 126 as speakers (or dedicated audio output devices), other variations provide for audio output devices 120, 122, 124, and 126 which have multi-purposes, including televisions, desktop computers, or other multifunction audio output devices.
[0025] The controller device 110 operates to execute an application, software platform, or other programming logic in order to communicate with and control the audio output devices 120, 122, 124, and 126. By way of example, the controller device 110 may correspond to a mobile computing device, such as a multifunction cellular telephony/messaging device, tablet, hybrid device (so-called "phablet"), or wearable computing device.
[0026] The controller device 1 10 may operate to control and configure the output of audio using the audio output devices 120, 122, 124, and 126. Any one of multiple audio distribution configurations may be used for purposes of outputting the audio content on multiple audio output devices 120, 122, 124, and 126 in accordance with a dynamically selected channel configuration. In some embodiments, the controller device 1 10 may be operated modally in order to select from multiple possible audio distribution configurations.
[0027] The controller device 110 distributes audio content ("AC") 113 directly or indirectly to each of the multiple audio output devices 120, 122, 124, or 126. In some implementations, the controller device 110 is the source of the audio content 113 being distributed. For example, the audio content 113 may correspond to media files ("MF") 103 that are accessed from a media library 105 of the user. Depending on implementation, the media library 105 may be local to the controller device 110, distributed amongst multiple devices on the network 101, or remote to the controller device 110. For example, some or all of the media library 105 may be stored on other devices (including one or more of the audio output devices 120, 122, 124, or 126) or resources of the network 101, and the controller device 110 may communicate with another device on the network 101 (e.g., home computer, cable box, etc.) in order to retrieve media files 103 from the media library 105. Still further, the controller device 110 may access network services ("NS") 107 for the audio content 113, such as online media sites (e.g., PANDORA, SPOTIFY, GOOGLE PLUS, etc.). The controller device 110 may also generate audio content 113 from other content sources ("CS") 109, such as cable, satellite or over-the-air broadcasts. Additionally, the controller device 110 may distribute the audio content 113 originating from multimedia content that is rendered on the device. For example, the controller device 110 may execute different applications which generate multimedia content (e.g., games), and audio from these active applications may be distributed as the audio content 113. In other variations, the controller device 110 may access another device or resource on the network 101, such as a device that communicates with one or more of the audio output devices 120, 122, 124, or 126 through the access point 102. Depending on the capabilities of the respective devices, the controller device 110 may use peer-to-peer wireless communications (e.g., Wi-Fi Direct) in order to directly transmit the audio content 113 to each of the desired audio output devices 120, 122, 124, and 126 on the network 101.
[0028] In some implementations, the controller device 110 distributes the audio content 113 through one of the audio output devices 120, 122, 124, 126 that implements functionality for operating as the leader of the active output devices on the network 101. The controller device 110 may select one of the audio output devices 120, 122, 124, 126 to serve as the leader device. In an example of FIG. 1, the audio output device 120 that is selected as the leader may receive the audio content 113 from the controller device 110 (which may access the media library 105, network service 107 or content source 109) for distribution to the other audio output devices 122, 124, 126. In variations, the audio output device 120 may receive the audio content 113 from another source (e.g., another device of network 101), under direction or control of the controller device 110, for distribution to the other audio output devices 122, 124, 126.
[0029] In alternative variations or modes, either the controller device 110 or the audio output device 120 that operates as the leader may channel-filter or augment the audio content 113 for transmission to the respective audio output devices. When channel-filtered, the audio content 113 may be delineated into multiple channel portions 121, and each channel portion 121 of the audio content 113 is communicated to an assigned audio output device 120, 122, 124, and 126. When augmented, the audio content 113 may be pre-structured into channeled components, and the augmented audio ("aug. audio") 133 may be transmitted to the other audio output devices 122, 124, 126, where the augmented audio 133 is filtered into a corresponding channel portion 121.
[0030] In an example of FIG. 1, the controller device 110 includes an audio distribution logic 112, a dynamic selection logic 114, a channel configuration logic 116, and a calibration logic 118. Furthermore, in an example of FIG. 1, one or more of the audio output devices 120, 122, 124, and 126 may be selected to implement the functionality of the leader, which may include components and functionality (e.g., as described with an example of FIG. 2). The functionality described with either the controller device 110 or the audio output device 120 that is selected as the leader may be interchangeable between the two devices (or with another device that may be substituted as the leader for the audio output device 120). For example, in some variations, the controller device 110 may include functionality for implementing channel filtering or channel augmentation (e.g., as shown in FIG. 2). Likewise, in some variations, the audio output device 120 may operate as the leader and also include one or more of the components of the controller device 110, such as one or more of the dynamic selection logic 114, channel configuration logic 116, or calibration logic 118.
[0031] According to some embodiments, the controller device 110 includes the channel configuration logic 116 for performing operations to determine a channel configuration 115 of the set of audio output devices 120, 122, 124, and 126. The channel configuration 115 may be determined by (i) a number of available audio output devices 120, 122, 124, and 126, (ii) a configuration scheme 117 or layout that is based on preference and/or the number of available audio output devices 120, 122, 124, and 126, and/or (iii) the relative positioning of each audio output device 120, 122, 124, and 126 within the space or environment of the network 101. Accordingly, the channel configuration 115 may specify channel assignments 123 for each audio output device 120, 122, 124, and 126, given a desired configuration scheme 117 and the relative positioning of the audio output devices. Once determined, channel assignments 123 may be communicated to the audio output devices 122, 124, 126 as control or command data. Depending on implementation or mode of operation, the channel assignments 123 may be communicated directly from the controller device 110 or from the audio output device 120 that is acting as the leader. As described with various examples, the channel configuration logic 116 may dynamically re-determine and implement the channel configuration 115 based on the occurrence of conditions and events that affect usage of the audio output devices 120, 122, 124, and 126 on the network 101.
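One way such channel assignments could be derived from a configuration scheme and the relative positioning of the devices is sketched below in Python. All names, layouts, and azimuth values here are hypothetical illustrations, not taken from this specification; the sketch assumes at least as many devices as channels, and greedily matches each channel's ideal azimuth to the closest reported device position.

```python
# Hypothetical channel layouts: channel name -> ideal azimuth in degrees,
# measured from the listener's front (0 = center, negative = left).
LAYOUTS = {
    "stereo": {"front_left": -30, "front_right": 30},
    "5.0": {"front_left": -30, "center": 0, "front_right": 30,
            "surround_left": -110, "surround_right": 110},
}

def assign_channels(layout, device_azimuths):
    """Greedily assign each channel of the layout to the device whose
    reported azimuth is closest to that channel's ideal position."""
    assignments = {}
    available = dict(device_azimuths)  # device_id -> azimuth in degrees
    for channel, ideal in sorted(LAYOUTS[layout].items()):
        best = min(available, key=lambda d: abs(available[d] - ideal))
        assignments[best] = channel
        del available[best]
    return assignments

# A device at -40 degrees is matched to front_left, one at 25 to front_right.
print(assign_channels("stereo", {"speaker_a": 25, "speaker_b": -40}))
```

A production implementation would also have to handle fewer devices than channels (e.g., by falling back to a smaller layout), which the dynamic re-determination described above suggests.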
[0032] Still further, in some variations, the controller device 110 may have different modes of operation in order to implement an audio distribution configuration in which the audio distribution logic 112 directly distributes the audio content 113 to each of the audio output devices 120, 122, 124, and 126. The audio distribution logic 112 of the controller device 110 may communicate either a full or partial stream to multiple audio output devices.
[0033] According to variations, in an alternative mode, the controller device 110 may use the dynamic selection logic 114 to select one of the multiple audio output devices 120, 122, 124, 126 as a leader. In some variations, the determination to use the particular audio output device 120 as the leader may be made programmatically, based on, for example, available resources of the controller device 110 and/or preferences of the user. Various criteria may be used to select one audio output device 120 as the leader for the other audio output devices 122, 124, or 126 of the network 101. Among the criteria, the audio output device 120, 122, 124, or 126 that is selected to be the leader may be required to have a minimum set of resources, such as a minimum processing capability and/or the ability to establish multiple simultaneous peer-to-peer connections with other devices on the network 101. Alternatively, the audio output device 120 that is selected as the leader may have the most or best of a desired resource or capability. For example, the audio output device 120 may be selected as the leader because the audio output device 120 satisfies a criterion of containing a digital signal processor ("DSP"), or because the audio output device 120 is deemed to have the greatest amount of available bandwidth as compared to the other audio output devices.
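The two-stage selection described here (filter by minimum resources, then prefer the best of a desired capability) could be sketched as follows. The field names and thresholds are hypothetical, chosen only to illustrate the filter-then-rank pattern:

```python
def select_leader(devices):
    """Filter devices by hypothetical minimum requirements (processing
    capability, simultaneous peer connections), then prefer the eligible
    device with the greatest available bandwidth, tie-broken by DSP."""
    eligible = [d for d in devices
                if d["max_peer_connections"] >= 3 and d["cpu_mhz"] >= 400]
    if not eligible:
        return None
    best = max(eligible, key=lambda d: (d["bandwidth_mbps"], d["has_dsp"]))
    return best["id"]

devices = [
    {"id": "tv", "cpu_mhz": 1200, "max_peer_connections": 8,
     "bandwidth_mbps": 40, "has_dsp": False},
    {"id": "soundbar", "cpu_mhz": 600, "max_peer_connections": 4,
     "bandwidth_mbps": 55, "has_dsp": True},
    # Fails both minimums, so it is never considered:
    {"id": "satellite", "cpu_mhz": 200, "max_peer_connections": 1,
     "bandwidth_mbps": 70, "has_dsp": False},
]
print(select_leader(devices))  # soundbar: eligible and highest bandwidth
```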
[0034] In some variations, the controller device 110 may communicate a leader selection 111 to the selected audio output device 120, 122, 124, or 126. In some embodiments, the controller device 110 makes the leader selection 111 programmatically using, for example, the dynamic selection logic 114.
[0035] In some implementations, the audio output device 120 receives the audio content 113 from a content source (CS) 109, and then distributes the audio content 113 as the channel portions 121 to each of the other audio output devices 122, 124, 126 of the network 101. The source of the audio content 113 may, for example, correspond to controller device 110. For example, controller device 110 may generate the audio content 113 (e.g., gaming content) and/or store portions of the media library 105, such as a library of songs or albums, and the audio content 113 may correspond to a media file 103 from the media library 105. Alternatively, controller device 110 may also serve as a source for audio content retrieved from both local network and remote sources. For example, the controller device 110 may access other media resource devices (e.g., home computer, cable box, etc.) on the network 101 in order to retrieve the media files 103 of the user's media library. Still further, the controller device 110 may access commercially available third party network services 107 for the audio content 113 (e.g., PANDORA, SPOTIFY, GOOGLE PLUS, etc.). In other variations, the content source 109 for the audio content 113 may be another device on the network 101, such as a device that communicates with the controller device 110 and/or output device 120 through the wireless access point 102. Still further, in other variations, the source of the audio content 113 may be another content source 109 (e.g., cable or over-the-air broadcast) available through the network 101.
[0036] According to some variations, the audio output device 120 processes the audio content 113 (audio data) to delineate the channel portions 121 from the audio content 113. Each channel portion 121 may then be communicated to the corresponding audio output device 122, 124, 126. The channel portion 121 for the audio output device 120 may be played using a local audio output resource, in concert with the playback of the channel portions 121 of the other audio output devices 122, 124, 126.
[0037] According to some embodiments, the channel configuration 115 may be dynamically determined on the fly, based on conditions or events detected on the network 101. For example, the controller device 110 may detect a particular network condition (e.g., limited bandwidth) and then output the channel configuration 115 to include an alternative set of channel assignments 123 for the respective audio output devices 120, 122, 124, and 126. Still further, the controller device 110 may receive input, or otherwise detect the addition or subtraction of an audio output device 122, 124, or 126, so as to affect the number of audio output devices 120, 122, 124, and 126 that are in use. In some cases, a change in the number of audio output devices 120, 122, 124, and 126 that are in use may also change the configuration scheme 117 (e.g., from 7.1 to 5.1) and/or require further changes to the channel assignments 123, in order to accommodate a different number of audio output devices 120, 122, 124, and 126 that are in use (or available for use) on the network 101. The ability of the controller device 110 to dynamically determine and implement channel configurations may enable, for example, playback of the audio content from some or all of the audio output devices 120, 122, 124, and 126 to continue substantially uninterrupted while one or more changes to the channel assignments 123 take place. In addition to dynamically determining the channel configuration 115, the controller device 110 may dynamically select the audio output device 120 that is the leader. The determination of which audio output device 120 serves as the leader may be based on, for example, the available bandwidth of each audio output device 120, 122, 124, or 126 that satisfies one or more criteria for being the leader.
[0038] As still another example, the modal operation of the controller device 110 in distributing the audio content 113 may also be dynamically changed. For example, the controller device 110 may switch from using one audio output device 120 as the leader to directly transmitting the audio content 113 (or channel portions 121 thereof) to each audio output device 120, 122, 124, and 126. Still further, the selection of which audio output device 120, 122, 124, 126 serves as the leader may also be dynamic, based on factors such as the available bandwidth to the respective audio output devices 120, 122, 124, 126.
[0039] In some variations, the controller device 110 includes the calibration logic 118. The calibration logic 118 may operate to adjust the output of the audio output devices 120, 122, 124, 126 to accommodate a relative position of the user in the physical space of the environment of the network 101. The calibration logic 118 may operate to accommodate the proximity of the user to one or more of the audio output devices 120, 122, 124, and 126. The calibration logic 118 may implement operations so that the audio experienced by the user at a given location is uniform from all directions. In particular, the calibration logic 118 may implement adjustments 119 in the form of delays in individual audio output devices 120, 122, 124, and 126 so that the arrival time of audio transmissions from each of the respective audio output devices 120, 122, 124, 126 is near-simultaneous with respect to the user, even though the user may be closer to one audio output device 120, 122, 124, 126 as compared to another. Still further, the calibration logic 118 may implement adjustments 119 in the form of volume adjustments for the individual audio output devices 120, 122, 124, 126 so that the volume experienced by the user from each of the audio output devices 120, 122, 124, 126 is the same, even when the user is closer to one audio output device as compared to another.
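The delay adjustment described here amounts to holding back nearer devices so all transmissions arrive at the listener together. A minimal sketch, assuming device-to-listener distances in feet are known (the function name and the rounding are illustrative, not from the specification):

```python
SPEED_OF_SOUND_FT_PER_MS = 1.125  # approx. 1125 ft/s in air at room temperature

def per_device_delays_ms(distances_ft):
    """Compute a playback delay per device so all outputs arrive at the
    listener simultaneously: the farthest device plays immediately,
    and nearer devices are delayed by the travel-time difference."""
    travel = {d: ft / SPEED_OF_SOUND_FT_PER_MS
              for d, ft in distances_ft.items()}
    longest = max(travel.values())
    return {d: round(longest - t, 2) for d, t in travel.items()}

# Listener is 4 ft from the left speaker and 10 ft from the right one:
# the left speaker is delayed ~5.3 ms; the right speaker is not delayed.
print(per_device_delays_ms({"left": 4.0, "right": 10.0}))
```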
[0040] FIG. 2 illustrates an audio output device that is capable of being selected and operated as a leader, according to various embodiments. An audio output device 200 such as shown and described with an example of FIG. 2 may operate as the audio output device 120 in the example of FIG. 1. With reference to FIGS. 1-2, in more detail, the audio output device 200 includes an audio receiver 210, control logic 220, an audio output resource 230, and a device interface 240. The control logic 220 may be coupled with, or include, channel filter 222 and/or channel augmentation 226.
[0041] The audio receiver 210 may receive audio content 201 from the controller device 110. Alternatively, the audio receiver 210 may receive the audio content 201 from another source, such as from an online source or from another device. The audio content 201 may be received either directly or indirectly (e.g., via an access point 102 or from the controller device 110).
[0042] The audio output device 200 may also receive channel configuration data 221 from the controller device 110 (shown via the device interface 240). In variations, the audio output device 200 includes channel configuration logic 244 for determining channel configuration data 221 independently of any communication from another device. The channel configuration logic 244 may determine channel configuration data 221 from, for example, user input 243, such as provided through the user's interaction with a user interface of the audio output device 200. The channel configuration logic 244 may also determine channel configuration data 221 based on settings 245 or preferences of the user or device.
[0043] In some implementations or modes of operation, the audio receiver 210 may communicate the full stream of audio content ("full stream AC") 212 to the channel filter 222 of control logic 220. The channel filter 222 filters the full stream of audio content 212 into channeled portions based on channel assignments defined by the channel configuration data 221. Once channels are delineated from the audio content 212, audio output resource 230 receives the channel portion 215 for the channel assigned to the audio output device 200. The portion of the outgoing audio content (AC) 217 for the channels assigned to the other audio output devices 122, 124, 126 may be transmitted to the other audio output devices via the device interface 240.
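For a stream carried as interleaved PCM samples, the channel filtering described in [0043] reduces to de-interleaving: picking every Nth sample starting at the channel's offset. A minimal sketch (the list-of-integers representation is a simplification of real PCM framing):

```python
def channel_filter(interleaved, num_channels):
    """Split an interleaved sample stream into per-channel portions,
    e.g. [L, R, L, R, ...] -> {0: [L, L, ...], 1: [R, R, ...]}."""
    return {ch: interleaved[ch::num_channels] for ch in range(num_channels)}

samples = [10, -10, 20, -20, 30, -30]  # interleaved stereo: L, R pairs
portions = channel_filter(samples, 2)
print(portions[0])  # left-channel portion:  [10, 20, 30]
print(portions[1])  # right-channel portion: [-10, -20, -30]
```

The locally assigned portion would go to the device's own audio output resource, while the remaining portions are forwarded over the device interface.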
[0044] In a variation, the audio output device 200 may implement channel augmentation 226. Channel augmentation 226 may structure the audio content 212 into an augmented stream 219 that may be transmitted to the other audio output devices 122, 124, 126 via the device interface 240. The augmented stream 219 may be filtered for an appropriate channel at the corresponding audio output device 122, 124, 126, which coincides with the point of output for the particular channel output. The device interface 240 may communicate augmented stream 219, which may be filtered for a given channel. In this way, the channel augmentation 226 may provide an alternative to filtering the audio content in advance of transmission.
[0045] The device interface 240 may include programming or logic to enable audio output device 200 to be interconnected and operable with multiple other devices of different kinds on the network 101. In some implementations, the device interface 240 includes an application program interface provided through, for example, ALLPLAY, manufactured by QUALCOMM CONNECTED EXPERIENCES.
[0046] In some embodiments, the audio output device 200 includes functionality for triggering or implementing calibration control 250. In some implementations, the calibration control 250 receives calibration input 249 from another device, such as from controller device 110. In one example, controller device 110 includes resources and logic for receiving input that is indicative of calibration variations, and further includes resources and logic to determine calibration actions that may be taken on one or more of the audio output devices 120, 122, 124, 126 in order to calibrate the audio output for the location of the user. As mentioned with other examples, the calibration actions serve to affect an audio output experienced by the user, with specific consideration for a relative proximity of the user to individual audio output devices 120, 122, 124, 126 of the network 101.
[0047] In some embodiments, the calibration actions of the calibration control 250 may include delay control 251. The control logic 220 may process and communicate the delay control 251 to other audio output devices 122, 124, 126 via the device interface 240. Another example of calibration actions of calibration control 250 includes volume control 253. The control logic 220 may communicate the volume control 253 to the other audio output devices via the device interface 240.

CONTROLLER DEVICE
[0048] FIG. 3 illustrates an example of a controller device 300, according to various embodiments. With reference to FIGS. 1-3, according to various embodiments, the controller device 300 (which may correspond to the controller device 110) may be implemented using software that executes on a mobile computing device, such as a device that may be carried by a person within the space or physical region of the network 101. By way of example, the controller device 300 may correspond to a device such as a cellular telephony/messaging device (e.g., feature phone), tablet or hybrid device, wearable computing device, or laptop. In some embodiments, the controller device 300 operates to receive input information 301 for determining (i) a number of audio output devices 120, 122, 124, 126 that are connected on the network 101, and (ii) the location of each audio output device 120, 122, 124, 126 with respect to a given space of coverage within the network 101. The software that is implemented on the controller device 300 may correspond to, for example, an application, a suite of applications, or alternatively to an operating system level functionality. The controller device 300 may share an application framework or interface with other devices of the network. For example, each of the controller device 300 and the various audio output devices 120, 122, 124, 126 that are employed on the network 101 may implement a media platform, such as provided by the QUALCOMM ALLPLAY media platform.
[0049] As an addition or alternative, in some embodiments, the controller device 300 operates to detect and process transmissions for purposes of estimating a proximity of the controller device 300 to individual audio output devices 120, 122, 124, 126 that are operating on the network 101. With such proximity information, the controller device 300 may operate to calibrate an output of one or more of the audio output devices 120, 122, 124, 126 on the network 101.
[0050] In some embodiments, the controller device 300 includes a user interface 310, audio output device control logic ("AOD control logic") 320, device position logic 330, and an audio output interface 340. The user interface 310 may display prompts that guide the user into providing input that identifies basic input information 301 about the audio output devices 120, 122, 124, 126 employed on the network 101. For example, the user interface 310 may display a virtualized room or space within the dwelling, and provide features that enable the user to indicate, among other information, (i) a number of audio output devices 120, 122, 124, 126 employed on the network 101, and (ii) a general location for a given audio output device 120, 122, 124, 126, which may be labeled. The user interface 310 may also execute to prompt the user to provide input information 301 that identifies additional information about the audio output devices, such as a manufacturer, capability, or connectivity status. The user interface 310 may output device position information 311, which may identify the number of audio output devices and their relative position in a space represented through the user interface 310. The device position logic 330 may receive the position information 311, and optionally generate one or more response queries 313 that may configure content on the user interface 310 to, for example, prompt the user to provide additional input information 301.
[0051] By way of example, the response queries 313 may prompt the user to provide additional input information 301 that may approximate the length or total distance between the audio output devices 120, 122, 124, 126 on the network 101, so as to provide dimensionality to the virtualized representation of the space within the network. Still further, the response query 313 may prompt the user to specify audio output devices 120, 122, 124, 126 for different rooms of a dwelling of the network 101. More generally, the response query 313 may prompt the user interface 310 to display content for enabling the user to define different rooms or spaces of the dwelling covered by the network 101. In some variations, the user interface 310 may prompt the user to enter input information 301 corresponding to (i) group size information 309, corresponding to a number of audio output devices on the network 101, and (ii) device position information 311, which identifies a general or relative location of audio output devices 120, 122, 124, 126 within the space of the network 101 (e.g., within the individual rooms). Still further, while some embodiments provide for the user interface 310 to prompt the user for input information 301, other embodiments provide for the user interface 310 to guide the user into selecting one or more configurations affecting the audio output devices 120, 122, 124, 126, including input for selecting the channel configuration 333.
[0052] In some embodiments, the device position logic 330 may operate to determine a set of the channel configurations 333 based at least in part on the group size information 309 and the device position information 311 of the individual audio output devices 120, 122, 124, 126. The channel configuration 333 may specify a speaker configuration layout ("C. Lay") 337, such as a 3-, 5-, or 7-speaker (or more) surround sound layout, or a Dolby 5.1 or 7.1 speaker layout. The channel configurations 333 for the audio output devices 120, 122, 124, 126 may include channel assignments 339 ("Chan. Ass. 339") for individual audio output devices. In some variations, the configuration layout 337 may be based on one or more criteria, such as the number of audio output devices 120, 122, 124, 126 (e.g., provided with group size information 309) and/or the positioning of the audio output devices 120, 122, 124, 126 (e.g., as specified from device position information 311). In some variations, the configuration layout 337 may be selected by default. In another variation, the user may be provided a selection feature via the user interface 310 in order to make a selection of a particular configuration layout 337. A configuration library 329 may retain information about different possible configuration layouts 337, and provide a mechanism for selecting one or more configuration layouts 337 based on the group size information 309 and/or the device position information 311 of each audio output device 120, 122, 124, 126. The device position information 311 of each audio output device 120, 122, 124, 126 may also be indicated by input information 301 (received via the user interface 310), as well as by other input from the user (e.g., input that is indicative of a preference of the user). The channel assignments 339 may be made programmatically, based on, for example, the configuration layout 337, the group size information 309, and/or the device position information 311 of the audio output devices 120, 122, 124, 126 in the space of the dwelling.
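A default layout selection keyed on group size, as described in [0052], could look like the following. The thresholds and layout names are illustrative assumptions, not values from this specification:

```python
def default_layout(group_size):
    """Pick a default speaker configuration layout from the number of
    available audio output devices (thresholds are illustrative)."""
    if group_size >= 8:
        return "7.1"
    if group_size >= 6:
        return "5.1"
    if group_size >= 3:
        return "3.0"
    return "stereo" if group_size == 2 else "mono"

print(default_layout(6))  # 5.1
print(default_layout(2))  # stereo
```

A real configuration library would presumably also consult the device position information before committing to a layout, since six devices clustered in one corner do not make a usable 5.1 arrangement.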
[0053] The channel configuration 333 may be communicated to the audio output interface 340. As mentioned with other examples, the audio output interface 340 may provide an application programming interface that enables the controller device 300 to communicate with other connected devices of the network 101. For example, the audio output interface 340 may be used for wireless peer-to-peer communications, such as provided through a Wi-Fi Direct medium. In some variations, the audio output interface 340 communicates the channel configurations 333 to the audio output device 120, 200 that is selected to be the leader for a particular session on the network.
[0054] As mentioned, in some embodiments, the controller device 300 includes functionality for calibrating an output of the audio output devices 120, 122, 124, 126 on the network 101 based on a location of the user at a given moment. As the location of the user changes, the controller device 300 may implement functionality to dynamically control an output of individual audio output devices 120, 122, 124, 126 on the network 101, so that the audio experience of the user equally reflects the output from individual audio output devices.
[0055] In some embodiments, the controller device 300 includes an acoustic input interface 306, a timing analysis component 312, and the audio output device control logic 320. The audio output device control logic 320 may include a delay (or latency) control 322 and volume control 324. The acoustic input interface 306 may include a programming component that interfaces with a microphone of a mobile computing device on which the controller device 300 is implemented. In particular, the acoustic input interface 306 may be configured to detect acoustic reference transmissions ("AREFTR") 361 from each of the active audio output devices 120, 122, 124, 126 on the network 101. The acoustic input interface 306 may include logic that recognizes, for example, a predetermined characteristic of the acoustic reference transmissions 361, such as a signal pattern.
[0056] In some embodiments, each audio output device 120, 122, 124, 126 transmits a locally unique acoustic reference transmission 361, signaling an identifier for the transmitting device. Depending on implementation, the acoustic reference transmission 361 of each audio output device 120, 122, 124, 126 may be in the audible or inaudible range. In some embodiments, the acoustic reference transmission 361 of each audio output device 120, 122, 124, 126 is communicated at a frequency range that is detectable to a microphone of the mobile computing device on which the controller device 300 is provided. Additionally, each of the audio output devices 120, 122, 124, 126 communicates a corresponding acoustic reference transmission 361, representing a portion (e.g., a frame or series of frames) of the audio content (e.g., a song) that is outputted from each of the respective audio output devices.
[0057] The acoustic input interface 306 may include logic to detect the acoustic reference transmission 361 from each of the audio output devices 120, 122, 124, 126. The acoustic input interface 306 may also compare the arrival time 363 of each of the acoustic reference transmissions 361 in order to determine a delay or other difference between the arrival times of the acoustic reference transmissions from different audio output devices 120, 122, 124, 126 on the network 101. By way of example, embodiments recognize that it takes sound slightly less than 1 millisecond to travel 1 foot, and that if the user moves by relatively small amounts (e.g., one foot), a detectable delay may result that affects the quality of the user experience in listening to the collective audio output from the audio output system 100.
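The arithmetic in [0057] can be made concrete: since sound travels roughly 1125 feet per second, a measured difference in arrival times converts directly into a difference in listener-to-device distance. A minimal sketch (the function and device names are illustrative):

```python
SPEED_OF_SOUND_FT_PER_S = 1125.0  # approx. speed of sound in air at room temperature

def relative_offsets_ft(arrival_times_ms):
    """Convert per-device arrival times of acoustic reference transmissions
    into relative distance offsets in feet (0 = nearest device)."""
    earliest = min(arrival_times_ms.values())
    return {d: (t - earliest) / 1000.0 * SPEED_OF_SOUND_FT_PER_S
            for d, t in arrival_times_ms.items()}

# The rear speaker's reference transmission arrives 4 ms after the front
# speaker's, so the listener is roughly 4.5 feet farther from the rear speaker.
print(relative_offsets_ft({"front": 12.0, "rear": 16.0}))
```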
[0058] The timing analysis component 312 may analyze the arrival time 363 of each of the acoustic reference transmissions 361 in order to detect sufficiently significant variations amongst the arrival times 363 that are attributed to the individual audio output devices 120, 122, 124, 126. The difference in arrival times 363 may be indicative of user location, and more specifically, of a relative location or proximity of the user to individual audio output devices 120, 122, 124, 126 of the system.
[0059] In some variations, a contextual analysis component 314 may also be implemented in connection with the timing analysis component 312. The contextual analysis component 314 may determine contextual information from timing differentials (as identified by arrival times 363) of the acoustic reference transmissions 361 from the different audio output devices 120, 122, 124, 126. In some variations, the contextual analysis component 314 may detect a trend or event from the movement of the user within a network space or region. For example, the contextual analysis component 314 may reference known information about the location of individual audio output devices 120, 122, 124, 126 (which may be approximated from input information 301 and/or from location detection technology) in order to determine that the user has switched rooms. Accordingly, one determination that may be made from the contextual analysis component 314 includes the determination to power down or up selected audio output devices 120, 122, 124, 126 based on the determined location of the user. The contextual analysis component 314 may signal a contextual determination ("CD") 315 to the audio output device control logic 320, which in turn may send control commands ("CC") 321 to select audio output devices 120, 122, 124, 126 for purpose of powering those audio output devices up or down based on contextual determinations 315. By way of example, the contextual determinations 315 may include information that locates a particular audio output device in one room or floor and the user in another room or floor of the dwelling.
[0060] Additionally, timing analysis component 312 may generate a timing parameter ("TP") 317 which is indicative of a difference in the arrival times 363 of one or more acoustic reference transmissions 361. The delay control 322 of the audio output device control logic 320 may utilize the timing parameter 317 to generate a delay command ("DC") 323 for one or more of the audio output devices 120, 122, 124, 126. By way of example, when output provided from the acoustic input interface 306 indicates that the user has become proximate to one of the audio output devices 120, 122, 124, 126 and distal to another of the audio output devices 120, 122, 124, 126, the proximate audio output device may be provided the delay command 323. The delay command 323 may serve to slow down or delay the output of the proximate audio output device 120, 122, 124, 126. The delay caused to the proximate audio output device 120, 122, 124, 126 may be based on the detected difference in the arrival times 363 of the acoustic reference transmissions 361 from the distal and proximate audio output devices 120, 122, 124, 126. The delay command 323 may generate a delay that substantially equalizes the arrival times 363 of the proximate and distal audio output devices 120, 122, 124, 126.
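As an illustrative sketch of the delay equalization described above (the device identifiers and millisecond units are assumed for illustration, not taken from the disclosure), each earlier-arriving (proximate) device may be delayed by the difference between its arrival time and the latest arrival time:

```python
def compute_delay_commands(arrival_times_ms):
    """Given the arrival time (in ms) of a common acoustic reference
    transmission from each device, delay the earlier-arriving
    (proximate) devices so that all outputs reach the listener at
    substantially the same time."""
    latest = max(arrival_times_ms.values())
    return {device: latest - t for device, t in arrival_times_ms.items()}

# The proximate device ("A") receives a delay; the distal device ("B") does not.
delays = compute_delay_commands({"A": 10.0, "B": 13.0})
```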
[0061] Still further, the volume control 324 of the audio output device control logic 320 may use the timing parameter 317 to determine an adjustment to the volume of one or more of the audio output devices 120, 122, 124, 126 with the purpose of having the user experience the same volume from all of the audio output devices 120, 122, 124, 126 regardless of the fact that the user may move or otherwise become close to one or more of the audio output devices at the expense of another. In some implementations, the volume control 324 may generate a volume command ("VC") 325 to cause one of (i) a decreasing adjustment to the volume of a proximate audio output device 120, 122, 124, 126 in response to user movement, (ii) an increasing adjustment to the volume of a distal audio output device 120, 122, 124, 126 in response to the user movement, or (iii) a combination of increasing and decreasing the volume of the distal and proximate audio output devices 120, 122, 124, 126, respectively, in response to user movement. The particular volume command 325 that is selected may be based on, for example, a default setting or a user preference. [0062] The audio output interface 340 may communicate one or more of the control command 321, delay command 323, and/or volume command 325 to the connected audio output devices 120, 122, 124, 126 of the network 101. In particular, the delay command 323 and/or volume command 325 may be generated in response to continued polling or checking of user location as determined from the mobile computing device of controller device 300. In this way, the delay commands 323 and/or volume commands 325 may provide a mechanism to calibrate output characteristics of individual audio output devices 120, 122, 124, 126 on the network 101. Among other benefits, the calibration functionality enables the user to experience audio content as equal contributions from multiple audio output devices 120, 122, 124, 126 of the network 101 that are spaced non-equidistantly.
The calibration functionality also enables the user to experience audio content from multiple contributing audio output devices 120, 122, 124, 126 equally even when the user is in motion, or when the user is measurably closer to one audio output device over another. The calibration functionality such as described may also enable the collective audio output to be equalized in contributions from the different audio output devices 120, 122, 124, 126 that are generating output on the network 101 , despite differences existing in the manufacturing, quality, or capability of the individual audio output devices.
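One way to derive the increasing and decreasing volume adjustments described above is the free-field distance law, under which sound pressure level falls roughly 6 dB per doubling of distance. This is a sketch under that assumption, not the claimed implementation; the distance inputs would come from the proximity determinations discussed earlier:

```python
import math

def volume_adjustment_db(device_distance_m, reference_distance_m):
    """Gain adjustment (dB) so a device at device_distance_m is heard
    at about the level of one at reference_distance_m.
    Positive result: boost a distal device; negative: cut a proximate one."""
    return 20.0 * math.log10(device_distance_m / reference_distance_m)
```

For example, a device twice as far away as the reference would be boosted by about 6 dB, while one at half the distance would be cut by about 6 dB.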
[0063] FIG. 4 illustrates a mobile computing device on which various embodiments may be implemented. A mobile computing device 400 of FIG. 4 may be used to implement controller device 110, 300, such as described with an example of FIG. 1 and FIG. 3. The mobile computing device 400 may include a microphone 410, a processor 420, a display 430, a memory 440, and a network interface 450.
[0064] With reference to FIGS. 1-4, the memory 440 may store instructions for implementing various functionality described with, for example, controller device 110, 300. In some variations, the memory 440 stores device control instructions ("Device Control Instruct.") 441, which may be executed by the processor 420 in connection with control and calibration functionality (e.g., as described with an example of FIG. 3). The microphone 410 of the mobile computing device 400 receives the acoustic reference transmissions ("AREFTR") 361 from the individual audio output devices 120, 122, 124, 126. The acoustic reference transmissions 361 may be received as encoded signals 467 ("Enc. Signal"), and may include data that identifies the particular audio output device 120, 122, 124, 126 from which the acoustic reference transmission 361 originated. The processor 420 may execute the device control instructions 441 in order to (i) collect the acoustic reference transmissions 361 from the different audio output devices 120, 122, 124, 126 for a given point in time, and (ii) implement timing analysis component 312 to determine timing parameters 317 reflecting differences in the arrival times 363 of the acoustic reference transmissions 361.
[0065] According to some embodiments, the processor 420 may execute the device control instructions 441 in order to determine calibration commands based at least in part on the determined timing parameters 317. Furthermore, the processor 420 may use the network interface 450 to communicate calibration commands to one or more audio output devices 120, 122, 124, 126 on the network 101 of the mobile computing device 400. The calibration commands may include, for example, delay commands ("DC") 323, which cause specific audio output devices 120, 122, 124, 126 to selectively delay or otherwise adjust timing of their respective outputs in order to calibrate the arrival time of a given segment of audio content to the user. As an addition or variation, the calibration commands may include volume
commands ("VC") 325 which adjust the volume of individual audio output devices 120, 122, 124, 126 up or down based on, for example, a proximity of the user to one audio output device 120, 122, 124, 126 as opposed to another.
[0066] According to some variations, the processor 420 may also execute the device control instructions 441 in order to implement contextual analysis component 314 (as described with an example of FIG. 3) and make contextual determinations 315. From the contextual determinations 315, control commands ("CC") 321 may be communicated to selectively power audio output devices 120, 122, 124, 126 on or off based on the location of the user relative to individual audio output devices. The contextual analysis component 314 may make the contextual determinations 315 based on contextual information, such as, for example, information defining the spacing, leveling, or segmentation (e.g., rooms) of the dwelling of network 101.
[0067] As an addition or alternative, the memory 440 may also store user interface instructions ("UI Instruct.") 443. The processor 420 may execute the user interface instructions 443 in order to generate a user interface ("UI") 431 on the display 430. The user interface 431 may provide the user with prompts and other interfaces to facilitate the user in providing input information 301 about the audio output devices 120, 122, 124, 126 that are in use on the network 101. In particular, the input information 301 received through the user interface 431 may include configuration input ("Config. Input") 433, including (i) the group size information 309 (FIG. 3), which identifies a number of audio output devices 120, 122, 124, 126 on the network 101, (ii) device position information 311, including a location indication for one or more of the audio output devices 120, 122, 124, 126, and/or (iii) a selected or preferred layout. In one example, the mobile computing device 400 determines the channel configurations 453 based at least in part on a configuration input of the user. The configuration input may be determined through user interaction with the user interface 431 provided on the display 430.
[0068] Still further, the memory 440 may include position logic instructions ("Position Logic Instruct.") 445, which, when executed by the processor 420, result in the processor 420 generating channel configurations 453. As described with some other examples, channel configurations 453 may include one or more of the following: (i) an audio output device layout or scheme, and/or (ii) a channel assignment for each audio output device 120, 122, 124, 126 on the network 101, based on the selected device layout. The position logic instructions 445 may determine channel configurations 453 based on additional information, such as input information 301 provided from the user, and/or information known about a particular type or model of one or more of the audio output devices 120, 122, 124, 126. For example, a user may enter information about a specific audio output device using the user interface 431, and the capability known for the particular audio output device may favor use of that device for a particular location or channel assignment.
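A minimal sketch of the channel-assignment step described above might map each identified device to a channel of the selected layout. The layout tables and channel names here are assumed for illustration; a real system would derive them from the selected scheme (e.g., a 5.1 arrangement) and the reported device positions:

```python
# Hypothetical layout tables keyed by layout name.
LAYOUTS = {
    "stereo": ["front-left", "front-right"],
    "5.1": ["front-left", "front-right", "center",
            "surround-left", "surround-right", "low-frequency"],
}

def assign_channels(device_ids, layout):
    """Map each identified device to a channel of the chosen layout."""
    channels = LAYOUTS[layout]
    if len(device_ids) != len(channels):
        raise ValueError("device count does not match selected layout")
    return dict(zip(device_ids, channels))
```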
[0069] FIG. 5 illustrates an audio output device on which various embodiments may be implemented. In particular, an example of FIG. 5 illustrates an audio output device 500 that may also optionally operate as a leader device (e.g., 120), such as described in the example of FIG. 1.
[0070] With reference to FIGS. 1-5, in more detail, the audio output device 500 includes a buffer 508, a processor 510, an audio output component 530, a network interface 540, and a memory 550. In variations, the audio output device 500 includes a digital signal processor (DSP) 512. The memory 550 may store instructions for execution by the processor 510, including interface instructions 551 and/or leader device instructions 553. When operating on the network 101, the processor 510 may execute interface instructions 551 in order to receive an incoming audio stream 505 at the buffer 508 via the network interface 540. In some implementations, (i) at least a portion of the audio stream 505 is directed to the audio output component 530, which generates an audio content output ("ACO") 535, and (ii) at least portions of the audio stream 505 are transmitted to other audio output devices 120, 122, 124, 126. In some embodiments, the DSP 512 processes the audio stream 505 into audio output data 515, which may, for example, structure the audio stream 505 into delineable channeled portions that may be readily filtered at the playback location. The audio output component 530 may receive audio output data 515 from the DSP 512. In variations, the audio output component 530 receives the audio stream 505 from the buffer 508. Still further, the audio output component 530 may receive a channel portion 573 of the audio stream 505, based on the channel assignment as determined by the processor 510. The audio output component 530 may transform the audio output data 515 (or audio stream 505) into sound which is emitted from the audio output device 500 into the physical space of the network 101.
[0071] Additionally, as a leader, the processor 510 of the audio output device 500 may execute leader device instructions 553 in order to (i) determine and communicate channel assignments 555 to other audio output devices 120, 122, 124, 126 on the network 101, (ii) distribute the audio stream 505 (or portions thereof) to the other audio output devices 120, 122, 124, 126, and/or (iii) implement or otherwise communicate calibration actions 557 that affect the generation of audio output on the other audio output devices 120, 122, 124, 126. In variations, the processor 510 may execute the leader device instructions 553 to utilize and distribute the enhanced form of the audio stream 505 from the DSP 512, shown as the audio output data 515.
[0072] The audio output device 500 may also execute the leader device instructions 553 to receive input information 501 from the controller device 110, 300. Among other items, the input information 501 may include group size information ("GS") 509, channel layout information ("CL") 517 (e.g., positioning of the individual audio output devices about a dwelling in accordance with Dolby 5.1/7.1, etc.), and configuration input ("CI") 559. The input information 501 may be received via, for example, user input provided through an interaction with the user interface 310.
[0073] In some implementations, the channel assignments 555 may be determined by the controller device 110, 300 and received by the audio output device 500 through the network interface 540. In some variations, the channel assignments 555 may be determined by channel selection instructions 561 executing on the audio output device 500. The channel selection instructions 561 may utilize input information 501, including (i) group size information 509, corresponding to a number of audio output devices 120, 122, 124, 126, (ii) the channel layout information 517, and (iii) a general configuration of the audio output devices 120, 122, 124, 126, provided as configuration input 559. The channel selection instructions 561 utilize the various inputs in order to determine the channel assignments 555 for individual audio output devices 120, 122, 124, 126. The inputs for the channel selection instructions 561 may be received over the network interface 540 from, for example, the mobile computing device 400 as the controller device 110, 300.
[0074] Some embodiments provide for the audio output device 500 to distribute, as the leader, audio transmission data ("ATD") 525 to other audio output devices 120, 122, 124, 126 using the network interface 540. Depending on implementation, the audio transmission data 525 may correspond to (i) the full audio stream 505, which may be filtered by the other audio output devices 120, 122, 124, 126 which receive the audio stream 505; (ii) the audio output data 515, which structures the full audio stream 505 into pre-determined and delineable channeled portions that may be readily filtered at the playback location; and/or (iii) separated channel portions 573, which may be individually transmitted to specific audio output devices based on the channel assignment of the audio output devices 120, 122, 124, 126.
[0075] In some embodiments, the selection of a leader amongst the audio output devices 120, 122, 124, 126 may be a modal implementation, which may be dynamically implemented by the controller device 110, 300. In alternative modes, the audio output device 120, 122, 124, 126 that is the leader may be replaced by, for example, the source of the audio stream, the access point 102, the mobile computing device 400 acting as the controller device 110, 300 (which may also act as the source of the content), or another one of the audio output devices 120, 122, 124, 126. In other variations, the designation of one audio output device 120, 122, 124, 126 as the leader may be subject to change based on selection logic on the controller device 110, 300. For example, the controller device 110, 300 may execute selection logic to change the leader in response to an event or condition, such as presence of low bandwidth at the originally selected leader device.
[0076] According to some embodiments, the audio stream 505 may be received over the network interface 540, then buffered at buffer 508 and processed. The input audio stream 505 may represent a full stream, without any delineation or segmentation of channels from the greater content. The processor 510 (or DSP 512 if used) may execute filtering logic ("filter") 571 in order to create multiple channel portions 573 of the audio stream 505. Each of the channel portions 573 may correspond to one of the channels of the determined channel configuration. Specifically, the audio stream 505 may be filtered into multiple channel portions 573, with each channel portion 573 being designated for a particular channel that is assigned to one of the audio output devices 120, 122, 124, 126 on the network 101. The channel portions 573 of the audio stream 505 may then be transmitted to the other audio output devices 122, 124, 126 using the network interface 540.
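The filtering of a full stream into per-channel portions can be sketched as de-interleaving. This assumes an interleaved sample representation, which the disclosure does not specify; it is illustrative only:

```python
def split_channel_portions(interleaved_samples, channel_order):
    """De-interleave a full stream into per-channel portions, one for
    each assigned channel, ready to transmit to the assigned device.
    channel_order gives the channel of each interleaved slot."""
    n = len(channel_order)
    return {channel: interleaved_samples[i::n]
            for i, channel in enumerate(channel_order)}
```

For a two-channel assignment, every other sample goes to each device's portion.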
[0077] With regard to the calibration actions, the audio output device 500 may receive calibration commands ("Cal. Comm.") 552 from the mobile computing device 400, and then implement the calibration commands 552 as calibration actions 557. The calibration actions 557 may correspond to or be based on the calibration commands 552. The calibration actions 557 may be implemented directly through distribution of the audio transmission data 525 or through communication with the other audio output devices 120, 122, 124, 126 via the network interface 540. In some variations, the audio output device 500 receives calibration related measurements and data from the mobile computing device 400, such as the timing parameter 317. In variations, the audio output device 500 may also include logic to determine calibration actions 557 that include or correspond to calibration commands 552 (delay, volume, etc.), based on the measurements and data of the mobile computing device (e.g., differences in arrival times for a common audio segment, timing parameters, etc.).
METHODOLOGY
[0078] FIG. 6 illustrates a method 600 for dynamically determining and implementing channel configurations for a network-based audio system, according to various embodiments. FIG. 7 illustrates a method 700 for operating an audio output device as a leader device when distributing audio content to other audio output devices on a network, according to various embodiments. FIG. 8 illustrates a method 800 for calibrating an output of multiple audio output components on a network based on a relative position of a user, according to various embodiments. FIG. 9 illustrates a method 900 for calibrating an audio output device based on a position of a user, in accordance with various embodiments. FIG. 10 illustrates a method 1000 for implementing a user interface to initiate dynamic configuration of a network-based audio system, according to various embodiments. Example methods such as provided by FIG. 6 through FIG. 10 may be performed using components such as described with examples of FIG. 1 through FIG. 5. Accordingly, reference may be made to elements of FIG. 1 through FIG. 5 for the purpose of describing suitable components for performing a step or sub-step being described.
[0079] With reference to FIG. 1, a set of audio output devices 120, 122, 124, 126 for a given network 101 may be identified by a controller device 110, 300 (610). In some
implementations, the audio output devices 120, 122, 124, 126 may be identified by input information from a user. In some implementations, input information 301 may be provided through the user interface 310 of the controller device 110, which may be provided on a mobile computing device 400. In a variation, the audio output devices 120, 122, 124, 126 that are connected on the network 101 may be identified programmatically, using, for example, object tracking and detection technology. For example, the audio output devices 120, 122, 124, 126 of the network 101 may be equipped with a receiver for receiving transmissions of ultrasonic acoustic waves. The controller device 110, 300 may transmit the ultrasonic acoustic waves to the individual audio output devices 120, 122, 124, 126, and the audio output devices 120, 122, 124, 126 may include programming or logic to detect the ultrasonic acoustic waves. The ultrasonic acoustic waves may provide for use of a dimensional parameter based on the received transmission.
[0080] Additional configuration information may also be determined for the identified audio output devices 120, 122, 124, 126, 200, 500 of the network 101 (612). The additional configuration information may include a selected device layout (e.g., 5.1 arrangement, 7.1 arrangement, etc.), as well as a relative location of the individual audio output devices 120, 122, 124, 126, 200, 500 about a physical region of the network 101. For example, a user may specify the approximate location of individual audio output devices 120, 122, 124, 126, 200, 500 using a virtual interface of a generic room, provided through the user interface 310 of the controller device 110, 300.
[0081] Once the audio output devices 120, 122, 124, 126 are identified and other configuration information is determined, the channel configuration for the audio output devices 120, 122, 124, 126 may be determined (620). As described with other examples, the channel configuration may specify channel assignment for identified audio output devices 120, 122, 124, 126. In some examples, the channel configuration may be determined from, for example, the mobile computing device 400 on which the controller device 110, 300 is implemented. In a variation, the channel configuration may be determined from the audio output device 120, 122, 124 or 126 that is selected as the leader by the user and/or controller device 110, 300. Still further, in another variation, the channel configuration may be determined from multiple components, including the controller device 110, 300 or audio output device 120, 122, 124 or 126 that operates as the leader.
[0082] According to some embodiments, when the audio output devices 120, 122, 124, 126 are in use, an event or condition may be detected requiring a dynamic or on-the-fly change to the configuration of the audio output devices (630). In some implementations, the occurrence of the condition or event may correspond to a new audio output device being introduced to the network 101 (632). Alternatively, the condition or event may correspond to one of the existing audio output devices 120, 122, 124, 126 being removed or taken down from the network 101 (634). Still further, there may be a change in a network bandwidth (636), resulting in some audio output devices 120, 122, 124, 126 having their bandwidth changed for better or worse as compared to other audio output devices 120, 122, 124, 126. As another variation, the audio content being played by the various audio output devices 120, 122, 124, 126 may change. For example, the channel configuration may merit change if the audio content shifts from having a relatively normal or low bit count to having a relatively high bit count.
[0083] Still further, the network condition or event may correspond to the user moving about a region where the audio output devices 120, 122, 124, 126 are in use and present (638). As described, some embodiments provide that when the user moves about, the movement of the user is detected, and one or more calibration actions may take place to equalize the experience of audio generated by the audio output devices 120, 122, 124, 126 on the network 101 . As an addition or variation, one response to the user moving in the physical region of the audio output devices 120, 122, 124, 126 may be that the channel configuration is altered to accommodate the movement of the user.
[0084] In response to detecting the event or condition, the controller device 110, 300 and/or audio output device 120, 122, 124 or 126 that is the leader may respond by changing the channel configuration (640). More specifically, in some implementations, the channel configuration may be changed by altering the various channel assignments (642) to accommodate more or fewer audio output devices 120, 122, 124, 126 (in the event that an audio output device is added or subtracted from the network 101). Additionally, the channel configuration may be changed by altering a layout so as to favor the change to, for example, the number of the audio output devices 120, 122, 124, 126 (644). Still further, the change in channel configuration may be responsive to the addition or deletion of a channel assignment (646).
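The reconfiguration step above can be sketched as re-deriving assignments from the current device count. The fallback layouts and channel names here are an assumed policy, not part of the disclosure:

```python
def reconfigure_channels(device_ids):
    """Re-derive channel assignments when a device is added to or
    removed from the network (illustrative fallback policy)."""
    layouts_by_count = {
        1: ["mono"],
        2: ["front-left", "front-right"],
        4: ["front-left", "front-right",
            "surround-left", "surround-right"],
    }
    channels = layouts_by_count.get(len(device_ids))
    if channels is None:
        raise ValueError("no layout defined for this device count")
    return dict(zip(device_ids, channels))
```

For instance, when a network drops from four devices to two, the remaining devices would be reassigned to a stereo pair rather than keeping their surround roles.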
[0085] With reference to FIG. 1, a leader of the audio output devices 120, 122, 124 or 126 is selected (710). The selection of the audio output device 120, 122, 124 or 126 that is the leader may also be dynamic, in that some variations provide that the audio output device that is the leader may be selected and/or changed by the controller device 110, 300. By way of example, the audio output device 120, 122, 124 or 126 that is selected as the leader may change as a result of variations to the bandwidth available to that device (712), particularly as compared to the other audio output devices 120, 122, 124, 126 on the network 101.
[0086] According to some embodiments, some, or all, of the channel configurations may be implemented through the audio output device 120, 122, 124 or 126 that is the leader (720). Still further, the audio output device 120, 122, 124 or 126 that is the leader and/or controller device 110, 300 may combine to implement the various channel configurations for all of the audio output devices 120, 122, 124, 126. The channel configurations may also be determined from the controller device 110, 300 and then communicated to the audio output device 120, 122, 124 or 126 that operates as the leader. As described with other examples, the channel configurations may include channel assignments for each of the audio output devices 120, 122, 124, 126. In some variations, the channel configurations may also include other information, such as a presumed layout for the audio output devices 120, 122, 124, 126.
[0087] In operation, audio content may be received on the audio output device 120, 122, 124 or 126 that is the leader for distribution to other audio output devices 120, 122, 124, 126 of the network 101 (730). While receiving and distributing the audio content, the leader audio output device 120, 122, 124 or 126 may also output a portion of the audio content that is assigned to its own channel (732).
[0088] In some variations, the audio content is received on the audio output device 120, 122, 124, 126 and then sent to the other audio output devices 120, 122, 124, 126 that are on the network 101 in accordance with the determined channel configuration (740). In some implementations, the audio output device 120, 122, 124 or 126 that acts as the leader operates to filter the audio content for individual channels, and then sends the portion of the filtered audio to each of the other audio output devices 120, 122, 124, 126 based on the channel assignment (742). As an addition or variation, the full audio content may be sent from the audio output device 120, 122, 124, 126 to other audio output devices 120, 122, 124, 126 of the network 101. In such an implementation, the audio output devices 120, 122, 124, 126 that receive the full audio content from the leader perform the filtering at the point of output, and further at the time just preceding output (744). Further along these lines, some variations provide for the audio content to be augmented, and more specifically, processed on either the controller device 110, 300 or audio output device 120, 122, 124 or 126 that is the leader for the purpose of generating structure in the audio content (746). The added structure may facilitate the other audio output devices 120, 122, 124, 126 in performing filtering operations on a full audio content.
[0089] As mentioned with respect to the method 600, an event or condition is detected which initiates a change in the channel configuration and/or other selections (e.g., selection of the particular leader device, or modal implementation, etc.) (750). By way of example, the event or condition may correspond to a change in the bandwidth of some or all of the audio output devices 120, 122, 124, 126, a change in the content being outputted (e.g., the bit value of the content), the addition or subtraction of an audio output device from the network 101, and/or movement by the user sufficient to trigger calibration actions.
[0090] In response to a detected event or condition, one or more processes may be triggered to dynamically adjust the channel configurations and other selections made by either the controller device 110, 300 or audio output device 120, 122, 124 or 126 operating as the leader (760). In some implementations, the controller device 110, 300 and/or audio output device 120, 122, 124 or 126 that is the leader may respond by adjusting the channel configurations of the respective audio output devices while the output continues on the network (762). The change in the channel configurations may include (i) changing the channel assignment of a given output device 120, 122, 124, 126, (ii) creating or eliminating a channel assignment based on the addition or subtraction of an audio output device 120, 122, 124, 126 to the network 101, and/or (iii) changing a selected layout for the audio output device 120, 122, 124, 126 based on any one or more of user input, a change in the number of audio output devices 120, 122, 124, 126, or other criteria. The channel configurations may be changed dynamically, so that the change to the channel configurations is relatively seamless and not interruptive to the listening experience of the user. For example, one or more changes may be made to the channel configurations while at least one or more of the audio output devices 120, 122, 124, 126 continue to output audio content.
[0091] Other changes that may be implemented dynamically include the selection of the audio output device 120, 122, 124 or 126 that is to operate as the leader (764). For example, the audio output device 120, 122, 124 or 126 that operates as the leader may implement a mode change so that the other audio output devices 120, 122, 124, 126 receive the audio content from the controller device 110, 300 or source, and not from the leader audio output device. Likewise, another mode change may be made to select a new audio output device 120, 122, 124 or 126 as the leader, based on criteria such as the amount of bandwidth available to the selected audio output device. Thus, for example, the selection of the audio output device 120, 122, 124 or 126 that acts as the leader may be dynamic and made on the fly. Likewise, other selections that may be made dynamically include: (i) the selection of the mode of operation, such as whether any one of the audio output devices 120, 122, 124, 126 may be used as leader after having been leader in the same session, (ii) whether the audio content is filtered or structured (e.g., with or without a leader device), and/or (iii) whether the audio content is to be filtered or augmented for the other audio output devices 120, 122, 124, 126 before transmission.
[0092] With reference to FIGS. 1-8, a location of a user may be tracked within the network environment based on measurements made by a mobile computing device 400 of the user when audio is being outputted by the audio output devices 120, 122, 124, 126 (810). More specifically, a relative proximity of the mobile computing device 400 (which presumably is carried by the user) to one or more audio output devices 120, 122, 124, 126 on the network 101 may be approximated (812). Based on the determined relative position of the user, as indicated by the user's mobile computing device, one or more output characteristics of the audio content may be calibrated to accommodate the presumed relative proximity of the user to the audio output devices 120, 122, 124, 126 of the network 101 (820). As mentioned with other examples, the calibration may include controlling or otherwise adjusting the volume of one or more audio output devices 120, 122, 124, 126 (822). As an addition or variation, the calibration may include adjusting or inserting delays into the output of audio content from one or more audio output devices 120, 122, 124, 126 (824). The insertion of delays may be based on, for example, a proximity determination as between select audio output devices 120, 122, 124, 126 and the user as compared to other devices connected to the same network 101.
[0093] With reference to FIGS. 1-9, each audio output device 120, 122, 124, 126 is triggered to send an acoustic identification signal to the controller device 110, 300 (e.g., mobile computing device 400) (910). The acoustic identification signal may be an audible and encoded transmission that identifies the source of the acoustic transmission (912). In variations, the acoustic identification signal may be an inaudible and encoded transmission that is detectable to resources (e.g., microphone) of the mobile computing device on which the controller device 110, 300 is implemented (914).
[0094] The mobile computing device 400 may perform a comparison of arrival times for the acoustic identification signal transmitted from each audio output device 120, 122, 124, 126 (920). Each acoustic identification signal may include a particular segment of the audio content being played back. For example, each acoustic identification signal may represent one or two frames of the audio content. Each audio output device 120, 122, 124, 126 may transmit an acoustic identification signal for a common portion of the audio content being outputted on that device. The acoustic identification signal may provide a mechanism for the mobile computing device 400 of the user to make measurements that are indicative of a relative position of the mobile computing device to one or more other audio output devices 120, 122, 124, 126.
[0095] In some implementations, the mobile computing device 400 includes software or other programmatic functionality to time stamp the incoming audio signal, extract the encoded identifier, and store the time stamp and identifier of the incoming audio signal for subsequent analysis. Each audio transmission may be encoded to coincide with a particular instance in time in the audio content. For example, a particular audio frame in a song may be selected for encoding by each audio output device 120, 122, 124, 126, and each audio output device 120, 122, 124, 126 may then output its portion of the audio frame when the song is being played. The microphone on the mobile computing device 400 may detect the encoded audio signals from each audio output device 120, 122, 124, 126 and then record the arrival times and the identifier for each signal. Once all the transmissions for a given instant are recorded, a comparison of arrival times may be performed. The comparison may identify variation in the audio output devices' arrival times, with the assumption that sound travels about 1 foot in 1 millisecond. If the arrival times reflect a discrepancy of more than 1 millisecond, then the arrival times indicate that the mobile computing device 400 has moved a correlated amount. More specifically, the comparison of arrival times may indicate a proximity of the mobile computing device 400 of the user (on which the control device 110, 300 is implemented) relative to one or more of the audio output devices 120, 122, 124, 126 that are connected to the network 101.
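The arrival-time comparison of paragraph [0095] can be sketched with the stated rule of thumb that sound travels about 1 foot per millisecond. This is an illustrative example only; the device identifiers and time stamps are hypothetical.

```python
FEET_PER_MILLISECOND = 1.0  # sound travels roughly 1 foot per millisecond

def relative_distances_ft(arrival_times_ms):
    """Convert arrival-time differences into relative distances.

    `arrival_times_ms` maps device id -> time stamp (ms) at which that
    device's encoded identification signal reached the microphone. All
    devices emitted the same encoded audio frame at the same instant,
    so a later arrival means the device is farther from the listener.
    Returns device id -> extra distance (feet) beyond the nearest device.
    """
    earliest = min(arrival_times_ms.values())
    return {dev: (t - earliest) * FEET_PER_MILLISECOND
            for dev, t in arrival_times_ms.items()}

# Hypothetical time stamps recorded for one encoded audio frame.
arrivals = {"speaker_120": 12.0, "speaker_122": 15.5, "speaker_124": 12.0}
distances = relative_distances_ft(arrivals)
# speaker_122 arrived 3.5 ms later, so it is about 3.5 feet farther away.
```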
[0096] An output from one or more of the audio output devices 120, 122, 124, 126 may be controlled in order to calibrate the audio output from all of the audio output devices, as well as to harmonize the user's experience (930). As described, some embodiments provide for the calibration actions to include (i) adjusting the timing for individual audio output devices 120, 122, 124, 126 so that the arrival time of multiple audio output devices is substantially the same, at least from the perspective of the user (932); and (ii) adjusting the volume of an individual audio output device 120, 122, 124, 126 so that the user experiences each of the devices as being equal in volume, regardless of the distance between the user and the particular audio output device 120, 122, 124, 126 (934).
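The volume calibration of step (934) can be sketched under a simple free-field assumption. The patent does not specify a loudness model; the inverse-distance attenuation used here is an illustrative assumption, and the distances are hypothetical.

```python
def gains_for_equal_loudness(distances_m):
    """Attenuate nearer devices so all sound equally loud to the listener.

    Assumes free-field inverse-distance attenuation (sound pressure
    falls off as 1/distance): the farthest device plays at full linear
    gain 1.0, and a device at half that distance is scaled to 0.5 so
    it does not dominate. Returns device id -> linear gain in (0, 1].
    """
    farthest = max(distances_m.values())
    return {dev: d / farthest for dev, d in distances_m.items()}

gains = gains_for_equal_loudness({"near": 1.5, "far": 3.0})
# {'near': 0.5, 'far': 1.0}
```

In practice, room reflections and speaker directivity would complicate this, which is one reason the patent relies on measured acoustic signals rather than distance alone.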
[0097] With reference to FIGS. 1-10, a user interface 310 may be generated on a mobile computing device 400 on which the controller device 110, 300 is implemented, in order to enable the user to provide some or all of the configuration inputs for determining the channel configurations, as well as various other dynamic determinations (e.g., mode of operation, selection of the leader device, etc.).
[0098] According to various embodiments, the audio output devices 120, 122, 124, 126 of the network 101 may be located and linked (1010). As mentioned with other examples, each audio output device 120, 122, 124, 126 may be capable of network communications, such as wireless communication (e.g., peer-to-peer wireless communications such as provided by Wi-Fi Direct). The audio output devices 120, 122, 124, 126 may be linked, regardless of manufacturer or primary purpose. Still further, in variations, the audio output devices 120, 122, 124, 126 may be heterogeneous, in terms of manufacturer, functionality, programmatic resources, and/or primary purpose.
[0099] The user interface 310 may be generated to prompt or otherwise guide the user into providing information about the audio output devices 120, 122, 124, 126 that are connected on the network 101 (1020). For example, a number of audio output devices 120, 122, 124, 126 that are connected to the network 101 may be specified by user input provided through the user interface 310. Furthermore, the user may identify each audio output device 120, 122, 124, 126, and further identify a relative location of each audio output device 120, 122, 124, 126 in the user's dwelling or network space. For example, the user may be provided with the user interface 310 that depicts a general outline of a room (e.g., FIG. 11). The outline may be generic or include user-specified features (e.g., extra wall, rounded walls, etc.). The user may identify specific audio output devices 120, 122, 124, 126 in the user's set, and then further indicate a location in the space or dwelling where the specific audio output devices are positioned.
[0100] Once the number of audio output devices and their respective locations are generally identified, functionality provided by the audio output devices 120, 122, 124, 126 may trigger determination of the channel assignments (1030). As described with other embodiments, in determining channel assignments, the number of audio output devices 120, 122, 124, 126, the location of each audio output device, and the selected layout or configuration may serve as inputs for determining the channel assignments.
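One way to turn those three inputs (device count, positions, selected layout) into channel assignments is to match each channel's canonical position to the nearest unassigned device. This is an illustrative sketch only: the layout table, coordinate convention, and greedy matching below are assumptions, not the disclosed algorithm, and they presume at least as many devices as channels.

```python
import math

# Canonical speaker positions for two hypothetical layouts, expressed
# as (x, y) offsets from the listening position.
LAYOUTS = {
    "stereo": {"left": (-1.0, 1.0), "right": (1.0, 1.0)},
    "quad": {"front_left": (-1.0, 1.0), "front_right": (1.0, 1.0),
             "rear_left": (-1.0, -1.0), "rear_right": (1.0, -1.0)},
}

def assign_channels(device_positions, layout_name):
    """Greedily give each channel the closest still-unassigned device.

    `device_positions` maps device id -> (x, y) position as placed by
    the user in the room representation. Returns device id -> channel.
    """
    unassigned = dict(device_positions)
    assignment = {}
    for channel, (cx, cy) in LAYOUTS[layout_name].items():
        best = min(unassigned,
                   key=lambda d: math.hypot(unassigned[d][0] - cx,
                                            unassigned[d][1] - cy))
        assignment[best] = channel
        del unassigned[best]
    return assignment

positions = {"speaker_120": (-0.9, 1.1), "speaker_122": (1.2, 0.8)}
print(assign_channels(positions, "stereo"))
# {'speaker_120': 'left', 'speaker_122': 'right'}
```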
[0101] Once channel assignments and locations are determined, the calibration may be performed based on the relative location of the user (1040). An initial calibration may, for example, calibrate the arrival time and volume level of the media content output from each audio output device 120, 122, 124, 126 based on an initial location of the user relative to the audio output devices. Subsequently, the user may elect to have calibration performed periodically or repeatedly so as to track the steps of the user in the dwelling or space.
[0102] FIG. 11 illustrates a user interface 1100 for enabling speaker selection and assignment according to various embodiments. The user interface 1100 may be generated from an application or programming component executing on the mobile computing device 400. The user interface 1100 may, for example, include input functionality, including (i) a number select feature 1106 for enabling the user to specify a number of audio output devices 120, 122, 124, 126 that are to be in use, and (ii) a layout selection feature 1109 to enable the user to select a preferred layout. Additionally, the user may be provided with placement functionality 1108 to enable the user to specify the location of individual audio output devices 120, 122, 124, 126 within a room representation 1112. (For example, the room representation 1112 may be a graphic representation of a room.) The user may, for example, click and drag device representations 1111 onto the room representation 1112 to approximate the general location and orientation of the audio output devices 120, 122, 124, 126.
[0103] Once the audio output devices 120, 122, 124, 126 are positioned, the user may select the calibration feature 1120 to initiate a calibration process such as described with the method 1000. The calibration feature 1120 may be triggered once to locate the user relative to the audio output devices 120, 122, 124, 126. The calibration feature 1120 may correct any imprecision or error by the user in specifying the location of individual audio output devices 120, 122, 124, 126. Additionally, the calibration feature may be implemented in a track mode, where the calibration is performed repeatedly to track whether the user moves.

[0104] Although illustrative embodiments have been described in detail herein with reference to the accompanying drawings, variations to specific embodiments and details are encompassed by this disclosure. It is intended that the scope of embodiments described herein be defined by claims and their equivalents. Furthermore, it is contemplated that a particular feature described, either individually or as part of various embodiments, may be combined with other individually described features, or parts of other embodiments. Thus, absence of describing combinations should not preclude the inventor(s) from claiming rights to such combinations.

Claims

What is claimed is:
1. A method for outputting audio content over a network, the method being implemented by one or more processors and comprising:
(a) identifying multiple audio output devices that are connected on the network to form an audio output set for receiving and outputting at least a portion of an audio content originating from a source;
(b) determining a channel configuration for the audio output set, the channel configuration including a channel assignment for each audio output device that is connected on the network to form the audio output set; and
(c) when the audio content is being outputted, responding to an event or condition by changing the channel configuration.
2. The method of claim 1, wherein changing the channel configuration includes changing the channel assignment for one or more audio output devices of the audio output set.
3. The method of claim 1, wherein changing the channel configuration includes identifying any change in the audio output set.
4. The method of claim 3, wherein identifying any change in the audio output set includes detecting an addition of a new audio output device that is connected on the network, and determining a channel assignment for the new audio output device.
5. The method of claim 4, further comprising determining the channel assignment for each audio output device of the audio output set after connection of the new audio output device to the network.
6. The method of claim 4, wherein determining the channel assignment for each audio output device of the audio output set includes identifying a configuration scheme for the audio output set, and determining an alternative configuration scheme upon detecting the new audio output device.
7. The method of claim 3, wherein identifying any change in the audio output set includes detecting one of the audio output devices of the audio output set as being removed or failing on the network.
8. The method of claim 7, further comprising updating the channel configuration in response to the detecting one of the audio output devices of the audio output set as being removed or failing, wherein updating the channel configuration includes changing the channel assignment for one or more of the audio output devices of the audio output set.
9. The method of claim 1, wherein changing the channel configuration includes reassigning multiple audio output devices of the audio output set to a new or different channel.
10. The method of claim 9, wherein responding to the event includes detecting one or more of the audio output devices of the audio output set having less bandwidth than one or more other audio output devices of the audio output set.
11. The method of claim 1, further comprising transmitting, to each of the audio output devices of the audio output set, a portion of the audio content corresponding to the channel assignment for that audio output device.
12. The method of claim 11, wherein transmitting the portion of the audio content corresponding to the channel assignment for that audio output device is performed on one of the audio output devices of the audio output set.
13. The method of claim 12, wherein at least (b) and (c) are performed on one of the audio output devices of the audio output set.
14. The method of claim 13, further comprising selecting the one of the audio output devices of the audio output set as a controller for transmitting the portion of the audio content and for performing at least (b) and (c).
15. The method of claim 14, wherein selecting the one of the audio output devices of the audio output set as the controller includes selecting the audio output device as the controller based on an available bandwidth of that device.
16. The method of claim 14, further comprising detecting a second event or condition, and then re-selecting the one of the audio output devices of the audio output set as the controller based on the detected second event or condition.
17. The method of claim 1, further comprising transmitting the audio content to one or more of the audio output devices of the audio output set and instructing the one or more audio output devices of the audio output set to filter the transmitted audio content for a portion of the audio content that corresponds to the channel assignment for that audio output device.
18. The method of claim 1, further comprising tracking a location of a user when the audio content is being outputted, and wherein the event or condition corresponds to the location of the user changing.
19. A system for outputting audio content over a network, the system comprising:
one or more memory resources;
one or more processors that use instructions stored in the one or more memory resources to:
(a) identify multiple audio output devices that are connected on the network to form an audio output set for receiving and outputting at least a portion of an audio content originating from a source;
(b) determine a channel configuration for the audio output set, the channel configuration including a channel assignment for each audio output device that is connected on the network to form the audio output set; and
(c) when the audio content is being outputted, respond to an event or condition by changing the channel configuration.
20. A non-transitory computer-readable medium that stores instructions, which when executed by one or more processors, cause a computing device of the one or more processors to perform operations that comprise:
(a) identifying multiple audio output devices that are connected on a network to form an audio output set for receiving and outputting at least a portion of an audio content originating from a source;
(b) determining a channel configuration for the audio output set, the channel configuration including a channel assignment for each audio output device that is connected on the network to form the audio output set; and
(c) when the audio content is being outputted, responding to an event or condition by changing the channel configuration.