EP3485655B1 - Spectral correction using spatial calibration - Google Patents

Spectral correction using spatial calibration

Info

Publication number
EP3485655B1
Authority
EP
European Patent Office
Prior art keywords
playback
sound
calibration
audio
configuration
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
EP17754501.9A
Other languages
German (de)
English (en)
Other versions
EP3485655A1 (fr)
Inventor
Timothy Sheen
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Sonos Inc
Original Assignee
Sonos Inc
Priority claimed from US application 15/211,822 (US9794710B1)
Priority claimed from US application 15/211,835 (US9860670B1)
Application filed by Sonos Inc filed Critical Sonos Inc
Priority to EP23212793.6A (published as EP4325895A3)
Publication of EP3485655A1
Application granted
Publication of EP3485655B1

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04S: STEREOPHONIC SYSTEMS
    • H04S 7/00: Indicating arrangements; Control arrangements, e.g. balance control
    • H04S 7/30: Control circuits for electronic adaptation of the sound field
    • H04S 7/301: Automatic calibration of stereophonic sound system, e.g. with test microphone
    • H04R: LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R 27/00: Public address systems
    • H04R 2227/00: Details of public address [PA] systems covered by H04R27/00 but not provided for in any of its subgroups
    • H04R 2227/005: Audio distribution systems for home, i.e. multi-room use
    • H04R 2227/007: Electronic adaptation of audio signals to reverberation of the listening space for PA
    • H04R 29/00: Monitoring arrangements; Testing arrangements
    • H04R 29/007: Monitoring arrangements; Testing arrangements for public address systems

Definitions

  • the disclosure is related to consumer goods and, more particularly, to methods, systems, products, features, services, and other elements directed to media playback or some aspect thereof.
  • the Sonos Wireless HiFi System enables people to experience music from many sources via one or more networked playback devices. Through a software control application installed on a smartphone, tablet, or computer, one can play what he or she wants in any room that has a networked playback device. Additionally, using the controller, for example, different songs can be streamed to each room with a playback device, rooms can be grouped together for synchronous playback, or the same song can be heard in all rooms synchronously.
  • US 2016/011850 A1 relates to a method of detecting a trigger condition that initiates calibration of a media playback system capable of multi-channel audio playback, emitting periodic calibration audio from playback devices, detecting the emitted calibration sound via a microphone, analyzing the calibration audio, and calibrating the media playback system accordingly.
  • WO 2011/139502 A1 relates to a calibration process for a surround sound system, triggered by a user via a user interface, to determine filter settings for room equalisation.
  • US 2014/003635 A1 relates to the concept of microphone array beamforming in the context of speaker calibration of a surround sound system.
  • Embodiments described herein involve, inter alia, techniques to facilitate calibration of a media playback system.
  • Calibration procedures contemplated herein involve a recording device (e.g., a networked microphone device (NMD)) detecting sound waves (e.g., one or more calibration sounds) that were emitted by one or more playback devices of a media playback system.
  • a processing device such as a recording device, a playback device or another device that is communicatively coupled to the media playback system, analyzes the detected sound waves to determine one or more calibrations for the one or more playback devices of the media playback system. When applied, such calibrations configure the one or more playback devices to a given listening area (i.e., the environment in which the playback device(s) were positioned while emitting the sound waves).
  • the processing device determines a spatial calibration that configures the one or more playback devices to a given listening area spatially. Such a calibration configures the one or more playback devices to one or more particular locations within the environment (e.g., one or more preferred listening positions, such as a favorite seating location), perhaps by adjusting time-delay and/or loudness for those particular locations.
  • a spatial calibration includes one or more filters that may include delay and/or phase adjustment, gain adjustment, and/or any other adjustment to correct for the spatial placement of the playback device(s) relative to the one or more particular locations within the environment.
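To make the idea concrete, the following minimal sketch shows the kind of per-driver delay and gain adjustment such a spatial filter might apply to compensate for a driver's placement relative to a preferred listening position. The sample rate, the speed of sound, and the distance-based gain rule are all assumptions for illustration; this is not the patented implementation.

```python
import numpy as np

SPEED_OF_SOUND = 343.0  # m/s, typical room temperature
SAMPLE_RATE = 48000     # Hz (assumed; the patent does not specify one)

def spatial_filter(signal, driver_distance_m, reference_distance_m):
    """Delay and gain-match one driver so its output aligns, in time and
    level, with a driver at the reference distance (illustrative only)."""
    # Delay closer drivers so all wavefronts arrive at the listener together.
    extra_time = (reference_distance_m - driver_distance_m) / SPEED_OF_SOUND
    delay_samples = max(0, int(round(extra_time * SAMPLE_RATE)))
    delayed = np.concatenate([np.zeros(delay_samples), signal])
    # Attenuate closer drivers (free-field 1/r level assumption).
    gain = min(1.0, driver_distance_m / reference_distance_m)
    return gain * delayed

# Example: align a driver 2.0 m from the listener to one 2.5 m away.
aligned = spatial_filter(np.random.randn(4800), 2.0, 2.5)
```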
  • the media playback systems include multiple audio drivers, which are divided among the playback device(s) of a media playback system in various arrangements.
  • an example media playback system may include a soundbar-type playback device with multiple audio drivers (e.g., nine audio drivers).
  • Another playback device might include multiple audio drivers of different types (e.g., tweeters and woofers, perhaps of varying size).
  • Other example playback devices may include a single audio driver (e.g., a single full-range woofer in a playback device, or a large low-frequency woofer in a subwoofer-type device).
  • multiple audio drivers of a media playback system form one or more "sound axes."
  • Each such "sound axis" corresponds to a respective input channel of audio content.
  • two or more audio drivers are arrayed to form a sound axis.
  • a sound-bar type device might include nine audio drivers which form multiple sound axes (e.g., front, left, and right surround sound channels). An audio driver may contribute to any number of sound axes.
  • a left axis of a surround sound system may be formed by contributions from all nine audio drivers in the example sound-bar type device.
  • an axis may be formed by a single audio driver.
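One way to picture how several drivers jointly form multiple sound axes is as a mixing matrix that maps input channels to per-driver feeds. The sketch below is a hedged illustration: the 9x3 weight matrix is invented for this example and is not taken from the patent.

```python
import numpy as np

# Rows: nine drivers of a soundbar-type device; columns: input channels
# (left, center, right). Each column is one "sound axis": the set of
# driver contributions that renders that channel. Weights are made up.
axis_weights = np.array([
    [0.9, 0.1, 0.0],
    [0.7, 0.3, 0.0],
    [0.4, 0.6, 0.1],
    [0.2, 0.8, 0.2],
    [0.1, 0.9, 0.1],
    [0.2, 0.8, 0.2],
    [0.1, 0.6, 0.4],
    [0.0, 0.3, 0.7],
    [0.0, 0.1, 0.9],
])

def drive_signals(channels):
    """Map multi-channel input (3 x n samples) to per-driver feeds (9 x n)."""
    return axis_weights @ channels

channels = np.random.randn(3, 480)  # L, C, R input channels
feeds = drive_signals(channels)
print(feeds.shape)  # (9, 480): any driver may contribute to any axis
```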
  • Example media playback systems described herein adopt various playback configurations representing respective sets of sound axes.
  • Example playback configurations may include respective configurations based on the number of input channels (e.g., mono, stereo, surround, or any of the above in combination with a subwoofer).
  • Other example playback configurations may be based on the content type. For instance, a first set of axes may be formed by audio drivers of a media playback system when playing music and a second set of axes formed by the audio drivers when playing audio that is paired with video (e.g., television audio).
  • Other playback configurations may be invoked by various groupings of playback devices within the media playback system. Many examples are possible.
  • the multiple audio drivers of the media playback system form the one or more sound axes, such that each sound axis outputs sound during the calibration procedure.
  • calibration audio emitted by multiple audio drivers is divided into constituent frames.
  • Each frame is in turn divided into slots.
  • during each slot, a respective sound axis is formed by outputting audio.
  • an NMD that is recording the audio output of the audio drivers can obtain samples from each sound axis.
  • the frames may repeat, so as to produce multiple samples for each sound axis when recorded by the NMD.
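A minimal sketch of this frame-and-slot structure follows; the slot duration, sample rate, and axis names are assumptions for illustration. Because only one axis emits during its slot, a recording of the output can later be sliced back into per-axis samples.

```python
import numpy as np

SAMPLE_RATE = 48000                 # Hz (assumed)
SLOT_SECONDS = 0.375                # per-slot duration (assumed)
AXES = ("front", "left", "right")   # example sound axes

def build_frame(slot_audio):
    """One frame: each sound axis emits during its own slot and is
    otherwise silent, so slots can be attributed to axes on analysis."""
    slot_len = len(slot_audio)
    frame = np.zeros((len(AXES), slot_len * len(AXES)))
    for i in range(len(AXES)):
        frame[i, i * slot_len:(i + 1) * slot_len] = slot_audio
    return frame

slot = np.random.randn(int(SLOT_SECONDS * SAMPLE_RATE))  # placeholder sweep
frame = build_frame(slot)
# Repeating the frame yields multiple samples per axis for the NMD.
calibration_audio = np.tile(frame, (1, 4))
```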
  • a spectral calibration may configure the playback device(s) of a media playback system across a given listening area spectrally. Such a calibration may help offset acoustic characteristics of the environment generally instead of being relatively more directed to particular listening locations like the spatial calibrations.
  • a spectral calibration may include one or more filters that adjust the frequency response of the playback devices. In operation, one of the two or more calibrations may be applied to playback by the one or more playback devices, perhaps for different use cases. Example use cases might include music playback or surround sound (i.e., home theater), among others.
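As a loose illustration of what a spectral filter does, the sketch below derives a linear-phase FIR whose magnitude is the regularized inverse of a measured room magnitude response. This is a generic textbook approach assumed for illustration; the patent does not specify this particular construction.

```python
import numpy as np

def correction_fir(measured_mag, n_taps=512, max_boost_db=6.0):
    """Build an FIR that flattens a measured magnitude response.

    measured_mag: magnitude response sampled on n_taps//2 + 1 linear bins.
    """
    inverse = 1.0 / np.maximum(measured_mag, 1e-6)
    # Cap the boost so the filter does not over-drive deep nulls.
    inverse = np.minimum(inverse, 10 ** (max_boost_db / 20.0))
    # Zero-phase inverse magnitude -> symmetric (linear-phase) taps.
    taps = np.fft.irfft(inverse, n=n_taps)
    return np.roll(taps, n_taps // 2)

# Example: correct a room whose response dips 6 dB in a mid band.
room = np.ones(257)
room[100:140] = 0.5
fir = correction_fir(room)
```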
  • a media playback system may perform a first calibration to determine a spatial calibration for playback device(s) of the media playback system. The media playback system then applies the spatial calibration while the playback devices are emitting audio during a second calibration to determine a spectral calibration. Such a calibration procedure may yield a calibration that includes both spatial and spectral correction.
  • Example techniques may involve performing aspects of a spatial calibration.
  • a first implementation may include detecting a trigger condition that initiates calibration of a media playback system including multiple audio drivers that form multiple sound axes, each sound axis corresponding to a respective channel of multi-channel audio content.
  • the first implementation may also include causing the multiple audio drivers to emit calibration audio that is divided into constituent frames, the multiple sound axes emitting calibration audio during respective slots of each constituent frame.
  • the first implementation may further include recording, via a microphone, the emitted calibration audio.
  • a second implementation may include receiving data representing one or more spatial filters that correspond to respective playback configurations. Each playback configuration represents a particular set of sound axes formed via one or more audio drivers and each sound axis corresponds to a respective input channel of audio content.
  • the second implementation also involves causing the one or more audio drivers to output calibration audio that is divided into a repeating set of frames, the set of frames including a respective frame for each playback configuration. Causing the one or more audio drivers to output the calibration audio involves causing an audio stage to apply, during each frame, the spatial filter corresponding to the respective playback configuration.
  • the second implementation may also include receiving data representing one or more spectral filters that correspond to respective playback configurations, the one or more spectral filters based on the calibration audio output by the one or more audio drivers. When playing back audio content in a given playback configuration, the audio stage may apply a particular spectral filter corresponding to the given playback configuration.
  • a third implementation includes detecting a trigger condition that initiates calibration of a media playback system for multiple playback configurations.
  • Each playback configuration represents a particular set of sound axes formed via multiple audio drivers of the media playback system and each sound axis corresponds to a respective channel of audio content.
  • the implementation also involves causing the multiple audio drivers to output calibration audio that is divided into a repeating set of frames, the set of frames including a respective frame for each playback configuration.
  • Causing the multiple audio drivers to output the calibration audio involves causing, during each frame of the set of frames, a respective set of spatial filters to be applied to the multiple audio drivers, each set of spatial filters including a respective spatial filter for each sound axis.
  • the implementation further involves recording, via the microphone, the calibration audio output by the multiple audio drivers and causing a processing device to determine respective sets of spectral filters for the multiple playback configurations based on the recorded calibration audio, each set of spectral filters including a respective spectral filter for each sound axis.
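Read together, the loop of this implementation might be sketched as below. The function names, the stand-in recorder, and the invert-the-average-spectrum step are all assumptions for illustration; the patent describes the frame structure and the per-configuration spatial filters, not this exact math.

```python
import numpy as np

def calibrate_spectrally(configs, spatial_filters, emit_and_record, reps=4):
    """Emit a repeating set of frames (one frame per playback configuration,
    with that configuration's spatial filters applied), record each frame,
    then derive a spectral filter per configuration (illustrative)."""
    recordings = {cfg: [] for cfg in configs}
    for _ in range(reps):
        for cfg in configs:  # the repeating set of frames
            recordings[cfg].append(emit_and_record(cfg, spatial_filters[cfg]))
    # One crude spectral filter per configuration: invert the mean spectrum.
    return {cfg: 1.0 / np.maximum(
                np.abs(np.fft.rfft(np.stack(frames), axis=-1)).mean(axis=0),
                1e-6)
            for cfg, frames in recordings.items()}

# Hypothetical stand-in for the playback-and-record hardware:
fake_emit = lambda cfg, filters: np.random.randn(1024)
spatial = {"stereo": None, "home theater": None}
spectral = calibrate_spectrally(list(spatial), spatial, fake_emit)
```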
  • the above implementation may be embodied as a method, a device configured to carry out the implementation, a system of devices configured to carry out the implementation, or a non-transitory computer-readable medium containing instructions that are executable by one or more processors to carry out the implementation, among other examples. It will be understood by one of ordinary skill in the art that this disclosure includes numerous other embodiments, including combinations of the example features described herein. Further, any example operation described as being performed by a given device to illustrate a technique may be performed by any suitable devices, including the devices described herein. Yet further, any device may cause another device to perform any of the operations described herein.
  • Figure 1 illustrates an example configuration of a media playback system 100 in which one or more embodiments disclosed herein may be practiced or implemented.
  • the media playback system 100 as shown is associated with an example home environment having several rooms and spaces, such as for example, a master bedroom, an office, a dining room, and a living room.
  • the media playback system 100 includes playback devices 102-124, control devices 126 and 128, and a wired or wireless network router 130.
  • Figure 2 shows a functional block diagram of an example playback device 200 that may be configured to be one or more of the playback devices 102-124 of the media playback system 100 of Figure 1.
  • the playback device 200 may include a processor 202, software components 204, memory 206, audio processing components 208, audio amplifier(s) 210, speaker(s) 212, and a network interface 214 including wireless interface(s) 216 and wired interface(s) 218.
  • the playback device 200 may not include the speaker(s) 212, but rather a speaker interface for connecting the playback device 200 to external speakers.
  • the playback device 200 may include neither the speaker(s) 212 nor the audio amplifier(s) 210, but rather an audio interface for connecting the playback device 200 to an external audio amplifier or audio-visual receiver.
  • the processor 202 may be a clock-driven computing component configured to process input data according to instructions stored in the memory 206.
  • the memory 206 may be a tangible computer-readable medium configured to store instructions executable by the processor 202.
  • the memory 206 may be data storage that can be loaded with one or more of the software components 204 executable by the processor 202 to achieve certain functions.
  • the functions may involve the playback device 200 retrieving audio data from an audio source or another playback device.
  • the functions may involve the playback device 200 sending audio data to another device or playback device on a network.
  • the functions may involve pairing of the playback device 200 with one or more playback devices to create a multi-channel audio environment.
  • Certain functions may involve the playback device 200 synchronizing playback of audio content with one or more other playback devices.
  • a listener will preferably not be able to perceive time-delay differences between playback of the audio content by the playback device 200 and the one or more other playback devices.
  • U.S. Patent No. 8,234,395 entitled, "System and method for synchronizing operations among a plurality of independently clocked digital data processing devices," provides in more detail some examples for audio playback synchronization among playback devices.
  • the memory 206 may further be configured to store data associated with the playback device 200, such as one or more zones and/or zone groups the playback device 200 is a part of, audio sources accessible by the playback device 200, or a playback queue that the playback device 200 (or some other playback device) may be associated with.
  • the data may be stored as one or more state variables that are periodically updated and used to describe the state of the playback device 200.
  • the memory 206 may also include the data associated with the state of the other devices of the media system, and shared from time to time among the devices so that one or more of the devices have the most recent data associated with the system. Other embodiments are also possible.
  • the audio processing components 208 may include one or more digital-to-analog converters (DAC), an audio preprocessing component, an audio enhancement component or a digital signal processor (DSP), and so on. In one embodiment, one or more of the audio processing components 208 may be a subcomponent of the processor 202. In one example, audio content may be processed and/or intentionally altered by the audio processing components 208 to produce audio signals. The produced audio signals may then be provided to the audio amplifier(s) 210 for amplification and playback through speaker(s) 212. Particularly, the audio amplifier(s) 210 may include devices configured to amplify audio signals to a level for driving one or more of the speakers 212.
  • the speaker(s) 212 may include an individual transducer (e.g., a "driver") or a complete speaker system involving an enclosure with one or more drivers.
  • a particular driver of the speaker(s) 212 may include, for example, a subwoofer (e.g., for low frequencies), a mid-range driver (e.g., for middle frequencies), and/or a tweeter (e.g., for high frequencies).
  • each transducer in the one or more speakers 212 may be driven by an individual corresponding audio amplifier of the audio amplifier(s) 210.
  • the audio processing components 208 may be configured to process audio content to be sent to one or more other playback devices for playback.
  • Audio content to be processed and/or played back by the playback device 200 may be received from an external source, such as via an audio line-in input connection (e.g., an auto-detecting 3.5mm audio line-in connection) or the network interface 214.
  • the network interface 214 may be configured to facilitate a data flow between the playback device 200 and one or more other devices on a data network.
  • the playback device 200 may be configured to receive audio content over the data network from one or more other playback devices in communication with the playback device 200, network devices within a local area network, or audio content sources over a wide area network such as the Internet.
  • the audio content and other signals transmitted and received by the playback device 200 may be transmitted in the form of digital packet data containing an Internet Protocol (IP)-based source address and IP-based destination addresses.
  • the network interface 214 may be configured to parse the digital packet data such that the data destined for the playback device 200 is properly received and processed by the playback device 200.
  • the network interface 214 may include wireless interface(s) 216 and wired interface(s) 218.
  • the wireless interface(s) 216 may provide network interface functions for the playback device 200 to wirelessly communicate with other devices (e.g., other playback device(s), speaker(s), receiver(s), network device(s), control device(s) within a data network the playback device 200 is associated with) in accordance with a communication protocol (e.g., any wireless standard including IEEE 802.11a, 802.11b, 802.11g, 802.11n, 802.11ac, 802.15, 4G mobile communication standard, and so on).
  • the wired interface(s) 218 may provide network interface functions for the playback device 200 to communicate over a wired connection with other devices in accordance with a communication protocol (e.g., IEEE 802.3). While the network interface 214 shown in Figure 2 includes both wireless interface(s) 216 and wired interface(s) 218, the network interface 214 may in some embodiments include only wireless interface(s) or only wired interface(s).
  • the playback device 200 and one other playback device may be paired to play two separate audio components of audio content.
  • playback device 200 may be configured to play a left channel audio component, while the other playback device may be configured to play a right channel audio component, thereby producing or enhancing a stereo effect of the audio content.
  • the paired playback devices (also referred to as "bonded playback devices") may further play audio content in synchrony with other playback devices.
  • the playback device 200 may be sonically consolidated with one or more other playback devices to form a single, consolidated playback device.
  • a consolidated playback device may be configured to process and reproduce sound differently than an unconsolidated playback device or playback devices that are paired, because a consolidated playback device may have additional speaker drivers through which audio content may be rendered. For instance, if the playback device 200 is a playback device designed to render low frequency range audio content (i.e., a subwoofer), the playback device 200 may be consolidated with a playback device designed to render full frequency range audio content.
  • the full frequency range playback device when consolidated with the low frequency playback device 200, may be configured to render only the mid and high frequency components of audio content, while the low frequency range playback device 200 renders the low frequency component of the audio content.
  • the consolidated playback device may further be paired with a single playback device or yet another consolidated playback device.
  • a playback device is not limited to the example illustrated in Figure 2 or to the SONOS product offerings.
  • a playback device may include a wired or wireless headphone.
  • a playback device may include or interact with a docking station for personal mobile media playback devices.
  • a playback device may be integral to another device or component such as a television, a lighting fixture, or some other device for indoor or outdoor use.
  • the environment may have one or more playback zones, each with one or more playback devices.
  • the media playback system 100 may be established with one or more playback zones, after which one or more zones may be added or removed to arrive at the example configuration shown in Figure 1.
  • Each zone may be given a name according to a different room or space such as an office, bathroom, master bedroom, bedroom, kitchen, dining room, living room, and/or balcony.
  • a single playback zone may include multiple rooms or spaces.
  • a single room or space may include multiple playback zones.
  • the balcony, dining room, kitchen, bathroom, office, and bedroom zones each have one playback device, while the living room and master bedroom zones each have multiple playback devices.
  • playback devices 104, 106, 108, and 110 may be configured to play audio content in synchrony as individual playback devices, as one or more bonded playback devices, as one or more consolidated playback devices, or any combination thereof.
  • playback devices 122 and 124 may be configured to play audio content in synchrony as individual playback devices, as a bonded playback device, or as a consolidated playback device.
  • one or more playback zones in the environment of Figure 1 may each be playing different audio content.
  • the user may be grilling in the balcony zone and listening to hip hop music being played by the playback device 102 while another user may be preparing food in the kitchen zone and listening to classical music being played by the playback device 114.
  • a playback zone may play the same audio content in synchrony with another playback zone.
  • the user may be in the office zone where the playback device 118 is playing the same rock music that is being played by playback device 102 in the balcony zone.
  • playback devices 102 and 118 may be playing the rock music in synchrony such that the user may seamlessly (or at least substantially seamlessly) enjoy the audio content that is being played out-loud while moving between different playback zones. Synchronization among playback zones may be achieved in a manner similar to that of synchronization among playback devices, as described in previously referenced U.S. Patent No. 8,234,395 .
  • the zone configurations of the media playback system 100 may be dynamically modified, and in some embodiments, the media playback system 100 supports numerous configurations. For instance, if a user physically moves one or more playback devices to or from a zone, the media playback system 100 may be reconfigured to accommodate the change(s). For instance, if the user physically moves the playback device 102 from the balcony zone to the office zone, the office zone may now include both the playback device 118 and the playback device 102. The playback device 102 may be paired or grouped with the office zone and/or renamed if so desired via a control device such as the control devices 126 and 128. On the other hand, if the one or more playback devices are moved to a particular area in the home environment that is not already a playback zone, a new playback zone may be created for the particular area.
  • different playback zones of the media playback system 100 may be dynamically combined into zone groups or split up into individual playback zones.
  • the dining room zone and the kitchen zone 114 may be combined into a zone group for a dinner party such that playback devices 112 and 114 may render audio content in synchrony.
  • the living room zone may be split into a television zone including playback device 104, and a listening zone including playback devices 106, 108, and 110, if the user wishes to listen to music in the living room space while another user wishes to watch television.
  • Figure 3 shows a functional block diagram of an example control device 300 that may be configured to be one or both of the control devices 126 and 128 of the media playback system 100.
  • Control device 300 may also be referred to as a controller 300.
  • the control device 300 may include a processor 302, memory 304, a network interface 306, and a user interface 308.
  • the control device 300 may be a dedicated controller for the media playback system 100.
  • the control device 300 may be a network device on which media playback system controller application software may be installed, such as for example, an iPhone™, iPad™ or any other smart phone, tablet or network device (e.g., a networked computer such as a PC or Mac™).
  • the processor 302 may be configured to perform functions relevant to facilitating user access, control, and configuration of the media playback system 100.
  • the memory 304 may be configured to store instructions executable by the processor 302 to perform those functions.
  • the memory 304 may also be configured to store the media playback system controller application software and other data associated with the media playback system 100 and the user.
  • the network interface 306 may be based on an industry standard (e.g., infrared, radio, wired standards including IEEE 802.3, wireless standards including IEEE 802.11a, 802.11b, 802.11g, 802.11n, 802.11ac, 802.15, 4G mobile communication standard, and so on).
  • the network interface 306 may provide a means for the control device 300 to communicate with other devices in the media playback system 100.
  • data and information (e.g., a state variable) may be communicated between the control device 300 and other devices via the network interface 306.
  • playback zone and zone group configurations in the media playback system 100 may be received by the control device 300 from a playback device or another network device, or transmitted by the control device 300 to another playback device or network device via the network interface 306.
  • the other network device may be another control device.
  • Playback device control commands such as volume control and audio playback control may also be communicated from the control device 300 to a playback device via the network interface 306.
  • changes to configurations of the media playback system 100 may also be performed by a user using the control device 300.
  • the configuration changes may include adding/removing one or more playback devices to/from a zone, adding/removing one or more zones to/from a zone group, forming a bonded or consolidated player, separating one or more playback devices from a bonded or consolidated player, among others.
  • the control device 300 may sometimes be referred to as a controller, whether the control device 300 is a dedicated controller or a network device on which media playback system controller application software is installed.
  • the user interface 308 of the control device 300 may be configured to facilitate user access and control of the media playback system 100, by providing a controller interface such as the controller interface 400 shown in Figure 4 .
  • the controller interface 400 includes a playback control region 410, a playback zone region 420, a playback status region 430, a playback queue region 440, and an audio content sources region 450.
  • the user interface 400 as shown is just one example of a user interface that may be provided on a network device such as the control device 300 of Figure 3 (and/or the control devices 126 and 128 of Figure 1 ) and accessed by users to control a media playback system such as the media playback system 100.
  • Other user interfaces of varying formats, styles, and interactive sequences may alternatively be implemented on one or more network devices to provide comparable control access to a media playback system.
  • the playback control region 410 may include selectable (e.g., by way of touch or by using a cursor) icons to cause playback devices in a selected playback zone or zone group to play or pause, fast forward, rewind, skip to next, skip to previous, enter/exit shuffle mode, enter/exit repeat mode, or enter/exit cross fade mode.
  • the playback control region 410 may also include selectable icons to modify equalization settings, and playback volume, among other possibilities.
  • the playback zone region 420 may include representations of playback zones within the media playback system 100.
  • the graphical representations of playback zones may be selectable to bring up additional selectable icons to manage or configure the playback zones in the media playback system, such as a creation of bonded zones, creation of zone groups, separation of zone groups, and renaming of zone groups, among other possibilities.
  • a "group” icon may be provided within each of the graphical representations of playback zones.
  • the "group” icon provided within a graphical representation of a particular zone may be selectable to bring up options to select one or more other zones in the media playback system to be grouped with the particular zone.
  • playback devices in the zones that have been grouped with the particular zone will be configured to play audio content in synchrony with the playback device(s) in the particular zone.
  • a "group” icon may be provided within a graphical representation of a zone group. In this case, the "group” icon may be selectable to bring up options to deselect one or more zones in the zone group to be removed from the zone group.
  • Other interactions and implementations for grouping and ungrouping zones via a user interface such as the user interface 400 are also possible.
  • the representations of playback zones in the playback zone region 420 may be dynamically updated as playback zone or zone group configurations are modified.
  • the playback status region 430 may include graphical representations of audio content that is presently being played, previously played, or scheduled to play next in the selected playback zone or zone group.
  • the selected playback zone or zone group may be visually distinguished on the user interface, such as within the playback zone region 420 and/or the playback status region 430.
  • the graphical representations may include track title, artist name, album name, album year, track length, and other relevant information that may be useful for the user to know when controlling the media playback system via the user interface 400.
  • the playback queue region 440 may include graphical representations of audio content in a playback queue associated with the selected playback zone or zone group.
  • each playback zone or zone group may be associated with a playback queue containing information corresponding to zero or more audio items for playback by the playback zone or zone group.
  • each audio item in the playback queue may comprise a uniform resource identifier (URI), a uniform resource locator (URL) or some other identifier that may be used by a playback device in the playback zone or zone group to find and/or retrieve the audio item from a local audio content source or a networked audio content source, possibly for playback by the playback device.
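A minimal sketch of such a queue entry follows; the field names and URI schemes are illustrative assumptions, not Sonos specifics.

```python
from dataclasses import dataclass

@dataclass
class QueueItem:
    """One audio item in a playback queue. The identifier tells a playback
    device where to find and retrieve the audio (illustrative fields)."""
    title: str
    uri: str

queue = [
    QueueItem("Track A", "http://stream.example.com/track-a.mp3"),  # networked source
    QueueItem("Track B", "file:///nas/music/track-b.flac"),         # local source
]
```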
  • a playlist may be added to a playback queue, in which case information corresponding to each audio item in the playlist may be added to the playback queue.
  • audio items in a playback queue may be saved as a playlist.
  • a playback queue may be empty, or populated but "not in use" when the playback zone or zone group is playing continuously streaming audio content, such as Internet radio that may continue to play until otherwise stopped, rather than discrete audio items that have playback durations.
  • a playback queue can include Internet radio and/or other streaming audio content items and be "in use" when the playback zone or zone group is playing those items. Other examples are also possible.
  • playback queues associated with the affected playback zones or zone groups may be cleared or re-associated. For example, if a first playback zone including a first playback queue is grouped with a second playback zone including a second playback queue, the established zone group may have an associated playback queue that is initially empty, that contains audio items from the first playback queue (such as if the second playback zone was added to the first playback zone), that contains audio items from the second playback queue (such as if the first playback zone was added to the second playback zone), or a combination of audio items from both the first and second playback queues.
  • the resulting first playback zone may be re-associated with the previous first playback queue, or be associated with a new playback queue that is empty or contains audio items from the playback queue associated with the established zone group before the established zone group was ungrouped.
  • the resulting second playback zone may be re-associated with the previous second playback queue, or be associated with a new playback queue that is empty, or contains audio items from the playback queue associated with the established zone group before the established zone group was ungrouped.
  • Other examples are also possible.
  • the graphical representations of audio content in the playback queue region 440 may include track titles, artist names, track lengths, and other relevant information associated with the audio content in the playback queue.
  • graphical representations of audio content may be selectable to bring up additional selectable icons to manage and/or manipulate the playback queue and/or audio content represented in the playback queue. For instance, a represented audio content may be removed from the playback queue, moved to a different position within the playback queue, or selected to be played immediately, or after any currently playing audio content, among other possibilities.
  • a playback queue associated with a playback zone or zone group may be stored in a memory on one or more playback devices in the playback zone or zone group, on a playback device that is not in the playback zone or zone group, and/or some other designated device. Playback of such a playback queue may involve one or more playback devices playing back media items of the queue, perhaps in sequential or random order.
  • the audio content sources region 450 may include graphical representations of selectable audio content sources from which audio content may be retrieved and played by the selected playback zone or zone group. Discussions pertaining to audio content sources may be found in the following section.
  • Figure 5 depicts a smartphone 500 that includes one or more processors, a tangible computer-readable memory, a network interface, and a display.
  • Smartphone 500 might be an example implementation of control device 126 or 128 of Figure 1 , or control device 300 of Figure 3 , or other control devices described herein.
  • described below are smartphone 500 and certain control interfaces, prompts, and other graphical elements that smartphone 500 may display when operating as a control device of a media playback system (e.g., of media playback system 100).
  • such interfaces and elements may be displayed by any suitable control device, such as a smartphone, tablet computer, laptop or desktop computer, personal media player, or a remote control device.
  • smartphone 500 may display one or more controller interfaces, such as controller interface 400. Similar to playback control region 410, playback zone region 420, playback status region 430, playback queue region 440, and/or audio content sources region 450 of Figure 4, smartphone 500 might display one or more respective interfaces, such as a playback control interface, a playback zone interface, a playback status interface, a playback queue interface, and/or an audio content sources interface.
  • Example control devices might display separate interfaces (rather than regions) where screen size is relatively limited, such as with smartphones or other handheld devices.
  • one or more playback devices in a zone or zone group may be configured to retrieve for playback audio content (e.g., according to a corresponding URI or URL for the audio content) from a variety of available audio content sources.
  • audio content may be retrieved by a playback device directly from a corresponding audio content source (e.g., a line-in connection).
  • audio content may be provided to a playback device over a network via one or more other playback devices or network devices.
  • Example audio content sources may include a memory of one or more playback devices in a media playback system such as the media playback system 100 of Figure 1, local music libraries on one or more network devices (such as a control device, a network-enabled personal computer, or network-attached storage (NAS), for example), streaming audio services providing audio content via the Internet (e.g., the cloud), or audio sources connected to the media playback system via a line-in input connection on a playback device or network device, among other possibilities.
  • audio content sources may be regularly added or removed from a media playback system such as the media playback system 100 of Figure 1 .
  • an indexing of audio items may be performed whenever one or more audio content sources are added, removed or updated. Indexing of audio items may involve scanning for identifiable audio items in all folders/directories shared over a network accessible by playback devices in the media playback system, and generating or updating an audio content database containing metadata (e.g., title, artist, album, track length, among others) and other associated information, such as a URI or URL for each identifiable audio item found. Other examples for managing and maintaining audio content sources may also be possible.
  • example calibration procedures involve one or more playback devices emitting a calibration sound, which may be detected by a recording device (or multiple recording devices).
  • the detected calibration sounds may be analyzed across a range of frequencies over which the playback device is to be calibrated (i.e., a calibration range). Accordingly, the particular calibration sound that is emitted by a playback device covers the calibration frequency range.
  • the calibration frequency range may include a range of frequencies that the playback device is capable of emitting (e.g., 15-30,000 Hz) and may be inclusive of frequencies that are considered to be in the range of human hearing (e.g., 20-20,000 Hz).
  • a frequency response that is inclusive of that range may be determined for the playback device.
  • Such a frequency response may be representative of the environment in which the playback device emitted the calibration sound.
  • a playback device may repeatedly emit the calibration sound during the calibration procedure such that the calibration sound covers the calibration frequency range during each repetition.
  • repetitions of the calibration sound are continuously detected at different physical locations within the environment.
  • the playback device might emit a periodic calibration sound.
  • Each period of the calibration sound may be detected by the recording device at a different physical location within the environment thereby providing a sample (i.e., a frame representing a repetition) at that location.
  • a calibration sound may therefore facilitate a space-averaged calibration of the environment.
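A sketch of how such repetitions might be combined into a space-averaged response follows; the frame length and the average-of-magnitude-spectra math are assumptions for illustration, not the patent's prescribed analysis.

```python
import numpy as np

def space_averaged_response(recording, frame_len):
    """Split a moving-microphone recording into per-repetition frames and
    average the frames' magnitude spectra into one space-averaged response."""
    n_frames = len(recording) // frame_len
    frames = recording[:n_frames * frame_len].reshape(n_frames, frame_len)
    spectra = np.abs(np.fft.rfft(frames, axis=1))
    return spectra.mean(axis=0)  # average over locations along the path

mic = np.random.randn(48000 * 6)                     # six seconds of capture
avg = space_averaged_response(mic, frame_len=18000)  # 0.375 s frames at 48 kHz
```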
  • each microphone may cover a respective portion of the environment (perhaps with some overlap).
  • the recording devices may measure both moving and stationary samples. For instance, while the one or more playback devices output a calibration sound, a recording device may move within the environment. During such movement, the recording device may pause at one or more locations to measure stationary samples. Such locations may correspond to preferred listening locations.
  • a first recording device and a second recording device may include a first microphone and a second microphone respectively. While the playback device emits a calibration sound, the first microphone may move and the second microphone may remain stationary, perhaps at a particular listening location within the environment (e.g., a favorite chair).
  • the one or more playback devices may be joined into a grouping, such as a bonded zone or zone group.
  • the calibration procedure may calibrate the one or more playback devices as a group.
  • Example groupings include zone groups or bonded pairs, among other example configurations.
  • the playback device(s) under calibration initiates the calibration procedure based on a trigger condition.
  • a recording device such as control device 126 of media playback system 100, may detect a trigger condition that causes the recording device to initiate calibration of one or more playback devices (e.g. , one or more of playback devices 102-124).
  • a playback device of a media playback system may detect such a trigger condition (and then perhaps relay an indication of that trigger condition to the recording device).
  • detecting the trigger condition may involve detecting input data indicating a selection of a selectable control.
  • a recording device such as control device 126, may display an interface (e.g. , control interface 400 of Figure 4 ), which includes one or more controls that, when selected, initiate calibration of a playback device, or a group of playback devices (e.g. , a zone).
  • Control interface 600 includes a graphical region 602 that prompts the user to tap selectable control 604 (Start) when ready. When selected, selectable control 604 may initiate the calibration procedure. As shown, selectable control 604 is a button control. While a button control is shown by way of example, other types of controls are contemplated as well.
  • Control interface 600 further includes a graphical region 606 that includes a video depicting how to assist in the calibration procedure.
  • Some calibration procedures may involve moving a microphone through an environment in order to obtain samples of the calibration sound at multiple physical locations.
  • the control device may display a video or animation depicting the step or steps to be performed during the calibration.
  • Figure 7 shows media playback system 100 of Figure 1.
  • Figure 7 shows a path 700 along which a recording device (e.g., control device 126) might be moved during calibration.
  • the recording device may indicate how to perform such a movement in various ways, such as by way of a video or animation, among other examples.
  • a recording device might detect iterations of a calibration sound emitted by one or more playback devices of media playback system 100 at different points along the path 700, which may facilitate a space-averaged calibration of those playback devices.
  • detecting the trigger condition may involve a playback device detecting that the playback device has become uncalibrated, which might be caused by moving the playback device to a different position.
  • the playback device may detect physical movement via one or more sensors that are sensitive to movement (e.g., an accelerometer).
  • the playback device may detect that it has been moved to a different zone (e.g., from a "Kitchen" zone to a "Living Room" zone), perhaps by receiving an instruction from a control device that causes the playback device to leave a first zone and join a second zone.
  • detecting the trigger condition may involve a recording device (e.g., a control device or playback device) detecting a new playback device in the system.
  • a recording device may detect a new playback device as part of a set-up procedure for a media playback system (e.g., a procedure to configure one or more playback devices into a media playback system).
  • the recording device may detect a new playback device by detecting input data indicating a request to configure the media playback system (e.g., a request to configure a media playback system with an additional playback device).
  • the first recording device instructs the one or more playback devices to emit the calibration sound.
  • a recording device such as control device 126 of media playback system 100, may send a command that causes a playback device (e . g ., one of playback devices 102-124) to emit a calibration sound.
  • the control device may send the command via a network interface (e . g ., a wired or wireless network interface).
  • a playback device may receive such a command, perhaps via a network interface, and responsively emit the calibration sound.
  • Acoustics of an environment may vary from location to location within the environment. Because of this variation, some calibration procedures may be improved by positioning the playback device to be calibrated within the environment in the same way that the playback device will later be operated. In that position, the environment may affect the calibration sound emitted by a playback device in a similar manner as playback will be affected by the environment during operation.
  • some example calibration procedures may involve one or more recording devices detecting the calibration sound at multiple physical locations within the environment, which may further assist in capturing acoustic variability within the environment.
  • some calibration procedures involve a moving microphone. For example, a microphone that is detecting the calibration sound may be moved through the environment while the calibration sound is emitted. Such movement may facilitate detecting the calibration sounds at multiple physical locations within the environment, which may provide a better understanding of the environment as a whole.
  • the one or more playback devices may repeatedly emit the calibration sound during the calibration procedure such that the calibration sound covers the calibration frequency range during each repetition.
  • repetitions of the calibration sound are detected at different physical locations within the environment, thereby providing samples that are spaced throughout the environment.
  • the calibration sound may be a periodic calibration signal in which each period covers the calibration frequency range.
  • the calibration sound should be emitted with sufficient energy at each frequency to overcome background noise.
  • to increase the energy at a given frequency, a tone at that frequency may be emitted for a longer duration.
  • lengthening the period, however, decreases the spatial resolution of the calibration procedure, as the moving microphone moves further during each period (assuming a relatively constant velocity).
  • alternatively, a playback device may increase the intensity of the tone.
  • however, attempting to emit sufficient energy in a short amount of time may damage speaker drivers of the playback device.
  • Some implementations may balance these considerations by instructing the playback device to emit a calibration sound having a period that is approximately 3/8th of a second in duration (e.g., in the range of 1/4 to 1 second in duration).
  • the calibration sound may repeat at a frequency of 2-4 Hz.
  • Such a duration may be long enough to provide a tone of sufficient energy at each frequency to overcome background noise in a typical environment (e.g., a quiet room) but also be short enough that spatial resolution is kept in an acceptable range (e.g., less than a few feet assuming normal walking speed).
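The trade-off can be made concrete with a little arithmetic; the walking speed below is an assumed typical value.

```python
# How far does a microphone travel during one period of the calibration
# sound? That distance is roughly the spatial resolution of the sweep.
walking_speed_m_s = 1.2  # assumed typical indoor walking pace
for period_s in (0.25, 0.375, 1.0):
    resolution_m = walking_speed_m_s * period_s
    print(f"period {period_s:5} s -> one sample every {resolution_m:.2f} m")
# A ~3/8 s period keeps samples roughly every half meter; a 1 s period
# spreads them more than a meter apart, coarsening the spatial picture.
```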
  • the one or more playback devices may emit a hybrid calibration sound that combines a first component and a second component having respective waveforms.
  • an example hybrid calibration sound might include a first component that includes noises at certain frequencies and a second component that sweeps through other frequencies (e.g., a swept-sine).
  • a noise component may cover relatively low frequencies of the calibration frequency range (e.g., 10-50 Hz) while the swept signal component covers higher frequencies of that range (e.g., above 50 Hz).
  • Such a hybrid calibration sound may combine the advantages of its component signals.
  • a swept signal (e.g., a chirp or swept sine) is a waveform in which the frequency increases or decreases with time.
  • a waveform as a component of a hybrid calibration sound may facilitate covering a calibration frequency range, as a swept signal can be chosen that increases or decreases through the calibration frequency range (or a portion thereof).
  • a chirp emits each frequency within the chirp for a relatively short time period such that a chirp can more efficiently cover a calibration range relative to some other waveforms.
  • Figure 8 shows a graph 800 that illustrates an example chirp. As shown in Figure 8 , the frequency of the waveform increases over time (plotted on the X-axis) and a tone is emitted at each frequency for a relatively short period of time.
  • the amplitude (or sound intensity) of the chirp must be relatively high at low frequencies to overcome typical background noise. Some speakers might not be capable of outputting such high intensity tones without risking damage. Further, such high intensity tones might be unpleasant to humans within audible range of the playback device, as might be expected during a calibration procedure that involves a moving microphone. Accordingly, some embodiments of the calibration sound might not include a chirp that extends to relatively low frequencies (e.g., below 50 Hz). Instead, the chirp or swept signal may cover frequencies between a relatively low threshold frequency (e.g., a frequency around 50-100 Hz) and a maximum of the calibration frequency range. The maximum of the calibration range may correspond to the physical capabilities of the channel(s) emitting the calibration sound, which might be 20,000 Hz or above.
  • a swept signal might also facilitate the reversal of phase distortion caused by the moving microphone.
  • a moving microphone causes phase distortion, which may interfere with determining a frequency response from a detected calibration sound.
  • the phase of each frequency is predictable (as Doppler shift). This predictability facilitates reversing the phase distortion so that a detected calibration sound can be correlated to an emitted calibration sound during analysis. Such a correlation can be used to determine the effect of the environment on the calibration sound.
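As a simplified illustration of that correlation step, the sketch below aligns a detected recording against the emitted sweep by cross-correlation; it omits the Doppler/phase correction itself, and the sweep parameters are assumed.

```python
import numpy as np

def align_to_emitted(detected, emitted):
    """Locate the emitted sweep within the detected recording by
    cross-correlation and return the aligned segment (illustrative)."""
    corr = np.correlate(detected, emitted, mode="valid")
    offset = int(np.argmax(np.abs(corr)))
    return detected[offset:offset + len(emitted)]

sr = 48000
t = np.arange(int(0.375 * sr)) / sr
f0, f1 = 100.0, 10000.0  # assumed sweep endpoints
sweep = np.sin(2 * np.pi * (f0 + (f1 - f0) * t / (2 * t[-1])) * t)  # linear chirp
detected = np.concatenate([np.zeros(500), 0.6 * sweep])   # delayed, attenuated
detected += 0.01 * np.random.randn(len(detected))         # background noise
aligned = align_to_emitted(detected, sweep)
```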
  • a swept signal may increase or decrease frequency over time.
  • the recording device may instruct the one or more playback devices to emit a chirp that descends from the maximum of the calibration range (or above) to the threshold frequency (or below).
  • a descending chirp may be more pleasant to hear to some listeners than an ascending chirp, due to the physical shape of the human ear canal. While some implementations may use a descending swept signal, an ascending swept signal may also be effective for calibration.
  • example calibration sounds may include a noise component in addition to a swept signal component.
  • Noise refers to a random signal, which is in some cases filtered to have equal energy per octave.
  • the noise component of a hybrid calibration sound might be considered to be pseudorandom.
  • the noise component of the calibration sound may be emitted for substantially the entire period or repetition of the calibration sound. This causes each frequency covered by the noise component to be emitted for a longer duration, which decreases the signal intensity typically required to overcome background noise.
  • the noise component may cover a smaller frequency range than the chirp component, which may increase the sound energy at each frequency within the range.
  • a noise component might cover frequencies between a minimum of the frequency range and a threshold frequency, which might be, for example, a frequency around 50-100 Hz.
  • the minimum of the calibration range may correspond to the physical capabilities of the channel(s) emitting the calibration sound, which might be 20 Hz or below.
  • Figure 9 shows a graph 900 that illustrates an example brown noise.
  • Brown noise is a type of noise that is based on Brownian motion.
  • the playback device may emit a calibration sound that includes a brown noise in its noise component.
  • Brown noise has a "soft" quality, similar to a waterfall or heavy rainfall, which may be considered pleasant to some listeners. While some embodiments may implement a noise component using brown noise, other embodiments may implement the noise component using other types of noise, such as pink noise or white noise.
  • the intensity of the example brown noise decreases by 6 dB per octave (20 dB per decade).
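  • As a sketch of how such a noise component might be generated, brown noise can be approximated by integrating white noise, which produces the 6 dB per octave roll-off noted above (numpy assumed; the normalization is illustrative):

```python
import numpy as np

def brown_noise(n_samples, seed=None):
    rng = np.random.default_rng(seed)
    white = rng.standard_normal(n_samples)
    brown = np.cumsum(white)              # Brownian motion: integrated white noise
    brown -= brown.mean()                 # remove the DC offset
    return brown / np.max(np.abs(brown))  # normalize to [-1, 1]
```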
  • a hybrid calibration sound may include a transition frequency range in which the noise component and the swept component overlap.
  • the control device may instruct the playback device to emit a calibration sound that includes a first component (e . g ., a noise component) and a second component ( e . g ., a sweep signal component).
  • the first component may include noise at frequencies between a minimum of the calibration frequency range and a first threshold frequency
  • the second component may sweep through frequencies between a second threshold frequency and a maximum of the calibration frequency range.
  • the second threshold frequency may be a lower frequency than the first threshold frequency.
  • the transition frequency range includes frequencies between the second threshold frequency and the first threshold frequency, which might be, for example, 50-100 Hz.
  • Figures 10A and 10B illustrate components of example hybrid calibration signals that cover a calibration frequency range 1000.
  • Figure 10A illustrates a first component 1002A (i.e. , a noise component) and a second component 1004A of an example calibration sound.
  • Component 1002A covers frequencies from a minimum 1006A of the calibration range 1000 to a first threshold frequency 1008A.
  • Component 1004A covers frequencies from a second threshold 1010A to a maximum 1012A of the calibration frequency range 1000.
  • the threshold frequency 1008A and the threshold frequency 1010A are the same frequency.
  • Figure 10B illustrates a first component 1002B (i.e. , a noise component) and a second component 1004B of another example calibration sound.
  • Component 1002B covers frequencies from a minimum 1006B of the calibration range 1000 to a first threshold frequency 1008B.
  • Component 1004B covers frequencies from a second threshold 1010B to a maximum 1012B of the calibration frequency range 1000.
  • the threshold frequency 1010B is a lower frequency than threshold frequency 1008B such that component 1002B and component 1004B overlap in a transition frequency range that extends from threshold frequency 1010B to threshold frequency 1008B.
  • Figure 11 illustrates one example iteration (e.g., a period or cycle) of an example hybrid calibration sound that is represented as a frame 1100.
  • the frame 1100 includes a swept signal component 1102 and noise component 1104.
  • the swept signal component 1102 is shown as a downward sloping line to illustrate a swept signal that descends through frequencies of the calibration range.
  • the noise component 1104 is shown as a region to illustrate low-frequency noise throughout the frame 1100. As shown, the swept signal component 1102 and the noise component overlap in a transition frequency range.
  • the period 1106 of the calibration sound is approximately 3/8ths of a second (e.g., in a range of 1/4 to 1/2 second), which in some implementations is sufficient time to cover the calibration frequency range of a single channel.
  • Figure 12 illustrates an example periodic calibration sound 1200.
  • Five iterations (e.g., periods) of hybrid calibration sound 1100 are represented as frames 1202, 1204, 1206, 1208, and 1210.
  • the periodic calibration sound 1200 covers a calibration frequency range using two components (e.g. , a noise component and a swept signal component).
  • a spectral adjustment may be applied to the calibration sound to give the calibration sound a desired shape, or roll off, which may avoid overloading speaker drivers.
  • the calibration sound may be filtered to roll off at 3 dB per octave, or 1/f.
  • Such a spectral adjustment might not be applied to very low frequencies, to prevent overloading the speaker drivers.
  • the calibration sound may be pre-generated.
  • a pre-generated calibration sound might be stored on the control device, the playback device, or on a server ( e.g. , a server that provides a cloud service to the media playback system).
  • the control device or server may send the pre-generated calibration sound to the playback device via a network interface, or the playback device may retrieve it via a network interface of its own.
  • a control device may send the playback device an indication of a source of the calibration sound (e.g. , a URI), which the playback device may use to obtain the calibration sound.
  • the control device or the playback device may generate the calibration sound. For instance, for a given calibration range, the control device may generate noise that covers at least frequencies between a minimum of the calibration frequency range and a first threshold frequency and a swept sine that covers at least frequencies between a second threshold frequency and a maximum of the calibration frequency range.
  • the control device may combine the swept sine and the noise into the periodic calibration sound by applying a crossover filter function.
  • the cross-over filter function may combine a portion of the generated noise that includes frequencies below the first threshold frequency and a portion of the generated swept sine that includes frequencies above the second threshold frequency to obtain the desired calibration sound.
  • the device generating the calibration sound may have an analog circuit and/or digital signal processor to generate and/or combine the components of the hybrid calibration sound.
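  • A hedged sketch of such a crossover filter function, assuming scipy Butterworth filters, equal-length component arrays, and the example 50 Hz and 100 Hz threshold frequencies; the filter order is an illustrative choice:

```python
import numpy as np
from scipy.signal import butter, sosfilt

SAMPLE_RATE = 44100

def hybrid_calibration_sound(noise, sweep,
                             second_threshold=50.0, first_threshold=100.0):
    # Keep the portion of the generated noise below the first threshold frequency...
    sos_lp = butter(4, first_threshold, btype='lowpass',
                    fs=SAMPLE_RATE, output='sos')
    # ...and the portion of the generated swept sine above the second threshold.
    sos_hp = butter(4, second_threshold, btype='highpass',
                    fs=SAMPLE_RATE, output='sos')
    # The two components overlap in the 50-100 Hz transition frequency range.
    mixed = sosfilt(sos_lp, noise) + sosfilt(sos_hp, sweep)
    return mixed / np.max(np.abs(mixed))
```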
  • Calibration may be facilitated via one or more control interfaces, as displayed by one or more devices.
  • Example interfaces are described in U.S. Patent Application No. 14/696,014 filed April 24, 2015, entitled “Speaker Calibration,” and U.S. Patent Application No. 14/826,873 filed August 14, 2015, entitled “Speaker Calibration User Interface.”
  • implementations 1300, 1900, and 2000 shown in Figures 13, 19 and 20 respectively present example embodiments of techniques described herein. These example embodiments can be implemented within an operating environment including, for example, the media playback system 100 of Figure 1, one or more of the playback device 200 of Figure 2, or one or more of the control device 300 of Figure 3, as well as other devices described herein and/or other suitable devices.
  • operations illustrated by way of example as being performed by a media playback system can be performed by any suitable device, such as a playback device or a control device of a media playback system
  • implementations 1300, 1900, and 2000 may include one or more operations, functions, or actions as illustrated by one or more of blocks shown in Figures 13 , 19 , and 20 .
  • the blocks are illustrated in sequential order, these blocks may also be performed in parallel, and/or in a different order than those described herein.
  • the various blocks may be combined into fewer blocks, divided into additional blocks, and/or removed based upon the desired implementation.
  • each block may represent a module, a segment, or a portion of program code, which includes one or more instructions executable by a processor for implementing specific logical functions or steps in the process.
  • the program code may be stored on any type of computer readable medium, for example, such as a storage device including a disk or hard drive.
  • the computer readable medium may include non-transitory computer readable medium, for example, such as computer-readable media that stores data for short periods of time like register memory, processor cache, and Random Access Memory (RAM).
  • the computer readable medium may also include non-transitory media, such as secondary or persistent long-term storage, like read-only memory (ROM), optical or magnetic disks, or compact-disc read-only memory (CD-ROM), for example.
  • the computer readable media may also be any other volatile or non-volatile storage systems.
  • the computer readable medium may be considered a computer readable storage medium, for example, or a tangible storage device.
  • each block may represent circuitry that is wired to perform the specific logical functions in the process.
  • FIG. 13 illustrates an example implementation 1300 by which a media playback system facilitates such a calibration.
  • implementation 1300 involves detecting a trigger condition.
  • a networked microphone device (NMD) detects a trigger condition that initiates calibration of a media playback system (or perhaps a set of playback devices in a media playback system).
  • Example networked microphone devices include any suitable device that includes a network interface and a microphone.
  • For instance, playback devices (e.g., playback device 200) and control devices (e.g., control device 300) may operate as networked microphone devices.
  • Other example networked microphone devices include control devices 126 and 128 of Figure 1.
  • the trigger condition initiates calibration of multiple audio drivers.
  • the multiple audio drivers may be housed in a single playback device.
  • a soundbar-type playback device may include multiple audio drivers (e.g., nine audio drivers).
  • the multiple audio drivers may be divided among two or more playback devices.
  • a soundbar with multiple audio drivers may be calibrated with one or more other playback devices each with one or more respective audio drivers.
  • Some example playback devices include multiple audio drivers of different types (e.g., tweeters and woofers, perhaps of varying size).
  • the particular playback devices (and audio drivers) under calibration may correspond to zones of a media playback system.
  • an example trigger condition may initiate calibration of a given zone of a media playback system (e.g., the Living Room zone of media playback system 100 shown in Figure 1 ).
  • the Living Room zone includes playback devices 104, 106, 108, and 110 that together include multiple audio drivers, and the example trigger condition may therefore initiate calibration of multiple audio drivers.
  • Some example trigger conditions include input data instructing the media playback system to initiate calibration. Such input data may be received via a user interface (e.g., control interface 600 of Figure 6) of a networked microphone device, as illustrated in Figure 6, or perhaps via another device that relays the instruction to the networked microphone device and/or the playback devices under calibration.
  • trigger conditions might be based on sensor data. For instance, sensor data from an accelerometer or other suitable sensor may indicate that a given playback device has moved, which may prompt calibration of that playback device (and perhaps other playback devices associated with the given playback device, such as those in a bonded zone or zone group with the playback device).
  • Some trigger conditions may involve a combination of input data and sensor data.
  • sensor data may indicate a change in the operating environment of a media playback system, which may cause a prompt to initiate calibration to be displayed on a networked microphone device.
  • the media playback system might proceed with calibration after receiving input data at the prompt indicating confirmation to initiate calibration.
  • example trigger conditions may be based on changes in configuration of a media playback system. For instance, example trigger conditions include addition or removal of a playback device from a media playback system (or grouping thereof). Other example trigger conditions include receiving new types of input content (e.g., receiving multi-channel audio content).
  • multiple audio drivers form one or more sound axes.
  • two playback devices each with a respective audio driver may form respective sound axes.
  • two or more audio drivers may be arrayed to form a sound axis.
  • For instance, a playback device with multiple audio drivers (e.g., a soundbar with nine audio drivers) may form multiple sound axes (e.g., three sound axes).
  • Any audio driver may contribute to any number of sound axes.
  • a given sound axis may be formed by contributions from all nine audio drivers of a soundbar.
  • Each sound axis corresponds to a respective input channel of audio content.
  • audio drivers of a media playback system may form two sound axes corresponding, respectively, to left and right channels of stereo content.
  • the audio drivers may form sound axes corresponding to respective channels of surround sound content (e.g. , center, front left, front right, rear left, and rear right channels).
  • Arraying two or more audio drivers to form a given sound axis enables the two or more audio drivers to "direct" the sound output for the given sound axis in a certain direction. For instance, where nine audio drivers of a soundbar are each contributing a portion of a sound axis corresponding to a left channel of surround sound content, the nine audio drivers may be arrayed (i.e., acoustically summed, perhaps using a DSP) in such a way that the net polar response of the nine audio drivers directs sound to the left. Concurrently with the sound axis corresponding to the left channel, the nine audio drivers also form sound axes corresponding to center and right channels of the surround sound content to direct sound to the center and to the right, respectively.
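  • A simplified delay-and-sum sketch of arraying drivers to steer a sound axis; the nine-driver count follows the soundbar example above, while the 8 cm spacing and 30-degree steering angle are illustrative assumptions:

```python
import numpy as np

SPEED_OF_SOUND = 343.0  # m/s
SAMPLE_RATE = 44100
DRIVER_SPACING = 0.08   # assumed spacing between adjacent drivers, in meters

def steering_delays(n_drivers=9, angle_deg=30.0):
    """Per-driver delays (in samples) that steer the acoustic sum of the
    drivers toward angle_deg (0 = on-axis, positive = listener's left)."""
    positions = (np.arange(n_drivers) - (n_drivers - 1) / 2) * DRIVER_SPACING
    delays_s = positions * np.sin(np.radians(angle_deg)) / SPEED_OF_SOUND
    delays_s -= delays_s.min()  # shift so that all delays are non-negative
    return np.round(delays_s * SAMPLE_RATE).astype(int)

def drive_axis(channel, delays):
    """Feed one input channel to every driver with its steering delay applied."""
    out = np.zeros((len(delays), len(channel) + int(delays.max())))
    for i, d in enumerate(delays):
        out[i, d:d + len(channel)] = channel
    return out
```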
  • a particular set of sound axes formed by playback devices of a media playback system may be referred to as a playback configuration.
  • playback devices of a media playback system may be configured into a given one of multiple possible playback configurations. While in a given playback configuration, the audio drivers of the playback devices may form a particular set of sound axes.
  • configuration of playback devices into a new playback configuration may act as a trigger condition to initiate calibration of the playback devices.
  • playback devices 104, 106, 108, and 110 of the Living Room zone may be configurable into multiple playback configurations.
  • playback device 104 may form one or more sound axes (e.g., front, left, and right channels) while playback devices 106 and 108 form respective sound axes (e.g., left and right surround channels).
  • Playback device 110, being a subwoofer-type device, may contribute a separate low-frequency sound axis or a low-frequency portion of the sound axes formed by playback devices 104, 106, and/or 108.
  • the audio drivers of playback devices 104, 106, 108, and 110 may combine to form sound axes corresponding to left and right channels of stereo audio content.
  • Another playback configuration may involve the audio drivers forming a single sound axis corresponding to mono audio content.
  • playback devices may utilize a given playback configuration according to various factors. Such factors may include the zone configuration (e.g., whether the playback devices are in a 5.1, 5.0, or other surround sound configuration, a stereo pair configuration, or a playbar-only configuration, among others). The factors may also include the specific types and capabilities of the playback devices. The factors may further include the specific type of content provided to the playback devices (or expected to be provided). For instance, playback devices may adopt a first playback configuration when playing surround sound content and another when playing stereo content. As another example, playback devices may use a given playback configuration when playing music and another when playing audio that is paired with video (e.g., television content).
  • playback configurations include any of the above example configurations with (or without) a subwoofer-type playback device, as addition (or subtraction) of such a device from the playback configuration may change the acoustic characteristics and/or allocation of playback responsibilities in the playback configuration.
  • Calibration sequences may involve calibrating playback devices for multiple playback configurations. Such calibration sequences may yield multiple calibration profiles that are applied when the playback devices are in a given playback configuration. For instance, a given calibration procedure may calibrate the Living Room zone of media playback system 100 for a surround sound playback configuration and a music playback configuration. While in the surround sound playback configuration, the playback devices of the Living Room zone may apply a first calibration profile (e.g., one or more filters that adjust one or more of magnitude response, frequency response, phase, etc.) corresponding to the surround sound playback configuration. Likewise, while in the music playback configuration, the playback devices of the Living Room zone may apply a second calibration profile corresponding to the music playback configuration.
  • implementation 1300 involves causing the multiple audio drivers to emit calibration audio.
  • the NMD instructs the playback device(s) that include the multiple audio drivers to emit calibration audio via the multiple audio drivers.
  • control device 126 of media playback system 100 may send a command that causes a playback device (e.g., one of playback devices 102-124) to emit calibration audio.
  • the NMD sends the command via a network interface (e.g., a wired or wireless network interface).
  • a playback device may receive such a command, perhaps via a network interface, and responsively emit the calibration audio.
  • the calibration audio may include one or more calibration sounds, such as a frequency sweep ("chirp"), brown noise or other types of noise, or a song, among other example sounds. Additional details on example calibration sounds are noted above in connection with the example calibration sequence described in section II.e, as well as generally throughout the disclosure.
  • frames may represent iterations (e.g., a period or cycle) of an example calibration sound.
  • frames may produce respective samples of the calibration sound as emitted by one or more audio drivers.
  • Example calibration audio to calibrate multiple sound axes may be divided into constituent frames, wherein each frame includes calibration audio for every sound axis under calibration. Accordingly, when recorded, each frame may include samples of the calibration audio produced by each sound axis. The frames may repeat to produce multiple samples for each sound axis.
  • each frame is further divided into slots.
  • Each slot includes the calibration audio for a respective sound axis under calibration.
  • For instance, an example frame for a playbar-type playback device (e.g., playback device 104 shown in Figure 1) that forms three sound axes (such as left, right, and center channels) might include three slots. If such a device is calibrated with a subwoofer, each frame might include four slots, one for each sound axis formed by the playbar-type playback device and one for the sound axis produced by the subwoofer. If the playbar-type device is instead calibrated with two additional playback devices (e.g., as surround speakers), each frame may include five slots (or six slots if calibrated with a subwoofer).
  • each slot includes the calibration audio for a respective sound axis under calibration.
  • the calibration audio in each slot may include a frequency sweep ("chirp"), brown noise or other types of noise, among other examples.
  • the calibration audio in each slot may include a hybrid calibration sound. Slots may occur sequentially in a known order, so as to facilitate matching slots within recorded calibration audio to respective sound axes. Each slot may have a known duration, which may also facilitate matching slots within recorded calibration audio to respective sound axes.
  • each slot and/or frame may include a watermark (e.g. , a particular pattern of sound) to identify the slot or frame, which may be used to match slots within recorded calibration audio to respective sound axes.
  • Figure 14 shows an example calibration audio 1400.
  • Calibration audio 1400 includes frames 1402, 1404, and 1406.
  • Frames 1402, 1404, and 1406 are each divided into three respective slots.
  • frame 1402 includes slots 1402A, 1402B and 1402C.
  • frames 1404 and 1406 include slots 1404A, 1404B, & 1404C and 1406A, 1406B, & 1406C, respectively.
  • Each slot includes an iteration of hybrid calibration sound 1100 of Figure 11.
  • the calibration sound in each slot may be emitted by a respective sound axis (perhaps formed via multiple audio drivers).
  • slots 1402A, 1404A, and 1406A may correspond to a first sound axis (e.g., a left channel) while slots 1402B, 1404B, and 1406B correspond to a second sound axis (and slots 1402C, 1404C, and 1406C to a third sound axis).
  • calibration audio 1400 may produce three samples of each sound axis, provided that a sufficient portion of frames 1402, 1404 and 1406 are recorded.
  • Calibration audio to calibrate multiple playback configurations may include a repeating series of frames. Each frame in a series corresponds to a respective playback configuration.
  • example calibration audio to calibrate three playback configurations may include a series of three frames (e.g. , frames 1402, 1404, and 1406 of Figure 14 ).
  • each frame in the series is divided into slots corresponding to the sound axes of the playback configuration corresponding to that frame. Since different playback configurations might form different sets of sound axes, perhaps with different numbers of total axes, frames in a series may have different numbers of slots. The series of frames may repeat so as to produce multiple samples for each sound axis of each playback configuration.
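  • A minimal sketch of scheduling such a repeating series of frames, with one frame per playback configuration and one slot per sound axis; the per-configuration axis counts are example values:

```python
def calibration_schedule(axes_per_config=(3, 2, 1), repetitions=3):
    """Return an ordered list of (configuration, axis) slots.

    Each series contains one frame per playback configuration; each frame
    contains one slot per sound axis of that configuration. Repeating the
    series yields multiple samples for every sound axis."""
    schedule = []
    for _ in range(repetitions):
        for config, n_axes in enumerate(axes_per_config):
            schedule.extend((config, axis) for axis in range(n_axes))
    return schedule
```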
  • implementation 1300 involves recording the emitted calibration audio.
  • an NMD records calibration audio as emitted by playback devices of a media playback system (e.g. , media playback system 100) via a microphone.
  • example NMDs include control devices ( e . g ., control device 126 or 128 of Figure 1 ), playback devices, or any suitable device with a microphone or other sensor to record calibration audio.
  • multiple NMDs may record the calibration audio via respective microphones.
  • the NMD may measure a portion of the calibration sounds as emitted by playback devices of a media playback system.
  • the calibration audio may be any of the example calibration sounds described above with respect to the example calibration procedure, as well as any suitable calibration sound.
  • the NMD(s) may remain more or less stationary while recording the calibration audio.
  • the NMDs may be positioned at one or more particular locations (e.g., a preferred listening location). Such positioning may facilitate recording the calibration audio as would be perceived by a listener at that particular location.
  • Certain playback configurations may suggest particular preferred listening locations. For example, playback configurations corresponding to surround sound audio or audio that is coupled with video may suggest the location at which users will watch television while listening to the playback devices ( e . g ., on a couch or chair).
  • an NMD may prompt the user to move to a particular location (e.g., a preferred listening location) to begin the calibration.
  • the NMD may prompt the user to move to certain listening locations corresponding to each playback configuration.
  • smartphone 500 is displaying control interface 1500 which includes graphical region 1502.
  • Graphical region 1502 prompts the user to move to a particular location (i.e., where the user will usually watch TV in the room). Such a prompt may be displayed to guide the user to begin the calibration sequence in a preferred location.
  • Control interface 1500 also includes selectable controls 1504 and 1506, which respectively advance and step backward in the calibration sequence.
  • Figure 16 depicts smartphone 500 displaying control interface 1600 which includes graphical region 1602.
  • Graphical region 1602 prompts the user to raise the recording device to eye level. Such a prompt may be displayed to guide a user to position the phone in a position that facilitates measurement of the calibration audio.
  • Control interface 1600 also includes selectable controls 1604 and 1606, which respectively advance and step backward in the calibration sequence.
  • Figure 17 depicts smartphone 500 displaying control interface 1700 which includes graphical region 1702.
  • Graphical region 1702 prompts the user to “set the sweet spot” (i.e., a preferred location within the environment).
  • upon detecting a selection of selectable control 1704, smartphone 500 may begin measuring the calibration sound at its current location (and perhaps also instruct one or more playback devices to output the calibration audio).
  • control interface 1700 also includes selectable control 1706, which advances the calibration sequence (e.g., by causing smartphone 500 to begin measuring the calibration sound at its current location, as with selectable control 1704).
  • smartphone 500 is displaying control interface 1800 which includes graphical region 1802.
  • Graphical region 1802 indicates that smartphone 500 is recording the calibration audio.
  • Control interface 1800 also includes selectable control 1804, which steps backwards in the calibration sequence.
  • implementation 1300 involves causing the recorded calibration audio to be processed.
  • the NMD causes a processing device to process the recorded calibration audio.
  • the NMD may include the processing device.
  • the NMD may transmit the recorded audio to one or more other processing devices for processing.
  • Example processing devices include playback devices, control devices, a computing device connected to the media playback system via a local area network, a remote computing device such as a cloud server, or any combination of the above.
  • Processing of the calibration audio involves determining one or more calibrations for each of the one or more sound axes.
  • Each calibration of the multiple sound axes may involve modifying one or more of magnitude response, frequency response, phase adjustment, or any other acoustic characteristic. Such modifications may spatially calibrate the multiple sound axes to one or more locations (e.g., one or more preferred listening locations).
  • the calibration data may include the parameters to implement the filters (e.g., as the coefficients of a bi-quad filter). Filters may be applied per audio driver or per set of two or more drivers (e.g., two or more drivers that form a sound axis or two or more of the same type of audio driver, among other examples). In some cases, respective calibrations may be determined for the multiple playback configurations under calibration.
  • the recorded calibration audio may be processed as it is recorded or after recording is complete. For instance, where the calibration audio is divided into frames, the frames may be transmitted to the processing device as they are recorded, possibly in groups of frames. Alternatively, the recorded frames may be transmitted to the processing device after the playback devices finish emitting the calibration audio.
  • Processing may involve determining respective delays for each sound axis of the multiple sound axes. Ultimately, such delays may be used to align time-of-arrival of respective sound from each sound axis at a particular location ( e . g ., a preferred listening location).
  • a calibration profile for a given playback configuration may include filters that delay certain sound axes of the playback configuration to align time-of-arrival of the sound axes of the playback configuration at a preferred listening location. Sound axes may have different times-of-arrival at a particular location because they are formed by audio drivers at different distances from the particular location. Further, some sound axes may be directed away from the particular location (e.g., so as to reflect off the environment before arriving at that location). Such a sound path may increase the effective distance between the audio drivers forming a sound axis and the particular location, which may cause a later time-of-arrival as compared to sound axes that have a more direct path.
  • a preferred listening location might be a couch or chair for a surround sound playback configuration.
  • the processing device may separate the recorded audio into parts corresponding to the different sound axes and/or playback configurations that emitted each part. For instance, where the calibration sound emitted by the playback devices was divided into frames, the processing device may divide the recorded audio back into the constituent frames. Where the calibration sound included a series of frames, the processing device may attribute the frames from each series to the respective playback configuration corresponding to those frames. Further, the processing device may divide each frame into respective slots corresponding to each sound axis. As noted above, the playback devices may emit frames and slots in a known sequence and each slot may have a known duration to facilitate dividing the recorded audio into its constituent parts. In some examples, each slot and/or frame may include a watermark to identify the slot or frame, which may be used to match frames within recorded calibration audio to respective playback configurations and/or slots to respective sound axes.
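  • A minimal sketch of dividing a recording back into its constituent frames and slots using a known emission order and slot duration, as described above (the 3/8 s slot length and the axis counts are assumptions):

```python
import numpy as np

SAMPLE_RATE = 44100
SLOT_SAMPLES = int(0.375 * SAMPLE_RATE)  # assumed slot duration of ~3/8 s

def split_recording(recording, axes_per_config=(3, 2, 1)):
    """Attribute each slot-sized chunk of the recording to (config, axis)."""
    slots, cursor = {}, 0
    while True:
        for config, n_axes in enumerate(axes_per_config):
            for axis in range(n_axes):
                if cursor + SLOT_SAMPLES > len(recording):
                    return slots
                chunk = recording[cursor:cursor + SLOT_SAMPLES]
                slots.setdefault((config, axis), []).append(chunk)
                cursor += SLOT_SAMPLES
```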
  • the processing device may determine an impulse response for each sound axis.
  • Each impulse response may be further processed by generating frequency filtered responses so as to divide the impulse responses into frequency bands.
  • Audio drivers of different types may array better at different frequency bands.
  • mid-range woofers may array well to form a sound axis in a range from 300 Hz to 2.5 kHz.
  • tweeters may array well in a range from 8 kHz to 14 kHz.
  • the sound axis should be maximum on-axis and attenuated to the right and left.
  • each array should be attenuated off-axis.
  • a processing device may determine three band-limited responses. Such responses might include a full-range response, a response covering a mid-range for woofers (e.g., 300 Hz to 2.5 kHz), and a response covering high frequencies for the tweeters (e.g., 3 kHz to 14 kHz). Such frequency-filtered responses may facilitate further processing by more clearly representing each sound axis.
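  • A hedged sketch of estimating an impulse response for one recorded slot and splitting it into the band-limited responses described above; FFT deconvolution and fourth-order Butterworth bands are illustrative choices, with band edges following the example values:

```python
import numpy as np
from scipy.signal import butter, sosfilt

SAMPLE_RATE = 44100

def impulse_response(recorded, emitted):
    """Estimate the impulse response by deconvolving the emitted reference
    from the recorded slot in the frequency domain."""
    n = len(recorded)
    spectrum = np.fft.rfft(recorded, n) / (np.fft.rfft(emitted, n) + 1e-12)
    return np.fft.irfft(spectrum, n)

def band_limited_responses(ir):
    """Full-range, woofer (300 Hz-2.5 kHz), and tweeter (3-14 kHz) responses."""
    responses = {"full": ir}
    for name, (lo, hi) in {"woofer": (300, 2500), "tweeter": (3000, 14000)}.items():
        sos = butter(4, (lo, hi), btype='bandpass', fs=SAMPLE_RATE, output='sos')
        responses[name] = sosfilt(sos, ir)
    return responses
```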
  • Processing the recorded audio may involve comparisons between the responses for each sound axis.
  • the impulse responses for each slot may be time-aligned with one another (as they were emitted during different periods of time). For instance, the impulse responses may be aligned to a first reference point, such as the beginning of each slot. Such time-alignment of the impulse responses facilitates identification of particular reference points in each response.
  • identification of particular reference points in each response involves identifying a given second reference point in an impulse response of a reference sound axis.
  • the reference sound axis may be a sound axis corresponding to a center channel of a surround sound system ( e.g. , a 3.0, 3.1, 5.0, 5.1 or other multi-channel playback configuration). This sound axis may be used as the reference sound axis because sound from this axis travels more directly to typical preferred listening locations than other sound axes ( e . g ., sound axis that form left and right channels).
  • the given second reference point in this impulse response may be the first peak value.
  • the first peak can be assumed to correspond to the direct signal from the audio driver(s) to the NMD (rather than a reflection).
  • This given second reference point (i.e., the first peak) is used as a reference for subsequent times-of-arrival of other sound axes at the NMD.
  • the processing device may identify second reference points in the other impulse responses. These other second reference points correspond to the same second reference point as in the reference sound axis. For instance, if the first peak in the impulse response of the reference sound axis was used as the given second reference point, then the first peaks in the other impulse responses are identified as the second reference points.
  • a time window may be applied to limit the portion of each impulse response where the second reference points are to be identified.
  • the impulse responses for the sound axes forming the left and right channels can be limited to a time window subsequent to the peak value in the impulse response for the sound axis forming the center channel. Sound from the sound axes forming the left and right channels travels outward to the left and right (rather than on-axis) and thus the peak value of interest will be a reflection of the sound from these axes off the environment.
  • sound axes forming left and/or right surround channels and/or a subwoofer channel may have been physically closer to the NMD than the audio driver(s) forming the center channel.
  • a window for impulse responses corresponding to those axes may encompass time before and after the given reference point in the reference sound axis so as to account for the possibility of either positive or negative delay relative to that reference sound axis.
  • From these reference points, the respective times-of-arrival of sound from each sound axis at the NMD (i.e., at the microphone of the NMD) can be determined. The processing device may determine the respective times-of-arrival at the microphone by comparing respective differences from the first reference point to the second reference points in each impulse response.
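  • A minimal sketch of locating a first-peak reference point in each time-aligned impulse response, optionally within a search window, and converting the offsets into times-of-arrival; the threshold heuristic is an assumption rather than the disclosed method:

```python
import numpy as np

def first_peak(ir, window=(0, None), threshold_ratio=0.5):
    """Index of the first sample in the window whose magnitude exceeds a
    fraction of the response maximum, taken as the direct-arrival point."""
    start, end = window
    segment = np.abs(ir[start:end])
    candidates = np.flatnonzero(segment >= threshold_ratio * segment.max())
    return start + int(candidates[0])

def times_of_arrival(impulse_responses, sample_rate=44100):
    """Map each sound axis to its time-of-arrival, in seconds."""
    return {axis: first_peak(ir) / sample_rate
            for axis, ir in impulse_responses.items()}
```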
  • the processing device may determine respective delays to be applied for each sound axis.
  • the processing device may determine the delays relative to a delay target.
  • This delay target may be the sound axis that has the latest time-of-arrival.
  • the sound axis acting as the delay target might not receive any delay.
  • Other sound axes may be assigned a delay to match the time-of-arrival of the sound axis acting as the delay target.
  • a sound axis that forms a center channel may not be used as the delay target in some instances because sound axes with later times-of-arrival cannot be assigned "negative" delay to match the time-of-arrival of the sound axis forming the center channel.
  • the delay for any given sound axis may be capped at a maximum delay threshold.
  • Such capping may prevent issues with large amounts of delay causing apparent mismatch between audio content output by the sound axes and video content that is coupled to that audio content (e.g. , lip-sync issues).
  • Such capping may be applied only to playback configurations that include audio paired with video, as large delays may not impact user experience when the audio is not paired with video.
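  • A hedged sketch of deriving per-axis delays from times-of-arrival, using the latest-arriving axis as the delay target and capping delays for audio paired with video; the 15 ms cap is an illustrative value, not one from the disclosure:

```python
MAX_DELAY_S = 0.015  # assumed lip-sync cap for audio paired with video

def axis_delays(toas, paired_with_video=False):
    """Delay each sound axis so its time-of-arrival matches the delay target
    (the axis with the latest time-of-arrival, which receives no delay)."""
    target = max(toas.values())
    delays = {axis: target - toa for axis, toa in toas.items()}
    if paired_with_video:
        delays = {axis: min(d, MAX_DELAY_S) for axis, d in delays.items()}
    return delays
```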
  • where the video display is synchronized with the playback device(s), the video might be delayed to avoid apparent mismatch between audio content output by the sound axes and video content that is coupled to that audio content, which may eliminate the need for a maximum delay threshold.
  • the NMD that recorded the calibration audio might not perform certain portions of the processing (or might not process the calibration audio at all). Rather, the NMD may transmit data representing the recorded calibration audio to a processing device, perhaps with one or more instructions on how to process the recorded calibration audio. In other cases, the processing device may be programmed to process recorded calibration audio using certain techniques. In such embodiments, transmitting data representing the recorded calibration audio (e.g. , data representing raw samples of calibration audio and/or data representing partially processed calibration audio) may cause the processing device to determine calibration profiles ( e . g ., filter parameters).
  • implementation 1300 involves causing calibration of the multiple sound axes.
  • the NMD may send calibration data to the playback device(s) that form the multiple sound axes.
  • the NMD may instruct another processing device to transmit calibration data to the playback device.
  • Such calibration data may cause the playback device(s) to calibrate the multiple sound axes to a certain response.
  • calibration of the multiple sound axes may involve modifying one or more of magnitude response, frequency response, phase adjustment, or any other acoustic characteristic. Such modifications may be applied using one or more filters implemented in a DSP or as analog filters.
  • the calibration data may include the parameters to implement the filters ( e . g ., as the coefficients of a bi-quad filter). Filters may be applied per audio driver or per set of two or more drivers ( e.g. , two or more drivers that form a sound axis or two or more of the same type of audio driver, among other examples).
  • Calibrating the multiple sound axes may include causing audio output of the multiple sound axes to be delayed according to the respective determined delays for the sound axes.
  • Such delays may be formed by causing respective filters to delay audio output of the multiple audio drivers according to the respective determined delays for the multiple sound axes.
  • filters may implement a circular buffer delay line, among other examples.
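  • A minimal sketch of such a circular-buffer delay line, processing one sample per call; a real-time implementation would typically operate on blocks in a DSP:

```python
import numpy as np

class DelayLine:
    """Fixed delay implemented as a circular buffer."""

    def __init__(self, delay_samples):
        self.delay = delay_samples
        self.buf = np.zeros(max(delay_samples, 1))
        self.pos = 0

    def process(self, sample):
        if self.delay == 0:
            return sample                 # pass-through when no delay is needed
        delayed = self.buf[self.pos]      # value written `delay` samples ago
        self.buf[self.pos] = sample
        self.pos = (self.pos + 1) % self.delay
        return delayed
```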
  • in some instances, the delays are dynamic. For instance, the response of one axis may overlap with the response of another in a given range, yet the sound axes may have different times-of-arrival (thus suggesting different delays). In such situations, the delays of each sound axis may be smoothed across the overlapping range. For instance, a delay curve may be implemented across the range to smooth the delay. Such smoothing may improve user experience by avoiding possibly sharp differences in delay between sound axes in overlapping ranges.
  • Figure 19 illustrates an example implementation 1900 by which a playback device facilitates spectral calibration using applied spatial calibration.
  • implementation 1900 involves receiving data representing one or more spatial calibrations.
  • For instance, a playback device (e.g., any playback device of media playback system 100 in Figure 1 or playback device 200 of Figure 2) may receive data representing one or more spatial calibrations (e.g., any of the multiple calibrations described above in connection with implementation 1300 of Figure 13) from a device such as a processing device or an NMD, among other possible sources.
  • Each calibration may have been previously determined by way of a calibration sequence, such as the example calibration sequences described above.
  • a calibration includes one or more filters. Such filters may modify one or more of magnitude response, frequency response, phase adjustment, or any other acoustic characteristic. Further, such filters may calibrate the playback device(s) under calibration to one or more particular listening locations within a listening area. As noted above, the filters may be implemented in a DSP (e.g., as the coefficients of a bi-quad filter) or as an analog filter, or a combination thereof.
  • the received calibration data may include a filter for each audio channel, axis or device under calibration. Alternatively, a filter may be applied to more than one audio channel, axis or device.
  • multiple calibrations may correspond to respective playback configurations.
  • a playback configuration refers to a specific set of sound axes formed by multiple audio drivers.
  • a spatial calibration includes calibration of audio drivers in multiple playback configurations.
  • Each filter (or set of filters) corresponds to a different playback configuration.
  • playback configurations may involve variance in the assignment of audio drivers to form sound axes.
  • Each sound axis in a playback configuration may correspond to a respective input channel of audio content.
  • Example playback configurations may correspond to different numbers of input channels, such as mono, stereo, surround (e.g., 3.0, 5.0, 7.0), or any of the above in combination with a subwoofer (e.g., 3.1, 5.1, 7.1).
  • Other playback configurations may be based on input content type.
  • example playback configurations may correspond to input audio content including music, home theater (i.e., audio paired with video), surround sound audio content, spoken word, etc.
  • the received calibrations may include filter(s) corresponding to any individual playback configuration or any combination of playback configurations.
  • the playback device may maintain these calibrations in data storage. Alternatively, such calibrations may be maintained on a device or system that is communicatively coupled to the playback device via a network. The playback device may receive the calibrations from this device or system, perhaps upon request from the playback device.
  • implementation 1900 involves causing the audio driver(s) to output calibration audio.
  • the playback device may cause an audio stage to drive the audio drivers to output calibration audio.
  • An example audio stage may include amplifier(s), signal processing (e.g., a DSP), as well as other possible components.
  • the playback device may instruct other playback devices under calibration to output calibration audio, perhaps when acting as a group coordinator for the playback devices under calibration.
  • the calibration audio may include one or more calibration sounds, such as a frequency sweep ("chirp"), brown noise or other types of noise, or a song, among other examples. Additional details on example calibration sounds are noted above in connection with the example calibration sequences described above.
  • frames may represent iterations of an example calibration sound.
  • frames may produce respective samples of the calibration sound as emitted by one or more audio drivers.
  • the frames may repeat to produce multiple samples.
  • each frame includes calibration audio for every sound axis under calibration. Accordingly, when recorded, each frame may include samples of the calibration audio produced by each sound axis. The frames may repeat to produce multiple samples for each sound axis.
  • Example calibration audio to calibrate multiple playback configurations may include a repeating set of frames. Each frame in a set corresponds to a respective playback configuration.
  • example calibration audio to calibrate three playback configurations may include a series of three frames (e.g., frames 1402, 1404, and 1406 of Figure 14 ).
  • the playback device may apply a spatial calibration corresponding to a respective playback configuration.
  • Applying a spatial calibration involves causing an audio stage (or multiple audio stages) to apply respective filter(s) corresponding to each playback configuration.
  • the calibration is applied to modify one or more of magnitude response, frequency response, phase adjustment, or any other acoustic characteristic of the audio driver(s) as the calibration audio is emitted.
  • filters may modify the emitted calibration audio to suit a particular listening location. For instance, example spatial filters may at least partially balance time-of-arrival of sound from multiple sound axes at the particular listening location.
  • the spatial calibration may be applied to calibration audio by a device other than the playback device.
  • a spatial calibration may be applied by any device that stores and/or generates the calibration audio for output by the audio drivers using a processor or DSP of that device.
  • a spatial calibration may be applied by any intermediary device between the device that stores the calibration audio and the playback device(s) under calibration.
  • each frame is further divided into slots.
  • Each slot includes the calibration audio for a respective sound axis under calibration.
  • For instance, an example frame for a playbar-type playback device (e.g., playback device 104 shown in Figure 1) that forms three sound axes (such as left, right, and center channels) might include three slots. If such a device is calibrated with a subwoofer, each frame might include four slots, one for each sound axis formed by the playbar-type playback device and one for the sound axis produced by the subwoofer. If the playbar-type device is instead calibrated with two additional playback devices (e.g., as surround speakers), each frame may include five slots (or six slots if calibrated with a subwoofer).
  • Figure 14 illustrates example calibration audio with constituent frames that are divided into slots.
  • each slot includes the calibration audio for a respective sound axis under calibration.
  • the calibration audio in each slot may include a frequency sweep ("chirp"), brown noise or other types of noise, among other examples.
  • the calibration audio in each slot may include a hybrid calibration sound. Slots may occur sequentially in a known order, so as to facilitate matching slots within recorded calibration audio to respective sound axes. Each slot may have a known duration, which may also facilitate matching slots within recorded calibration audio to respective sound axes.
  • each slot and/or frame may include a watermark (e.g. , a particular pattern of sound) to identify the slot or frame, which may be used to match slots within recorded calibration audio to respective sound axes.
  • implementation 1900 involves receiving data representing one or more spectral calibrations.
  • the playback device may receive data representing one or more spectral calibrations from a processing device. These spectral calibrations are based on the calibration audio output by the audio driver(s).
  • the calibration audio output from the audio driver(s) is recorded by one or more recording devices (e.g., an NMD). Before being recorded, the calibration audio may interact with (e.g., be reflected or absorbed by) the surrounding environment and thereby come to represent characteristics of the environment.
  • Example spectral calibrations may offset acoustics characteristics of the environment to achieve a given response (e.g., a flat response, a response that is considered desirable, or a set equalization). For instance, if a given environment attenuates frequencies around 500 Hz and amplifies frequencies around 14000 Hz, a calibration might boost frequencies around 500 Hz and cut frequencies around 14000 Hz so as to offset these environmental effects.
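  • A hedged sketch of such an offset using two peaking-EQ biquads (RBJ audio-EQ-cookbook formulas) that boost around 500 Hz and cut around 14 kHz, per the example above; the ±3 dB gains and the Q are illustrative assumptions:

```python
import numpy as np
from scipy.signal import lfilter

SAMPLE_RATE = 44100

def peaking_biquad(f0, gain_db, q=1.0):
    """Peaking-EQ biquad coefficients (b, a), per the RBJ cookbook."""
    a_lin = 10 ** (gain_db / 40)
    w0 = 2 * np.pi * f0 / SAMPLE_RATE
    alpha = np.sin(w0) / (2 * q)
    b = np.array([1 + alpha * a_lin, -2 * np.cos(w0), 1 - alpha * a_lin])
    a = np.array([1 + alpha / a_lin, -2 * np.cos(w0), 1 - alpha / a_lin])
    return b / a[0], a / a[0]

# Offset the example room response: boost ~500 Hz, cut ~14 kHz.
CORRECTIONS = [peaking_biquad(500.0, +3.0), peaking_biquad(14000.0, -3.0)]

def apply_spectral_calibration(audio):
    for b, a in CORRECTIONS:
        audio = lfilter(b, a, audio)
    return audio
```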
  • Example processing devices include NMDs, other playback devices, control devices, a computing device connected to the media playback system via a local area network, a remote computing device such as a cloud server, or any combination of the above.
  • the processing device(s) may transmit the spectral calibrations to one or more intermediary devices, which may transmit the spectral calibrations to the playback device.
  • such intermediary devices may store the data representing one or more spectral calibrations.
  • implementation 1900 involves applying a particular spectral calibration.
  • the playback device may apply a particular filter corresponding to a given playback configuration when playing back audio content in that playback configuration.
  • the playback device may maintain or have access to respective spectral calibrations corresponding to multiple playback configurations.
  • the playback device may be instructed to enter a particular playback configuration and accordingly apply a particular calibration corresponding to that playback configuration. For instance, a control device may transmit a command to form a specific set of sound axes corresponding to a given playback configuration.
  • the playback device may detect the proper spectral calibration to apply based on its current configuration.
  • playback devices may be joined into various groupings, such as a zone group or bonded zone. Each grouping may represent a playback configuration.
  • the playback device may apply a particular calibration associated with the playback configuration of that grouping. For instance, based on detecting that the playback device has joined a particular zone group, the playback device may apply a certain calibration associated with zone groups (or with the particular zone group).
  • the playback device may detect the spectral calibration to apply based on the audio content being provided to the playback device (or that it has been instructed to play back). For instance, the playback device may detect that it is playing back media content that consists of only audio (e.g., music). In such cases, the playback device may apply a particular calibration associated with a playback configuration that corresponds to music playback. As another example, the playback device may receive media content that is associated with both audio and video (e.g., a television show or movie). When playing back such content, the playback device may apply a particular calibration corresponding to audio that is paired with video, or perhaps a calibration corresponding to home theater (e.g., surround sound).
  • the playback device may apply a certain calibration based on the source of the audio content, as receiving content via a particular source may trigger a particular playback configuration. For instance, receiving content via a network interface may indicate music playback. As such, while receiving content via the network interface, the playback device may apply a particular calibration associated with a particular playback configuration corresponding to music playback. As another example, receiving content via a particular physical input may indicate home theater use (i.e., playback of audio from a television show or movie). While playing back content from that input, the playback device may apply a different calibration associated with a playback configuration corresponding to home theater playback.
  • a given zone scene may be associated with a particular playback configuration.
  • the playback device may apply a particular calibration associated with that playback configuration.
  • the content or configuration associated with a zone scene may cause the playback device to apply a particular calibration.
  • a zone scene may involve playback of a particular media content or content source, which causes the playback device to apply a particular calibration.
  • the playback configuration may be indicated to the playback device by way of one or more messages from a control device or another playback device. For instance, after receiving input that selects a particular playback configuration, a device may indicate to the playback device that a particular playback configuration is selected. The playback device may apply a calibration associated with that playback configuration. As another example, the playback device may be a member of a group, such as a bonded zone group. Another playback device, such as a group coordinator device of that group, may detect a playback configuration of the group and send a message indicating that playback configuration (or the calibration for that configuration) to the playback device.
  • the playback device may also apply the calibration to one or more additional playback devices.
  • the playback device may be a member (e.g., the group coordinator) of a group (e.g., a zone group).
  • the playback device may send messages instructing other playback devices in the group to apply the calibration. Upon receiving such a message, these playback devices may apply the calibration.
  • the calibration or calibration state may be shared among devices of a media playback system using one or more state variables.
  • Some example techniques involving calibration state variables are described in U.S. Patent Application No. 14/793,190 filed July 7, 2015, entitled “Calibration State Variable,” and U.S. Patent Application No. 14/793,205 filed July 7, 2015, entitled “Calibration Indicator.”
  • Figure 20 illustrates an example implementation 2000 by which an NMD facilitates spectral calibration of a media playback system using applied spatial calibration.
  • implementation 2000 involves detecting a trigger condition that initiates calibration.
  • an NMD detects a trigger condition that initiates calibration of a media playback system.
  • the trigger condition initiates calibration of the playback device(s) in the media playback system for multiple playback configurations, either explicitly or perhaps because the audio driver(s) of the playback device(s) have been set up with multiple playback configurations.
  • Example trigger conditions to initiate a calibration are described above in section III.a, as well as generally throughout the disclosure.
  • implementation 2000 involves causing audio driver(s) to output calibration audio.
  • the NMD causes multiple audio drivers to output calibration audio.
  • the NMD may transmit an instruction to the playback device(s) under calibration, perhaps via a network interface.
  • Example calibration audio is described above in connection with the example calibration techniques.
  • implementation 2000 involves recording the calibration audio.
  • the NMD records the calibration audio as output by the audio driver(s) of the playback device(s) under calibration via a microphone.
  • multiple NMDs may record the calibration audio via respective microphones.
  • the NMD may be moving through the environment while recording the calibration audio so as to measure the calibration sounds at different locations. With a moving microphone, repetitions of the calibration sound are detected at different physical locations within the environment. Samples of the calibration sound at different locations may provide a better representation of the surrounding environment as compared to samples in one location.
  • control device 126 of media playback system 100 may detect calibration audio emitted by one or more playback devices (e.g., playback devices 104, 106, 108, and/or 110 of the Living Room Zone) at various points along the path 700 (e.g., at point 702 and/or point 704). Alternatively, the control device may record the calibration signal along the path.
  • an NMD may display one or more prompts to move the NMD while the calibration audio is being emitted. Such prompts may guide a user in moving the recording device during the calibration.
  • smartphone 500 is displaying control interface 2100 which includes graphical regions 2102 and 2104. Graphical region 2102 prompts the user to watch an animation in graphical region 2104. Such an animation may depict an example of how to move the smartphone within the environment during calibration to measure the calibration audio at different locations. While an animation is shown in graphical region 2104 by way of example, the control device may alternatively show a video or other indication that illustrates how to move the control device within the environment during calibration.
  • Control interface 2100 also includes selectable controls 2106 and 2108, which respectively advance and step backward in the calibration sequence.
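A minimal sketch of recording repeated calibration frames while the NMD moves along a path follows; the mic object and its read(n) method are assumed interfaces, and stacking the frames simply keeps one row per location sampled.

```python
import numpy as np

def record_calibration_audio(mic, num_frames: int, frame_len: int) -> np.ndarray:
    """Record `num_frames` frames of calibration audio via the microphone.

    Because the user carries the NMD along a path while recording, each
    repetition of the calibration sound is captured at a different
    location, so the stacked frames together sample the whole environment
    rather than a single point.
    """
    frames = [mic.read(frame_len) for _ in range(num_frames)]  # mic.read is assumed
    return np.stack(frames)  # shape: (num_frames, frame_len)
```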
  • Implementation 2000 involves determining spectral calibrations.
  • The NMD causes a processing device to determine respective sets of spectral filters for the multiple playback configurations under calibration. These spectral calibrations may be based on the recorded calibration audio output by the audio driver(s).
  • The NMD may include the processing device.
  • Alternatively, the NMD may transmit the recorded audio to one or more other processing devices. Example processing devices and processing techniques are described above.
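One plausible way a processing device might derive a spectral calibration from the recordings is sketched below, under the assumptions of a flat target response and simple magnitude averaging over the sampled locations; the actual processing disclosed may differ.

```python
import numpy as np

def spectral_filter(recorded_frames: np.ndarray, target_db: float = 0.0,
                    max_boost_db: float = 6.0) -> np.ndarray:
    """Derive a per-frequency correction (in dB) for one playback configuration.

    `recorded_frames` holds the frames of recorded calibration audio for
    this configuration, one row per location sampled. Averaging the
    magnitude spectra over locations approximates the room response; the
    filter is the gain needed to reach the target, clamped to avoid
    excessive boost or cut.
    """
    spectra = np.abs(np.fft.rfft(recorded_frames, axis=1))
    avg_db = 20 * np.log10(np.mean(spectra, axis=0) + 1e-12)
    correction = np.clip(target_db - avg_db, -max_boost_db, max_boost_db)
    return correction
```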
  • The NMD may cause a particular calibration (e.g., a particular set of spectral filters) corresponding to a given playback configuration to be applied to the sound axes formed by the multiple audio drivers when the media playback system is playing back audio content in the given playback configuration. Additional examples of applying calibrations are described above.
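Applying the calibration that matches the active playback configuration might look like the following sketch; the calibrations mapping and the audio_stage.set_filter interface are assumptions for illustration only.

```python
import numpy as np

def apply_calibration(active_configuration: str,
                      calibrations: dict[str, dict[str, np.ndarray]],
                      audio_stage) -> None:
    """Apply the stored spectral filters for the active playback configuration.

    `calibrations` maps a configuration name to a per-sound-axis filter,
    e.g. {"stereo": {"left": fir_left, "right": fir_right}}. The
    `audio_stage.set_filter` call is an assumed interface on the playback
    device's audio stage.
    """
    filters = calibrations.get(active_configuration)
    if filters is None:
        return  # no calibration stored for this configuration; play uncorrected
    for sound_axis, fir in filters.items():
        audio_stage.set_filter(sound_axis, fir)
```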
  • At least one of the elements in at least one example is hereby expressly defined to include a tangible, non-transitory medium such as a memory, DVD, CD, Blu-ray, and so on, storing the software and/or firmware.

Landscapes

  • Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Acoustics & Sound (AREA)
  • Signal Processing (AREA)
  • Circuit For Audible Band Transducer (AREA)
  • Circuits Of Receivers In General (AREA)

Claims (15)

  1. A method for a networked microphone device, NMD, the method comprising:
    detecting (1302, 2002) a trigger condition that initiates calibration of a media playback system comprising multiple audio drivers;
    causing (1304, 2006), via a network interface of the networked microphone device, each of the multiple audio drivers of the media playback system to output calibration audio divided into a repeating set of frames (1402, 1404, 1406);
    recording (1306), via a microphone of the NMD, the calibration audio output by the multiple audio drivers; and
    causing (1308, 2008) a processing device to determine respective sets of spectral filters for the media playback system, the method being characterized in that:
    the calibration of the media playback system is for multiple playback configurations, each playback configuration representing a respective set of one or more sound axes formed via the multiple audio drivers of the media playback system, wherein each sound axis corresponds to a respective input channel of audio content; and
    each frame of the output calibration audio corresponds to a respective playback configuration; and
    the trigger condition causes each of the multiple audio drivers of the media playback system to output the calibration audio via one or more sound axes corresponding to the given playback configuration during respective slots (1402A-C, 1404A-C, 1406A-C) of each frame corresponding to the respective playback configuration, such that, during each frame of the set of frames, a respective set of spatial filters comprising a respective spatial filter for each of the one or more sound axes corresponding to the respective playback configuration is applied to the multiple audio drivers, wherein the spatial filter calibrates the media playback system to a given listening area in space by directing the sound output of a particular sound axis of the set of sound axes in a certain direction by arraying multiple audio drivers to form the particular sound axis; and
    the processing device determines the respective sets of spectral filters for the multiple playback configurations based on the recorded calibration audio, each set of spectral filters comprising a respective spectral filter for each sound axis.
  2. The method of claim 1, further comprising, while the media playback system is playing back audio content in a given playback configuration, applying (1310) the determined set of spectral filters corresponding to the given playback configuration to the sound axes formed by the multiple audio drivers.
  3. The method of any preceding claim, wherein applying the respective set of spatial filters to the multiple audio drivers comprises the processing device applying the spatial filters to the calibration audio and transmitting the calibration audio, with the spatial filters applied, to one or more playback devices (200) comprising the multiple audio drivers (212).
  4. The method of any preceding claim, wherein the media playback system comprises multiple playback devices, each comprising a subset of the multiple audio drivers.
  5. The method of any preceding claim, wherein:
    in a surround sound playback configuration:
    each sound axis corresponds to a respective channel of surround sound audio content; and
    a first spatial filter corresponds to the surround sound playback configuration;
    in a stereo playback configuration:
    each sound axis corresponds to a respective channel of stereo audio content; and
    a second spatial filter corresponds to the stereo playback configuration; and
    in a mono playback configuration:
    the multiple audio drivers form a single sound axis; and
    a third spatial filter corresponds to the mono playback configuration.
  6. The method of claim 5, wherein:
    the mono playback configuration is a first mono playback configuration, the stereo playback configuration is a first stereo playback configuration, and the surround sound playback configuration is a first surround sound configuration; and the multiple playback configurations comprise at least one of:
    a second mono playback configuration in which the multiple audio drivers are configured to form one or more full-frequency-range sound axes and a subwoofer sound axis to output mono audio content in synchrony when playing back audio content in the mono playback configuration, wherein a fourth spatial filter corresponds to the second mono playback configuration;
    a second stereo playback configuration in which the multiple audio drivers are configured to form one or more sound axes to output channels of stereo audio content in synchrony with a subwoofer sound axis when playing back audio content in the second stereo playback configuration, wherein a third spatial filter corresponds to the second stereo playback configuration; and
    a second surround sound playback configuration in which the multiple audio drivers are configured to form one or more full-frequency-range sound axes to output respective channels of surround sound audio content in synchrony with a subwoofer sound axis when playing back audio content in the second surround sound playback configuration, wherein a fourth spatial filter corresponds to the second surround sound playback configuration.
  7. The method of any preceding claim, wherein the multiple playback configurations comprise at least two of:
    a music playback configuration in which the multiple audio drivers are configured to form sound axes to output music content when playing back audio content in the music playback configuration, wherein a music playback spatial filter corresponds to the music playback configuration; and
    a home theater playback configuration in which the multiple audio drivers are configured to form sound axes to output audio content associated with video content when playing back audio content in the home theater playback configuration, wherein a home theater playback spatial filter corresponds to the home theater playback configuration.
  8. The method of any preceding claim, wherein the calibration audio is second calibration audio, the method further comprising:
    before causing the multiple audio drivers to output the second calibration audio, causing the multiple audio drivers to output first calibration audio divided into a repeating set of frames comprising a respective frame for each playback configuration of the multiple playback configurations;
    recording, via the microphone, the first calibration audio output by the multiple audio drivers; and
    causing the processing device to determine the respective sets of spatial filters for the multiple playback configurations based on the recorded first calibration audio, each set of spatial filters comprising a respective spatial filter for each sound axis.
  9. The method of claim 8, wherein:
    the determined sets of spatial filters calibrate the playback device to a particular listening location within a listening area of the playback device; and
    the determined spectral filters offset acoustic characteristics of the listening area.
  10. The method of claim 8 or 9, wherein:
    causing the multiple audio drivers to output the first calibration audio comprises causing the multiple audio drivers to emit calibration audio via multiple sound axes in respective slots of each frame, each sound axis corresponding to a respective channel of multi-channel audio content; and
    causing the processing device to determine the respective sets of spatial filters comprises:
    determining respective spatial delays for each sound axis of the multiple sound axes based on the slots of recorded calibration audio corresponding to the sound axes, in accordance with the determined respective delays,
    wherein determining the respective delays for each sound axis of the multiple sound axes comprises:
    causing a processing device to determine respective times of arrival at the microphone for each sound axis of the multiple sound axes from the slots of recorded calibration audio corresponding to each sound axis; and
    determining delays for each sound axis of the multiple sound axes, each determined delay corresponding to the determined time of arrival of a respective sound axis.
  11. The method of claim 10, wherein causing the processing device to determine respective times of arrival at the microphone for each sound axis of the multiple sound axes comprises:
    dividing the recorded calibration audio into constituent frames, and each constituent frame into respective slots for each sound axis;
    determining respective impulse responses for the sound axes from the respective slots corresponding to each sound axis;
    aligning the respective impulse responses to a first reference point;
    identifying respective second reference points in each impulse response; and determining the respective times of arrival at the microphone based on respective differences between the first reference point and the second reference points in each impulse response, wherein the sound axes consist of a reference sound axis and one or more other sound axes, and wherein identifying the respective second reference points in each impulse response comprises:
    identifying, as a given second reference point, a peak value in the impulse response of the reference sound axis; and
    identifying, as the other second reference points, respective peak values of the impulse responses of the one or more other sound axes within a time window after the given second reference point.
  12. The method of claim 10 or 11, wherein the processing device determining the respective times of arrival at the microphone for each sound axis of the multiple sound axes comprises:
    sending, via the network interface to the processing device:
    the recorded calibration audio, and
    an instruction to determine the respective times of arrival at the microphone for each sound axis of the multiple sound axes; and
    receiving, via the network interface, the determined respective times of arrival.
  13. The method of claim 10, wherein determining the delays for each sound axis of the multiple sound axes comprises:
    determining that the time of arrival of a given sound axis exceeds a maximum delay threshold; and
    setting the delay of the given sound axis to the maximum delay threshold when the media playback system is playing back audio content associated with video content.
  14. The method of any preceding claim, wherein detecting the trigger condition that initiates calibration of a media playback system comprises one of:
    detecting, via a user interface, input data indicating a command to initiate calibration of the media playback system; and
    detecting that the media playback system has been set up in a particular axis configuration in which the multiple audio drivers form a particular set of sound axes.
  15. A media playback system comprising:
    multiple audio drivers;
    a networked microphone device comprising a network interface and a microphone;
    a playback device (200) comprising at least one audio driver of the multiple audio drivers and a network interface; and
    a processing device,
    wherein the networked microphone device is configured to perform the method of any of claims 1 to 14, and
    the playback device is configured to, when playing back audio content in a given playback configuration, enable an audio stage to apply a particular spectral filter corresponding to the given playback configuration.
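Claims 10, 11, and 13 together describe a concrete procedure for deriving per-axis delays: slice the recording into per-axis slots, obtain impulse responses, locate the reference axis's peak, search for the other axes' peaks within a time window after it, and clamp delays to a maximum threshold. The following Python sketch illustrates that procedure under stated assumptions (impulse responses already extracted and aligned to a common first reference point; window length and sample rate supplied by the caller); it is not the patented implementation.

```python
import numpy as np

def arrival_times(impulse_responses: dict[str, np.ndarray],
                  reference_axis: str, window: int, fs: int) -> dict[str, float]:
    """Times of arrival (seconds) per sound axis, per claims 10-11.

    Each impulse response is assumed already aligned to a common first
    reference point (e.g., the emission time of its slot). The reference
    axis's peak serves as the given second reference point; peaks of the
    other axes are searched only within `window` samples after it.
    """
    ref_peak = int(np.argmax(np.abs(impulse_responses[reference_axis])))
    times = {reference_axis: ref_peak / fs}
    for axis, ir in impulse_responses.items():
        if axis == reference_axis:
            continue
        segment = np.abs(ir[ref_peak:ref_peak + window])
        times[axis] = (ref_peak + int(np.argmax(segment))) / fs
    return times

def delays(times: dict[str, float], max_delay: float) -> dict[str, float]:
    """Per-axis delays from arrival times, clamped per claim 13."""
    earliest = min(times.values())
    return {axis: min(t - earliest, max_delay) for axis, t in times.items()}
```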
EP17754501.9A 2016-07-15 2017-07-14 Correction spectrale à l'aide d'un étalonnage spatial Active EP3485655B1 (fr)

Priority Applications (1)

Application Number Priority Date Filing Date Title
EP23212793.6A EP4325895A3 (fr) 2016-07-15 2017-07-14 Correction spectrale à l'aide d'un étalonnage spatial

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
US15/211,822 US9794710B1 (en) 2016-07-15 2016-07-15 Spatial audio correction
US15/211,835 US9860670B1 (en) 2016-07-15 2016-07-15 Spectral correction using spatial calibration
PCT/US2017/042191 WO2018013959A1 (fr) 2016-07-15 2017-07-14 Correction spectrale à l'aide d'un étalonnage spatial

Related Child Applications (2)

Application Number Title Priority Date Filing Date
EP23212793.6A Division EP4325895A3 (fr) 2016-07-15 2017-07-14 Correction spectrale à l'aide d'un étalonnage spatial
EP23212793.6A Division-Into EP4325895A3 (fr) 2016-07-15 2017-07-14 Correction spectrale à l'aide d'un étalonnage spatial

Publications (2)

Publication Number Publication Date
EP3485655A1 EP3485655A1 (fr) 2019-05-22
EP3485655B1 true EP3485655B1 (fr) 2024-01-03

Family

ID=59656155

Family Applications (2)

Application Number Title Priority Date Filing Date
EP17754501.9A Active EP3485655B1 (fr) 2016-07-15 2017-07-14 Correction spectrale à l'aide d'un étalonnage spatial
EP23212793.6A Pending EP4325895A3 (fr) 2016-07-15 2017-07-14 Correction spectrale à l'aide d'un étalonnage spatial

Family Applications After (1)

Application Number Title Priority Date Filing Date
EP23212793.6A Pending EP4325895A3 (fr) 2016-07-15 2017-07-14 Correction spectrale à l'aide d'un étalonnage spatial

Country Status (3)

Country Link
EP (2) EP3485655B1 (fr)
CN (2) CN112492502B (fr)
WO (1) WO2018013959A1 (fr)

Family Cites Families (26)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
IL134979A (en) * 2000-03-09 2004-02-19 Be4 Ltd A system and method for optimizing three-dimensional hearing
US8234395B2 (en) 2003-07-28 2012-07-31 Sonos, Inc. System and method for synchronizing operations among a plurality of independently clocked digital data processing devices
CN101926182B (zh) * 2008-01-31 2013-08-21 三菱电机株式会社 频带分割时间校正信号处理装置
WO2010013180A1 (fr) * 2008-07-28 2010-02-04 Koninklijke Philips Electronics N.V. Système audio et son procédé de fonctionnement
CN101478296B (zh) * 2009-01-05 2011-12-21 华为终端有限公司 一种多声道系统中的增益控制方法及装置
US8559655B2 (en) * 2009-05-18 2013-10-15 Harman International Industries, Incorporated Efficiency optimized audio system
US8219394B2 (en) * 2010-01-20 2012-07-10 Microsoft Corporation Adaptive ambient sound suppression and speech tracking
US8265310B2 (en) * 2010-03-03 2012-09-11 Bose Corporation Multi-element directional acoustic arrays
US9307340B2 (en) * 2010-05-06 2016-04-05 Dolby Laboratories Licensing Corporation Audio system equalization for portable media playback devices
WO2011139502A1 (fr) * 2010-05-06 2011-11-10 Dolby Laboratories Licensing Corporation Égalisation de système audio pour dispositifs portatifs de reproduction multimédia
US9107023B2 (en) * 2011-03-18 2015-08-11 Dolby Laboratories Licensing Corporation N surround
JP2015513832A (ja) * 2012-02-21 2015-05-14 インタートラスト テクノロジーズ コーポレイション オーディオ再生システム及び方法
US9524098B2 (en) * 2012-05-08 2016-12-20 Sonos, Inc. Methods and systems for subwoofer calibration
US9219460B2 (en) 2014-03-17 2015-12-22 Sonos, Inc. Audio settings based on environment
US9690539B2 (en) * 2012-06-28 2017-06-27 Sonos, Inc. Speaker calibration user interface
US9106192B2 (en) * 2012-06-28 2015-08-11 Sonos, Inc. System and method for device playback calibration
US9706323B2 (en) 2014-09-09 2017-07-11 Sonos, Inc. Playback device calibration
US20140003635A1 (en) * 2012-07-02 2014-01-02 Qualcomm Incorporated Audio signal processing device calibration
FR2995754A1 (fr) * 2012-09-18 2014-03-21 France Telecom Calibration optimisee d'un systeme de restitution sonore multi haut-parleurs
US9729986B2 (en) * 2012-11-07 2017-08-08 Fairchild Semiconductor Corporation Protection of a speaker using temperature calibration
WO2015009854A2 (fr) * 2013-07-16 2015-01-22 The Trustees Of The University Of Pennsylvania Propagation et perception acoustiques pour des agents autonomes dans des environnements dynamiques
EP2838086A1 (fr) * 2013-07-22 2015-02-18 Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. Dans une réduction d'artefacts de filtre en peigne dans un mixage réducteur multicanal à alignement de phase adaptatif
US9729984B2 (en) * 2014-01-18 2017-08-08 Microsoft Technology Licensing, Llc Dynamic calibration of an audio system
US9196432B1 (en) * 2014-09-24 2015-11-24 James Thomas O'Keeffe Smart electrical switch with audio capability
WO2016054090A1 (fr) * 2014-09-30 2016-04-07 Nunntawi Dynamics Llc Procédé pour déterminer un changement de position de haut-parleurs
CN104967953B (zh) * 2015-06-23 2018-10-09 Tcl集团股份有限公司 一种多声道播放方法和系统

Also Published As

Publication number Publication date
CN109716795B (zh) 2020-12-04
EP4325895A2 (fr) 2024-02-21
WO2018013959A1 (fr) 2018-01-18
EP3485655A1 (fr) 2019-05-22
CN112492502A (zh) 2021-03-12
EP4325895A3 (fr) 2024-05-15
CN112492502B (zh) 2022-07-19
CN109716795A (zh) 2019-05-03

Similar Documents

Publication Publication Date Title
US11736878B2 (en) Spatial audio correction
US10448194B2 (en) Spectral correction using spatial calibration
US11818553B2 (en) Calibration based on audio content
US10674293B2 (en) Concurrent multi-driver calibration
EP3485655B1 (fr) Correction spectrale à l'aide d'un étalonnage spatial

Legal Events

Date Code Title Description
STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: UNKNOWN

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: THE INTERNATIONAL PUBLICATION HAS BEEN MADE

PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: REQUEST FOR EXAMINATION WAS MADE

17P Request for examination filed

Effective date: 20190215

AK Designated contracting states

Kind code of ref document: A1

Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR

AX Request for extension of the european patent

Extension state: BA ME

DAV Request for validation of the european patent (deleted)
DAX Request for extension of the european patent (deleted)
STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: EXAMINATION IS IN PROGRESS

17Q First examination report despatched

Effective date: 20200819

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: EXAMINATION IS IN PROGRESS

GRAP Despatch of communication of intention to grant a patent

Free format text: ORIGINAL CODE: EPIDOSNIGR1

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: GRANT OF PATENT IS INTENDED

INTG Intention to grant announced

Effective date: 20230609

GRAJ Information related to disapproval of communication of intention to grant by the applicant or resumption of examination proceedings by the epo deleted

Free format text: ORIGINAL CODE: EPIDOSDIGR1

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: EXAMINATION IS IN PROGRESS

GRAP Despatch of communication of intention to grant a patent

Free format text: ORIGINAL CODE: EPIDOSNIGR1

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: GRANT OF PATENT IS INTENDED

INTC Intention to grant announced (deleted)
INTG Intention to grant announced

Effective date: 20230718

GRAS Grant fee paid

Free format text: ORIGINAL CODE: EPIDOSNIGR3

GRAA (expected) grant

Free format text: ORIGINAL CODE: 0009210

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: THE PATENT HAS BEEN GRANTED

AK Designated contracting states

Kind code of ref document: B1

Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR

REG Reference to a national code

Ref country code: GB

Ref legal event code: FG4D

P01 Opt-out of the competence of the unified patent court (upc) registered

Effective date: 20231206

REG Reference to a national code

Ref country code: CH

Ref legal event code: EP

REG Reference to a national code

Ref country code: DE

Ref legal event code: R096

Ref document number: 602017078115

Country of ref document: DE

REG Reference to a national code

Ref country code: IE

Ref legal event code: FG4D

REG Reference to a national code

Ref country code: LT

Ref legal event code: MG9D

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: ES

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20240103

REG Reference to a national code

Ref country code: NL

Ref legal event code: MP

Effective date: 20240103

REG Reference to a national code

Ref country code: AT

Ref legal event code: MK05

Ref document number: 1647986

Country of ref document: AT

Kind code of ref document: T

Effective date: 20240103

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: NL

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20240103

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: IS

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20240503