JP6503457B2 - Audio processing algorithm and database - Google Patents

Audio processing algorithm and database

Info

Publication number
JP6503457B2
JP6503457B2 JP2017513241A
Authority
JP
Japan
Prior art keywords
playback
zone
device
audio
audio signal
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
JP2017513241A
Other languages
Japanese (ja)
Other versions
JP2017528083A (en)
Inventor
Timothy Sheen
Simon Jarvis
Original Assignee
Sonos, Inc.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority to US14/481,505 (US9952825B2)
Priority to US14/481,514 (US9891881B2)
Application filed by Sonos, Inc.
Priority to PCT/US2015/048942 (WO2016040324A1)
Publication of JP2017528083A
Application granted
Publication of JP6503457B2
Application status: Active
Anticipated expiration

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04S STEREOPHONIC SYSTEMS
    • H04S 7/00 Indicating arrangements; Control arrangements, e.g. balance control
    • H04S 7/30 Control circuits for electronic adaptation of the sound field
    • H04S 7/301 Automatic calibration of stereophonic sound system, e.g. with test microphone
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04R LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R 27/00 Public address systems
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04R LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R 2227/00 Details of public address [PA] systems covered by H04R 27/00 but not provided for in any of its subgroups
    • H04R 2227/005 Audio distribution systems for home, i.e. multi-room use
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04S STEREOPHONIC SYSTEMS
    • H04S 2400/00 Details of stereophonic systems covered by H04S but not provided for in its groups
    • H04S 2400/09 Electronic reduction of distortion of stereophonic sound systems
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04S STEREOPHONIC SYSTEMS
    • H04S 2400/00 Details of stereophonic systems covered by H04S but not provided for in its groups
    • H04S 2400/13 Aspects of volume control, not necessarily automatic, in stereophonic sound systems
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04S STEREOPHONIC SYSTEMS
    • H04S 2400/00 Details of stereophonic systems covered by H04S but not provided for in its groups
    • H04S 2400/15 Aspects of sound capture and related signal processing for recording or reproduction
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04S STEREOPHONIC SYSTEMS
    • H04S 7/00 Indicating arrangements; Control arrangements, e.g. balance control
    • H04S 7/30 Control circuits for electronic adaptation of the sound field
    • H04S 7/305 Electronic adaptation of stereophonic audio signals to reverberation of the listening space

Description

Cross-reference to related applications

  The present application claims priority to U.S. Patent Application No. 14/481,505, filed on September 9, 2014, and U.S. Patent Application No. 14/481,514, filed on September 9, 2014, both of which are incorporated herein by reference in their entirety.

  The present application relates to consumer products and, more particularly, to methods, systems, products, features, services, and other elements directed to media playback, or some aspect thereof.

  Until 2003, when Sonos, Inc. filed one of its first patent applications, entitled "Method for Synchronizing Audio Playback Between Multiple Networked Devices," and began selling a media playback system in 2005, the options for accessing and listening to digital audio in an out-loud setting were limited. With the Sonos Wireless HiFi System, people can experience music from many sources via one or more networked playback devices. Through a software control application installed on a smartphone, tablet, or computer, one can play the music he or she wants in any room that has a networked playback device. In addition, using a controller, for example, different songs can be streamed to each room that has a playback device, rooms can be grouped together for synchronous playback, or the same song can be heard in all rooms synchronously.

  Given the ever-growing interest in digital media, there continues to be a need to develop consumer-accessible technologies that can further enhance the listening experience.

  The features, aspects, and advantages of the technology disclosed herein may be better understood with reference to the following description, the appended claims, and the accompanying drawings.

Diagram showing the configuration of an example media playback system in which an embodiment can be implemented
Functional block diagram of an example playback device
Functional block diagram of an example control device
Diagram showing an example controller interface
Example flow diagram of a first method of maintaining a database of audio processing algorithms
Diagram showing an example portion of a first database of audio processing algorithms
Diagram showing an example portion of a second database of audio processing algorithms
Example flow diagram of a second method of maintaining a database of audio processing algorithms
Diagram showing an example playback zone in which a playback device may be calibrated
Example flow diagram of a first method of determining an audio processing algorithm based on one or more playback zone characteristics
Example flow diagram of a second method of determining an audio processing algorithm based on one or more playback zone characteristics
Example flow diagram for identifying an audio processing algorithm from a database of audio processing algorithms

  While the drawings are intended to illustrate some exemplary embodiments, it is understood that the invention is not limited to the arrangements and instrumentality shown in the drawings.

I. Overview

  When a playback device plays audio content in a playback zone, the quality of the playback may depend on the acoustic characteristics of the playback zone. In the description herein, a playback zone may include one or more playback devices or groups of playback devices. The acoustic characteristics of a playback zone may depend on the dimensions of the playback zone, the types of furniture in the playback zone, and the placement of that furniture within it. As such, different playback zones may have different acoustic characteristics. Because a given model of playback device may be used in various playback zones having different acoustic characteristics, a single audio processing algorithm may not provide consistent quality of audio playback by the playback device in each of the different playback zones.

  The examples described herein relate to determining an audio processing algorithm to be applied by a playback device based on the acoustic characteristics of the playback zone in which the playback device is located. Application of the determined audio processing algorithm by the playback device when playing back audio content in the playback zone may cause the audio content rendered by the playback device in the playback zone to have at least some predetermined audio characteristics. In some cases, applying the audio processing algorithm may involve modifying the audio amplification at one or more audio frequencies of the audio content. Other examples are also possible.
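  As a rough illustration of the frequency-dependent amplification just described, an audio processing algorithm could be represented as a set of per-band gains applied to audio content. This is only a sketch; the band names and numeric values are assumptions, not details taken from the patent.

```python
# Hypothetical sketch: an audio processing algorithm represented as
# per-frequency-band gains (in dB) applied to audio content. The band
# names and values are illustrative assumptions.
def apply_algorithm(band_levels_db, gains_db):
    """Add each band's gain to its level (working in the dB domain)."""
    return {band: level + gains_db.get(band, 0.0)
            for band, level in band_levels_db.items()}

content = {"low": -3.0, "mid": 0.0, "high": -1.5}  # rendered levels, dB
algorithm = {"low": 3.0, "high": 1.5}              # boost attenuated bands
processed = apply_algorithm(content, algorithm)    # all bands now at 0 dB
```

Here the bands that the zone attenuates receive a compensating boost, which is the intuition behind the amplification change described above.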

  In one example, a database of audio processing algorithms may be maintained, and an audio processing algorithm in the database may be identified based on one or more characteristics of the playback zone. The one or more characteristics of the playback zone may include the acoustic characteristics of the playback zone, and/or one or more of the dimensions of the playback zone, the flooring and/or wall materials of the playback zone, and the number and/or types of furniture in the playback zone.

  Maintaining the database of audio processing algorithms may involve determining at least one audio processing algorithm corresponding to one or more characteristics of a playback zone and adding the determined audio processing algorithm to the database. In one example, the database may be stored on the one or more devices maintaining the database, or on one or more other devices. In the description herein, unless otherwise stated, the functions for maintaining the database may be performed by, among others, one or more computing devices (i.e., servers), one or more playback devices, or one or more controller devices. For simplicity, however, the one or more devices performing such functions may generally be referred to as a computing device.
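  A minimal sketch of such database maintenance might look like the following; the entry structure and field names are assumptions for illustration only.

```python
# Hypothetical sketch of maintaining the database described above: each
# entry associates playback zone characteristics with a determined audio
# processing algorithm (here, per-band gains). Field names are assumed.
def add_entry(database, zone_characteristics, algorithm):
    """Store an association between zone characteristics and an algorithm."""
    database.append({"zone": zone_characteristics, "algorithm": algorithm})

database = []
add_entry(database,
          {"room_size": "medium", "flooring": "hardwood"},
          {"low": 3.0, "mid": 0.0, "high": 1.5})
```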

  In one example, determining such an audio processing algorithm may involve the computing device determining an acoustic characteristic of a playback zone. In some cases, the playback zone may be a model room used to simulate playback zones in which a playback device may play audio content. In such cases, one or more physical characteristics of the model room (i.e., dimensions, flooring, wall materials, etc.) may be predetermined. In other cases, the playback zone may be a room in the home of a user of the playback device. In such cases, the physical characteristics of the playback zone may be provided by the user or may be unknown.

  In one example, the computing device may cause a playback device in the playback zone to play an audio signal. In some cases, the played audio signal may include audio content having frequencies that substantially cover the full frequency range renderable by the playback device. The playback device may then detect an audio signal using a microphone of the playback device, which may be a built-in microphone. In some cases, the detected audio signal may include a portion corresponding to the played audio signal. For example, the detected audio signal may include components of the played audio signal reflected within the playback zone. The computing device may receive the detected audio signal from the playback device and determine an acoustic response of the playback zone based on the detected audio signal.

  The computing device may then determine the acoustic characteristics of the playback zone by removing the acoustic characteristics of the playback device from the acoustic response of the playback zone. The acoustic characteristics of the playback device may be acoustic characteristics corresponding to the model of the playback device. In some cases, the acoustic characteristics corresponding to the model of the playback device may be determined based on audio signals played and detected by a representative playback device of that model in an anechoic chamber.
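  In the frequency domain, the removal step above can be approximated by dividing the measured in-zone response by the device's anechoic response, band by band. This is a hedged sketch under a simple multiplicative model; the linear-magnitude values are illustrative, and a real system would work with full transfer functions rather than three bands.

```python
# Hedged sketch: treat the detected in-zone acoustic response as
# (approximately) the product of the playback device's own response and
# the zone's response; dividing out the device response isolates the zone.
def remove_device_characteristics(measured_response, device_response):
    return {band: measured_response[band] / device_response[band]
            for band in measured_response}

measured = {"low": 0.4, "mid": 1.0, "high": 0.6}  # device-in-zone, linear
device = {"low": 0.8, "mid": 1.0, "high": 0.75}   # anechoic measurement
zone = remove_device_characteristics(measured, device)
```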

  The computing device may then determine a corresponding audio processing algorithm based on the determined acoustic characteristics of the playback zone and predetermined audio characteristics. The predetermined audio characteristics may include a particular frequency equalization that is considered good sounding. The corresponding audio processing algorithm may be determined such that its application by the playback device when playing audio content in the playback zone causes the audio content rendered by the playback device in the playback zone to have at least some of the predetermined audio characteristics. For example, if the acoustic characteristics of the playback zone are such that a particular audio frequency is attenuated relative to other frequencies, the corresponding audio processing algorithm may include stronger amplification of that particular audio frequency. Other examples are also possible.
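  The determination above can be sketched as choosing per-band gains so that the zone response multiplied by the gain matches a target ("good sounding") response; the flat target and the attenuation values below are assumptions for illustration.

```python
# Illustrative sketch: derive per-band gains such that
# zone_response * gain == target_response for each band. An attenuated
# band receives stronger amplification, as described above.
def derive_algorithm(zone_response, target_response):
    return {band: target_response[band] / zone_response[band]
            for band in zone_response}

zone = {"low": 0.5, "mid": 1.0, "high": 1.0}    # low band attenuated
target = {"low": 1.0, "mid": 1.0, "high": 1.0}  # flat target equalization
algorithm = derive_algorithm(zone, target)      # low band gets 2x gain
```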

  Then, the association between the determined audio processing algorithm and the acoustic characteristics of the playback zone may be stored as an entry in the database. In some cases, the association of the audio processing algorithm with one or more other characteristics of the playback zone may additionally or alternatively be stored in a database. For example, if the playback zone is of a particular size, the association between the audio processing algorithm and the particular room size may be stored in the database. Other examples are also possible.

  In one example, the database may be accessed by a computing device to identify an audio processing algorithm for a playback device to apply in a playback zone. In one case, the computing device accessing the database to identify the audio processing algorithm may be the same computing device that maintains the database, as described above. In another case, it may be a different computing device.

  In some cases, accessing the database to identify an audio processing algorithm for the playback device to apply in the playback zone may be part of a calibration of the playback device. Such a calibration may be initiated by the playback device itself, by a server in communication with the playback device, or by a controller device. In some cases, the calibration may be initiated because the playback device is new and the calibration is part of the playback device's initial setup. In other cases, the calibration may be initiated when the playback device is repositioned within the same playback zone or moved from one playback zone to another. In further cases, the calibration may be initiated by a user of the playback device, for example via a controller device.

  In one example, calibration of the playback device may involve, among other possibilities, the computing device prompting the user of the playback device to indicate one or more characteristics of the playback zone, such as the approximate dimensions of the playback zone, its flooring or wall materials, and the amount of furniture. The computing device may prompt the user via a user interface on the controller device. Based on the one or more characteristics of the playback zone provided by the user, an audio processing algorithm corresponding to those characteristics may be identified in the database, and the playback device may accordingly apply the identified audio processing algorithm when playing back audio content in the playback zone.
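  A lookup of this kind might be sketched as follows; the exact-match rule on user-provided fields, and all field names and values, are assumptions for illustration rather than details from the patent.

```python
# Hypothetical sketch: identify an audio processing algorithm in the
# database by matching user-provided playback zone characteristics
# against stored entries (exact match on each provided field is assumed).
def identify_algorithm(database, characteristics):
    for entry in database:
        if all(entry["zone"].get(key) == value
               for key, value in characteristics.items()):
            return entry["algorithm"]
    return None  # no match; a real system might fall back to a default

database = [
    {"zone": {"room_size": "small", "flooring": "carpet"},
     "algorithm": {"low": 1.0, "high": 2.0}},
    {"zone": {"room_size": "medium", "flooring": "hardwood"},
     "algorithm": {"low": 3.0, "high": 1.5}},
]
match = identify_algorithm(database, {"room_size": "medium"})
```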

  In another example, calibration of the playback device may involve determining the acoustic characteristics of the playback zone and identifying a corresponding audio processing algorithm based on those acoustic characteristics. The determination of the acoustic characteristics of the playback zone may be similar to that described above. For example, the playback device being calibrated may play a first audio signal in the playback zone and then detect a second audio signal using its microphone. The acoustic characteristics of the playback zone may then be determined based on the second audio signal. Based on the determined acoustic characteristics, a corresponding audio processing algorithm may be identified in the database, and the playback device may apply the identified audio processing algorithm when playing back audio content in the playback zone. As mentioned above, application of the corresponding audio processing algorithm by the playback device when playing back audio content in the playback zone can cause the audio content rendered by the playback device in the playback zone to have at least some predetermined audio characteristics.

  While the description of playback device calibration above generally involves a database of audio processing algorithms, one of ordinary skill in the art will appreciate that an audio processing algorithm for the playback zone may be determined without the computing device accessing a database. For example, instead of identifying a corresponding audio processing algorithm in the database, the computing device may determine the audio processing algorithm by computing it based on the acoustic characteristics of the playback zone (determined from the detected audio signal) and the same predetermined audio characteristics described above in connection with maintaining and generating audio processing algorithm entries for the database. Other examples are also possible.

  In some cases, the playback device to be calibrated may be one of a plurality of playback devices configured to play audio content synchronously in the playback zone. In such cases, the determination of the acoustic characteristics of the playback zone may also account for audio signals played by the other playback devices in the playback zone. In one example, while the audio processing algorithm is being determined, each of the plurality of playback devices in the playback zone may play an audio signal simultaneously, and the audio signal detected by the microphone of the playback device being calibrated may include portions corresponding to the audio signal played by that playback device as well as portions of the audio signals played by the other playback devices in the playback zone. The acoustic response of the playback zone can be determined based on the detected audio signal, and the acoustic characteristics of the playback zone, which in this case includes the other playback devices, can be determined by removing the acoustic characteristics of the playback device being calibrated from the acoustic response of the playback zone. The audio processing algorithm may then be computed, or identified in the database, based on the acoustic characteristics of the playback zone, and applied by the playback device.
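  The grouped case above can be sketched with a simple additive signal model; this ignores room reflections (a real measurement would also convolve each signal with a room impulse response), and the sample values are illustrative assumptions.

```python
# Hedged sketch: model the audio signal detected by the calibrated
# device's microphone as the sample-wise sum of its own playback and the
# signals played simultaneously by the other devices in the zone.
def detected_signal(own_playback, other_playbacks):
    combined = list(own_playback)
    for other in other_playbacks:
        combined = [a + b for a, b in zip(combined, other)]
    return combined

own = [1.0, 0.5, 0.0]                        # calibrated device's playback
others = [[0.2, 0.2, 0.2], [0.0, 0.1, 0.3]]  # two other devices in the zone
detected = detected_signal(own, others)
```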

  In another case, two or more of the plurality of playback devices in the playback zone may each have their own built-in microphone and may be calibrated individually according to the description above. In one example, the acoustic characteristics of the playback zone may be determined based on the collection of audio signals detected by the microphones of the two or more playback devices, and an audio processing algorithm corresponding to those acoustic characteristics may be identified for each of the two or more playback devices. Other examples are also possible.

  As indicated above, the present disclosure involves determining an audio processing algorithm for a playback device to apply based on the acoustic characteristics of the particular playback zone in which the playback device is located. In one aspect, a computing device is provided. The computing device includes a processor and a memory storing instructions executable by the processor to cause the computing device to perform functions. The functions include causing a playback device in a playback zone to play a first audio signal and receiving, from the playback device, data indicative of a second audio signal detected by a microphone of the playback device. The second audio signal includes a portion corresponding to the first audio signal. The functions further include determining an audio processing algorithm based on the second audio signal and an acoustic characteristic of the playback device, and transmitting data indicative of the determined audio processing algorithm to the playback device.

  In another aspect, a computing device is provided. The computing device includes a processor and a memory storing instructions executable by the processor to cause the computing device to perform functions. The functions include causing a first playback device to play a first audio signal in a playback zone, causing a second playback device to play a second audio signal in the playback zone, and receiving, from the first playback device, data indicative of a third audio signal detected by a microphone of the first playback device. The third audio signal includes (i) a portion corresponding to the first audio signal and (ii) a portion corresponding to the second audio signal played by the second playback device. The functions also include determining an audio processing algorithm based on the third audio signal and an acoustic characteristic of the first playback device, and transmitting data indicative of the determined audio processing algorithm to the first playback device.

  In another aspect, a playback device is provided. The playback device includes a processor, a microphone, and a memory storing instructions executable by the processor to cause the playback device to perform functions. The functions include detecting, by the microphone, a second audio signal while playing a first audio signal in a playback zone. The second audio signal includes a portion corresponding to the first audio signal. The functions also include determining an audio processing algorithm based on the second audio signal and an acoustic characteristic of the playback device, and applying the determined audio processing algorithm to audio data corresponding to a media item when playing back the media item in the playback zone.

  In another aspect, a computing device is provided. The computing device includes a processor and a memory storing instructions executable by the processor to cause the computing device to perform functions. The functions include causing a playback device in a playback zone to play a first audio signal and receiving data indicative of a second audio signal detected by a microphone of the playback device. The second audio signal includes a portion corresponding to the first audio signal played by the playback device. The functions also include determining an acoustic characteristic of the playback zone based on the second audio signal and a characteristic of the playback device, determining an audio processing algorithm based on the acoustic characteristic of the playback zone, and storing in a database an association between the audio processing algorithm and the acoustic characteristic of the playback zone.

  In another aspect, a computing device is provided. The computing device includes a processor and a memory storing instructions executable by the processor to cause the computing device to perform functions. The functions include causing a playback device in a playback zone to play a first audio signal, and receiving (i) data indicative of one or more characteristics of the playback zone and (ii) data indicative of a second audio signal detected by a microphone of the playback device. The second audio signal includes a portion corresponding to the audio signal played by the playback device. The functions also include determining an audio processing algorithm based on the second audio signal and a characteristic of the playback device, and storing in a database an association between the determined audio processing algorithm and at least one of the one or more characteristics of the playback zone.

  In another aspect, a computing device is provided. The computing device includes a processor and a memory storing instructions executable by the processor to cause the computing device to perform functions. The functions include maintaining a database of (i) a plurality of audio processing algorithms and (ii) a plurality of playback zone characteristics. Each audio processing algorithm of the plurality of audio processing algorithms corresponds to at least one playback zone characteristic of the plurality of playback zone characteristics. The functions also include receiving data indicative of one or more characteristics of a playback zone, identifying an audio processing algorithm in the database based on the data, and transmitting data indicative of the identified audio processing algorithm.

  While some examples described herein may refer to functions performed by given entities, such as a "user" and/or other entities, it should be understood that such references are for purposes of explanation only. The claims should not be interpreted to require action by any such example entity unless explicitly required by the language of the claims themselves. One of ordinary skill in the art will appreciate that the present disclosure includes numerous other embodiments.

II. Exemplary Operating Environment

  FIG. 1 shows an example configuration of a media playback system 100 in which one or more embodiments disclosed herein may be practiced or implemented. As shown, the media playback system 100 is associated with an example home environment having multiple rooms and spaces, e.g., a master bedroom, an office, a dining room, and a living room. As shown in the example of FIG. 1, the media playback system 100 includes playback devices 102-124, control devices 126 and 128, and a wired or wireless network router 130.

  Further descriptions of the different components of the example media playback system 100, and of how the different components work together to provide a user with a media experience, are given in the following sections. While the description herein generally refers to the media playback system 100, the technologies described herein are not limited to use in the home environment shown in FIG. 1. For example, the technologies described herein may be useful in environments where multi-zone audio is desired, such as a commercial setting like a restaurant, mall, or airport, a vehicle such as a sport utility vehicle (SUV), bus, or car, a ship or boat, an airplane, and so on.

a. Exemplary Playback Device

  FIG. 2 shows a functional block diagram of an example playback device 200 that may be configured as one or more of the playback devices 102-124 of the media playback system 100 of FIG. 1. The playback device 200 may include a processor 202, software components 204, memory 206, audio processing components 208, audio amplifier(s) 210, speaker(s) 212, a microphone 220, and a network interface 214 including a wireless interface 216 and a wired interface 218. In some cases, the playback device 200 may not include the speaker(s) 212 but may include a speaker interface for connecting the playback device 200 to external speakers. In other cases, the playback device 200 may include neither the speaker(s) 212 nor the audio amplifier(s) 210 but may include an audio interface for connecting the playback device 200 to an external audio amplifier or audio-visual receiver.

  In one example, the processor 202 may be a clock-driven computing component configured to process input data based on instructions stored in the memory 206. The memory 206 may be a non-transitory computer-readable storage medium configured to store instructions executable by the processor 202. For instance, the memory 206 may be data storage that can be loaded with one or more of the software components 204 executable by the processor 202 to achieve certain functions. In one example, the functions may involve the playback device 200 retrieving audio data from an audio source or another playback device. In another example, the functions may involve the playback device 200 sending audio data to another device or playback device on a network. In yet another example, the functions may involve pairing the playback device 200 with one or more playback devices to create a multi-channel audio environment.

  Certain functions may involve the playback device 200 synchronizing playback of audio content with one or more other playback devices. During synchronous playback, a listener preferably cannot perceive time-delay differences between playback of the audio content by the playback device 200 and playback by the one or more other playback devices. U.S. Patent No. 8,234,395, entitled "System and method for synchronizing operations among a plurality of independently clocked digital data processing devices," which is hereby incorporated herein by reference, provides in more detail some examples of audio playback synchronization among playback devices.

  Additionally, the memory 206 may be configured to store data. The data may be associated with, for example, the playback device 200, such as one or more zones and/or zone groups of which the playback device 200 is a part, audio sources accessible by the playback device 200, or a playback queue with which the playback device 200 (or some other playback device) may be associated. The data may be stored as one or more state variables that are periodically updated and that indicate the state of the playback device 200. The memory 206 may also include data associated with the states of other devices of the media system, which may be shared among the devices from time to time so that one or more of the devices has the most recent data associated with the system. Other embodiments are also possible.

  The audio processing components 208 may include, among others, one or more digital-to-analog converters (DACs), analog-to-digital converters (ADCs), audio preprocessing components, audio enhancement components, and digital signal processors (DSPs). In one embodiment, one or more of the audio processing components 208 may be subcomponents of the processor 202. In one embodiment, audio content may be processed and/or intentionally altered by the audio processing components 208 to produce audio signals. The produced audio signals may then be provided to the audio amplifier(s) 210 for amplification and playback through the speaker(s) 212. In particular, the audio amplifier(s) 210 may include devices configured to amplify audio signals to a level for driving one or more of the speakers 212. The speaker(s) 212 may include an individual transducer (e.g., a "driver") or a complete speaker system involving an enclosure with one or more drivers. Particular drivers of the speaker(s) 212 may include, for example, a subwoofer (e.g., for low frequencies), a mid-range driver (e.g., for middle frequencies), and/or a tweeter (e.g., for high frequencies). In some cases, each transducer in the one or more speakers 212 may be driven by an individual corresponding audio amplifier of the audio amplifier(s) 210. In addition to producing analog signals for playback by the playback device 200, the audio processing components 208 may process audio content to be sent to one or more other playback devices for playback.

  Audio content to be processed and/or played back by the playback device 200 may be received from an external source, e.g., via an audio line-in input connection (e.g., an auto-detecting 3.5 mm audio line-in connection) or the network interface 214.

  The microphone 220 may include an audio sensor configured to convert detected sounds into electrical signals. The electrical signals may be processed by the audio processing components 208 and/or the processor 202. The microphone 220 may be positioned in one or more orientations at one or more locations on the playback device 200. The microphone 220 may be configured to detect sound within one or more frequency ranges. In one case, one or more of the microphones 220 may be configured to detect sound within a frequency range of audio that the playback device 200 is capable of rendering. In another case, one or more of the microphones 220 may be configured to detect sound within a frequency range audible to humans. Other examples are also possible.

  Network interface 214 may be configured to facilitate a data flow between playback device 200 and one or more other devices over a data network. As such, playback device 200 may be configured to receive audio content over the data network from one or more other playback devices in communication with playback device 200, from a network device within a local area network, or from an audio content source over a wide area network such as the Internet. In one example, the audio content and other signals transmitted and received by playback device 200 may be transmitted in the form of digital packets containing an Internet Protocol (IP)-based source address and an IP-based destination address. In such a case, network interface 214 may be configured to parse the digital packet data so that data destined for playback device 200 is properly received and processed by playback device 200.
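As a rough illustration of the IP-based addressing just described, a device might filter incoming packets by destination address; the addresses and dictionary-based packet layout here are assumptions for the sketch, not the patent's wire format:

```python
DEVICE_IP = "192.168.1.50"  # assumed address of the playback device

def accept_packet(packet):
    """Keep only packets whose IP destination matches this device."""
    return packet.get("dst") == DEVICE_IP

packets = [
    {"src": "192.168.1.10", "dst": "192.168.1.50", "payload": b"audio"},
    {"src": "192.168.1.10", "dst": "192.168.1.99", "payload": b"other"},
]
accepted = [p for p in packets if accept_packet(p)]  # only the first packet
```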

  As shown, network interface 214 may include wireless interface 216 and wired interface 218. Wireless interface 216 may provide network interface functions for playback device 200 to wirelessly communicate with other devices (e.g., other playback devices, speakers, receivers, network devices, and control devices within a data network associated with playback device 200) in accordance with a communication protocol (e.g., any wireless standard, including IEEE 802.11a, 802.11b, 802.11g, 802.11n, 802.11ac, 802.15, and 4G mobile communication standards). Wired interface 218 may provide network interface functions for playback device 200 to communicate over a wired connection with other devices in accordance with a communication protocol (e.g., IEEE 802.3). Although the network interface 214 shown in FIG. 2 includes both wireless interface 216 and wired interface 218, network interface 214 may in some embodiments include only a wireless interface or only a wired interface.

  In one example, playback device 200 and another playback device may be paired to play two separate audio components of audio content. For example, playback device 200 may be configured to play left channel audio components, while other playback devices may be configured to play right channel audio components. This can create or enhance stereo effects of the audio content. Paired playback devices (also referred to as "combined playback devices") may also play audio content in synchronization with other playback devices.
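The left/right channel pairing described above can be sketched as follows, assuming interleaved stereo frames (a common layout, though not one the patent specifies):

```python
def split_channels(interleaved):
    """Split interleaved stereo frames [L, R, L, R, ...] into two streams,
    one for each playback device of a stereo pair."""
    left = interleaved[0::2]   # rendered by the left-channel device
    right = interleaved[1::2]  # rendered by the right-channel device
    return left, right

frames = [0.1, 0.9, 0.2, 0.8]  # L, R, L, R
left, right = split_channels(frames)
```

In practice the two devices would also need to play their streams in synchrony, which the patent addresses separately.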

  In another example, playback device 200 may be sonically integrated with one or more other playback devices to form a single, integrated playback device. An integrated playback device may be configured to process and reproduce sound differently than an unintegrated playback device or paired playback devices, because an integrated playback device may have additional speakers through which audio content may be rendered. For instance, if playback device 200 is designed to render low-frequency audio content (e.g., a subwoofer), playback device 200 may be integrated with a playback device designed to render full-frequency audio content. In this case, the full-frequency playback device, when integrated with the low-frequency playback device 200, may be configured to render only the mid- and high-frequency components of the audio content, while the low-frequency playback device 200 renders the low-frequency component. The integrated playback device may further be paired with a single playback device or with yet another integrated playback device.

  As an example, Sonos, Inc. currently offers for sale playback devices including a "PLAY:1," "PLAY:3," "PLAY:5," "PLAYBAR," "CONNECT:AMP," "CONNECT," and "SUB." Any other past, present, and/or future playback device may additionally or alternatively be used to implement the playback devices of the embodiments disclosed herein. Further, it should be understood that a playback device is not limited to the examples illustrated in FIG. 2 or to the Sonos products offered. For example, a playback device may include wired or wireless headphones. In another example, a playback device may include or interact with a docking station for personal mobile media playback devices. In yet another example, a playback device may be integrated into another device or component, such as a television, a lighting fixture, or some other device for indoor or outdoor use.

b. Exemplary Playback Zone Configuration Referring back to the media playback system 100 of FIG. 1, the environment may have one or more playback zones, each with one or more playback devices. The media playback system 100 may be established with one or more playback zones, after which one or more zones may be added or removed to arrive at the exemplary configuration shown in FIG. 1. Each zone may be given a name according to a different room or space, such as an office, bathroom, master bedroom, bedroom, kitchen, dining room, living room, and/or balcony. In one case, a single playback zone may include multiple rooms or spaces. In another case, a single room or space may include multiple playback zones.

  As shown in FIG. 1, the balcony, dining room, kitchen, bathroom, office, and bedroom zones each have one playback device, while the living room and master bedroom zones each have multiple playback devices. In the living room zone, playback devices 104, 106, 108, and 110 may be configured to play audio content in synchrony as individual playback devices, as one or more combined playback devices, as one or more integrated playback devices, or as any combination thereof. Similarly, in the master bedroom, playback devices 122 and 124 may be configured to play audio content in synchrony as individual playback devices, as a combined playback device, or as an integrated playback device.

  In one example, one or more playback zones in the environment of FIG. 1 may each be playing different audio content. For instance, a user may be grilling in the balcony zone and listening to hip-hop music played by playback device 102, while another user may be preparing food in the kitchen zone and listening to classical music played by playback device 114. In another example, a playback zone may play the same audio content in synchrony with another playback zone. For instance, if the user is in the office zone, the office zone playback device 118 may play the same music that is playing on the balcony playback device 102. In such a case, playback devices 102 and 118 play the music in synchrony, so that the user may seamlessly (or at least substantially seamlessly) enjoy the audio content being played out loud while moving between different playback zones. Synchronization among playback zones may be achieved in a manner similar to synchronization among playback devices, as described in the previously referenced U.S. Patent No. 8,234,395.

  As mentioned above, the zone configurations of media playback system 100 may be dynamically modified, and in some embodiments media playback system 100 supports multiple configurations. For instance, if a user physically moves one or more playback devices into or out of a zone, media playback system 100 may be reconfigured to accommodate the change. For example, if the user physically moves playback device 102 from the balcony zone to the office zone, the office zone may then include both playback device 118 and playback device 102. Playback device 102 may be paired or grouped with the office zone, and/or renamed if so desired, via a control device such as control devices 126 and 128. On the other hand, if one or more playback devices are moved to a particular area in the home environment that is not already a playback zone, a new playback zone may be formed for that area.

  Further, different playback zones of media playback system 100 may be dynamically combined into zone groups or split into individual playback zones. For example, the dining room zone and the kitchen zone may be combined into a dinner-party zone group so that playback devices 112 and 114 may render audio content in synchrony. On the other hand, if one user wants to watch TV while another user wants to listen to music in the living room space, the living room zone may be split into a television zone including playback device 104 and a listening zone including playback devices 106, 108, and 110.
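A minimal sketch of this dynamic grouping and splitting, assuming a simple mapping of zone names to device IDs (the data model and names are illustrative, not the patent's):

```python
zones = {
    "Dining Room": ["112"],
    "Kitchen": ["114"],
    "Living Room": ["104", "106", "108", "110"],
}

def group_zones(zones, names, group_name):
    """Merge the named zones into one zone group holding all their devices."""
    zones[group_name] = [d for n in names for d in zones.pop(n)]
    return zones

def split_zone(zones, name, new_zones):
    """Split one zone (or zone group) into several individual zones."""
    zones.pop(name)
    zones.update(new_zones)
    return zones

group_zones(zones, ["Dining Room", "Kitchen"], "Dinner Party")
split_zone(zones, "Living Room",
           {"TV": ["104"], "Listening": ["106", "108", "110"]})
```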

c. Exemplary Control Device FIG. 3 shows a functional block diagram of an exemplary control device 300 that may make up control device 126 and/or 128 of media playback system 100. As shown, control device 300 may include processor 302, memory 304, network interface 306, user interface 308, and microphone 310. In one example, control device 300 may be a controller dedicated to media playback system 100. In another example, control device 300 may be a network device on which media playback system controller application software is installed, such as an iPhone®, iPad®, or any other smartphone, tablet, or network device (e.g., a networked computer such as a PC or Mac®).

  Processor 302 may be configured to perform functions related to enabling user access, control, and configuration of media playback system 100. Memory 304 may be configured to store instructions executable by processor 302 and to perform those functions. Memory 304 may also be configured to store media playback system controller application software and other data associated with media playback system 100 and the user.

  Microphone 310 may include an audio sensor configured to convert detected sound into an electrical signal, which may be processed by processor 302. In one case, if control device 300 is a device that may also be used as a means for voice communication or voice recording, the one or more microphones 310 may be the microphones that facilitate those functions. For instance, the one or more microphones 310 may be configured to detect sound within the frequency range that humans can produce and/or the frequency range audible to humans. Other examples are also possible.

  In one example, network interface 306 may be based on an industry standard (e.g., infrared, radio, wired standards such as IEEE 802.3, or wireless standards such as IEEE 802.11a, 802.11b, 802.11g, 802.11n, 802.11ac, 802.15, and 4G communication standards). Network interface 306 may provide a means for control device 300 to communicate with other devices in media playback system 100. In one example, data and information (e.g., state variables) may be communicated between control device 300 and other devices via network interface 306. For instance, playback zone and zone group configurations in media playback system 100 may be received by control device 300 from a playback device or another network device, or conversely transmitted by control device 300 via network interface 306 to a playback device or network device. In some cases, the other network device may be another control device.

  Playback device control commands, such as volume control and audio playback control, may also be communicated from control device 300 to a playback device via network interface 306. As described above, changes to the configuration of media playback system 100 may be performed by a user using control device 300. Configuration changes may include adding one or more playback devices to a zone, removing one or more playback devices from a zone, adding one or more zones to a zone group, removing one or more zones from a zone group, forming a combined or integrated player, separating a combined or integrated player into one or more playback devices, and so on. Accordingly, control device 300 may be referred to as a controller, whether it is a dedicated controller with installed media playback system controller application software or a network device.

  User interface 308 of control device 300 may be configured to facilitate user access to and control of media playback system 100 by providing a controller interface, such as controller interface 400 shown in FIG. 4. Controller interface 400 includes a playback control region 410, a playback zone region 420, a playback status region 430, a playback queue region 440, and an audio content sources region 450. The user interface 400 shown is merely one example of a user interface that may be provided on a network device such as control device 300 of FIG. 3 (and/or control devices 126 and 128 of FIG. 1) and accessed by users to control a media playback system such as media playback system 100. Alternatively, other user interfaces of varying formats, styles, and interactive sequences may be implemented on one or more network devices to provide comparable control access to a media playback system.

  Playback control region 410 may include selectable icons (e.g., selectable by touch or with a cursor) that cause playback devices in a selected playback zone or zone group to play or pause, fast forward, rewind, skip to the next or previous track, enter or exit shuffle mode, enter or exit repeat mode, and enter or exit crossfade mode. Playback control region 410 may also include other selectable icons for modifying other settings, such as equalization settings and playback volume.

  Playback zone region 420 may include representations of playback zones within media playback system 100. In some embodiments, the graphical representations of the playback zones may be selectable to bring up additional selectable icons for managing or configuring the playback zones in the media playback system, such as creating combined zones, creating zone groups, splitting zone groups, and renaming zone groups, among other possibilities.

  For example, as shown, a "group" icon may be provided within each of the graphical representations of the playback zones. The "group" icon within a given zone's graphical representation may be selectable to bring up options for selecting one or more other zones in the media playback system to be grouped with that zone. Once grouped, playback devices in zones that have been grouped with the given zone are configured to play audio content in synchrony with the playback devices in that zone. Analogously, a "group" icon may be provided within a graphical representation of a zone group. In this case, the "group" icon may be selectable to bring up options for deselecting one or more zones in the zone group so as to remove them from the zone group. Other interactions for grouping and ungrouping zones via a user interface such as user interface 400 are also possible and may be implemented. The representations of playback zones in playback zone region 420 may be dynamically updated as playback zone or zone group configurations are modified.

  Playback status region 430 may include graphical representations of audio content that is currently being played, that was previously played, or that is scheduled to be played next in the selected playback zone or zone group. The selected playback zone or zone group may be visually distinguished on the user interface, such as within playback zone region 420 and/or playback status region 430. The graphical representations may include the track title, artist name, album name, album year, track length, and other relevant information useful for the user to know when controlling the media playback system via user interface 400.

  Playback queue region 440 may include graphical representations of audio content in a playback queue associated with the selected playback zone or zone group. In some embodiments, each playback zone or zone group may be associated with a playback queue containing information corresponding to zero or more audio items for playback by the playback zone or zone group. For instance, each audio item in the playback queue may comprise a URI, a URL, or some other identifier usable by a playback device in the playback zone or zone group to find and/or retrieve the audio item from a local audio content source or a networked audio content source for playback by the playback device.
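A playback queue of this kind might be represented as follows; the `QueueItem` structure and the example URIs are illustrative assumptions, not a format the patent defines:

```python
from dataclasses import dataclass

@dataclass
class QueueItem:
    title: str
    uri: str  # identifier the playback device uses to locate the audio item

queue = [
    QueueItem("Track A", "http://example.com/a.mp3"),   # networked source
    QueueItem("Track B", "file:///music/b.flac"),       # local source
]

def next_uri(queue):
    """Return the URI of the next item to play, or None if the queue is empty."""
    return queue[0].uri if queue else None
```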

  In one example, a playlist may be added to a playback queue, in which case information corresponding to each audio item in the playlist may be added to the playback queue. In another example, audio items in a playback queue may be saved as a playlist. In a further example, when a playback zone or zone group is playing continuous streaming audio content, such as internet radio that plays continuously until stopped, rather than discrete audio items that have playback durations, the playback queue may be empty, or populated but "not in use." In an alternative embodiment, a playback queue can include internet radio and/or other streaming audio content items and be "in use" when the playback zone or zone group is playing those items. Other examples are also possible.

  When playback zones or zone groups are "grouped" or "ungrouped," playback queues associated with the affected playback zones or zone groups may be cleared or re-associated. For example, if a first playback zone including a first playback queue is grouped with a second playback zone including a second playback queue, the resulting zone group may have an associated playback queue that is initially empty, that contains the audio items from the first playback queue (e.g., if the second playback zone was added to the first playback zone), that contains the audio items from the second playback queue (e.g., if the first playback zone was added to the second playback zone), or that contains a combination of the audio items from both the first and second playback queues. Subsequently, if the resulting zone group is ungrouped, the ungrouped first playback zone may be re-associated with the previous first playback queue, may be associated with a new, empty playback queue, or may be associated with a new playback queue containing the audio items from the playback queue that was associated with the zone group before the zone group was ungrouped. Similarly, the ungrouped second playback zone may be re-associated with the previous second playback queue, may be associated with a new, empty playback queue, or may be associated with a new playback queue containing the audio items from the playback queue that was associated with the zone group before the zone group was ungrouped.
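The alternative queue outcomes described above (empty, copied from one zone, or combined) can be sketched as selectable policies; the policy names and the string-based track items are illustrative:

```python
def group_queue(q1, q2, policy="combine"):
    """Form a zone group's queue from the two zones' queues under a policy."""
    if policy == "empty":
        return []                     # group starts with an empty queue
    if policy == "first":
        return list(q1)               # e.g., second zone added to the first
    if policy == "second":
        return list(q2)               # e.g., first zone added to the second
    return list(q1) + list(q2)        # "combine": audio items of both queues

q_office = ["trackA", "trackB"]
q_kitchen = ["trackC"]
merged = group_queue(q_office, q_kitchen)
```

On ungrouping, each zone could analogously be re-associated with its saved queue, an empty queue, or a copy of the group's queue.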

  Referring back to user interface 400 of FIG. 4, the graphical representations of audio content in playback queue region 440 may include the track title, artist name, track length, and other relevant information associated with the audio content in the playback queue. In one example, graphical representations of audio content may be selectable to bring up additional selectable icons for managing and/or manipulating the playback queue and/or the audio content represented in it. For instance, a represented audio content item may be removed from the playback queue, moved to a different position within the playback queue, or selected to be played immediately or after the currently playing audio content, among other possibilities. A playback queue associated with a playback zone or zone group may be stored in memory on one or more playback devices in the playback zone or zone group, on playback devices not in the playback zone or zone group, and/or on some other designated device.

  Audio content sources region 450 may include graphical representations of selectable audio content sources from which audio content may be retrieved and played by the selected playback zone or zone group. Discussion of audio content sources may be found in the following section.

d. Exemplary Audio Content Sources As indicated previously, one or more playback devices in a zone or zone group may be configured to retrieve audio content for playback (e.g., according to a corresponding URI or URL for the audio content) from a variety of available audio content sources. In one example, audio content may be retrieved by a playback device directly from a corresponding audio content source (e.g., a line-in connection). In another example, audio content may be provided to a playback device over a network via one or more other playback devices or network devices.

  Exemplary audio content sources may include the memory of one or more playback devices in a media playback system, such as media playback system 100 of FIG. 1; local music libraries on one or more network devices (e.g., a control device, a network-enabled personal computer, or network-attached storage (NAS)); streaming audio services providing audio content via the Internet (e.g., the cloud); or audio sources connected to the media playback system via a line-in input connection on a playback device or network device, among other possibilities.

  In some embodiments, audio content sources may be regularly added to or removed from a media playback system such as media playback system 100 of FIG. 1. In one example, indexing of audio items may be performed whenever one or more audio content sources are added, removed, or updated. Indexing of audio items may involve scanning for identifiable audio items in all folders and directories shared over a network accessible by playback devices in the media playback system, and generating or updating an audio content database containing metadata (e.g., title, artist, album, track length, among others) and other relevant information, such as a URI or URL for finding each identifiable audio item. Other examples for managing and maintaining audio content sources are also possible.
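Such an indexing pass might be sketched as follows, assuming the shared folders are locally mounted; the file extensions and stubbed metadata are illustrative, and a real implementation would parse embedded tags rather than derive a title from the filename:

```python
import os

AUDIO_EXTS = {".mp3", ".flac", ".wav"}  # assumed set of identifiable formats

def index_audio(root_dirs):
    """Scan the given directories and build a URI-keyed metadata database."""
    db = {}
    for root in root_dirs:
        for dirpath, _dirs, files in os.walk(root):
            for name in files:
                if os.path.splitext(name)[1].lower() in AUDIO_EXTS:
                    uri = os.path.join(dirpath, name)
                    # Stubbed metadata; real systems would read tags here.
                    db[uri] = {"title": os.path.splitext(name)[0]}
    return db
```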

  The above descriptions of playback devices, control devices, playback zone configurations, and media content sources provide only a few examples of operating environments in which the functions and methods described below may be implemented. Other operating environments and configurations of media playback systems, playback devices, and network devices not explicitly described herein may also be applicable and suitable for implementing the functions and methods.

III. Maintaining a Database of Signal Processing Algorithms As mentioned above, some examples described herein relate to maintaining a database of audio processing algorithms. In some cases, maintenance of the database may further include generating and / or updating audio processing algorithm entries for the database. Each of the audio processing algorithms in the database may correspond to one or more characteristics of the playback zone. In one example, one or more characteristics of the playback zone may include acoustic characteristics of the playback zone. The following description relates generally to determining audio processing algorithms stored as entries in a database. However, one skilled in the art will appreciate that similar functions may be performed to update existing entries in the database. The database may be accessed to identify the audio processing algorithm that the playback device applies when playing audio content in a particular playback zone.

a. Exemplary Database of Audio Processing Algorithms and Corresponding Acoustic Characteristics of Playback Zones FIG. 5 shows an exemplary flow diagram of a method 500 for maintaining a database of audio processing algorithms and playback zone acoustic characteristics. As mentioned above, maintaining the database of audio processing algorithms may involve determining the audio processing algorithms stored in the database. Method 500 shown in FIG. 5 presents an embodiment of a method that can be implemented within an operating environment involving, for example, media playback system 100 of FIG. 1, one or more of playback device 200 of FIG. 2, and one or more of control device 300 of FIG. 3. In one example, method 500 may be performed by a computing device in communication with a media playback system, such as media playback system 100. In another example, some or all of the functions of method 500 may alternatively be performed by one or more other computing devices, such as one or more servers, one or more playback devices, and/or one or more controller devices.

  Method 500 may include one or more operations, functions, or actions as illustrated by one or more of blocks 502-510. Although the blocks are illustrated in sequential order, these blocks may also be performed in parallel and/or in a different order than described herein. Also, the various blocks may be combined into fewer blocks, divided into additional blocks, and/or removed, depending upon the desired implementation. In addition, for method 500 and the other processes and methods disclosed herein, the flowchart shows the functionality and operation of one possible implementation of the present embodiments. In this regard, each block may represent a module, a segment, or a portion of program code that includes one or more instructions executable by a processor for implementing specific logical functions or steps in the process. The program code may be stored on any type of computer-readable medium, such as a storage device including a disk or hard drive.

  The computer-readable medium may include a non-transitory computer-readable medium, such as computer-readable media that store data for short periods of time, like register memory, processor cache, and random access memory (RAM). The computer-readable medium may also include non-transitory media, such as secondary or persistent long-term storage, like read-only memory (ROM), optical or magnetic disks, and compact-disc read-only memory (CD-ROM). The computer-readable medium may also be any other volatile or non-volatile storage system. The computer-readable medium may be considered, for example, a computer-readable storage medium or a tangible storage device. In addition, for method 500 and the other processes and methods disclosed herein, each block may represent circuitry that is wired to perform the specific logical functions in the process.

  As shown in FIG. 5, method 500 involves causing, by a computing device, a playback device in a playback zone to play a first audio signal (block 502); receiving data indicating a second audio signal detected by a microphone of the playback device (block 504); determining acoustic characteristics of the playback zone based on the second audio signal and characteristics of the playback device (block 506); determining an audio processing algorithm based on the acoustic characteristics of the playback zone (block 508); and storing, in a database, an association between the audio processing algorithm and the acoustic characteristics of the playback zone (block 510).
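The five blocks can be sketched as a pipeline; every function body below is a placeholder assumption, since the patent does not prescribe how blocks 506 and 508 compute their results:

```python
def determine_acoustics(second_signal, device_traits):
    # Block 506 (stub): remove the device's own contribution per band.
    return tuple(s - d for s, d in zip(second_signal, device_traits))

def determine_algorithm(acoustics):
    # Block 508 (stub): choose coefficients countering the zone's response.
    return tuple(-a for a in acoustics)

def method_500(play, detect, device_traits, database):
    play()                                                  # block 502
    second = detect()                                       # block 504
    acoustics = determine_acoustics(second, device_traits)  # block 506
    algorithm = determine_algorithm(acoustics)              # block 508
    database[acoustics] = algorithm                         # block 510
    return database

# Toy integer "signals" stand in for the real measurements.
db = method_500(lambda: None, lambda: (5, 2), (1, 1), {})
```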

  As mentioned above, the database may be accessed to identify an audio processing algorithm for a playback device to apply when playing audio content in a playback zone. As such, in one example, method 500 may be performed for a variety of different playback zones to build a database of audio processing algorithms corresponding to a variety of different playback environments.

  Method 500 involves, at block 502, causing a playback device in a playback zone to play a first audio signal. The playback device may be a playback device similar to playback device 200 shown in FIG. 2. In one case, the computing device may cause the playback device to play the first audio signal by sending to the playback device a command to play the first audio signal. In another case, the computing device may provide to the playback device the first audio signal to be played.

  In one example, the first audio signal may be used to determine an acoustic response of the playback zone. As such, the first audio signal may be a test or measurement signal representative of audio content that may be played by the playback device during regular use by a user. Accordingly, the first audio signal may include audio content with frequencies substantially covering the renderable frequency range of the playback device or the frequency range audible to humans.
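One common form of such a measurement signal (a choice assumed here for illustration; the patent does not mandate it) is a sine sweep spanning the audible band:

```python
import math

def sine_sweep(f_start, f_end, duration, sample_rate):
    """Generate a linear sine sweep from f_start to f_end over `duration` s."""
    n = int(duration * sample_rate)
    samples = []
    for i in range(n):
        t = i / sample_rate
        # Instantaneous frequency rises linearly from f_start to f_end.
        phase = 2 * math.pi * (
            f_start * t + (f_end - f_start) * t * t / (2 * duration))
        samples.append(math.sin(phase))
    return samples

# Roughly the range audible to humans, at a common sample rate.
sweep = sine_sweep(20.0, 20000.0, duration=1.0, sample_rate=44100)
```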

  In one example, the playback zone may represent one of a plurality of playback environments in which the playback device may play audio content during regular use by a user. Referring to FIG. 1, the playback zone may represent any one of the different rooms and zone groups in media playback system 100. For instance, the playback zone may represent the dining room.

  In one case, the playback zone may be a model playback zone built to simulate a listening environment in which the playback device may play audio content. In one instance, the playback zone may be one of a plurality of playback zones built to simulate a plurality of playback environments for the purpose of populating the database of audio processing algorithms. In such a case, certain characteristics of the playback zone may be predetermined and/or known. For example, the dimensions of the playback zone, the flooring or wall materials of the playback zone (or other factors that may affect the audio-reflective characteristics of the playback zone), the number of pieces of furniture in the playback zone, or the sizes and types of furniture in the playback zone, among other possibilities, may be predetermined and/or known characteristics of the playback zone.

  In another case, the playback zone may be a room in the home of a user of the playback device. For instance, as part of building the database, users of playback devices, such as customers and/or testers, may be invited to perform the functions of method 500 using their playback devices to contribute to the database. In one case, certain characteristics of the users' playback zones may be unknown. In another case, some or all of the certain characteristics of the users' playback zones may be provided by the users. A database generated by performing the functions of method 500 may include entries based on simulated playback zones and/or users' playback zones.

  Block 502 involves the computing device causing the playback device to play the first audio signal. One having ordinary skill in the art will appreciate, however, that playback of the first audio signal by the playback device may not necessarily be caused or initiated by the computing device. For instance, a controller device may send a command to the playback device to cause the playback device to play the first audio signal. In another instance, the playback device may play the first audio signal without receiving a command from the computing device or a controller. Other examples are also possible.

  Method 500 involves, at block 504, receiving data indicating a second audio signal detected by a microphone of the playback device. As indicated above, the playback device may be a playback device similar to playback device 200 shown in FIG. 2, and the microphone may accordingly be microphone 220. In one example, the computing device may receive the data from the playback device. In another example, the computing device may receive the data via another playback device, a controller device, or another server.

  While, or shortly after, the playback device plays the first audio signal, the microphone of the playback device may detect the second audio signal. The second audio signal may include detectable audio signals present in the playback zone. For instance, the second audio signal may include a portion corresponding to the first audio signal played by the playback device.

  In one example, as the microphone detects the second audio signal, the computing device may receive data indicative of the detected second audio signal from the playback device as a media stream. In another example, the computing device may receive the data indicative of the second audio signal from the playback device upon completion of the detection by the microphone of the playback device. In either case, the playback device may process the detected second audio signal (via an audio processing component, such as the audio processing component 208 of the playback device 200) to generate the data indicative of the second audio signal, and may send that data to the computing device. In one example, generating the data indicative of the second audio signal may involve converting the second audio signal from an analog signal to a digital signal. Other examples are also possible.

  Method 500 includes, at block 506, determining acoustic characteristics of the playback zone based on the second audio signal and a characteristic of the playback device. As mentioned above, the second audio signal may include a portion corresponding to the first audio signal played by the playback device in the playback zone.

  The characteristic of the playback device may include one or more of the acoustic characteristics of the playback device, the specifications of the playback device (i.e., number of transducers, frequency range, amplifier wattage, etc.), and the model of the playback device. In some cases, the acoustic characteristics and/or the specifications of the playback device may be associated with the model of the playback device. For example, playback devices of a particular model may have substantially the same specifications and acoustic characteristics. In one example, a database of playback device models, their acoustic characteristics, and/or their specifications may be maintained on the computing device or on another device in communication with the computing device.

In one example, the acoustic response of the playback device playing the first audio signal in the playback zone can be represented by a relationship between the first audio signal and the second audio signal. Mathematically, if the first audio signal is f(t), the second audio signal is s(t), and the acoustic response of the playback device playing the first audio signal in the playback zone is h_r(t), the following equation (1) is obtained, where ⊗ denotes convolution:

s(t) = f(t) ⊗ h_r(t)      (1)

Thus, given the second audio signal s(t) detected by the microphone of the playback device and the first audio signal f(t) played by the playback device, h_r(t) can be calculated.
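To make the relationship in equation (1) concrete, the following sketch (not part of the disclosure; `convolve` and `deconvolve` are illustrative helper names, and exact, noise-free discrete signals with a nonzero leading sample are assumed) recovers a discrete h_r from f and s by polynomial long division:

```python
def convolve(f, h):
    """Discrete linear convolution: s[n] = sum_k f[k] * h[n - k]."""
    s = [0.0] * (len(f) + len(h) - 1)
    for i, fv in enumerate(f):
        for j, hv in enumerate(h):
            s[i + j] += fv * hv
    return s

def deconvolve(s, f):
    """Recover h such that convolve(f, h) == s (polynomial long division).
    Assumes f[0] != 0 and an exact, noise-free s."""
    n = len(s) - len(f) + 1
    h, r = [0.0] * n, list(s)
    for i in range(n):
        h[i] = r[i] / f[0]
        for j, fv in enumerate(f):
            r[i + j] -= h[i] * fv
    return h

# Equation (1): s(t) = f(t) (x) h_r(t); given f and s, recover h_r.
f = [1.0, 0.5, 0.25]       # first audio signal played by the device
h_r = [1.0, -0.3, 0.1]     # acoustic response of device plus zone
s = convolve(f, h_r)       # second audio signal at the microphone
print(deconvolve(s, f))    # recovers h_r (up to rounding)
```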

In some cases, because the first audio signal f(t) is played by the playback device, the acoustic response h_r(t) may include (i) the acoustic characteristics of the playback device and (ii) the acoustic characteristics of the playback zone independent of the playback device. Mathematically, this relationship can be expressed as the following equation (2):

h_r(t) = h_p(t) ⊗ h_room(t)      (2)

Here, h_p(t) is the acoustic characteristic of the playback device, and h_room(t) is the acoustic characteristic of the playback zone independent of the playback device. As such, the acoustic characteristic of the playback zone independent of the playback device may be determined by removing the acoustic characteristic of the playback device from the acoustic response of the playback zone to the first audio signal played by the playback device. In other words, the following equation (3) is obtained:

h_room(t) = h_r(t) ⊗ h_p⁻¹(t)      (3)

In one example, the acoustic characteristic h_p(t) of the playback device can be determined by placing the playback device, or a representative playback device of the same model, in an anechoic chamber, causing the playback device to play a measurement signal in the anechoic chamber, and detecting the response signal with the microphone of the playback device. The measurement signal played by the playback device in the anechoic chamber may be similar to the first audio signal f(t) described above. For example, the measurement signal may include audio content with frequencies substantially covering the renderable frequency range of the playback device or the range of human hearing.

The acoustic characteristic h_p(t) of the playback device can represent the relationship between the played measurement signal and the detected response signal. For example, if the measurement signal has a first signal magnitude at a particular frequency and the detected response signal has a second, different signal magnitude at that frequency, the acoustic characteristic h_p(t) of the playback device indicates an amplification or attenuation of the signal at that frequency.

Mathematically, if the measurement signal is x(t), the detected response signal is y(t), and the acoustic characteristic of the playback device in the anechoic chamber is h_p(t), the following equation (4) is obtained:

y(t) = x(t) ⊗ h_p(t)      (4)

Thus, h_p(t) can be calculated based on the measurement signal x(t) and the detected response signal y(t). As mentioned above, h_p(t) may serve as a representative acoustic characteristic for playback devices of the same model as the one used in the anechoic chamber.

In one example, as described above, the representative acoustic characteristic h_p(t) may be stored in association with the model of the playback device and/or the specifications of the playback device. In one example, h_p(t) may be stored on the computing device. In another example, h_p(t) may be stored on the playback device and on other playback devices of the same model. In a further case, the inverse of h_p(t), expressed as h_p⁻¹(t), may be stored instead of h_p(t).

Thus, referring back to block 506, the acoustic characteristic h_room(t) of the playback zone can be determined based on the first audio signal f(t), the second audio signal s(t), and the acoustic characteristic h_p(t) of the playback device. In one example, the inverse h_p⁻¹(t) of the acoustic characteristic of the playback device can be applied to equation (2). In other words, the following equation (5) is obtained:

h_r(t) ⊗ h_p⁻¹(t) = h_room(t) ⊗ I(t)      (5)

Here, I(t) is the impulse signal, and the acoustic characteristic h_room(t) of the playback zone can then be simplified as the following equation (6):

h_room(t) = h_r(t) ⊗ h_p⁻¹(t)      (6)
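As a rough illustration of equations (2) through (6) (an assumption-laden sketch, not the patented implementation; the helper names and the use of exact, noise-free discrete sequences are assumptions), the device characteristic h_p can be removed from a measured overall response h_r to isolate h_room:

```python
def convolve(a, b):
    """Discrete linear convolution of two sequences."""
    out = [0.0] * (len(a) + len(b) - 1)
    for i, av in enumerate(a):
        for j, bv in enumerate(b):
            out[i + j] += av * bv
    return out

def deconvolve(s, f):
    """Polynomial long division; assumes f[0] != 0 and noise-free input."""
    n = len(s) - len(f) + 1
    h, r = [0.0] * n, list(s)
    for i in range(n):
        h[i] = r[i] / f[0]
        for j, fv in enumerate(f):
            r[i + j] -= h[i] * fv
    return h

# Equation (2): h_r(t) = h_p(t) (x) h_room(t)
h_p = [1.0, 0.2]              # device characteristic (anechoic chamber)
h_room = [1.0, -0.4, 0.05]    # zone characteristic, normally unknown
h_r = convolve(h_p, h_room)   # measured overall acoustic response

# Equations (3)/(6): remove the device characteristic to isolate h_room
print(deconvolve(h_r, h_p))
```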

  Method 500 includes, at block 508, determining an audio processing algorithm based on the acoustic characteristics of the playback zone and a predetermined audio characteristic. In one example, the audio processing algorithm may be determined such that, when the playback device applies the determined audio processing algorithm while playing the first audio signal in the playback zone, a third audio signal is produced having audio characteristics substantially identical to, or at least approaching, the predetermined audio characteristic.

  In one example, the predetermined audio characteristic may be an audio frequency equalization that is considered good-sounding. In some cases, the predetermined audio characteristic may include a substantially even equalization across the renderable frequency range of the playback device. In other cases, the predetermined audio characteristic may include an equalization considered pleasing to a typical listener. In further cases, the predetermined audio characteristic may include a frequency response considered suitable for a particular music genre.

  In any case, the computing device can determine the audio processing algorithm based on the acoustic characteristics and the predetermined audio characteristic. In one example, if the acoustic characteristics of the playback zone are such that a particular audio frequency is attenuated more than other frequencies, and the predetermined audio characteristic includes an equalization in which that frequency is only minimally attenuated, the corresponding audio processing algorithm may include increased amplification at that audio frequency.

If the predetermined audio characteristic is represented by a predetermined audio signal z(t) and the audio processing algorithm is represented by p(t), the relationship among the predetermined audio signal z(t), the audio processing algorithm p(t), and the acoustic characteristic h_room(t) of the playback zone can be described mathematically as the following equation (7):

z(t) = p(t) ⊗ h_room(t)      (7)

  Therefore, the audio processing algorithm p(t) can be described mathematically as the following equation (8):

p(t) = z(t) ⊗ h_room⁻¹(t)      (8)

  In some cases, determining the audio processing algorithm may include determining one or more parameters for the audio processing algorithm (i.e., coefficients for p(t)). For example, the audio processing algorithm may include a particular signal amplification gain at a particular corresponding frequency of the audio signal. As such, parameters indicating the particular signal amplifications and/or the particular corresponding frequencies of the audio signal may be identified in order to determine the audio processing algorithm p(t).
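One hedged way to picture such parameters is as per-frequency-band gains chosen so that the zone's measured response matches a target equalization; the band names, the magnitudes, and the flat target below are all illustrative assumptions, not values from the disclosure:

```python
# Hypothetical per-band magnitudes of the playback zone's response,
# e.g. derived from h_room(t); band partitioning is illustrative only.
measured = {"low": 0.5, "mid": 1.0, "high": 0.8}   # zone attenuates lows
target   = {"low": 1.0, "mid": 1.0, "high": 1.0}   # flat predetermined EQ

# One simple parameterization of p(t): a gain per frequency band that
# amplifies bands the zone attenuates (cf. the example above).
gains = {band: target[band] / measured[band] for band in measured}
print(gains)   # the "low" band receives the largest boost
```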

The method 500 includes, at block 510, storing in a database an association between the audio processing algorithm and the acoustic characteristics of the playback zone. As such, an entry may be added to the database that includes the acoustic characteristic h_room(t) of the playback zone determined at blocks 504 and 506 and the corresponding audio processing algorithm p(t). In one example, the database may be stored in local memory storage of the computing device. In another example, if the database is stored on another device, the computing device may transmit the audio processing algorithm and the acoustic characteristics of the playback zone to that device for storage in the database. Other examples are also possible.

  As mentioned above, the playback zone for which the audio processing algorithm has been determined may be a model playback zone used to simulate a listening environment in which the playback device may play audio content, or it may be a room in the home of the user of the playback device. In some cases, the database may contain both entries generated based on audio signals played and detected in model playback zones and entries generated based on audio signals played and detected in the rooms of users of playback devices.

FIG. 6A shows an exemplary portion of a database 600 of audio processing algorithms, in which the audio processing algorithm p(t) determined in the discussion above may be stored. As shown, the portion of database 600 may include a plurality of entries 602-608. Entry 602 may include the playback zone acoustic characteristic h_room⁻¹(t)-1. The acoustic characteristic h_room⁻¹(t)-1 may be a mathematical representation of the acoustic characteristics of a playback zone, calculated based on the audio signal detected by the playback device and the characteristic of the playback device, as described above. Corresponding to the acoustic characteristic h_room⁻¹(t)-1 in entry 602 are the coefficients w1, x1, y1, and z1 for the audio processing algorithm determined based on the acoustic characteristic h_room⁻¹(t)-1 and the predetermined audio characteristic, as described above.

As further shown, entry 604 of database 600 may include the playback zone acoustic characteristic h_room⁻¹(t)-2 and the processing algorithm coefficients w2, x2, y2, and z2; entry 606 may include the playback zone acoustic characteristic h_room⁻¹(t)-3 and the processing algorithm coefficients w3, x3, y3, and z3; and entry 608 may include the playback zone acoustic characteristic h_room⁻¹(t)-4 and the processing algorithm coefficients w4, x4, y4, and z4.
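A minimal sketch of how entries like 602-608 could be persisted, using Python's standard sqlite3 module; the table name, the JSON serialization of the acoustic characteristic, and the four-coefficient parameterization are assumptions made for illustration only:

```python
import json
import sqlite3

# In-memory sketch of database 600: each row associates a playback zone
# acoustic characteristic (serialized here as JSON) with the coefficients
# w, x, y, z of the corresponding audio processing algorithm p(t).
db = sqlite3.connect(":memory:")
db.execute("""CREATE TABLE algorithms (
    room_response TEXT PRIMARY KEY,
    w REAL, x REAL, y REAL, z REAL)""")

h_room_inv = [1.0, 0.4, -0.05]      # e.g. a discretized h_room^-1(t)
coeffs = (1.2, 0.9, 1.0, 1.1)       # illustrative w1, x1, y1, z1
db.execute("INSERT INTO algorithms VALUES (?, ?, ?, ?, ?)",
           (json.dumps(h_room_inv), *coeffs))

# Look up the algorithm coefficients for a given acoustic characteristic.
row = db.execute("SELECT w, x, y, z FROM algorithms WHERE room_response = ?",
                 (json.dumps(h_room_inv),)).fetchone()
print(row)   # → (1.2, 0.9, 1.0, 1.1)
```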

  Those skilled in the art will appreciate that database 600 is merely one example of a database that may be created and maintained by performing the functions of method 500. In one example, the playback zone acoustic characteristics may be stored in a different form or mathematical state (i.e., inverted versus non-inverted functions). In another example, the audio processing algorithm may be stored as a function and/or an equalization function. Other examples are also possible.

In one example, some of the functions described above may be performed multiple times for the same playback device in the same playback zone to determine the acoustic characteristic h_room(t) of the playback zone and the corresponding processing algorithm p(t). For example, by performing blocks 502-506 multiple times, multiple acoustic characteristics of the playback zone can be determined. A composite (i.e., averaged) acoustic characteristic of the playback zone may then be determined from the multiple acoustic characteristics, and the corresponding processing algorithm p(t) may be determined based on the composite acoustic characteristic. The association between the corresponding processing algorithm p(t) and the acoustic characteristic h_room(t) or h_room⁻¹(t) of the playback zone may then be stored in the database. In some cases, the first audio signal played by the playback device in the playback zone may be substantially the same audio signal in each iteration of the functions. In other cases, the first audio signal may be a different audio signal for some or all of the iterations. Other examples are also possible.
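The averaging of repeated measurements described above can be sketched as follows, assuming each measured acoustic characteristic is available as a discrete sequence of equal length (all values are illustrative):

```python
# Three hypothetical measurements of the same playback zone's acoustic
# characteristic, e.g. obtained by repeating blocks 502-506.
measurements = [
    [1.00, -0.40, 0.050],
    [1.02, -0.38, 0.048],
    [0.98, -0.42, 0.052],
]

# Composite (i.e., averaged) characteristic, computed sample by sample.
composite = [sum(samples) / len(samples) for samples in zip(*measurements)]
print(composite)   # ≈ [1.0, -0.4, 0.05]
```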

  The method 500 described above (or some variation of it) may be further performed to generate other entries in the database. For example, given that the playback device is a first playback device, the playback zone is a first playback zone, and the audio processing algorithm is a first audio processing algorithm, method 500 may additionally or alternatively be performed using a second playback device in a second playback zone. In one example, the second playback device may play a fourth audio signal in the second playback zone, and the microphone of the second playback device may detect a fifth audio signal that includes a portion corresponding to the fourth audio signal played by the second playback device. The computing device may then receive data indicative of the fifth audio signal and determine an acoustic characteristic of the second playback zone based on the fifth audio signal and a characteristic of the second playback device.

  The computing device may then determine a second audio processing algorithm based on the acoustic characteristic of the second playback zone, such that applying the determined second audio processing algorithm when the second playback device plays the fourth audio signal in the second playback zone produces a sixth audio signal having audio characteristics substantially identical to the predetermined audio characteristic represented by the predetermined audio signal z(t) in equations (7) and (8). The computing device may then store the association between the second audio processing algorithm and the acoustic characteristic of the second playback zone in the database.

  Although many playback zones may be similar in dimensions, construction materials, and/or furniture types and arrangements, it is unlikely that any two playback zones will have exactly the same playback zone acoustic characteristics. As such, storing an individual entry for every unique playback zone acoustic characteristic and its corresponding audio processing algorithm may require an impractical amount of memory storage; instead, entries for similar or substantially identical playback zone acoustic characteristics may be combined.

  In some cases, the acoustic characteristics of two playback zones may be similar when the two playback zones are substantially similar rooms. In another case, the computing device may, as suggested above, perform method 500 multiple times for the same playback device in the same playback zone. In a further case, the computing device may perform method 500 for different playback devices in the same playback zone. In still other cases, the computing device may perform method 500 for the same playback device in the same playback zone but at a different location within the playback zone. Other examples are also possible.

  In any of these cases, while generating entries of playback zone acoustic characteristics and corresponding audio processing algorithms, the computing device may determine that two playback zones have substantially the same playback zone acoustic characteristics. In response, the computing device may determine a third audio processing algorithm based on the first audio processing algorithm and the second audio processing algorithm. For example, the computing device may determine the third audio processing algorithm by averaging the parameters of the first and second audio processing algorithms.

  The computing device may then store in the database an association between the third audio processing algorithm and the substantially identical acoustic characteristics. In one example, the database entry for the third audio processing algorithm may have a corresponding acoustic characteristic determined as an average of the two substantially identical acoustic characteristics. In some cases, as suggested above, the database may keep only one entry for substantially identical acoustic characteristics in order to save storage memory. In that case, the entries for the acoustic characteristics of the first playback zone and the second playback zone may be discarded in favor of the entry for the third audio processing algorithm. Other examples are also possible.
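The parameter-averaging step can be sketched as follows, assuming each algorithm is represented by four named coefficients (the names w, x, y, z follow database 600; all values are illustrative):

```python
# Two audio processing algorithms determined for playback zones with
# substantially identical acoustic characteristics.
first  = {"w": 1.20, "x": 0.90, "y": 1.00, "z": 1.10}
second = {"w": 1.10, "x": 0.95, "y": 1.05, "z": 1.05}

# Third algorithm: average the parameters of the first and second, so the
# database can keep a single entry for that acoustic characteristic.
third = {k: (first[k] + second[k]) / 2 for k in first}
print(third)
```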

  Although the above description generally refers to method 500 as being performed by a computing device, one skilled in the art will appreciate that, as described above, the functions of method 500 may alternatively be performed by one or more other devices, such as one or more servers, one or more playback devices, and/or one or more controller devices. In other words, one or more of blocks 502-510 may be performed by the computing device, while one or more other blocks of blocks 502-510 may be performed by one or more other computing devices.

  In one example, as described above, playback of the first audio signal by the playback device at block 502 may be performed by the playback device without any external command. Alternatively, the playback device may play the first audio signal in response to a command from a controller device and/or another playback device. In another example, blocks 502-506 may be performed by one or more playback devices or one or more controller devices, while the computing device performs blocks 508 and 510. In yet another example, blocks 502-508 may be performed by one or more playback devices or one or more controller devices, while the computing device performs only the storing of the audio processing algorithm at block 510. Other examples are also possible.

b. Exemplary Database of Audio Processing Algorithms and Corresponding One or More Characteristics of Playback Zones As indicated above, a playback zone may have one or more playback zone characteristics. The one or more playback zone characteristics may include the acoustic characteristics of the playback zone, as described above. In addition, the one or more characteristics of the playback zone may include (a) the dimensions of the playback zone, (b) the audio reflection characteristics of the playback zone, (c) an intended use of the playback zone, (d) a number of furniture items in the playback zone, (e) the sizes of the furniture in the playback zone, and (f) the types of furniture in the playback zone. In some cases, the audio reflection characteristics of the playback zone may be associated with the flooring and/or wall materials of the playback zone.

  In some instances, an association between a determined audio processing algorithm, such as p(t) described above, and one or more other characteristics of the playback zone may be stored in the database. FIG. 7 shows an exemplary flow diagram of a method 700 for maintaining a database of audio processing algorithms and one or more characteristics of playback zones. The method 700 shown in FIG. 7 presents an embodiment of a method that can be implemented within an operating environment including, for example, the media playback system 100 of FIG. 1, one or more of the playback device 200 of FIG. 2, and one or more of the control device 300 of FIG. 3. In one example, method 700 may be performed by a computing device in communication with a media playback system, such as the media playback system 100. Alternatively, in another example, some or all of the functions of method 700 may be performed by one or more other computing devices, such as one or more servers, one or more playback devices, and/or one or more controller devices.

  Method 700 may include one or more operations, functions, or actions as illustrated by one or more of blocks 702-708. Although the blocks are illustrated in sequential order, these blocks may also be performed in parallel and/or in a different order than that described herein. Also, the various blocks may be combined into fewer blocks, divided into additional blocks, and/or removed, depending upon the desired implementation.

  As shown in FIG. 7, the method 700 involves causing a playback device in a playback zone to play a first audio signal (block 702); receiving (i) data indicative of one or more characteristics of the playback zone and (ii) data indicative of a second audio signal detected by a microphone of the playback device (block 704); determining an audio processing algorithm based on the second audio signal and a characteristic of the playback device (block 706); and storing, in a database, an association between the determined audio processing algorithm and at least one of the one or more characteristics of the playback zone (block 708).

  At block 702, the method 700 includes causing, by the computing device, the playback device in the playback zone to play the first audio signal. In one example, block 702 may involve the same or substantially the same functions as block 502 described in connection with FIG. 5. For example, the first audio signal may include audio content with frequencies substantially covering the renderable frequency range of the playback device or the range of human hearing. As such, the description above in connection with block 502 can apply to block 702.

  Method 700 includes, at block 704, receiving (i) data indicative of one or more characteristics of the playback zone and (ii) data indicative of a second audio signal detected by a microphone of the playback device. In one example, block 704 may involve the same or substantially the same functions as block 504 described in connection with FIG. 5. For example, the second audio signal may include a portion corresponding to the first audio signal played by the playback device. As such, the description above in connection with block 504 may apply to block 704.

  In addition to the functions described above in connection with block 504, block 704 involves receiving data indicative of one or more characteristics of the playback zone. As mentioned above, the playback zone may be a model playback zone used to simulate a listening environment in which the playback device may play audio content. In such cases, some of the one or more playback zone characteristics may be known. For example, the dimensions, floor plan, construction materials, and furnishings of the playback zone may be known. In some cases, a model playback zone may be constructed for the purpose of determining audio processing algorithms for the database, in which case one or more of the playback zone characteristics may be predetermined. In another case, the playback zone may be a room of the user of the playback device. As mentioned above, such characteristics of the playback zone can contribute to its acoustic characteristics.

  In one example, the computing device may receive data indicative of the one or more playback zone characteristics via a controller interface of a controller device used by a user or an audio engineer. In another example, the computing device may receive the data indicative of the one or more characteristics of the playback zone from the playback device in the playback zone. For example, the data indicative of the one or more characteristics may be received along with the data indicative of the second audio signal. The data indicative of the one or more playback zone characteristics may be received before, during, or after playback of the first audio signal by the playback device at block 702. Other examples are also possible.

  Method 700 includes, at block 706, determining an audio processing algorithm based on the second audio signal and a characteristic of the playback device. In one example, block 706 may involve the same or similar functions as those described above in connection with blocks 506 and 508 of FIG. 5. For example, determining the audio processing algorithm may include determining an acoustic characteristic of the playback zone based on the second audio signal and the characteristic of the playback device, and then determining the audio processing algorithm based on the acoustic characteristic of the playback zone. As described above, the characteristic of the playback device may include one or more of the acoustic characteristics of the playback device, the specifications of the playback device, and the model of the playback device.

  As mentioned above, applying the determined audio processing algorithm when playing the first audio signal in the playback zone may produce a third audio signal having audio characteristics substantially the same as, or at least approaching, a predetermined audio characteristic. In some cases, the predetermined audio characteristic may be the same as or substantially the same as the predetermined audio characteristic represented as the predetermined audio signal z(t) described above. Other examples are also possible.

  Method 700 includes, at block 708, storing in a database an association between the determined audio processing algorithm and at least one of the one or more characteristics of the playback zone. In one example, block 708 may involve the same or similar functions as those described above in connection with block 510. In this case, however, the computing device may store in the database an association between the audio processing algorithm and at least one of the one or more characteristics in addition to, or instead of, the acoustic characteristics of the playback zone.

  As mentioned above, the playback zone for which the audio processing algorithm has been determined may be a model playback zone used to simulate a listening environment in which the playback device may play audio content, or it may be a room in the home of the user of the playback device. In some cases, the database may contain both entries generated based on audio signals played and detected in model playback zones and entries generated based on audio signals played and detected in the rooms of users of playback devices.

  FIG. 6B illustrates an exemplary portion of a database 650 of audio processing algorithms, which stores the audio processing algorithms determined in the discussion above in association with playback zone acoustic characteristics. As shown, the portion of database 650 may include a plurality of entries 652-658 similar to entries 602-608 of database 600. For example, entries 652 and 602 may have the same playback zone acoustic characteristics and the same audio processing algorithm coefficients. Entries 654 and 604 may have the same playback zone acoustic characteristics and the same audio processing algorithm coefficients. Entries 656 and 606 may have the same playback zone acoustic characteristics and the same audio processing algorithm coefficients. Entries 658 and 608 may have the same playback zone acoustic characteristics and the same audio processing algorithm coefficients.

In addition to the playback zone acoustic characteristics, database 650 may include zone dimension information indicating the dimensions of the playback zone having the corresponding playback zone acoustic characteristic and the audio processing algorithm determined based on that characteristic. For example, as shown, entry 652 may have a zone dimension of a1 × b1 × c1, entry 654 may have a zone dimension of a2 × b2 × c2, entry 656 may have a zone dimension of a3 × b3 × c3, and entry 658 may have a zone dimension of a4 × b4 × c4. As such, in this example, the one or more characteristics stored in association with the determined audio processing algorithm include both the acoustic characteristics of the playback zone and the dimensions of the playback zone. Other examples are also possible.
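As a sketch of how zone dimensions in a database like 650 might be used to select an algorithm for a new zone, the following example picks the entry whose stored dimensions are closest; the nearest-dimension (Euclidean) matching rule and all values are assumptions, not taken from the disclosure:

```python
# Sketch of database 650: entries keyed by playback zone dimensions
# (a, b, c) with the coefficients of the corresponding algorithm.
entries = [
    {"dims": (4.0, 5.0, 2.5), "coeffs": (1.2, 0.9, 1.0, 1.1)},
    {"dims": (3.0, 3.5, 2.4), "coeffs": (1.0, 1.0, 1.1, 0.9)},
    {"dims": (6.0, 8.0, 3.0), "coeffs": (1.4, 0.8, 1.0, 1.2)},
]

def closest_entry(dims):
    """Pick the entry whose zone dimensions are nearest (squared Euclidean)."""
    return min(entries, key=lambda e: sum((a - b) ** 2
                                          for a, b in zip(e["dims"], dims)))

# A user zone of roughly 4 m x 5.2 m x 2.5 m matches the first entry.
print(closest_entry((4.0, 5.2, 2.5))["coeffs"])   # → (1.2, 0.9, 1.0, 1.1)
```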

  Those skilled in the art will appreciate that database 650 is merely one example of a database that may be created and maintained by performing the functions of method 700. In one example, the playback zone acoustic characteristics may be stored in a different form or mathematical state (i.e., inverted versus non-inverted functions). In another example, the audio processing algorithm may be stored as a function and/or an equalization function. In yet another example, database 650 may include only zone dimensions and corresponding audio processing algorithms, without the corresponding acoustic characteristics of the playback zones. Other examples are also possible.

  Similar to method 500, the method 700 described above (or some variation of it) may be further performed to generate other entries in the database. For example, given that the playback device is a first playback device, the playback zone is a first playback zone, and the audio processing algorithm is a first audio processing algorithm, method 700 may additionally or alternatively be performed using a second playback device in a second playback zone. In one example, the second playback device may play a fourth audio signal in the second playback zone, and the microphone of the second playback device may detect a fifth audio signal that includes a portion corresponding to the fourth audio signal played by the second playback device. The computing device may then receive (i) data indicative of one or more characteristics of the second playback zone and (ii) data indicative of the fifth audio signal detected by the microphone of the second playback device in the second playback zone.

  The computing device may then determine an acoustic characteristic of the second playback zone based on the fifth audio signal and a characteristic of the second playback device, and may determine a second audio processing algorithm based on that acoustic characteristic. Applying the determined second audio processing algorithm when the second playback device plays the fourth audio signal in the second playback zone produces a sixth audio signal having audio characteristics substantially identical to the predetermined audio characteristic represented by the predetermined audio signal z(t) in equations (7) and (8). The computing device may then store in the database an association between the second audio processing algorithm and at least one of the one or more characteristics of the second playback zone.

  Similar to that described above in connection with method 500, in the process of generating entries for the database, the computing device may determine that two playback zones have similar or substantially the same acoustic characteristics. In that case, as mentioned above, the computing device may combine (i.e., average) the playback zone acoustic characteristics and the corresponding determined audio processing algorithms, and may store the combined acoustic characteristic and the combined audio processing algorithm as a single entry in the database. Other examples are also possible.
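The combining (averaging) of substantially-same entries might be sketched as follows in Python. This is a hypothetical illustration: the similarity tolerance, the list-based database layout, and the element-wise averaging are assumptions for the sketch, not the patent's specification.

```python
import math

def responses_similar(h_a, h_b, tol=0.1):
    """Two measured zone responses count as 'substantially the same'
    when their normalized RMS difference is below a tolerance."""
    diff = math.sqrt(sum((a - b) ** 2 for a, b in zip(h_a, h_b)))
    norm = math.sqrt(sum(a ** 2 for a in h_a)) + 1e-12
    return diff / norm < tol

def merge_entries(entries, h_new, coeffs_new, tol=0.1):
    """entries: list of [zone_response, eq_coefficients] pairs.
    If h_new matches an existing entry, average both fields into one
    combined entry; otherwise append a new entry."""
    for entry in entries:
        h, c = entry
        if responses_similar(h, h_new, tol):
            entry[0] = [(a + b) / 2 for a, b in zip(h, h_new)]
            entry[1] = [(a + b) / 2 for a, b in zip(c, coeffs_new)]
            return entries
    entries.append([list(h_new), list(coeffs_new)])
    return entries
```

Averaging is only one possible combination; a weighted merge that favors the more recent measurement would follow the same structure.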

  As with method 500, although the above description generally refers to method 700 as being performed by a computing device, one skilled in the art will appreciate that the functions of method 700 may alternatively be performed by one or more other computing devices, such as one or more servers, one or more playback devices, and/or one or more controller devices. In other words, one or more of blocks 702-708 may be performed by the computing device, while one or more other blocks of blocks 702-708 may be performed by one or more other computing devices. The other computing devices may include one or more playback devices, one or more controller devices, and/or one or more servers.

  In an example, as described above, playback of the first audio signal by the playback device at block 702 may be performed by the playback device without any external command. Alternatively, the playback device may play the first audio signal in response to commands from the controller device and / or other playback devices. In another example, blocks 702-706 may be performed by one or more playback devices or one or more controller devices, and the computing device may perform block 708. Other examples are also possible.

IV. Calibration of Playback Device Based on Playback Zone Characteristics

  As mentioned above, some examples described herein involve calibration of a playback device for a playback zone. In some cases, calibration of the playback device may include determining an audio processing algorithm that the playback device applies when playing audio content in the playback zone.

  FIG. 8 shows an exemplary playback environment 800 within which a playback device may be calibrated. As shown, playback environment 800 includes computing device 802, playback devices 804 and 806, controller device 808, and playback zone 810. The playback devices 804 and 806 may be similar to the playback device 200 shown in FIG. 2. As such, playback devices 804 and 806 may each include a microphone, such as microphone 220. In some cases, only one of the playback devices 804 and 806 may have a microphone.

  In one example, the playback devices 804 and 806 may be part of a media playback system, such as the media playback system 100 of FIG. 1 shown and described above, and may be configured to play audio content synchronously. In some cases, playback devices 804 and 806 may be grouped together to play audio content synchronously within playback zone 810. Referring back to FIG. 1, the playback zone 810 may be any one or more of the different rooms and zone groups in the media playback system 100. For example, the playback zone 810 may be the master bedroom, in which case playback devices 804 and 806 may correspond to playback devices 122 and 124, respectively.

  In one example, controller device 808 may be a device used to control the media playback system. In some cases, controller device 808 may be similar to the control device 300 of FIG. 3. While controller device 808 in FIG. 8 is shown inside playback zone 810, controller device 808 may be outside playback zone 810, or may move into and out of playback zone 810, while communicating with playback device 804, playback device 806, and/or other devices within the media playback system.

  In one example, computing device 802 may be a server in communication with a media playback system. The computing device 802 may be configured to maintain a database of information associated with the media playback system, such as registration numbers associated with the playback devices 804 and 806. Also, computing device 802 may be configured to maintain a database of audio processing algorithms as described in the previous section. Other examples are also possible.

  Methods 900, 1000 and 1100 provide functionality that may be performed for calibration of playback devices in playback zones, such as playback devices 804 and 806 in playback zone 810, as described below.

a. First Exemplary Method of Determining an Audio Processing Algorithm Based on a Detected Audio Signal

  FIG. 9 shows an exemplary flow diagram of a method 900 of determining an audio processing algorithm based on one or more playback zone characteristics. The method 900 shown in FIG. 9 presents an embodiment of a method that may be implemented within an operating environment including, for example, the media playback system 100 of FIG. 1, one or more of the playback devices 200 of FIG. 2, one or more of the control devices 300 of FIG. 3, and the playback environment 800 of FIG. 8. In an example, the method 900 may be performed by a computing device in communication with a media playback system. Alternatively, some or all of the functions of method 900 may be performed by one or more other computing devices associated with the media playback system, such as one or more servers, one or more playback devices, and/or one or more controller devices.

  Method 900 may include one or more operations, functions, or actions as indicated by one or more of blocks 902-908. Although the blocks are shown in sequential order, these blocks may be performed in parallel and/or in a different order than that described herein. Also, the various blocks may be combined into fewer blocks, divided into additional blocks, and/or eliminated based on the desired implementation.

  As shown in FIG. 9, the method 900 includes causing a playback device in a playback zone to play a first audio signal (block 902), receiving from the playback device data indicative of a second audio signal detected by a microphone of the playback device (block 904), determining an audio processing algorithm based on the second audio signal and an acoustic characteristic of the playback device (block 906), and transmitting data indicative of the determined audio processing algorithm to the playback device (block 908).

  Method 900 includes, at block 902, causing the playback device in the playback zone to play the first audio signal. Referring to FIG. 8, the playback device may be playback device 804, and the playback zone may be playback zone 810. As such, the playback device 804 may be similar to the playback device 200 shown in FIG. 2.

  In one example, computing device 802 may determine that playback device 804 is to be calibrated for playback zone 810 and, responsive to that determination, may cause playback device 804 to play the first audio signal in playback zone 810. In some cases, computing device 802 may determine that playback device 804 is to be calibrated based on an input received from a user indicating that playback device 804 should be calibrated. In one example, the input may be received from the user via controller device 808. In another case, the computing device 802 may determine that the playback device 804 should be calibrated because the playback device 804 is a new playback device or has newly moved to playback zone 810. In a further case, calibration of the playback device 804 (or any other playback device in the media playback system) may be performed periodically, in which case computing device 802 may determine that playback device 804 should be calibrated based on a calibration schedule of playback device 804. Other examples are also possible. In response to determining that the playback device 804 is to be calibrated, the computing device 802 may then cause the playback device 804 to play the first audio signal.

  Although block 902 describes computing device 802 causing playback device 804 to play the first audio signal, one skilled in the art will appreciate that playback of the first audio signal by playback device 804 need not be initiated by computing device 802. For example, controller device 808 may send a command to playback device 804 causing playback device 804 to play the first audio signal. In another example, the playback device 806 may cause the playback device 804 to play the first audio signal. In a further example, the playback device 804 may play the first audio signal without receiving a command from the computing device 802, the playback device 806, or the controller device 808. In an example, the playback device 804 may determine that calibration should be performed based on movement of the playback device 804 or a change in the playback zone of the playback device 804 and, in response, may play the first audio signal. Other examples are also possible.

  As suggested, the first audio signal may be a test signal or measurement signal used to calibrate the playback device 804 for the playback zone 810. As such, the first audio signal may be representative of audio content that may be played by the playback device during regular use by a user. Accordingly, the first audio signal may include audio content with frequencies substantially covering the renderable frequency range of the playback device or the range of frequencies audible to humans. In another example, the first audio signal may be a favorite or commonly played audio track of a user of the playback device.

  Method 900 includes, at block 904, receiving from the playback device data indicative of a second audio signal detected by a microphone of the playback device. Continuing the example, given that the playback device 804 is similar to the playback device 200 of FIG. 2, the microphone of the playback device 804 may be similar to the microphone 220 of the playback device 200. In one example, computing device 802 may receive the data directly from playback device 804. In another example, computing device 802 may receive the data via another playback device such as playback device 806, a controller device such as controller device 808, or another computing device such as another server.

  While the playback device 804 is playing the first audio signal, or shortly thereafter, the microphone of the playback device 804 may detect the second audio signal. The second audio signal may include sounds present in the playback zone. For example, the second audio signal may include a portion corresponding to the first audio signal played by the playback device 804.

  In an example, the computing device 802 may receive the data indicative of the second audio signal from the playback device 804 as a media stream while the microphone detects the second audio signal. In another example, computing device 802 may receive the data indicative of the second audio signal from playback device 804 once detection of the second audio signal by the microphone of playback device 804 is complete. In either case, the playback device 804 may process the detected second audio signal (via an audio processing component, such as the audio processing component 208 of the playback device 200) to generate the data indicative of the second audio signal, and may transmit the data to the computing device 802. In an example, generating the data indicative of the second audio signal may involve converting the second audio signal from an analog signal to a digital signal. Other examples are also possible.

  The method 900 includes, at block 906, determining an audio processing algorithm based on the second audio signal and an acoustic characteristic of the playback device. In one example, the acoustic characteristic of the playback device may be h_p(t), as described above in connection with block 506 of method 500 shown in FIG. 5. For example, as described above, the acoustic characteristic of the playback device may be determined by causing a reference playback device to play a measurement signal in an anechoic chamber, receiving from the reference playback device data indicative of an audio signal detected by a microphone of the reference playback device, and comparing the detected audio signal with the measurement signal.

  As suggested above, the reference playback device may be of the same model as the playback device 804 being calibrated for playback zone 810. Also, similar to that described above in connection with block 506, the computing device may then determine the acoustic characteristic of the playback zone based on the acoustic characteristic of the playback device and the second audio signal.
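As a rough illustration of this step, the zone's contribution can be recovered by deconvolving the known playback-device response h_p(t) out of the detected signal. The following Python sketch is a heavily simplified assumption (noiseless detection, impulse excitation, finite impulse responses; all function and variable names are hypothetical), not the patent's implementation:

```python
def estimate_zone_response(detected, h_p, length):
    """Recover the playback-zone impulse response h_room from a
    detected signal, assuming detected = h_p convolved with h_room
    (impulse excitation, noiseless).  Implements time-domain
    deconvolution (polynomial long division) against the known
    playback-device response h_p."""
    h_room = []
    for n in range(length):
        acc = detected[n] if n < len(detected) else 0.0
        for k in range(1, len(h_p)):
            if n - k >= 0:
                acc -= h_p[k] * h_room[n - k]
        h_room.append(acc / h_p[0])
    return h_room
```

With a measurement signal rather than an impulse, the same idea would be applied after first deconvolving the measurement signal itself, or the division would be performed in the frequency domain with regularization.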

  In one example, the computing device 802 may determine the audio processing algorithm based on the acoustic characteristic of the playback zone, similar to that described above in connection with block 508. As such, computing device 802 may determine the audio processing algorithm based on the acoustic characteristic of the playback zone and a predetermined audio characteristic, such that application of the determined audio processing algorithm by the playback device 804, when playing the first audio signal in the playback zone 810, produces a third audio signal having an audio characteristic substantially the same as, or at least approximating, the predetermined audio characteristic.

  In another example, computing device 802 may select an audio processing algorithm corresponding to the acoustic characteristic of playback zone 810 from among a plurality of audio processing algorithms. For instance, the computing device may access a database, such as database 600 or 650 of FIGS. 6A and 6B, respectively, and may identify an audio processing algorithm based on the acoustic characteristic of playback zone 810. For example, referring to database 600 of FIG. 6A, if the acoustic characteristic of playback zone 810 is determined to be h^-1_room-3(t), then the audio processing algorithm having coefficients w3, x3, y3, and z3 of database entry 606 may be identified.

  In some cases, no acoustic characteristic in the database may exactly match the determined acoustic characteristic of the playback zone 810. In such a case, an audio processing algorithm may be identified that corresponds to the acoustic characteristic in the database most similar to the acoustic characteristic of the playback zone 810. Other examples are also possible.
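A closest-match lookup of this kind might be sketched as follows. This is illustrative Python only; the list-of-pairs database layout and the least-squares distance measure are assumptions for the sketch, not the patent's method:

```python
def lookup_algorithm(database, h_zone):
    """database: list of (zone_response, coefficients) pairs.
    Return the coefficients whose stored response has the smallest
    squared distance to the measured response; an exact match
    naturally wins with distance zero."""
    def distance(entry):
        return sum((a - b) ** 2 for a, b in zip(entry[0], h_zone))
    return min(database, key=distance)[1]
```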

  Method 900 includes, at block 908, transmitting data indicative of the determined audio processing algorithm to the playback device. Continuing the example, the computing device 802 (or one or more other devices) may transmit to the playback device 804 data indicative of the determined audio processing algorithm. The data indicative of the determined audio processing algorithm may also include a command causing the playback device 804 to apply the determined audio processing algorithm when playing audio content in the playback zone 810. In an example, applying the audio processing algorithm to the audio content may modify the frequency equalization of the audio content. In another example, applying the audio processing algorithm to the audio content may modify the volume range of the audio content. Other examples are also possible.
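As one hedged illustration of what applying a determined algorithm could look like, the sketch below models the algorithm as FIR equalization coefficients plus an output gain applied to a block of samples. The names are hypothetical, and a real playback device would perform this filtering in its audio processing components rather than in Python:

```python
def apply_processing(samples, fir_coeffs, gain=1.0):
    """Apply a determined audio processing algorithm, modeled here as
    an FIR equalization filter followed by an output gain, to a block
    of audio samples (same-length output, zero initial state)."""
    out = []
    for n in range(len(samples)):
        acc = 0.0
        for k, c in enumerate(fir_coeffs):
            if n - k >= 0:
                acc += c * samples[n - k]
        out.append(gain * acc)
    return out
```

The FIR coefficients change the frequency equalization of the content, while the gain term stands in for a change to the volume range.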

  In some cases, the playback zone may include multiple playback devices configured to play audio content synchronously. For example, as discussed above, playback devices 804 and 806 may be configured to play audio content synchronously in playback zone 810. In such cases, calibration of one of the playback devices may involve the other playback device.

  In an example, a playback zone, such as playback zone 810, may include a first playback device, such as playback device 804, and a second playback device, such as playback device 806, configured to play audio content synchronously. Calibration of the playback device 804, as coordinated by the computing device 802, may include causing the playback device 804 to play the first audio signal and causing the playback device 806 to play a second audio signal.

  In some cases, computing device 802 may cause playback device 806 to play the second audio signal in synchrony with the playback of the first audio signal by playback device 804. In one example, the second audio signal may be orthogonal to the first audio signal, such that the component of the synchronously played audio content attributable to each of the playback devices 804 and 806 can be identified. In another case, the computing device may cause playback device 806 to play the second audio signal after playback of the first audio signal by playback device 804 is complete. Other examples are also possible.

  The computing device 802 may then receive from the playback device 804 data indicative of a third audio signal detected by the microphone of the playback device 804, similar to that described in connection with block 904. In this case, however, the third audio signal may include both a portion corresponding to the first audio signal played by the playback device 804 and a portion corresponding to the second audio signal played by the playback device 806.

  The computing device 802 may then determine an audio processing algorithm based on the third audio signal and the acoustic characteristic of the playback device 804, and may transmit data indicative of the determined audio processing algorithm to the playback device 804. The determined audio processing algorithm is one that the playback device 804 applies when playing audio content in the playback zone 810, similar to that described above in connection with blocks 906 and 908.

  In some cases, as described above, the playback device 806 may also include a microphone and may be calibrated in the manner described above. As indicated, the first audio signal played by the playback device 804 and the second audio signal played by the playback device 806 may be orthogonal, or otherwise distinguishable. For example, as discussed above, the playback device 806 may play the second audio signal after the playback device 804 has finished playing the first audio signal. In another example, the second audio signal may have a phase orthogonal to the phase of the first audio signal. In yet another example, the second audio signal may have a frequency range and/or frequency variation different from that of the first audio signal. Other examples are also possible.

  In any case, because the first and second audio signals are identifiable, the computing device 802 may parse, from the third audio signal detected by the playback device 804, the contribution of the playback device 804 and the contribution of the playback device 806 to the detected third audio signal. Respective audio processing algorithms may then be determined for the playback device 804 and the playback device 806.
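The separation enabled by orthogonal excitation signals can be illustrated with a minimal Python sketch. This is an assumption-heavy simplification in which each device's contribution is reduced to a scalar gain, and phase-quadrature tones stand in for the orthogonal first and second audio signals:

```python
def separate_contributions(detected, sig1, sig2):
    """Estimate each device's gain contribution to a jointly detected
    signal, assuming the two excitation signals are orthogonal (zero
    dot product), e.g. phase-quadrature tones or disjoint time slots."""
    def dot(x, y):
        return sum(a * b for a, b in zip(x, y))
    g1 = dot(detected, sig1) / dot(sig1, sig1)
    g2 = dot(detected, sig2) / dot(sig2, sig2)
    return g1, g2
```

A real calibration would estimate full impulse responses per device rather than scalar gains, but the orthogonality argument is the same: projecting the detected mix onto one excitation signal nulls the other device's contribution.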

  Each audio processing algorithm may be determined as described above in connection with block 508. In some cases, a first acoustic characteristic of the playback zone may be determined based on the third audio signal detected by the playback device 804, and a second acoustic characteristic of the playback zone may be determined based on a fourth audio signal detected by the playback device 806. Like the third audio signal, the fourth audio signal may include a portion corresponding to the first audio signal played by the playback device 804 and a portion corresponding to the second audio signal played by the playback device 806.

  Respective audio processing algorithms for the playback device 804 and the playback device 806 may then be determined based on the first acoustic characteristic of the playback zone and the second acoustic characteristic of the playback zone, individually or in combination. In some cases, the combination of the first and second acoustic characteristics of the playback zone may represent a more comprehensive acoustic characteristic of the playback zone than either of the individual characteristics alone. The respective audio processing algorithms may then be transmitted to the playback device 804 and the playback device 806 and applied when playing audio content in the playback zone 810. Other examples are also possible.

  Although the above description generally refers to the method 900 as being performed by the computing device 802 of FIG. 8, one skilled in the art will appreciate that, as described above, the functions of the method 900 may alternatively be performed by one or more other computing devices, such as one or more servers, one or more playback devices, and/or one or more controller devices. For example, the functions of method 900 for calibrating playback device 804 for playback zone 810 may be performed by playback device 804, playback device 806, controller device 808, or another device not shown in FIG. 8.

  Further, in some cases, one or more of blocks 902-908 may be performed by computing device 802, while one or more other blocks of blocks 902-908 may be performed by one or more other devices. For example, blocks 902 and 904 may be performed by one or more of playback device 804, playback device 806, and controller device 808. In other words, a coordinating device other than computing device 802 may coordinate the calibration of playback device 804 for playback zone 810.

  In some cases, at block 906, the coordinating device may transmit data indicative of the second audio signal to the computing device 802, and the computing device 802 may determine the audio processing algorithm based on the second audio signal and the acoustic characteristic of the playback device. The acoustic characteristic of the playback device may be provided to the computing device 802 by the coordinating device, or may be obtained from another device in which the characteristic of the playback device is stored. In some cases, computing device 802 may perform the calculations of block 906 because computing device 802 has more processing power than the coordinating device.

  In one example, once the computing device 802 determines the audio processing algorithm, it may send the determined audio processing algorithm directly to the playback device 804 so that the playback device 804 can apply the algorithm when playing audio content in the playback zone 810. In another example, upon determining the audio processing algorithm, the computing device 802 may transmit it to the coordinating device, and the coordinating device, performing block 908, may in turn send the determined audio processing algorithm to the playback device 804. Other examples are also possible.

b. Second Exemplary Method of Determining an Audio Processing Algorithm Based on a Detected Audio Signal

  In some cases, as described above, calibration of a playback device in a playback zone may be coordinated and performed by a computing device, such as a server or a controller device. In some other cases, as mentioned above, calibration of the playback device may be coordinated and/or performed by the playback device being calibrated.

  FIG. 10 shows an exemplary flow diagram of a method 1000 of determining an audio processing algorithm based on one or more playback zone characteristics, as performed by the playback device being calibrated. The method 1000 shown in FIG. 10 presents an embodiment of a method that may be implemented within an operating environment including, for example, the media playback system 100 of FIG. 1, one or more of the playback devices 200 of FIG. 2, one or more of the control devices 300 of FIG. 3, and the playback environment 800 of FIG. 8. As indicated, the method 1000 may be performed by the playback device being calibrated for the playback zone. In some cases, a portion of the functions of method 1000 may alternatively be performed by one or more other computing devices, such as one or more servers, one or more other playback devices, and/or one or more controller devices.

  Method 1000 may include one or more operations, functions, or actions as indicated by one or more of blocks 1002-1008. Although the blocks are shown in sequential order, these blocks may be performed in parallel and/or in a different order than that described herein. Also, the various blocks may be combined into fewer blocks, divided into additional blocks, and/or eliminated based on the desired implementation.

  As shown in FIG. 10, the method 1000 includes playing a first audio signal in a playback zone (block 1002), detecting a second audio signal via a microphone (block 1004), determining an audio processing algorithm based on the second audio signal and an acoustic characteristic of the playback device (block 1006), and applying the determined audio processing algorithm to audio data corresponding to a media item when playing the media item (block 1008).

  Method 1000 includes, at block 1002, playing a first audio signal in a playback zone. Referring to FIG. 8, the playback device performing the method 1000 may be the playback device 804 in the playback zone 810. In one example, block 1002 is similar to block 902, but may be performed by the playback device 804 being calibrated rather than by the computing device 802. Nevertheless, the above description of block 902 applies to block 1002, in some cases with some variation.

  The method 1000 includes, at block 1004, detecting a second audio signal with a microphone. The second audio signal may include a portion corresponding to the first audio signal reproduced by the reproduction device. In one example, block 1004 is similar to block 904 but may be performed by the playback device 804 being calibrated rather than the computing device 802. Nevertheless, the above description associated with block 904 is applicable to block 1004, with some variations.

  Method 1000 includes, at block 1006, determining an audio processing algorithm based on the second audio signal and the acoustic characteristics of the playback device. In one example, block 1006 is similar to block 906, but may be performed by the playback device 804 being calibrated rather than the computing device 802. Nevertheless, the above description associated with block 906 is applicable to block 1006, sometimes with some variations.

  In some cases, the function of determining the audio processing algorithm may be performed entirely by the playback device 804 being calibrated for the playback zone 810, as described in connection with block 906. For example, the playback device 804 may determine the acoustic characteristic of the playback zone 810 based on the second audio signal and the acoustic characteristic of the playback device 804. In some cases, the playback device 804 may store its own acoustic characteristic locally. In other cases, the playback device 804 may receive its acoustic characteristic from another device.

  In one example, the playback device 804 may then select an audio processing algorithm corresponding to the acoustic characteristic of the playback zone 810 from among a plurality of audio processing algorithms. For example, the playback device 804 may access a database, such as the databases 600 and 650 shown and described above in connection with FIGS. 6A and 6B, respectively, and identify in the database an audio processing algorithm corresponding to an acoustic characteristic substantially similar to that of the playback zone 810.

  In another example, similar to the functions described above in connection with block 906 of method 900 and/or block 508 of method 500, the playback device 804 may calculate the audio processing algorithm based on the acoustic characteristic of the playback zone 810 and a predetermined audio characteristic. Here, application of the determined audio processing algorithm by the playback device 804, when playing the first audio signal in the playback zone 810, produces a third audio signal having audio characteristics substantially the same as, or at least approximating, the predetermined audio characteristics.

  In a further example, as described in the previous section, devices other than the playback device 804 may perform some or all of the functions of block 1006. For example, the playback device 804 may transmit data indicative of the detected second audio signal to a computing device such as computing device 802, another playback device such as playback device 806, a controller device such as controller device 808, and/or some other device in communication with the playback device 804, to request an audio processing algorithm. In another example, the playback device 804 may determine the acoustic characteristic of the playback zone 810 based on the detected audio signal, and may transmit data indicative of the determined acoustic characteristic to another device along with a request for an audio processing algorithm based on that characteristic.

  In other words, in one aspect, the playback device 804 may determine the audio processing algorithm by requesting it from another device, based on the detected second audio signal and/or the acoustic characteristic of the playback zone 810 that the playback device 804 provides to the other device.

  If the playback device 804 provides data indicative of the detected second audio signal rather than the acoustic characteristic of the playback zone 810, the playback device 804 may also transmit the acoustic characteristic of the playback device 804 along with that data, so that the other device can determine the acoustic characteristic of the playback zone 810. In another case, the device receiving the data indicative of the detected second audio signal may determine the model of the playback device 804 based on the data, and may then determine the acoustic characteristic of the playback device 804 based on the model (i.e., from a database of playback device acoustic characteristics). Other examples are also possible.

  The playback device 804 may then receive the determined audio processing algorithm. In some cases, the playback device 804 may transmit the second audio signal to the other device because the other device has more processing power than the playback device 804. In other cases, the playback device 804 and one or more other devices may perform calculations and functions in parallel for efficient use of processing power. Other examples are also possible.

  Method 1000 includes, at block 1008, applying the determined audio processing algorithm to audio data corresponding to a media item when playing the media item. In an example, application of the audio processing algorithm by the playback device 804 to the audio data of the media item, when playing the media item within the playback zone 810, may modify the frequency equalization of the media item. In another example, such application may modify the volume range of the media item. In one example, the playback device 804 may store the determined audio processing algorithm in local memory storage and may apply the algorithm when playing audio content in the playback zone 810.

  In one example, the playback device 804 may be calibrated for different configurations of the playback device 804. For example, the playback device 804 may be calibrated for a first configuration involving individual playback in the playback zone 810, as well as a second configuration involving synchronized playback with the playback device 806 in the playback zone 810. In such a case, a first audio processing algorithm may be determined, stored, and applied for the first playback configuration of the playback device, and a second audio processing algorithm may be determined, stored, and applied for the second playback configuration of the playback device.

  The playback device 804 may then determine an audio processing algorithm based on the playback configuration of the playback device 804 at a given time, and apply that algorithm when playing audio content in the playback zone 810. For example, when the playback device 804 is playing audio content in the playback zone 810 without the playback device 806, the playback device 804 may apply the first audio processing algorithm. On the other hand, when the playback device 804 is playing audio content in the playback zone 810 in synchrony with the playback device 806, the playback device 804 may apply the second audio processing algorithm. Other examples are also possible.
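The per-configuration selection described above could be sketched as a simple lookup keyed on the device's current playback configuration; the configuration names and parameter values below are illustrative placeholders, not values from the patent:

```python
# Hypothetical per-configuration algorithm store for a calibrated device.
stored_algorithms = {
    "individual": {"w": 0.9, "x": 1.1, "y": 1.0, "z": 0.8},
    "synchronized_with_806": {"w": 1.0, "x": 0.95, "y": 1.05, "z": 0.9},
}

def algorithm_for_configuration(in_sync_with_806):
    """Pick the stored algorithm matching the current playback configuration."""
    key = "synchronized_with_806" if in_sync_with_806 else "individual"
    return stored_algorithms[key]
```

When the grouping of playback devices changes, the device simply re-evaluates the key and switches algorithms without recalibrating.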

c. Exemplary Method for Determining Audio Processing Algorithm Based on Playback Zone Characteristics In the discussion above, the determination of the audio processing algorithm is generally based on the acoustic characteristics of the playback zone, as determined from an audio signal detected by a playback device in the playback zone. In some cases, an audio processing algorithm may be identified based on other characteristics of the playback zone, in addition to or instead of the acoustic characteristics of the playback zone.

  FIG. 11 shows an example flow diagram for providing an audio processing algorithm from a database of audio processing algorithms based on one or more characteristics of a playback zone. The method 1100 shown in FIG. 11 presents an embodiment of a method that may be implemented within an operating environment involving, for example, the media playback system 100 of FIG. 1, one or more of the playback devices 200 of FIG. 2, one or more of the control devices 300 of FIG. 3, and the playback environment 800 of FIG. 8. In one example, the method 1100 may be performed, individually or collectively, by one or more playback devices, one or more controller devices, one or more servers, or one or more computing devices in communication with the playback device that is to be calibrated for the playback zone.

  The method 1100 may include one or more operations, functions, or actions as illustrated by one or more of blocks 1102-1108. Although the blocks are illustrated in sequential order, these blocks may also be performed in parallel, and/or in a different order than described herein. Also, the various blocks may be combined into fewer blocks, divided into additional blocks, and/or removed based upon the desired implementation.

  As shown in FIG. 11, the method 1100 involves maintaining a database of (i) a plurality of audio processing algorithms and (ii) a plurality of playback zone characteristics (block 1102); receiving data indicating one or more characteristics of a playback zone (block 1104); identifying, in the database, an audio processing algorithm based on the data (block 1106); and transmitting data indicating the identified audio processing algorithm (block 1108).

  The method 1100 includes, at block 1102, maintaining a database of (i) a plurality of audio processing algorithms and (ii) a plurality of playback zone characteristics. In one example, the database may be similar to the databases 600 and 650 as shown in and described in connection with FIGS. 6A and 6B, respectively. As such, each audio processing algorithm of the plurality of audio processing algorithms may correspond to one or more playback zone characteristics of the plurality of playback zone characteristics. Maintenance of the database may be as described above in connection with the methods 500 and 700 of FIGS. 5 and 7, respectively. As indicated above, the database may or may not be stored locally on the device maintaining the database.
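A minimal sketch of such a database, loosely modeled on databases 600 and 650: each entry pairs a playback zone characteristic with audio processing algorithm coefficients, and maintenance appends new associations. The entry contents are placeholders, not values from the patent:

```python
# Illustrative database: each entry associates one playback zone
# characteristic with a tuple of algorithm coefficients (w, x, y, z).
database = [
    {"zone_characteristic": "hroom-1(t)-3",
     "coefficients": ("w3", "x3", "y3", "z3")},
    {"zone_characteristic": "hroom-1(t)-4",
     "coefficients": ("w4", "x4", "y4", "z4")},
]

def add_entry(db, characteristic, coefficients):
    """Maintain the database by appending a new association."""
    db.append({"zone_characteristic": characteristic,
               "coefficients": coefficients})
```

Whether `db` lives in local memory or behind a server API, the maintained structure is the same association of characteristic to coefficients.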

  The method 1100 includes, at block 1104, receiving data indicating one or more characteristics of the playback zone. In one example, the one or more characteristics of the playback zone may include an acoustic characteristic of the playback zone. In another example, the one or more characteristics of the playback zone may include, among other possibilities, the dimensions of the playback zone, a flooring material of the playback zone, a wall material of the playback zone, an intended use of the playback zone, a number of furniture items in the playback zone, sizes of the furniture in the playback zone, and types of the furniture in the playback zone.

  In one example, referring back to FIG. 8, the playback device 804 may be calibrated for the playback zone 810. As described above, the method 1100 may be performed, individually or collectively, by the playback device 804, the playback device 806, the controller device 808, the computing device 802, or another device in communication with the playback device 804 being calibrated.

  In one case, the one or more characteristics may include the acoustic characteristics of the playback zone 810. In such a case, the playback device 804 in the playback zone 810 may play a first audio signal, and a second audio signal including a portion corresponding to the first audio signal may be detected by the microphone of the playback device 804. In one example, the data indicating the one or more characteristics may be data indicating the detected second audio signal. In another example, the acoustic characteristics of the playback zone 810 may be determined based on the detected second audio signal and the acoustic characteristics of the playback device 804, similar to that described above, and the data indicating the one or more characteristics may indicate the acoustic characteristics of the playback zone. In either case, the data indicating the one or more characteristics may then be received by at least one of the one or more devices performing the method 1100.

  In another case, the one or more characteristics may include the dimensions of the playback zone, the flooring material of the playback zone, the wall material of the playback zone, and so on. In such a case, a user may be prompted to enter or select the one or more characteristics of the playback zone 810 via a controller interface provided by a controller device, such as the controller device 808. For example, the controller interface may provide, among other possibilities, a list of playback zone dimensions and/or a list of furniture arrangements for the user to select from. The data indicating the one or more characteristics of the playback zone 810, as provided by the user, may then be received by at least one of the one or more devices performing the method 1100.

The method 1100 includes, at block 1106, identifying, in the database, an audio processing algorithm based on the data. Referring to the case in which the one or more characteristics include the acoustic characteristics of the playback zone 810, the audio processing algorithm may be identified in the database based on the acoustic characteristics of the playback zone 810. For example, referring to the database 600 of FIG. 6A, if the received data indicates an acoustic characteristic of the playback zone 810 substantially the same as hroom-1(t)-3, the audio processing algorithm of database entry 606, having coefficients w3, x3, y3, and z3, may be identified. In some cases, the data indicating the one or more characteristics of the playback zone may simply include data indicating the detected second audio signal, and the acoustic characteristics of the playback zone may further be determined as described above before the audio processing algorithm is identified. Other examples are also possible.

Referring to the case in which the one or more characteristics include, among other characteristics, the dimensions of the playback zone, the audio processing algorithm may be identified in the database based on the dimensions of the playback zone. For example, referring to the database 650 of FIG. 6B, if the received data indicates that the dimensions of the playback zone 810 are a4 × b4 × c4, or substantially the same as a4 × b4 × c4, the audio processing algorithm of database entry 658, having coefficients w4, x4, y4, and z4, may be identified. Other examples are also possible.
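One way such a "substantially the same" dimension match might be implemented is a nearest-neighbor lookup over the stored zone dimensions; the entry names and dimension values below are assumptions for illustration:

```python
import math

# Illustrative entries keyed by playback zone dimensions (a, b, c) in meters.
dimension_db = [
    {"dims": (3.0, 4.0, 2.5), "algorithm": "entry_656"},
    {"dims": (5.0, 6.0, 2.8), "algorithm": "entry_658"},
]

def identify_by_dimensions(db, dims):
    """Identify the entry whose stored dimensions are closest to the
    reported playback zone dimensions (Euclidean distance)."""
    return min(db, key=lambda e: math.dist(e["dims"], dims))["algorithm"]
```

A tolerance threshold could be added so that a zone far from every stored entry falls back to a default algorithm instead of a poor match.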

In some cases, multiple audio processing algorithms may be identified based on the one or more characteristics of the playback zone indicated in the received data. For example, the acoustic characteristics of the playback zone 810 may be determined to be hroom-1(t)-3, while the dimensions provided by the user for the playback zone 810 may be a4 × b4 × c4. Here, hroom-1(t)-3 corresponds to the audio processing algorithm parameters w3, x3, y3, and z3, as provided in entry 656 of the database 650 of FIG. 6B, while a4 × b4 × c4 corresponds to the audio processing algorithm parameters w4, x4, y4, and z4, as provided in entry 658.

  In one example, audio processing algorithms that correspond to matched or substantially matched acoustic characteristics may be prioritized. In another example, an average of the audio processing algorithms (ie, an average of the parameters) may be calculated, and the average audio processing algorithm may be the identified audio processing algorithm. Other examples are also possible.
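The parameter-averaging option could be sketched as an element-wise mean over the candidate algorithms' parameters (e.g., over (w3, x3, y3, z3) and (w4, x4, y4, z4)); this is a simplified illustration, not the patent's required combination rule:

```python
def average_algorithms(algorithms):
    """Combine candidate algorithms by averaging parameters element-wise.

    `algorithms` is a list of equal-length parameter tuples, one per
    identified candidate; the result is the averaged parameter tuple.
    """
    n = len(algorithms)
    return tuple(sum(params) / n for params in zip(*algorithms))
```

A weighted average (e.g., favoring the candidate matched on acoustic characteristics) would be a natural refinement of the same idea.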

  The method 1100 includes, at block 1108, transmitting data indicating the identified audio processing algorithm. Continuing with the examples above, the data indicating the identified audio processing algorithm may be transmitted to the playback device 804 being calibrated for the playback zone 810. In some cases, the data indicating the identified audio processing algorithm may be transmitted directly to the playback device 804. In another case, if the calibration of the playback device 804 is coordinated by the controller device 808, and the audio processing algorithm is identified by the computing device 802, the data indicating the identified audio processing algorithm may be transmitted from the computing device 802 to the playback device 804 via the controller device 808. Other examples are also possible.

  As mentioned above, the functions of the method 1100 may be performed by one or more of one or more servers, one or more playback devices, and/or one or more controller devices. In one example, maintenance of the database at block 1102 may be performed by the computing device 802, while receipt of the data indicating the one or more characteristics of the playback zone at block 1104 may be performed by the controller device 808 (with the data provided to the controller device 808 by the playback device 804 being calibrated in the playback zone 810). Block 1106 may be performed by the controller device 808 in communication with the computing device 802 to access the database maintained by the computing device 802 and identify the audio processing algorithm. Block 1108 may involve the computing device 802 transmitting the data indicating the identified audio processing algorithm to the playback device 804, either directly or via the controller device 808.

  In another example, the functions of method 1100 may be performed completely or substantially completely by one device. For example, computing device 802 may maintain a database as described in connection with block 1102.

  The computing device 802 may then coordinate the calibration of the playback device 804. For example, the computing device 802 may cause the playback device 804 to play the first audio signal and to detect the second audio signal, receive data indicating the detected second audio signal from the playback device 804, and determine the acoustic characteristics of the playback zone 810 based on the data from the playback device 804. In another case, the computing device 802 may prompt the user, via the controller device 808, to provide one or more characteristics of the playback zone 810 (i.e., dimensions and so on, as described above), and may receive, from the controller device 808, data indicating the characteristics of the playback zone 810 provided by the user.

  The computing device 802 may then identify an audio processing algorithm based on the received data, at block 1106, and may transmit data indicating the identified audio processing algorithm to the playback device 804, at block 1108. The computing device 802 may further transmit a command for the playback device 804 to apply the identified audio processing algorithm when playing audio content in the playback zone 810. Other examples are also possible.

IV. Conclusion The present specification discloses various exemplary systems, methods, apparatus, and articles of manufacture, which include, among other components, firmware and/or software executed on hardware. It is understood that such examples are merely illustrative and should not be considered limiting. For example, it is contemplated that some or all of these firmware, hardware, and/or software aspects or components can be embodied exclusively in hardware, exclusively in software, exclusively in firmware, or in any combination of hardware, software, and/or firmware. Accordingly, the examples provided are not the only way to implement such systems, methods, apparatus, and/or articles of manufacture.

  Further, reference herein to an "embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment can be included in at least one embodiment of the invention. The appearances of this phrase in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. As such, it is understood that the embodiments described herein can be combined, explicitly and implicitly, with other embodiments by one of ordinary skill in the art.

The following examples describe further or alternative aspects of the present disclosure. The device in any of the following examples may be an element of any of the devices described herein, or may be configured as any of the devices described herein.
(Feature 1)
A computing device comprising a processor and a memory having stored thereon instructions executable by the processor to cause the computing device to perform functions comprising:
causing a playback device in a playback zone to play a first audio signal;
receiving, from the playback device, data indicating a second audio signal detected by a microphone of the playback device, wherein the second audio signal includes a portion corresponding to the first audio signal;
determining an audio processing algorithm based on the second audio signal and an acoustic characteristic of the playback device; and
transmitting data indicating the determined audio processing algorithm to the playback device.
(Feature 2)
The computing device of any preceding feature, wherein application of the determined audio processing algorithm by the playback device when playing the first audio signal in the playback zone generates a third audio signal having substantially the same audio characteristic as a predetermined audio characteristic.
(Feature 3)
The computing device of any preceding feature, wherein determining the audio processing algorithm further comprises:
determining an acoustic characteristic of the playback zone based on the second audio signal and the acoustic characteristic of the playback device; and
selecting, from a plurality of audio processing algorithms, the audio processing algorithm corresponding to the determined acoustic characteristic of the playback zone.
(Feature 4)
The computing device of any preceding feature, wherein determining the audio processing algorithm further comprises:
determining an acoustic characteristic of the playback zone based on the second audio signal and the acoustic characteristic of the playback device; and
calculating the audio processing algorithm based on the acoustic characteristic of the playback zone and a predetermined audio characteristic.
(Feature 5)
The computing device of any preceding feature, wherein determining the audio processing algorithm comprises determining one or more parameters for the audio processing algorithm.
(Feature 6)
The computing device of any preceding feature, wherein the functions further comprise:
causing a reference playback device in an anechoic chamber to play a measurement signal;
receiving, from the reference playback device, data indicating an audio signal detected by a microphone of the reference playback device, wherein the detected audio signal corresponds to the measurement signal played in the anechoic chamber; and
determining an acoustic characteristic of the playback device based on a comparison of the detected audio signal and the measurement signal.
(Feature 7)
A computing device comprising a processor and a memory having stored thereon instructions executable by the processor to cause the computing device to perform functions comprising:
causing a first playback device to play a first audio signal in a playback zone;
causing a second playback device to play a second audio signal in the playback zone;
receiving, from the first playback device, data indicating a third audio signal detected by a microphone of the first playback device, wherein the third audio signal includes (i) a portion corresponding to the first audio signal and (ii) a portion corresponding to the second audio signal played by the second playback device;
determining an audio processing algorithm based on the third audio signal and an acoustic characteristic of the first playback device; and
transmitting data indicating the determined audio processing algorithm to the first playback device.
(Feature 8)
The computing device of Feature 7, wherein application of the determined audio processing algorithm by the first playback device when playing the first audio signal in the playback zone generates a fourth audio signal having substantially the same audio characteristic as a predetermined audio characteristic.
(Feature 9)
The computing device of Feature 7 or Feature 8, wherein determining the audio processing algorithm further comprises:
determining an acoustic characteristic of the playback zone based on the third audio signal and the acoustic characteristic of the first playback device; and
selecting, from a plurality of audio processing algorithms, the audio processing algorithm corresponding to the acoustic characteristic of the playback zone.
(Feature 10)
The computing device of any of Features 7 to 9, wherein causing the second playback device to play the second audio signal comprises causing the second playback device to play the second audio signal in synchrony with the playback of the first audio signal by the first playback device.
(Feature 11)
The computing device of any of Features 7 to 10, wherein causing the second playback device to play the second audio signal comprises causing the second playback device to play the second audio signal after playback of the first audio signal by the first playback device is complete.
(Feature 12)
The computing device of any of Features 7 to 11, wherein the first audio signal is orthogonal to the second audio signal.
(Feature 13)
The computing device of any of Features 7 to 12, wherein the first playback device and the second playback device are included in a zone group of playback devices configured to synchronously play audio content.
(Feature 14)
A playback device comprising a processor, a microphone, and a memory having stored thereon instructions executable by the processor to cause the playback device to perform functions comprising:
playing a first audio signal in a playback zone;
detecting a second audio signal via the microphone, wherein the second audio signal includes a portion corresponding to the first audio signal;
determining an audio processing algorithm based on the second audio signal and an acoustic characteristic of the playback device; and
applying the determined audio processing algorithm to audio data corresponding to a media item when playing the media item in the playback zone.
(Feature 15)
The playback device of Feature 14, wherein application of the determined audio processing algorithm when playing the first audio signal in the playback zone generates a third audio signal having substantially the same audio characteristic as a predetermined audio characteristic.
(Feature 16)
The playback device of Feature 14 or Feature 15, wherein determining the audio processing algorithm further comprises:
determining one or more characteristics of the playback zone based on the second audio signal and the acoustic characteristic of the playback device; and
selecting, from a plurality of audio processing algorithms, an audio processing algorithm corresponding to the one or more characteristics of the playback zone.
(Feature 17)
The playback device of any of Features 14 to 16, wherein determining the audio processing algorithm comprises:
transmitting, to a computing device, data indicating (i) the second audio signal and (ii) a characteristic of the playback device; and
receiving data indicating the audio processing algorithm from the computing device.
(Feature 18)
The playback device of any of Features 14 to 17, wherein the functions further comprise storing the determined audio processing algorithm in the memory.
(Feature 19)
The playback device of any of Features 14 to 18, wherein applying the audio processing algorithm to the audio data comprises changing a frequency equalization of the media item.
(Feature 20)
The playback device of any of Features 14 to 19, wherein applying the audio processing algorithm to the audio data comprises changing a volume range of the media item.
(Feature 21)
A computing device comprising a processor and a memory having stored thereon instructions executable by the processor to cause the computing device to perform functions comprising:
causing a playback device in a playback zone to play a first audio signal;
receiving data indicating a second audio signal detected by a microphone of the playback device, wherein the second audio signal includes a portion corresponding to the first audio signal played by the playback device;
determining an acoustic characteristic of the playback zone based on the second audio signal and a characteristic of the playback device;
determining an audio processing algorithm based on the acoustic characteristic of the playback zone; and
storing, in a database, an association between the audio processing algorithm and the acoustic characteristic of the playback zone.
(Feature 22)
The computing device of any preceding feature, wherein application of the determined audio processing algorithm by the playback device when playing the first audio signal in the playback zone generates a third audio signal having substantially the same audio characteristic as a predetermined audio characteristic.
(Feature 23)
The computing device of any preceding feature, wherein the playback device is a first playback device, the playback zone is a first playback zone, and the audio processing algorithm is a first audio processing algorithm, and wherein the functions further comprise:
causing a second playback device in a second playback zone to play a fourth audio signal;
receiving data indicating a fifth audio signal detected by a microphone of the second playback device, wherein the fifth audio signal includes a portion corresponding to the fourth audio signal played by the second playback device;
determining an acoustic characteristic of the second playback zone based on the fifth audio signal and a characteristic of the second playback device;
determining a second audio processing algorithm based on the acoustic characteristic of the second playback zone; and
storing, in the database, an association between the second audio processing algorithm and the acoustic characteristic of the second playback zone.
(Feature 24)
The computing device of Feature 23, wherein application of the first determined audio processing algorithm by the first playback device when playing the first audio signal in the first playback zone generates a third audio signal having substantially the same audio characteristic as a predetermined audio characteristic, and wherein application of the second determined audio processing algorithm by the second playback device when playing the fourth audio signal in the second playback zone generates a sixth audio signal having substantially the same audio characteristic as the predetermined audio characteristic.
(Feature 25)
The computing device of Feature 23, wherein the functions further comprise:
determining that the acoustic characteristic of the second playback zone is substantially the same as the acoustic characteristic of the first playback zone; and
responsively, determining a third audio processing algorithm based on the first audio processing algorithm and the second audio processing algorithm, and storing, in the database, an association between the third audio processing algorithm and the acoustic characteristic of the first playback zone.
(Feature 26)
The computing device of any of features 21 to 25, wherein determining the audio processing algorithm comprises determining one or more parameters for the audio processing algorithm.
(Feature 27)
The computing device of any of Features 21 to 26, wherein the functions further comprise:
receiving data indicating one or more characteristics of the playback zone; and
storing, in the database, an association between the one or more characteristics of the playback zone and the second audio processing algorithm.
(Feature 28)
The computing device of Feature 27, wherein the one or more characteristics of the playback zone include one or more of: (a) dimensions of the playback zone, (b) an audio reflectivity characteristic of the playback zone, (c) an intended use of the playback zone, (d) a number of furniture items in the playback zone, (e) sizes of furniture in the playback zone, and (f) types of furniture in the playback zone.
(Feature 29)
A computing device comprising a processor and a memory having stored thereon instructions executable by the processor to cause the computing device to perform functions comprising:
causing a playback device in a playback zone to play a first audio signal;
receiving (i) data indicating one or more characteristics of the playback zone and (ii) data indicating a second audio signal detected by a microphone of the playback device, wherein the second audio signal includes a portion corresponding to the first audio signal played by the playback device;
determining an audio processing algorithm based on the second audio signal and a characteristic of the playback device; and
storing, in a database, an association between the determined audio processing algorithm and at least one of the one or more characteristics of the playback zone.
(Feature 30)
The computing device of Feature 29, wherein determining the audio processing algorithm further comprises:
determining an acoustic characteristic of the playback zone based on the second audio signal and the characteristic of the playback device; and
determining the audio processing algorithm based on the acoustic characteristic of the playback zone, such that application of the determined audio processing algorithm by the playback device when playing the second audio signal in the playback zone generates a third audio signal having substantially the same audio characteristic as a predetermined audio characteristic.
(Feature 31)
The computing device of Feature 29 or Feature 30, wherein the playback device is a first playback device, the playback zone is a first playback zone, and the audio processing algorithm is a first audio processing algorithm, and wherein the functions further comprise:
causing a second playback device in a second playback zone to play a third audio signal;
receiving (i) data indicating one or more characteristics of the second playback zone and (ii) data indicating a fourth audio signal detected by a microphone of the second playback device in the second playback zone, wherein the fourth audio signal includes a portion corresponding to the third audio signal played by the second playback device;
determining a second audio processing algorithm based on the fourth audio signal and a characteristic of the second playback device; and
storing, in the database, an association between the second audio processing algorithm and at least one of the one or more characteristics of the second playback zone.
(Feature 32)
The computing device of Feature 31, wherein determining the second audio processing algorithm further comprises:
determining an acoustic characteristic of the second playback zone based on the fourth audio signal and the characteristic of the second playback device; and
determining the second audio processing algorithm based on the acoustic characteristic of the second playback zone, such that application of the determined second audio processing algorithm by the second playback device when playing the third audio signal in the second playback zone generates a fifth audio signal having substantially the same audio characteristic as a predetermined audio characteristic.
(Feature 33)
The computing device of Feature 32, wherein the functions further comprise:
determining that the acoustic characteristic of the second playback zone is substantially the same as the acoustic characteristic of the first playback zone; and
responsively, determining a third audio processing algorithm based on the first audio processing algorithm and the second audio processing algorithm, and storing, in the database, an association between the third audio processing algorithm and at least one of the one or more characteristics of the first playback zone.
(Feature 34)
The computing device of any of Features 29 to 33, wherein the one or more characteristics of the playback zone include one or more of: (a) dimensions of the playback zone, (b) an audio reflectivity characteristic of the playback zone, (c) an intended use of the playback zone, (d) a number of furniture items in the playback zone, (e) sizes of furniture in the playback zone, (f) types of furniture in the playback zone, and (g) an acoustic characteristic of the playback zone.
(Feature 35)
A computing device comprising a processor and a memory having stored thereon instructions executable by the processor to cause the computing device to perform functions comprising:
maintaining a database of (i) a plurality of audio processing algorithms and (ii) a plurality of playback zone characteristics, wherein each audio processing algorithm of the plurality of audio processing algorithms corresponds to at least one playback zone characteristic of the plurality of playback zone characteristics;
receiving data indicating one or more characteristics of a playback zone;
identifying, in the database, an audio processing algorithm based on the data; and
transmitting data indicating the identified audio processing algorithm.
(Feature 36)
The computing device of Feature 35, wherein the data further indicates an audio signal detected by a microphone of a playback device in the playback zone.
(Feature 37)
Identifying the audio processing algorithm in the database further comprises:
Determining an acoustic characteristic of the playback zone based on the detected audio signal and a characteristic of the playback device; and
Identifying an audio processing algorithm in the database based on the determined acoustic characteristic of the playback zone,
The computing device of feature 36.
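Feature 37 factors the playback device's own response out of the detected signal to isolate the zone's acoustic characteristic. One common way to do this, shown here as a hedged sketch (the patent does not prescribe this math, and all names are hypothetical), is per-band division of the measured spectrum by the device's known response:

```python
# If measured = device_response * room_response per band (linear gain),
# the zone's acoustic characteristic can be estimated by division.
def zone_response(measured, device):
    eps = 1e-12  # guard against division by zero in dead bands
    return [m / max(d, eps) for m, d in zip(measured, device)]

measured = [0.5, 0.8, 1.2]  # detected spectrum relative to played signal
device = [1.0, 0.8, 0.6]    # device response, e.g. from anechoic calibration
print(zone_response(measured, device))  # [0.5, 1.0, 2.0]
```

The resulting per-band curve is what the feature would then match against entries in the database.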
(Feature 38)
The plurality of playback zone characteristics include one or more of: (a) dimensions of the playback zone, (b) audio reflection characteristics of the playback zone, (c) intended use of the playback zone, (d) number of furniture items in the playback zone, (e) size of furniture in the playback zone, (f) type of furniture in the playback zone, and (g) acoustic characteristics of the playback zone. The computing device of feature 35.
(Feature 39)
The computing device of Feature 35, wherein data indicative of one or more characteristics of the playback zone is received from a controller device.
(Feature 40)
The computing device of Feature 35, wherein data indicative of one or more characteristics of the playback zone is received from a playback device in the playback zone.

  The specification is presented largely in terms of exemplary environments, systems, procedures, steps, logic blocks, processing, and other symbolic representations of operations that directly or indirectly resemble those of data processing devices coupled to networks. These process descriptions and representations are typically used by those skilled in the art to convey the substance of their work most effectively to others skilled in the art. Numerous specific details are set forth to provide a thorough understanding of the present disclosure. However, it will be understood by those skilled in the art that certain embodiments of the present disclosure may be practiced without certain specific details. In other instances, well-known methods, procedures, components, and circuitry have not been described in detail to avoid unnecessarily obscuring the embodiments. Accordingly, the scope of the present disclosure is defined by the appended claims rather than by the foregoing description of embodiments.

  When any of the appended claims are read to cover a purely software and/or firmware implementation, at least one of the elements in at least one example is hereby expressly defined to include a tangible, non-transitory medium that stores the software and/or firmware, such as a memory, a DVD, a CD, a Blu-ray (registered trademark) disc, and so on.

Claims (11)

  1. Causing a playback device (200) in a playback zone to play a first audio signal;
    Receiving data indicative of a second audio signal detected by a microphone (220) of the playback device, wherein the second audio signal includes a portion corresponding to the first audio signal;
    Determining an audio processing algorithm for the playback zone based on the second audio signal detected in the playback zone and a characteristic of the playback device in the playback zone,
    wherein the playback zone includes a first playback zone and a second playback zone,
    wherein the playback device includes a first playback device disposed in the first playback zone and a second playback device disposed in the second playback zone,
    wherein the audio processing algorithm determined for the first playback zone is a first audio processing algorithm and the audio processing algorithm determined for the second playback zone is a second audio processing algorithm,
    Determining that the characteristics of the second playback zone are substantially identical to the characteristics of the first playback zone;
    In response, determining a third audio processing algorithm based on the first audio processing algorithm and the second audio processing algorithm;
    Storing in the database an association between the third audio processing algorithm and the characteristics of the first playback zone;
    A computing device (300).
  2. The computing device of claim 1, wherein applying the determined audio processing algorithm by the playback device (200) when playing the first audio signal in the playback zone produces a third audio signal having audio characteristics substantially the same as a predetermined audio characteristic.
  3. Determining the audio processing algorithm for the playback zone further comprises:
    Determining an acoustic characteristic of the playback zone based on the second audio signal detected in the playback zone and the characteristic of the playback device (200) in the playback zone;
    Determining the audio processing algorithm for the playback zone based on the determined acoustic characteristic of the playback zone; and
    Storing in the database an association between the determined audio processing algorithm and the acoustic characteristic of the playback zone,
    The computing device according to claim 1 or 2.
  4. Determining the audio processing algorithm based on the determined acoustic characteristic of the playback zone comprises:
    Selecting, from a plurality of audio processing algorithms, an audio processing algorithm corresponding to the determined acoustic characteristic of the playback zone, or calculating the audio processing algorithm based on the acoustic characteristic of the playback zone and a predetermined audio characteristic,
    The computing device of claim 3.
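Claim 4's second branch, calculating the algorithm from the zone's acoustic characteristic and a predetermined audio characteristic, admits a simple reading: the per-band equalization gain that maps the zone response onto the target. A sketch under that assumption (all names hypothetical, not from the claims):

```python
# The EQ that maps the zone's response onto the target is, per band,
# target / room (linear gain).
def compute_eq(room_response, target_response):
    eps = 1e-12  # avoid division by zero in dead bands
    return [t / max(r, eps) for r, t in zip(room_response, target_response)]

room = [0.5, 1.0, 2.0]    # estimated zone response per band
target = [1.0, 1.0, 1.0]  # predetermined (flat) audio characteristic
print(compute_eq(room, target))  # [2.0, 1.0, 0.5]
```

The first branch of the claim, selecting a pre-stored algorithm, would instead look the determined acoustic characteristic up in the maintained database.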
  5. The characteristic of the playback device (200) includes an acoustic characteristic of the playback device (200), determined by:
    Causing a reference playback device (200) in an anechoic chamber to play a measurement signal;
    Receiving, from the reference playback device (200), data indicating an audio signal detected by a microphone (220) of the reference playback device (200), wherein the detected audio signal includes a portion corresponding to the measurement signal played in the anechoic chamber; and
    Determining the acoustic characteristic of the playback device (200) based on a comparison of the detected audio signal and the measurement signal,
    The computing device according to any one of claims 1 to 4.
  6. Further, a third playback device is disposed in the first playback zone, and the functions further comprise:
    Before receiving the data indicating the second audio signal, causing the third playback device (200) to play a fourth audio signal in the first playback zone, either in synchrony with the playback of the first audio signal by the first playback device (200) or after the playback of the first audio signal by the first playback device (200) has finished,
    wherein the second audio signal detected by the microphone of the first playback device further includes a portion corresponding to the fourth audio signal played by the third playback device (200),
    The computing device according to any one of the preceding claims.
  7. The computing device of claim 6, wherein the first audio signal is orthogonal to the fourth audio signal.
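Claim 7's orthogonality condition allows the portions contributed by the two simultaneously playing devices to be separated from a single microphone capture. A small sketch illustrating orthogonality of two assumed test tones (an illustration, not the patent's actual signals):

```python
import math

def dot(x, y):
    """Inner product of two equal-length sample sequences."""
    return sum(a * b for a, b in zip(x, y))

n = 256
# Sinusoids at distinct integer frequencies over a whole number of
# periods are orthogonal, so their simultaneous contributions to one
# microphone capture can be separated by projection.
first_signal = [math.sin(2 * math.pi * 3 * k / n) for k in range(n)]
fourth_signal = [math.sin(2 * math.pi * 5 * k / n) for k in range(n)]
print(abs(dot(first_signal, fourth_signal)) < 1e-9)  # True
print(dot(first_signal, first_signal) > 0)           # True
```

Projecting the captured second audio signal onto each test signal then isolates the per-device portions the claim refers to.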
  8. The computing device of claim 6 or 7, wherein the first playback device (200) and the third playback device (200) are included in a zone group of playback devices (200) configured to play audio content in synchrony.
  9. Determining the audio processing algorithm for the playback zone further comprises:
    Receiving data indicating one or more characteristics of the playback zone; and
    Storing in the database an association between the audio processing algorithm and the one or more characteristics of the playback zone,
    The computing device according to any one of the preceding claims.
  10.   The computing device of claim 9, wherein data indicative of one or more characteristics of the playback zone is received from one of a controller device and a playback device in the playback zone.
  11.   The computing device of claim 9 or 10, wherein the received one or more characteristics of the playback zone include one or more of: (a) dimensions of the playback zone, (b) audio reflection characteristics of the playback zone, (c) intended use of the playback zone, (d) number of furniture items in the playback zone, (e) size of furniture in the playback zone, and (f) type of furniture in the playback zone.
JP2017513241A 2014-09-09 2015-09-08 Audio processing algorithm and database Active JP6503457B2 (en)

Priority Applications (5)

Application Number Priority Date Filing Date Title
US14/481,505 US9952825B2 (en) 2014-09-09 2014-09-09 Audio processing algorithms
US14/481,514 2014-09-09
US14/481,505 2014-09-09
US14/481,514 US9891881B2 (en) 2014-09-09 2014-09-09 Audio processing algorithm database
PCT/US2015/048942 WO2016040324A1 (en) 2014-09-09 2015-09-08 Audio processing algorithms and databases

Publications (2)

Publication Number Publication Date
JP2017528083A JP2017528083A (en) 2017-09-21
JP6503457B2 true JP6503457B2 (en) 2019-04-17

Family

ID=54292894

Family Applications (2)

Application Number Title Priority Date Filing Date
JP2017513241A Active JP6503457B2 (en) 2014-09-09 2015-09-08 Audio processing algorithm and database
JP2019056360A Pending JP2019134470A (en) 2014-09-09 2019-03-25 Audio processing algorithms and databases

Family Applications After (1)

Application Number Title Priority Date Filing Date
JP2019056360A Pending JP2019134470A (en) 2014-09-09 2019-03-25 Audio processing algorithms and databases

Country Status (4)

Country Link
EP (1) EP3111678A1 (en)
JP (2) JP6503457B2 (en)
CN (1) CN106688248A (en)
WO (1) WO2016040324A1 (en)

Families Citing this family (23)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9084058B2 (en) 2011-12-29 2015-07-14 Sonos, Inc. Sound field calibration using listener localization
US10127006B2 (en) 2014-09-09 2018-11-13 Sonos, Inc. Facilitating calibration of an audio playback device
WO2016172593A1 (en) 2015-04-24 2016-10-27 Sonos, Inc. Playback device calibration user interfaces
US9106192B2 (en) 2012-06-28 2015-08-11 Sonos, Inc. System and method for device playback calibration
US9690539B2 (en) 2012-06-28 2017-06-27 Sonos, Inc. Speaker calibration user interface
US9219460B2 (en) 2014-03-17 2015-12-22 Sonos, Inc. Audio settings based on environment
US9264839B2 (en) 2014-03-17 2016-02-16 Sonos, Inc. Playback device configuration based on proximity detection
US9706323B2 (en) 2014-09-09 2017-07-11 Sonos, Inc. Playback device calibration
US9891881B2 (en) 2014-09-09 2018-02-13 Sonos, Inc. Audio processing algorithm database
US9910634B2 (en) 2014-09-09 2018-03-06 Sonos, Inc. Microphone calibration
US9538305B2 (en) 2015-07-28 2017-01-03 Sonos, Inc. Calibration error conditions
US9693165B2 (en) 2015-09-17 2017-06-27 Sonos, Inc. Validation of audio calibration using multi-dimensional motion check
US9743207B1 (en) 2016-01-18 2017-08-22 Sonos, Inc. Calibration using multiple recording devices
US10003899B2 (en) 2016-01-25 2018-06-19 Sonos, Inc. Calibration with particular locations
EP3547701A1 (en) * 2016-04-01 2019-10-02 Sonos Inc. Updating playback device configuration information based on calibration data
US9860662B2 (en) 2016-04-01 2018-01-02 Sonos, Inc. Updating playback device configuration information based on calibration data
US9864574B2 (en) 2016-04-01 2018-01-09 Sonos, Inc. Playback device calibration based on representation spectral characteristics
US9763018B1 (en) 2016-04-12 2017-09-12 Sonos, Inc. Calibration of audio playback devices
US9860670B1 (en) 2016-07-15 2018-01-02 Sonos, Inc. Spectral correction using spatial calibration
US9794710B1 (en) 2016-07-15 2017-10-17 Sonos, Inc. Spatial audio correction
US10372406B2 (en) 2016-07-22 2019-08-06 Sonos, Inc. Calibration interface
US10459684B2 (en) 2016-08-05 2019-10-29 Sonos, Inc. Calibration of a playback device based on an estimated frequency response
US10299061B1 (en) 2018-08-28 2019-05-21 Sonos, Inc. Playback device calibration

Family Cites Families (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH0828920B2 * 1992-01-20 1996-03-21 Matsushita Electric Industrial Co., Ltd. Speaker measuring device
JP2870359B2 * 1993-05-11 1999-03-17 Yamaha Corporation Acoustic correction apparatus
US7483540B2 (en) * 2002-03-25 2009-01-27 Bose Corporation Automatic audio system equalizing
JP2004159037A (en) * 2002-11-06 2004-06-03 Sony Corp Automatic sound adjustment system, sound adjusting device, sound analyzer, and sound analysis processing program
US8234395B2 (en) 2003-07-28 2012-07-31 Sonos, Inc. System and method for synchronizing operations among a plurality of independently clocked digital data processing devices
JP2005086686A (en) * 2003-09-10 2005-03-31 Fujitsu Ten Ltd Electronic equipment
JP2007271802A (en) * 2006-03-30 2007-10-18 Kenwood Corp Content reproduction system and computer program
JP2008228133A (en) * 2007-03-15 2008-09-25 Matsushita Electric Ind Co Ltd Acoustic system
US8819554B2 (en) * 2008-12-23 2014-08-26 At&T Intellectual Property I, L.P. System and method for playing media
US8300840B1 (en) * 2009-02-10 2012-10-30 Frye Electronics, Inc. Multiple superimposed audio frequency test system and sound chamber with attenuated echo properties
US8588430B2 (en) * 2009-02-11 2013-11-19 Nxp B.V. Controlling an adaptation of a behavior of an audio device to a current acoustic environmental condition
JP2011164166A (en) * 2010-02-05 2011-08-25 D&M Holdings Inc Audio signal amplifying apparatus
US9307340B2 (en) * 2010-05-06 2016-04-05 Dolby Laboratories Licensing Corporation Audio system equalization for portable media playback devices
WO2011139502A1 (en) * 2010-05-06 2011-11-10 Dolby Laboratories Licensing Corporation Audio system equalization for portable media playback devices
US9106192B2 (en) * 2012-06-28 2015-08-11 Sonos, Inc. System and method for device playback calibration

Also Published As

Publication number Publication date
JP2017528083A (en) 2017-09-21
CN106688248A (en) 2017-05-17
WO2016040324A1 (en) 2016-03-17
JP2019134470A (en) 2019-08-08
EP3111678A1 (en) 2017-01-04

Similar Documents

Publication Publication Date Title
US10212512B2 (en) Default playback devices
JP6495481B2 (en) Antenna selection
EP3146731B1 (en) Hybrid test tone for space averaged room audio calibration using a moving microphone
US9690539B2 (en) Speaker calibration user interface
JP6356331B2 (en) Playback device settings based on proximity detection
JP6328261B2 (en) Web page media playback
US9872119B2 (en) Audio settings of multiple speakers in a playback device
JP6437695B2 (en) How to facilitate calibration of audio playback devices
CN106688249A (en) Playback device calibration
US10063983B2 (en) Calibration using multiple recording devices
US9690271B2 (en) Speaker calibration
US9781533B2 (en) Calibration error conditions
US10045142B2 (en) Calibration of audio playback devices
EP3081012B1 (en) Playback device calibration
US9715367B2 (en) Audio processing algorithms
US10409549B2 (en) Audio response playback
JP6215444B2 (en) Media playback system controller having multiple graphic interfaces
US10127006B2 (en) Facilitating calibration of an audio playback device
WO2016040324A1 (en) Audio processing algorithms and databases
US10462505B2 (en) Policies for media playback
US9942678B1 (en) Audio playback settings for voice interaction
US9226073B2 (en) Audio output balancing during synchronized playback
US9874997B2 (en) Social playback queues
US10452709B2 (en) Queue identification
US20150309768A1 (en) Preference Conversion

Legal Events

Date Code Title Description
A621 Written request for application examination

Free format text: JAPANESE INTERMEDIATE CODE: A621

Effective date: 20170502

A521 Written amendment

Free format text: JAPANESE INTERMEDIATE CODE: A523

Effective date: 20170502

A977 Report on retrieval

Free format text: JAPANESE INTERMEDIATE CODE: A971007

Effective date: 20180413

A131 Notification of reasons for refusal

Free format text: JAPANESE INTERMEDIATE CODE: A131

Effective date: 20180508

A601 Written request for extension of time

Free format text: JAPANESE INTERMEDIATE CODE: A601

Effective date: 20180806

A601 Written request for extension of time

Free format text: JAPANESE INTERMEDIATE CODE: A601

Effective date: 20181005

A521 Written amendment

Free format text: JAPANESE INTERMEDIATE CODE: A523

Effective date: 20181107

TRDD Decision of grant or rejection written
A01 Written decision to grant a patent or to grant a registration (utility model)

Free format text: JAPANESE INTERMEDIATE CODE: A01

Effective date: 20190226

A61 First payment of annual fees (during grant procedure)

Free format text: JAPANESE INTERMEDIATE CODE: A61

Effective date: 20190325

R150 Certificate of patent or registration of utility model

Ref document number: 6503457

Country of ref document: JP

Free format text: JAPANESE INTERMEDIATE CODE: R150