US20090304205A1 - Techniques for personalizing audio levels - Google Patents

Techniques for personalizing audio levels

Info

Publication number
US20090304205A1
Authority
US
United States
Prior art keywords
user
audio
determining
location
audio volume
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
US12/136,733
Inventor
Robert Hardacker
Steven Richman
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Sony Corp
Sony Electronics Inc
Original Assignee
Sony Corp
Sony Electronics Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Sony Corp and Sony Electronics Inc
Priority to US12/136,733 (published as US20090304205A1)
Assigned to SONY ELECTRONICS INC. and SONY CORPORATION (assignment of assignors' interest; see document for details). Assignors: RICHMAN, STEVEN; HARDACKER, ROBERT
Publication of US20090304205A1
Priority to US14/562,178 (published as US9961471B2)
Legal status: Abandoned

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04S: STEREOPHONIC SYSTEMS
    • H04S7/00: Indicating arrangements; Control arrangements, e.g. balance control
    • H04S7/30: Control circuits for electronic adaptation of the sound field
    • H04S7/302: Electronic adaptation of stereophonic sound system to listener position or orientation
    • H04S7/303: Tracking of listener position or orientation
    • H: ELECTRICITY
    • H03: ELECTRONIC CIRCUITRY
    • H03G: CONTROL OF AMPLIFICATION
    • H03G3/00: Gain control in amplifiers or frequency changers without distortion of the input signal
    • H03G3/20: Automatic control
    • H03G3/30: Automatic control in amplifiers having semiconductor devices
    • H03G3/3005: Automatic control in amplifiers having semiconductor devices in amplifiers suitable for low frequencies, e.g. audio amplifiers
    • H03G3/301: Automatic control in amplifiers having semiconductor devices in amplifiers suitable for low frequencies, e.g. audio amplifiers, the gain being continuously variable
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04R: LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R2400/00: Loudspeakers
    • H04R2400/13: Use or details of compression drivers
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04S: STEREOPHONIC SYSTEMS
    • H04S7/00: Indicating arrangements; Control arrangements, e.g. balance control
    • H04S7/30: Control circuits for electronic adaptation of the sound field
    • H04S7/302: Electronic adaptation of stereophonic sound system to listener position or orientation


Abstract

Techniques for personalizing audio levels, in accordance with embodiments of the present technology, provide different audio volumes to different locations in a room, allowing two or more users to enjoy the same audio content at different volumes. Differential level and delay compensation filtering, based on the position of each of a plurality of speakers, the location of each user and the preferred relative audio volume of each user, is utilized to produce different effective audio levels in localized regions of a room.

Description

    BACKGROUND OF THE INVENTION
  • In the past, electronic audio and video systems included radio, television and record players that output audio in a single channel format. More recently, electronic audio and video systems have expanded to include video games, CD/DVD players, streaming audio (e.g., internet radio), MP3 devices, and the like. The audio and video systems now typically output audio in multi-channel formats such as stereo and surround sound. The use of multi-channel format audio generally enhances the user's listening and viewing experience by more closely replicating the original audio and/or enhancing a visual perception. For example, multi-channel audio may be used to output different instruments on different speakers to give the listener the feeling of being in the middle of a band. In another example, in a movie the audio track of a plane may be faded from front to back to aid the perception of a plane flying out of the screen and past the viewer. However, users typically perceive the audio differently from one another.
  • SUMMARY OF THE INVENTION
  • Embodiments of the present technology are directed toward techniques for personalizing audio levels. In one embodiment, a method of personalizing audio levels includes determining the location and relative audio volume preference of each user. A localized audio volume may then be output proximate each user based upon the location and preferred relative audio volume of each user.
  • In another embodiment, a system for personalizing audio levels includes a plurality of speakers, a source of audio and a signal processor. The signal processor receives audio from the source and causes the plurality of speakers to output a localized audio volume proximate each user based upon a location and preferred relative audio volume of each user.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • Embodiments of the present invention are illustrated by way of example and not by way of limitation, in the figures of the accompanying drawings and in which like reference numerals refer to similar elements and in which:
  • FIG. 1 shows a flow diagram of a method of personalizing audio levels, in accordance with one embodiment of the present technology.
  • FIG. 2 shows a block diagram of an exemplary audio or multimedia environment for providing personalized audio levels.
  • DETAILED DESCRIPTION OF THE INVENTION
  • Reference will now be made in detail to the embodiments of the present technology, examples of which are illustrated in the accompanying drawings. While the present technology will be described in conjunction with these embodiments, it will be understood that they are not intended to limit the invention to these embodiments. On the contrary, the invention is intended to cover alternatives, modifications and equivalents, which may be included within the scope of the invention as defined by the appended claims. Furthermore, in the following detailed description of the present technology, numerous specific details are set forth in order to provide a thorough understanding of the present technology. However, it is understood that the present technology may be practiced without these specific details. In other instances, well-known methods, procedures, components, and circuits have not been described in detail so as not to unnecessarily obscure aspects of the present technology.
  • It is appreciated that each sound source, such as a television, outputs sound within a range of continuous or discrete volumes. In addition, the effective sound level perceived by a user may differ from the output sound level of the sound source. Accordingly, the term “audio level” is used herein to refer to the output level of the sound source, the terms “audio volume” and “listening volume” are used herein to refer to the volume perceived by the user, and the term “audio level/volume” is used herein to refer to the relationship between the “audio level” and the “audio volume.”
  • Two or more users of a multimedia system often disagree on an appropriate listening volume. This results in a less than satisfactory experience for one or more users. Accordingly, there is a need for techniques for personalizing sound level/volume. Embodiments of the present technology personalize the sound level/volume by locating the users in a room, identifying the relative volume preferences of the users, and processing and delivering personalized audio levels to different users located at known locations. Embodiments may also adjust the processing and delivery of personalized audio levels in response to adjustments to the generic volume. Embodiments may also save and recall the location and relative volume preferences as one or more modes for use during other sessions. FIG. 1 shows a method of personalizing audio level/volume, in accordance with one embodiment of the present technology. The method will be further described with reference to FIG. 2, which shows an exemplary audio or multimedia environment. It is appreciated that multimedia systems that output both audio and video are more commonly in use today, and therefore the following detailed description refers to multimedia systems for convenience. However, the present technology as described herein is equally applicable to systems that only output audio.
  • The exemplary multimedia environment includes source 210 and a plurality of speakers 212-222. The source may be a television, cable tuner, satellite tuner, game console, CD/DVD player, VCR, personal computer, and/or the like. In one implementation, the audio source 210 may output two channels of audio for output on two or more speakers. In another implementation, the audio source 210 may include four or more channels (e.g., 5.1 surround sound) for output on four or more speakers. For instance, the speakers may include a front left speaker 212, a front right speaker 214, a center speaker 216, a left surround speaker 218, a right surround speaker 220 and a subwoofer 222 for outputting a 5.1 surround sound format of audio.
  • The method of personalizing audio level/volume begins with determining the position of each of two or more users 224, 226, at 110. In one implementation, the multimedia system may include a pair of microphones 228, 230 located at known positions. The microphones 228, 230 receive sound from each user 224, 226. The sound received from each user may be used to triangulate the position of the corresponding user relative to the microphones 228, 230. For example, a microphone 228, 230 may be mounted on each side of a television 210. During a training or setup mode, each user 224, 226 may be sequentially prompted to speak. The relative time difference between the sound 232, 234 received at each microphone 228, 230 may be used to determine the location of the corresponding user 224 that has been prompted to speak. A signal processing system may be used to determine the time delay between the sound from each corresponding user received at each microphone. The time delay may then be used along with the known position of each microphone 228, 230 to triangulate the position of the corresponding user 224. Additional microphones placed in additional locations may be used to determine the relative position in additional dimensions.
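The two-microphone approach above turns on one signal-processing step: estimating the arrival-time difference of the user's voice at the two microphones. A minimal sketch of that step in Python is shown below; it assumes the two captures are already available as NumPy arrays at a common sample rate, and all names are illustrative rather than taken from the patent.

```python
import numpy as np

SPEED_OF_SOUND = 343.0  # m/s, approximate value at room temperature

def estimate_tdoa(mic_a: np.ndarray, mic_b: np.ndarray, sample_rate: float) -> float:
    """Estimate the time difference of arrival (in seconds) of the same sound
    at two microphones using full cross-correlation. A positive value means
    the sound reached mic_a later than mic_b."""
    corr = np.correlate(mic_a - mic_a.mean(), mic_b - mic_b.mean(), mode="full")
    lag_samples = int(np.argmax(corr)) - (len(mic_b) - 1)
    return lag_samples / sample_rate

def path_difference_m(tdoa_s: float) -> float:
    """Convert a time difference into a path-length difference; with the
    microphone spacing known, this constrains the talker's position."""
    return tdoa_s * SPEED_OF_SOUND
```

With only two microphones the delay constrains the talker to a curve between them; the note above about adding microphones corresponds to intersecting several such constraints to resolve the position in more dimensions.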
  • In another implementation, a remote controller 236 for the system may include a short range radio frequency (RF) transmitter and the television, set top box (STB) or the like may contain two or more antennas. The antennas are located at known positions. When each user 224, 226 possesses the remote controller 236, an RF signal is emitted from the transmitter in the remote controller 236 and received by the antennas. The RF signal received at the antennas is used by a signal processor to triangulate the position of the corresponding users. In a similar implementation, the remote controller 236 may include an infrared (IR) transmitter and two or more IR receivers may be positioned at known locations in the television, STB or the like.
  • In another implementation, a remote controller 236 for the system may include a microphone. When each user 224, 226 possesses the remote controller 236, sound (e.g., training tones) 238-246 emitted from speakers 212-220 at known fixed locations may be used to triangulate the location of the remote controller 236, and therefore of the user 224 that possesses the remote controller 236, relative to the speakers 212-220. The process is repeated for each user 224, 226. A signal processing system may be used to determine the time delay between the sound from each speaker 212-222 received at the microphone in the remote control 236. The time delay may then be used along with the known position of each speaker 212-222 to triangulate the position of the corresponding user 224, 226. Typically, the relative position of the users can be determined with sufficient accuracy by outputting sound sequentially on three speakers in diverse fixed locations (e.g., center, left rear and right rear speakers 216-220). In one implementation, the remote controller 236 may include logic for determining the time delay. Data indicating the time delay may then be sent back to a signal processor in the television, cable tuner, satellite tuner, game console, CD/DVD player, VCR, or personal computer, which triangulates the position of the users from the time delay and the known locations of the speakers. In another implementation, a signal processor in the remote controller 236 may determine the time delay and triangulate the position of the users, and return data to the television, cable tuner, satellite tuner, game console, CD/DVD player, VCR, or personal computer indicating the determined position of the users. The data may be returned from the remote controller across an RF link or IR link of the remote control. Alternatively, the data may be returned via an NFC link, a USB link, a memory stick carried ("sneaker-netted") back to the television, cable tuner, satellite tuner, game console, CD/DVD player, VCR, personal computer or the like, or by another similar communication technique.
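For the remote-with-microphone variant, the measured delays of training tones from speakers at known positions have to be turned into a listener position. One straightforward, if brute-force, way to do that is a grid search over candidate positions, comparing predicted and measured relative delays; the sketch below makes that concrete. The room size, speaker coordinates and delay values are invented for illustration and are not from the patent.

```python
import numpy as np

SPEED_OF_SOUND = 343.0  # m/s

def locate_listener(speaker_xy: np.ndarray, rel_delays_s: np.ndarray,
                    room_w: float = 5.0, room_d: float = 4.0,
                    step: float = 0.05) -> tuple:
    """Estimate the listener's (x, y) position in metres.

    speaker_xy   -- (N, 2) array of known speaker positions
    rel_delays_s -- (N,) measured tone arrival times minus that of speaker 0
    Returns the grid point whose predicted relative-delay pattern best
    matches the measurement (least squares).
    """
    best, best_err = (0.0, 0.0), float("inf")
    for x in np.arange(0.0, room_w, step):
        for y in np.arange(0.0, room_d, step):
            dists = np.hypot(speaker_xy[:, 0] - x, speaker_xy[:, 1] - y)
            pred = (dists - dists[0]) / SPEED_OF_SOUND
            err = float(np.sum((pred - rel_delays_s) ** 2))
            if err < best_err:
                best, best_err = (x, y), err
    return best

# Center, left-rear and right-rear speakers (made-up coordinates), and
# made-up relative delays measured at the remote's microphone:
speakers = np.array([[2.5, 0.0], [0.5, 4.0], [4.5, 4.0]])
print(locate_listener(speakers, np.array([0.0, 0.0031, 0.0058])))
```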
  • In another implementation, a remote controller 236 for the system may include a transmitter that emits a tone or sound during a particular mode or during activation of one or more buttons on the remote control. The tone or sound may be in the audible range (e.g., 20 Hz-20 kHz), or may be outside the audible range (e.g., 20-48 kHz), such as an ultrasonic tone or sound. The tone or sound is received by a plurality of receivers (e.g., microphones) in fixed locations that are contained in or coupled to the television, set top box (STB) or the like. The sound or tone received from the remote control when possessed by each user may be used by a signal processor in the television, set top box, or the like, to triangulate the position of the corresponding user relative to the receivers.
  • In yet another implementation, a graphical user interface (GUI) is displayed on the television 210. A user selects a relative shape of the room, and a region corresponding to the location in the room of each user 224, 226. The location may be selected from a grid overlaid on the relative shape of the room displayed on the GUI.
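If the positions come from the on-screen grid instead, the only computation needed is mapping the selected cell to a point in the room. A trivial, purely illustrative helper follows; the grid resolution and room dimensions are assumptions.

```python
def grid_cell_to_position(row: int, col: int, rows: int, cols: int,
                          room_w_m: float, room_d_m: float) -> tuple:
    """Map a cell picked on the GUI grid to the cell-centre coordinates
    (in metres) within the room outline the user selected."""
    x = (col + 0.5) * room_w_m / cols
    y = (row + 0.5) * room_d_m / rows
    return (x, y)

# e.g. a user picks row 3, column 6 of an 8x10 grid over a 5 m x 4 m room
print(grid_cell_to_position(3, 6, rows=8, cols=10, room_w_m=5.0, room_d_m=4.0))
```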
  • At 120, the relative audio volume preference for each user 224, 226 is determined. When determining the relative audio volume preference for each user, it may be best that the audio volume is substantially constant across all the locations at which the users may be located. A logic unit may determine one or more volume levels preferred by a first user and the corresponding volume levels preferred by one or more additional users for each volume level preference determined for the first user. The logic unit then determines the relative audio volume preference from the difference between each user's one or more preferred audio levels.
  • In one implementation, each user 224 is sequentially prompted to adjust the audio level of the system to the audio volume preferred by that user. The user 224 may select a single preferred audio volume using the remote control 236 or using verbal commands picked up by the microphones 228, 230. The difference in the audio levels selected by each user 224, 226 is used to determine the relative audio volume preference for each user. In one implementation, the relative audio volume preference may be fixed for all audio levels of an audio source. In such an implementation, the audio volume difference between users is fixed for all audio levels. In another implementation, a response curve for the relative audio volume preference of each user may be determined. In such an implementation, the users may be iteratively queried to select a preferred effective audio volume for each audio level. In addition, each user may specify a minimum and/or maximum effective audio volume. The data concerning the preferred effective audio volume for each audio level, and the minimum and/or maximum effective audio volume, are used to determine a response curve. The response curve indicates the preferred relative audio volume of each user.
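The two preference models in the paragraph above, a fixed offset versus a per-level response curve with optional minimum and maximum, could be captured in a small structure like the following. This is only a sketch of how such data might be held; the patent does not prescribe any particular representation, and all names are assumptions.

```python
import numpy as np

class VolumePreference:
    """A user's preferred effective volume, either as a fixed offset from the
    source audio level or as a response curve interpolated from queried points."""

    def __init__(self, fixed_offset=None, levels=None, preferred=None,
                 vol_min=None, vol_max=None):
        self.fixed_offset = fixed_offset   # e.g. -2 "steps" below the source level
        self.levels = levels               # audio levels at which the user was queried
        self.preferred = preferred         # volume the user chose at each queried level
        self.vol_min, self.vol_max = vol_min, vol_max

    def effective_volume(self, audio_level: float) -> float:
        if self.fixed_offset is not None:
            vol = audio_level + self.fixed_offset
        else:
            # Response curve: interpolate between the queried (level, volume) points.
            vol = float(np.interp(audio_level, self.levels, self.preferred))
        if self.vol_min is not None:
            vol = max(vol, self.vol_min)
        if self.vol_max is not None:
            vol = min(vol, self.vol_max)
        return vol

# A second user who prefers it quieter at low levels but equal at high levels:
user2 = VolumePreference(levels=[2, 5, 9], preferred=[1, 3, 9], vol_max=9)
print(user2.effective_volume(7))  # interpolated preferred volume at source level 7
```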
  • In one implementation, the location and relative audio volume preference for each user 224, 226 is determined during a training or setup mode. The position and relative audio volume preference for each user may then be stored 132 as a set of mode information, at 130. The location and relative audio volume preference for each user determined during processes 110 and 120 may be stored 132 for use 134 in setting the localized volume and adjusting the localized volume in response to commands to adjust the sound level. In one implementation, the location and relative sound volume preference may be determined and stored 132 for the current session. In another implementation, the location and relative sound volume preference may be stored 132 and recalled 134 for the current and subsequent sessions to reduce the setup time during each session. For example, the currently determined locations and relative sound volume preferences may be stored 132 as one of a plurality of modes. During subsequent sessions, a given mode may be selected 134 for use during the session. This can be particularly useful because viewing habits are often characterized by a given group of users sitting in the same spots in the room from one session to another. For instance, when a husband and wife are watching television, the husband may most times sit in a first location and the wife may most times sit in a second location. When a husband, wife and a child are watching television, the husband may again sit in the first location, the child may most times sit in the second location and the wife may most times sit in a third location. Therefore, a first mode may include the location and relative audio volume preferences for the husband and wife in the first and second locations respectively. A second mode may include the location and relative sound volume preferences for the husband, child and wife in the first, second and third locations respectively. Any number of additional modes may be created and stored 132 for various combinations of users. Alternatively, the location and relative audio volume preference for each user determined at 120 may be used directly 136 to output localized audio volume at 140.
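Storing and recalling the determined locations and preferences as named modes (steps 130-134) can be as simple as keeping a small record per user keyed by mode name. The sketch below illustrates one possible shape for that data under assumed names; it is not the patent's implementation.

```python
from dataclasses import dataclass, field

@dataclass
class UserSetting:
    name: str
    position: tuple            # (x, y) in metres, from step 110
    relative_offset: float     # preferred volume relative to a reference user, from step 120

@dataclass
class ModeStore:
    modes: dict = field(default_factory=dict)

    def save(self, mode_name: str, settings: list) -> None:
        """Store the current session's locations and preferences as a mode (132)."""
        self.modes[mode_name] = list(settings)

    def recall(self, mode_name: str) -> list:
        """Recall a previously stored mode at the start of a later session (134)."""
        return self.modes[mode_name]

store = ModeStore()
store.save("couple", [UserSetting("husband", (1.0, 2.5), 0.0),
                      UserSetting("wife", (3.0, 2.5), -2.0)])
store.save("family", [UserSetting("husband", (1.0, 2.5), 0.0),
                      UserSetting("child", (3.0, 2.5), -1.0),
                      UserSetting("wife", (2.0, 3.5), -2.0)])
```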
  • The audio is output based on the location and preferred relative audio volume of each user, at 140. The volume of the audio at the location corresponding to each user (e.g., localized audio volume 248, 250) is output at the relative audio volume preferred by the user. The audio source 210 includes a signal processing unit that applies differential level and delay compensation filtering to shape the psychoacoustical perception of the audio by one or more users 224, 226. Psychoacoustics are utilized to produce given audio volumes in localized regions 248, 250 of a room by applying differential level and delay compensation to the conventional audio output. The differential level and delay compensation filtering is based on the position of the speakers 212-222, the location of one or more users 224, 226, and the relative audio volume preference of the corresponding user. The signal processing unit may be implemented by a microprocessor or a dedicated digital signal processor (DSP). As a result, different relative audio levels for two or more locations are produced. For example, a first location 248 may be +6 dB louder than a second location 250, regardless of the volume level from the television. In other words, the audio volume in a localized region 248 around the first user 224 may be at an effective level of 7 and the effective audio volume in a localized region 250 around the second user 226 may be at an effective level of 5, when the audio level of the audio source is set to 7.
  • Depending upon the response curves of the individual listeners, the relative audio volume at the first and second positions may also vary as the audio level of the audio source increases or decreases. For example, the relative difference between the first and second positions 248, 250 may be +6 dB when the audio level output by the television is at level 4, and might be +8 dB at an audio level of 7.
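At its core, the differential level and delay compensation described above means each speaker receives its own copy of the programme audio with a per-speaker gain and delay chosen from the speaker positions, the user locations and the users' relative preferences. The sketch below shows only that mechanical step for a single channel; deriving the actual gains and delays for a psychoacoustic rendering is considerably more involved and is not shown here.

```python
import numpy as np

def render_speaker_feed(audio: np.ndarray, gain_db: float, delay_samples: int) -> np.ndarray:
    """Apply one speaker's level and delay compensation to the source audio."""
    gain = 10.0 ** (gain_db / 20.0)
    return np.concatenate([np.zeros(delay_samples), audio]) * gain

def render_all(audio: np.ndarray, per_speaker: list) -> list:
    """per_speaker is a list of (gain_db, delay_samples) pairs, one per speaker,
    chosen elsewhere from the room geometry and the per-user volume preferences."""
    feeds = [render_speaker_feed(audio, g, d) for g, d in per_speaker]
    longest = max(len(f) for f in feeds)
    return [np.pad(f, (0, longest - len(f))) for f in feeds]
```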
  • At 150, a command to adjust the audio level is received. Typically, a user 224 may use the remote control 236 to issue a command to adjust the audio level up or down using one or more appropriate buttons on the remote controller 236. The remote controller 236 issues an appropriate command to the appropriate device to adjust the audio level in response to activation of an appropriate button by the user 224. In another implementation, a microphone on the remote controller 236, television or the like and a digital signal processor (DSP) implementing voice recognition may be used to receive audible commands from the user to adjust the levels. In yet another implementation, the user may input a command to adjust the audio level using one or more hand gestures or any other means for adjusting the audio level. At 160, the localized audio volume proximate each user is adjusted based on the location and preferred relative audio volume of each user in response to the command to adjust the audio level. For example, one of the users 224 may use the remote controller 236 to adjust the audio level from 7 to 9. The audio output is adjusted so that the audio volume in the localized region 248 around the first user 224 is increased to an effective level of 9 and the effective level in a localized region 250 around the second user 226 is increased to an effective level of 7. The process at 150 and 160 may be repeated to increase or decrease the localized audio volumes in response to each corresponding command.
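Under the fixed-offset preference model, responding to a generic volume command (steps 150 and 160) reduces to re-evaluating each user's effective level at the new source level while keeping the relative offsets intact. A minimal illustration, with made-up user names and offsets, reproduces the 7-to-9 example from the description:

```python
def localized_targets(source_level: float, offsets: dict) -> dict:
    """Effective volume to render at each user's location for a given source
    audio level, using fixed per-user offsets (in volume steps)."""
    return {user: source_level + off for user, off in offsets.items()}

print(localized_targets(7, {"user_224": 0, "user_226": -2}))  # {'user_224': 7, 'user_226': 5}
print(localized_targets(9, {"user_224": 0, "user_226": -2}))  # {'user_224': 9, 'user_226': 7}
```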
  • Embodiments of the present technology advantageously provide different audio volumes to different locations in a room, allowing two or more users to enjoy the same audio content at different volumes. Psychoacoustics are utilized to produce different effective audio levels in localized regions of a room based on the location and relative sound level preferences of the current set of users.
  • The foregoing descriptions of specific embodiments of the present technology have been presented for purposes of illustration and description. They are not intended to be exhaustive or to limit the invention to the precise forms disclosed, and obviously many modifications and variations are possible in light of the above teaching. The embodiments were chosen and described in order to best explain the principles of the present technology and its practical application, to thereby enable others skilled in the art to best utilize the present technology and various embodiments with various modifications as are suited to the particular use contemplated. It is intended that the scope of the invention be defined by the Claims appended hereto and their equivalents.

Claims (30)

1. A method of personalizing audio levels comprising outputting a localized audio volume proximate each user based upon a location and preferred relative audio volume of each user.
2. The method according to claim 1, wherein outputting a localized audio volume proximate each user comprises psychoacoustic modulating audio to produce an audio volume proximate each user based upon the location and preferred relative audio volume of each user.
3. The method according to claim 1, wherein outputting a localized audio volume proximate each user comprises applying differential level and delay compensation filtering to audio based on a position of each of a plurality of speakers, the location of each user and the preferred relative audio volume of each user.
4. The method according to claim 1, further comprising:
receiving a command to adjust the audio level; and
adjusting the localized audio volume proximate each user based on the location and preferred relative audio volume of each user in response to the command to adjust the audio level.
5. The method according to claim 4, wherein the command comprises an audio command received from a user.
6. The method according to claim 1, further comprising:
determining the location of each user; and
determining the relative audio volume preference of each user.
7. The method according to claim 6, wherein determining the location of each user comprises:
receiving sound from each user;
determining time delay between the sound received from each user at a plurality of microphones; and
triangulating a position of each user from the time delay between sound received from each user at the plurality of microphones.
8. The method according to claim 6, wherein determining the location of each user comprises:
outputting sound from each of a plurality of speakers;
determining time delay between the sound received from each speaker at a microphone proximate each user; and
triangulating a position of each user from the time delay between sound received from each speaker at the microphone.
9. The method according to claim 6, wherein determining the location of each user comprises:
outputting a radio frequency signal from a transmitter proximate each user;
determining a time delay between the radio frequency signal received at each of a plurality of antennas; and
triangulating a position of each user from the time delay between the radio frequency signal received from the transmitter at each antenna.
10. The method according to claim 6, wherein determining the location of each user comprises:
outputting an infrared signal from a transmitter proximate each user;
determining a time delay between the infrared signal received at each of a plurality of receivers; and
triangulating a position of each user from the time delay between the infrared signal received from the transmitter at each receiver.
11. The method according to claim 6, wherein determining the location of each user comprises:
outputting a tone or sound in the audible sound range or non-audible sound range from a transmitter proximate each user;
determining a time delay between the tone or sound received at each of a plurality of receivers; and
triangulating a position of each user from the time delay between the tone or sound received from the transmitter at each receiver.
12. The method according to claim 6, wherein determining the relative audio volume preference of each user comprises:
determining a preferred audio level of each user; and
determining a difference between the preferred audio level of each user.
13. The method according to claim 6, wherein determining the relative audio volume preference for each user comprises:
determining a preferred audio level for each user for each of a plurality of audio levels; and
determining a difference between the preferred audio level of each user for each of the plurality of audio levels.
14. The method according to claim 6, further comprising storing the location and relative audio volume preference of each user.
15. A method comprising:
accessing mode information including a location and a relative audio volume preference of each user; and
outputting a localized audio volume proximate each user based upon the location and preferred relative audio volume of each user.
16. The method according to claim 15, wherein outputting a localized audio volume proximate each user comprises psychoacoustic modulating audio to produce an audio volume proximate each user based upon the location and preferred relative audio volume of each user.
17. The method according to claim 15, further comprising:
determining the location of each user; and
determining the relative audio volume preference of each user.
18. The method according to claim 15, further comprising:
receiving a command to adjust the audio level; and
adjusting the localized audio volume proximate each user based on the location and preferred relative audio volume of each user in response to the command to adjust the audio level.
19. The method according to claim 15, wherein the relative audio volume preference of each user is fixed for each of a plurality of audio levels.
20. The method according to claim 16, wherein the relative audio volume preference of each user is specified by a response curve.
21. A system for personalizing audio levels comprising:
a plurality of speakers;
a source of audio; and
a signal processor, communicatively coupled between the source and the plurality of speakers, for receiving the audio and a location and a relative audio volume preference of each user and for causing the plurality of speakers to output a localized audio volume proximate each user based upon the location and preferred relative audio volume of each user.
22. The system of claim 21, further comprising:
a remote control for adjusting an audio level of the source; and
the signal processor for adjusting the localized audio volume proximate each user in response to the adjusted audio level of the source.
23. The system of claim 21, further comprising:
a microphone for receiving an audible input from a user;
the signal processor implementing voice recognition for converting the audible input to a command to adjust an audio level of the source and for adjusting the localized audio volume proximate each user in response to the audible input from the user.
24. The system of claim 21, further comprising:
an image sensor for receiving a hand gesture from a user;
the processor implementing gesture recognition for converting the hand gesture of the user to a command to adjust an audio level of the source and for adjusting the localized audio volume proximate each user in response to the hand gesture from the user.
25. The system of claim 21, further comprising:
a microphone proximate a user for receiving a sound from each of the plurality of speakers; and
the signal processor for determining the location of each user from a time difference between receipt of the sound from each of the plurality of speakers by the microphone.
26. The system of claim 21, further comprising a logic unit for determining one or more volume levels preferred by a first user and corresponding volume levels preferred by one or more additional users and determining the relative audio volume preference of each user from the difference between corresponding volume levels preferred by the one or more additional users relative to the first user.
27. A system comprising:
a means for determining a location of each of a plurality of users;
a means for determining a relative audio volume preference of each user;
a means for storing the location and relative audio volume preference of each user as one of a plurality of modes; and
a means for outputting the localized audio volume proximate each user based upon a selected one of the plurality of modes.
28. The system of claim 27, further comprising:
a means for receiving a command to adjust the audio level; and
a means for adjusting the localized audio volume in response to the command to adjust the audio level.
29. The system of claim 27, wherein the relative audio volume preference of each user is fixed for each of a plurality of audio levels.
30. The system of claim 27, wherein the relative audio volume preference of each user is specified by a response curve.
US12/136,733 2008-06-10 2008-06-10 Techniques for personalizing audio levels Abandoned US20090304205A1 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
US12/136,733 US20090304205A1 (en) 2008-06-10 2008-06-10 Techniques for personalizing audio levels
US14/562,178 US9961471B2 (en) 2008-06-10 2014-12-05 Techniques for personalizing audio levels

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
US12/136,733 US20090304205A1 (en) 2008-06-10 2008-06-10 Techniques for personalizing audio levels

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US14/562,178 Continuation US9961471B2 (en) 2008-06-10 2014-12-05 Techniques for personalizing audio levels

Publications (1)

Publication Number Publication Date
US20090304205A1 (en) 2009-12-10

Family

ID=41400344

Family Applications (2)

Application Number Title Priority Date Filing Date
US12/136,733 Abandoned US20090304205A1 (en) 2008-06-10 2008-06-10 Techniques for personalizing audio levels
US14/562,178 Active 2029-01-17 US9961471B2 (en) 2008-06-10 2014-12-05 Techniques for personalizing audio levels

Family Applications After (1)

Application Number Title Priority Date Filing Date
US14/562,178 Active 2029-01-17 US9961471B2 (en) 2008-06-10 2014-12-05 Techniques for personalizing audio levels

Country Status (1)

Country Link
US (2) US20090304205A1 (en)

Cited By (79)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20120114137A1 (en) * 2010-11-05 2012-05-10 Shingo Tsurumi Acoustic Control Apparatus and Acoustic Control Method
WO2012173801A1 (en) 2011-06-15 2012-12-20 Dolby Laboratories Licensing Corporation Method for capturing and playback of sound originating from a plurality of sound sources
US20130083944A1 (en) * 2009-11-24 2013-04-04 Nokia Corporation Apparatus
US20130170647A1 (en) * 2011-12-29 2013-07-04 Jonathon Reilly Sound field calibration using listener localization
US20130229344A1 (en) * 2009-07-31 2013-09-05 Echostar Technologies L.L.C. Systems and methods for hand gesture control of an electronic device
US20140071159A1 (en) * 2012-09-13 2014-03-13 Ati Technologies, Ulc Method and Apparatus For Providing a User Interface For a File System
EP2723090A2 (en) * 2012-10-19 2014-04-23 Sony Corporation A directional sound apparatus, method graphical user interface and software
CN103999488A (en) * 2011-12-19 2014-08-20 高通股份有限公司 Automated user/sensor location recognition to customize audio performance in a distributed multi-sensor environment
US20140354791A1 (en) * 2013-05-31 2014-12-04 Lg Electronics Inc. Image display device and method of controlling the same
US8910265B2 (en) 2012-09-28 2014-12-09 Sonos, Inc. Assisted registration of audio sources
WO2015076930A1 (en) * 2013-11-22 2015-05-28 Tiskerling Dynamics Llc Handsfree beam pattern configuration
WO2015126814A3 (en) * 2014-02-20 2015-10-15 Bose Corporation Content-aware audio modes
US9264839B2 (en) 2014-03-17 2016-02-16 Sonos, Inc. Playback device configuration based on proximity detection
US9419575B2 (en) 2014-03-17 2016-08-16 Sonos, Inc. Audio settings based on environment
US9538305B2 (en) 2015-07-28 2017-01-03 Sonos, Inc. Calibration error conditions
US9648422B2 (en) 2012-06-28 2017-05-09 Sonos, Inc. Concurrent multi-loudspeaker calibration with a single measurement
US20170131966A1 (en) * 2015-11-10 2017-05-11 Google Inc. Automatic Audio Level Adjustment During Media Item Presentation
US9668049B2 (en) 2012-06-28 2017-05-30 Sonos, Inc. Playback device calibration user interfaces
US9686625B2 (en) * 2015-07-21 2017-06-20 Disney Enterprises, Inc. Systems and methods for delivery of personalized audio
US9693165B2 (en) 2015-09-17 2017-06-27 Sonos, Inc. Validation of audio calibration using multi-dimensional motion check
US9690539B2 (en) 2012-06-28 2017-06-27 Sonos, Inc. Speaker calibration user interface
US9690271B2 (en) 2012-06-28 2017-06-27 Sonos, Inc. Speaker calibration
EP3188505A1 (en) * 2016-01-04 2017-07-05 Harman Becker Automotive Systems GmbH Sound reproduction for a multiplicity of listeners
US9706323B2 (en) 2014-09-09 2017-07-11 Sonos, Inc. Playback device calibration
JP2017123650A (en) * 2016-01-04 2017-07-13 ハーマン ベッカー オートモーティブ システムズ ゲーエムベーハー Sound reproduction for very large number of listeners
US9743207B1 (en) 2016-01-18 2017-08-22 Sonos, Inc. Calibration using multiple recording devices
US9749763B2 (en) 2014-09-09 2017-08-29 Sonos, Inc. Playback device calibration
US9763018B1 (en) 2016-04-12 2017-09-12 Sonos, Inc. Calibration of audio playback devices
US9794710B1 (en) 2016-07-15 2017-10-17 Sonos, Inc. Spatial audio correction
US9860670B1 (en) 2016-07-15 2018-01-02 Sonos, Inc. Spectral correction using spatial calibration
US9860662B2 (en) 2016-04-01 2018-01-02 Sonos, Inc. Updating playback device configuration information based on calibration data
US9864574B2 (en) 2016-04-01 2018-01-09 Sonos, Inc. Playback device calibration based on representation spectral characteristics
US9891881B2 (en) 2014-09-09 2018-02-13 Sonos, Inc. Audio processing algorithm database
US9952825B2 (en) 2014-09-09 2018-04-24 Sonos, Inc. Audio processing algorithms
US9990171B2 (en) * 2014-04-29 2018-06-05 Boe Technology Group Co., Ltd. Audio device and method for automatically adjusting volume thereof
US10003899B2 (en) 2016-01-25 2018-06-19 Sonos, Inc. Calibration with particular locations
US10081335B1 (en) * 2017-08-01 2018-09-25 Ford Global Technologies, Llc Automotive rain detector using psycho-acoustic metrics
US10097150B1 (en) * 2017-07-13 2018-10-09 Lenovo (Singapore) Pte. Ltd. Systems and methods to increase volume of audio output by a device
US10111002B1 (en) * 2012-08-03 2018-10-23 Amazon Technologies, Inc. Dynamic audio optimization
US10127006B2 (en) 2014-09-09 2018-11-13 Sonos, Inc. Facilitating calibration of an audio playback device
US10191714B2 (en) 2016-01-14 2019-01-29 Performance Designed Products Llc Gaming peripheral with built-in audio support
US10284983B2 (en) 2015-04-24 2019-05-07 Sonos, Inc. Playback device calibration user interfaces
US10299061B1 (en) 2018-08-28 2019-05-21 Sonos, Inc. Playback device calibration
US10372406B2 (en) 2016-07-22 2019-08-06 Sonos, Inc. Calibration interface
US10459684B2 (en) 2016-08-05 2019-10-29 Sonos, Inc. Calibration of a playback device based on an estimated frequency response
CN110874204A (en) * 2017-05-16 2020-03-10 苹果公司 Method and interface for home media control
US10585639B2 (en) 2015-09-17 2020-03-10 Sonos, Inc. Facilitating calibration of an audio playback device
EP3624460A1 (en) * 2017-05-16 2020-03-18 Apple Inc. Methods and interfaces for home media control
US10664224B2 (en) 2015-04-24 2020-05-26 Sonos, Inc. Speaker calibration user interface
US10734965B1 (en) 2019-08-12 2020-08-04 Sonos, Inc. Audio calibration of a portable playback device
WO2020173156A1 (en) * 2019-02-27 2020-09-03 北京地平线机器人技术研发有限公司 Method, device and electronic device for controlling audio playback of multiple loudspeakers
US10779085B1 (en) 2019-05-31 2020-09-15 Apple Inc. User interfaces for managing controllable external devices
US10928980B2 (en) 2017-05-12 2021-02-23 Apple Inc. User interfaces for playing and managing audio items
US10992795B2 (en) 2017-05-16 2021-04-27 Apple Inc. Methods and interfaces for home media control
US10996917B2 (en) 2019-05-31 2021-05-04 Apple Inc. User interfaces for audio media control
US11037150B2 (en) 2016-06-12 2021-06-15 Apple Inc. User interfaces for transactions
US11079913B1 (en) 2020-05-11 2021-08-03 Apple Inc. User interface for status indicators
US11080004B2 (en) 2019-05-31 2021-08-03 Apple Inc. Methods and user interfaces for sharing audio
US11106423B2 (en) 2016-01-25 2021-08-31 Sonos, Inc. Evaluating calibration of a playback device
US11126704B2 (en) 2014-08-15 2021-09-21 Apple Inc. Authenticated device used to unlock another device
US11157143B2 (en) 2014-09-02 2021-10-26 Apple Inc. Music user interface
US11200309B2 (en) 2011-09-29 2021-12-14 Apple Inc. Authentication with secondary approver
US11206484B2 (en) 2018-08-28 2021-12-21 Sonos, Inc. Passive speaker authentication
US11206309B2 (en) 2016-05-19 2021-12-21 Apple Inc. User interface for remote authorization
US11281711B2 (en) 2011-08-18 2022-03-22 Apple Inc. Management of local and remote media items
US11283916B2 (en) 2017-05-16 2022-03-22 Apple Inc. Methods and interfaces for configuring a device in accordance with an audio tone signal
US11304003B2 (en) 2016-01-04 2022-04-12 Harman Becker Automotive Systems Gmbh Loudspeaker array
US11316966B2 (en) 2017-05-16 2022-04-26 Apple Inc. Methods and interfaces for detecting a proximity between devices and initiating playback of media
US11392291B2 (en) 2020-09-25 2022-07-19 Apple Inc. Methods and interfaces for media control with dynamic feedback
US11431836B2 (en) 2017-05-02 2022-08-30 Apple Inc. Methods and interfaces for initiating media playback
US11539831B2 (en) 2013-03-15 2022-12-27 Apple Inc. Providing remote interactions with host device using a wireless device
US11540052B1 (en) 2021-11-09 2022-12-27 Lenovo (United States) Inc. Audio component adjustment based on location
US11567648B2 (en) 2009-03-16 2023-01-31 Apple Inc. Device, method, and graphical user interface for moving a current position in content at a variable scrubbing rate
US11620103B2 (en) 2019-05-31 2023-04-04 Apple Inc. User interfaces for audio media control
US11683408B2 (en) 2017-05-16 2023-06-20 Apple Inc. Methods and interfaces for home media control
US11818560B2 (en) * 2012-04-02 2023-11-14 Qualcomm Incorporated Systems, methods, apparatus, and computer-readable media for gestural manipulation of a sound field
WO2023219413A1 (en) * 2022-05-11 2023-11-16 Samsung Electronics Co., Ltd. Method and system for modifying audio content for listener
US11847378B2 (en) 2021-06-06 2023-12-19 Apple Inc. User interfaces for audio routing
US11907013B2 (en) 2014-05-30 2024-02-20 Apple Inc. Continuity of applications across devices

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9319816B1 (en) * 2012-09-26 2016-04-19 Amazon Technologies, Inc. Characterizing environment using ultrasound pilot tones
US20170330563A1 (en) * 2016-05-13 2017-11-16 Bose Corporation Processing Speech from Distributed Microphones
WO2019014827A1 (en) * 2017-07-18 2019-01-24 Shenzhen Zhishengda Technology Co., Ltd. Method for adjusting playing mode according to sitting position, and digital television
US10665234B2 (en) * 2017-10-18 2020-05-26 Motorola Mobility Llc Detecting audio trigger phrases for a voice recognition session

Citations (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20020159611A1 (en) * 2001-04-27 2002-10-31 International Business Machines Corporation Method and system for automatic reconfiguration of a multi-dimension sound system
US20040114770A1 (en) * 2002-10-30 2004-06-17 Pompei Frank Joseph Directed acoustic sound system
US20050063556A1 (en) * 2003-09-23 2005-03-24 Mceachen Peter C. Audio device
US20060062401A1 (en) * 2002-09-09 2006-03-23 Koninklijke Philips Electronics, N.V. Smart speakers
US20070116306A1 (en) * 2003-12-11 2007-05-24 Sony Deutschland Gmbh Dynamic sweet spot tracking
US20070154041A1 (en) * 2006-01-05 2007-07-05 Todd Beauchamp Integrated entertainment system with audio modules
US20080025518A1 (en) * 2005-01-24 2008-01-31 Ko Mizuno Sound Image Localization Control Apparatus
US20080252595A1 (en) * 2007-04-11 2008-10-16 Marc Boillot Method and Device for Virtual Navigation and Voice Processing
US20080285772A1 (en) * 2007-04-17 2008-11-20 Tim Haulick Acoustic localization of a speaker
US7929720B2 (en) * 2005-03-15 2011-04-19 Yamaha Corporation Position detecting system, speaker system, and user terminal apparatus
US7970153B2 (en) * 2003-12-25 2011-06-28 Yamaha Corporation Audio output apparatus
US8159399B2 (en) * 2008-06-03 2012-04-17 Apple Inc. Antenna diversity systems for portable electronic devices
US8483413B2 (en) * 2007-05-04 2013-07-09 Bose Corporation System and method for directionally radiating sound

Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4251688A (en) * 1979-01-15 1981-02-17 Ana Maria Furner Audio-digital processing system for demultiplexing stereophonic/quadriphonic input audio signals into 4-to-72 output audio signals
ATE376892T1 (en) * 1999-09-29 2007-11-15 1 Ltd METHOD AND APPARATUS FOR ALIGNING SOUND WITH A GROUP OF EMISSION TRANSDUCERS
US6744364B2 (en) * 2001-10-25 2004-06-01 Douglas L. Wathen Distance sensitive remote control systems
US20080232608A1 (en) * 2004-01-29 2008-09-25 Koninklijke Philips Electronic, N.V. Audio/Video System
JP4887290B2 (en) * 2005-06-30 2012-02-29 Panasonic Corporation Sound image localization controller
JP4882380B2 (en) * 2006-01-16 2012-02-22 Yamaha Corporation Speaker system
US7991163B2 (en) * 2006-06-02 2011-08-02 Ideaworkx Llc Communication system, apparatus and method
US20080044038A1 (en) * 2006-08-18 2008-02-21 Houle Douglas W Stereophonic sound system

Patent Citations (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20020159611A1 (en) * 2001-04-27 2002-10-31 International Business Machines Corporation Method and system for automatic reconfiguration of a multi-dimension sound system
US20060062401A1 (en) * 2002-09-09 2006-03-23 Koninklijke Philips Electronics, N.V. Smart speakers
US20040114770A1 (en) * 2002-10-30 2004-06-17 Pompei Frank Joseph Directed acoustic sound system
US20050063556A1 (en) * 2003-09-23 2005-03-24 Mceachen Peter C. Audio device
US20070116306A1 (en) * 2003-12-11 2007-05-24 Sony Deutschland Gmbh Dynamic sweet spot tracking
US7970153B2 (en) * 2003-12-25 2011-06-28 Yamaha Corporation Audio output apparatus
US20080025518A1 (en) * 2005-01-24 2008-01-31 Ko Mizuno Sound Image Localization Control Apparatus
US7929720B2 (en) * 2005-03-15 2011-04-19 Yamaha Corporation Position detecting system, speaker system, and user terminal apparatus
US20070154041A1 (en) * 2006-01-05 2007-07-05 Todd Beauchamp Integrated entertainment system with audio modules
US20080252595A1 (en) * 2007-04-11 2008-10-16 Marc Boillot Method and Device for Virtual Navigation and Voice Processing
US20080285772A1 (en) * 2007-04-17 2008-11-20 Tim Haulick Acoustic localization of a speaker
US8483413B2 (en) * 2007-05-04 2013-07-09 Bose Corporation System and method for directionally radiating sound
US8159399B2 (en) * 2008-06-03 2012-04-17 Apple Inc. Antenna diversity systems for portable electronic devices

Cited By (240)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11567648B2 (en) 2009-03-16 2023-01-31 Apple Inc. Device, method, and graphical user interface for moving a current position in content at a variable scrubbing rate
US11907519B2 (en) 2009-03-16 2024-02-20 Apple Inc. Device, method, and graphical user interface for moving a current position in content at a variable scrubbing rate
US8705872B2 (en) * 2009-07-31 2014-04-22 Echostar Technologies L.L.C. Systems and methods for hand gesture control of an electronic device
US9479721B2 (en) 2009-07-31 2016-10-25 Echostar Technologies L.L.C. Systems and methods for hand gesture control of an electronic device
US9176590B2 (en) 2009-07-31 2015-11-03 Echostar Technologies L.L.C. Systems and methods for hand gesture control of an electronic device
US20130229344A1 (en) * 2009-07-31 2013-09-05 Echostar Technologies L.L.C. Systems and methods for hand gesture control of an electronic device
US10271135B2 (en) * 2009-11-24 2019-04-23 Nokia Technologies Oy Apparatus for processing of audio signals based on device position
US20130083944A1 (en) * 2009-11-24 2013-04-04 Nokia Corporation Apparatus
US9967690B2 (en) * 2010-11-05 2018-05-08 Sony Corporation Acoustic control apparatus and acoustic control method
US20120114137A1 (en) * 2010-11-05 2012-05-10 Shingo Tsurumi Acoustic Control Apparatus and Acoustic Control Method
WO2012173801A1 (en) 2011-06-15 2012-12-20 Dolby Laboratories Licensing Corporation Method for capturing and playback of sound originating from a plurality of sound sources
US11893052B2 (en) 2011-08-18 2024-02-06 Apple Inc. Management of local and remote media items
US11281711B2 (en) 2011-08-18 2022-03-22 Apple Inc. Management of local and remote media items
US11200309B2 (en) 2011-09-29 2021-12-14 Apple Inc. Authentication with secondary approver
US11755712B2 (en) 2011-09-29 2023-09-12 Apple Inc. Authentication with secondary approver
US10492015B2 (en) 2011-12-19 2019-11-26 Qualcomm Incorporated Automated user/sensor location recognition to customize audio performance in a distributed multi-sensor environment
KR101714134B1 (en) 2011-12-19 2017-03-08 Qualcomm Incorporated Automated user/sensor location recognition to customize audio performance in a distributed multi-sensor environment
CN107018475A (en) * 2011-12-19 2017-08-04 Qualcomm Incorporated Automated user/sensor location recognition to customize audio performance in a distributed multi-sensor environment
KR20140107512A (en) * 2011-12-19 2014-09-04 Qualcomm Incorporated Automated user/sensor location recognition to customize audio performance in a distributed multi-sensor environment
CN103999488A (en) * 2011-12-19 2014-08-20 Qualcomm Incorporated Automated user/sensor location recognition to customize audio performance in a distributed multi-sensor environment
US9408011B2 (en) 2011-12-19 2016-08-02 Qualcomm Incorporated Automated user/sensor location recognition to customize audio performance in a distributed multi-sensor environment
US10945089B2 (en) 2011-12-29 2021-03-09 Sonos, Inc. Playback based on user settings
US10986460B2 (en) 2011-12-29 2021-04-20 Sonos, Inc. Grouping based on acoustic signals
US20130170647A1 (en) * 2011-12-29 2013-07-04 Jonathon Reilly Sound field calibration using listener localization
US10334386B2 (en) 2011-12-29 2019-06-25 Sonos, Inc. Playback based on wireless signal
US10455347B2 (en) 2011-12-29 2019-10-22 Sonos, Inc. Playback based on number of listeners
US9930470B2 (en) 2011-12-29 2018-03-27 Sonos, Inc. Sound field calibration using listener localization
US11910181B2 (en) 2011-12-29 2024-02-20 Sonos, Inc. Media playback based on sensor data
US11889290B2 (en) 2011-12-29 2024-01-30 Sonos, Inc. Media playback based on sensor data
US11849299B2 (en) 2011-12-29 2023-12-19 Sonos, Inc. Media playback based on sensor data
US11528578B2 (en) 2011-12-29 2022-12-13 Sonos, Inc. Media playback based on sensor data
US11122382B2 (en) 2011-12-29 2021-09-14 Sonos, Inc. Playback based on acoustic signals
US11825289B2 (en) 2011-12-29 2023-11-21 Sonos, Inc. Media playback based on sensor data
US11825290B2 (en) 2011-12-29 2023-11-21 Sonos, Inc. Media playback based on sensor data
US11290838B2 (en) 2011-12-29 2022-03-29 Sonos, Inc. Playback based on user presence detection
US11153706B1 (en) 2011-12-29 2021-10-19 Sonos, Inc. Playback based on acoustic signals
US11197117B2 (en) 2011-12-29 2021-12-07 Sonos, Inc. Media playback based on sensor data
US9084058B2 (en) * 2011-12-29 2015-07-14 Sonos, Inc. Sound field calibration using listener localization
US11818560B2 (en) * 2012-04-02 2023-11-14 Qualcomm Incorporated Systems, methods, apparatus, and computer-readable media for gestural manipulation of a sound field
US9913057B2 (en) 2012-06-28 2018-03-06 Sonos, Inc. Concurrent multi-loudspeaker calibration with a single measurement
US10045139B2 (en) 2012-06-28 2018-08-07 Sonos, Inc. Calibration state variable
US10129674B2 (en) 2012-06-28 2018-11-13 Sonos, Inc. Concurrent multi-loudspeaker calibration
US10284984B2 (en) 2012-06-28 2019-05-07 Sonos, Inc. Calibration state variable
US9690539B2 (en) 2012-06-28 2017-06-27 Sonos, Inc. Speaker calibration user interface
US9690271B2 (en) 2012-06-28 2017-06-27 Sonos, Inc. Speaker calibration
US10296282B2 (en) 2012-06-28 2019-05-21 Sonos, Inc. Speaker calibration user interface
US11516608B2 (en) 2012-06-28 2022-11-29 Sonos, Inc. Calibration state variable
US11516606B2 (en) 2012-06-28 2022-11-29 Sonos, Inc. Calibration interface
US10791405B2 (en) 2012-06-28 2020-09-29 Sonos, Inc. Calibration indicator
US9736584B2 (en) 2012-06-28 2017-08-15 Sonos, Inc. Hybrid test tone for space-averaged room audio calibration using a moving microphone
US10045138B2 (en) 2012-06-28 2018-08-07 Sonos, Inc. Hybrid test tone for space-averaged room audio calibration using a moving microphone
US11368803B2 (en) 2012-06-28 2022-06-21 Sonos, Inc. Calibration of playback device(s)
US9648422B2 (en) 2012-06-28 2017-05-09 Sonos, Inc. Concurrent multi-loudspeaker calibration with a single measurement
US9749744B2 (en) 2012-06-28 2017-08-29 Sonos, Inc. Playback device calibration
US11064306B2 (en) 2012-06-28 2021-07-13 Sonos, Inc. Calibration state variable
US10412516B2 (en) 2012-06-28 2019-09-10 Sonos, Inc. Calibration of playback devices
US10674293B2 (en) 2012-06-28 2020-06-02 Sonos, Inc. Concurrent multi-driver calibration
US9788113B2 (en) 2012-06-28 2017-10-10 Sonos, Inc. Calibration state variable
US9668049B2 (en) 2012-06-28 2017-05-30 Sonos, Inc. Playback device calibration user interfaces
US9961463B2 (en) 2012-06-28 2018-05-01 Sonos, Inc. Calibration indicator
US9820045B2 (en) 2012-06-28 2017-11-14 Sonos, Inc. Playback calibration
US11800305B2 (en) 2012-06-28 2023-10-24 Sonos, Inc. Calibration interface
US10111002B1 (en) * 2012-08-03 2018-10-23 Amazon Technologies, Inc. Dynamic audio optimization
US20140071159A1 (en) * 2012-09-13 2014-03-13 Ati Technologies, Ulc Method and Apparatus For Providing a User Interface For a File System
US9432365B2 (en) 2012-09-28 2016-08-30 Sonos, Inc. Streaming music using authentication information
US9185103B2 (en) 2012-09-28 2015-11-10 Sonos, Inc. Streaming music using authentication information
US8910265B2 (en) 2012-09-28 2014-12-09 Sonos, Inc. Assisted registration of audio sources
US9876787B2 (en) 2012-09-28 2018-01-23 Sonos, Inc. Streaming music using authentication information
EP2723090A2 (en) * 2012-10-19 2014-04-23 Sony Corporation A directional sound apparatus, method graphical user interface and software
EP2723090A3 (en) * 2012-10-19 2014-08-20 Sony Corporation A directional sound apparatus, method graphical user interface and software
US9191767B2 (en) 2012-10-19 2015-11-17 Sony Corporation Directional sound apparatus, method graphical user interface and software
GB2507106A (en) * 2012-10-19 2014-04-23 Sony Europe Ltd Directional sound apparatus for providing personalised audio data to different users
US11539831B2 (en) 2013-03-15 2022-12-27 Apple Inc. Providing remote interactions with host device using a wireless device
US20140354791A1 (en) * 2013-05-31 2014-12-04 Lg Electronics Inc. Image display device and method of controlling the same
US9420216B2 (en) * 2013-05-31 2016-08-16 Lg Electronics Inc. Image display device and method of controlling the same
CN104219471A (en) * 2013-05-31 2014-12-17 LG Electronics Inc. Image display device and method of controlling the same
CN112351367A (en) * 2013-11-22 2021-02-09 Apple Inc. Method, system and apparatus for adjusting sound emitted by a speaker array
US20160295340A1 (en) * 2013-11-22 2016-10-06 Apple Inc. Handsfree beam pattern configuration
EP3072315B1 (en) * 2013-11-22 2021-11-03 Apple Inc. Handsfree beam pattern configuration
KR101815211B1 (en) * 2013-11-22 2018-01-05 Apple Inc. Handsfree beam pattern configuration
US11432096B2 (en) 2013-11-22 2022-08-30 Apple Inc. Handsfree beam pattern configuration
WO2015076930A1 (en) * 2013-11-22 2015-05-28 Tiskerling Dynamics Llc Handsfree beam pattern configuration
AU2014353473C1 (en) * 2013-11-22 2018-04-05 Apple Inc. Handsfree beam pattern configuration
CN105794231A (en) * 2013-11-22 2016-07-20 Apple Inc. Handsfree beam pattern configuration
US10251008B2 (en) * 2013-11-22 2019-04-02 Apple Inc. Handsfree beam pattern configuration
CN109379671A (en) * 2013-11-22 2019-02-22 Apple Inc. Hands-free beam pattern configuration
AU2014353473B2 (en) * 2013-11-22 2017-11-02 Apple Inc. Handsfree beam pattern configuration
WO2015126814A3 (en) * 2014-02-20 2015-10-15 Bose Corporation Content-aware audio modes
US9578436B2 (en) 2014-02-20 2017-02-21 Bose Corporation Content-aware audio modes
US10511924B2 (en) 2014-03-17 2019-12-17 Sonos, Inc. Playback device with multiple sensors
US10299055B2 (en) 2014-03-17 2019-05-21 Sonos, Inc. Restoration of playback device configuration
US9872119B2 (en) 2014-03-17 2018-01-16 Sonos, Inc. Audio settings of multiple speakers in a playback device
US10129675B2 (en) 2014-03-17 2018-11-13 Sonos, Inc. Audio settings of multiple speakers in a playback device
US10791407B2 (en) 2014-03-17 2020-09-29 Sonos, Inc. Playback device configuration
US10412517B2 (en) 2014-03-17 2019-09-10 Sonos, Inc. Calibration of playback device to target curve
US11540073B2 (en) 2014-03-17 2022-12-27 Sonos, Inc. Playback device self-calibration
US9264839B2 (en) 2014-03-17 2016-02-16 Sonos, Inc. Playback device configuration based on proximity detection
US9521487B2 (en) 2014-03-17 2016-12-13 Sonos, Inc. Calibration adjustment based on barrier
US9344829B2 (en) 2014-03-17 2016-05-17 Sonos, Inc. Indication of barrier detection
US10863295B2 (en) 2014-03-17 2020-12-08 Sonos, Inc. Indoor/outdoor playback device calibration
US9419575B2 (en) 2014-03-17 2016-08-16 Sonos, Inc. Audio settings based on environment
US9743208B2 (en) 2014-03-17 2017-08-22 Sonos, Inc. Playback device configuration based on proximity detection
US9439022B2 (en) 2014-03-17 2016-09-06 Sonos, Inc. Playback device speaker configuration based on proximity detection
US9439021B2 (en) 2014-03-17 2016-09-06 Sonos, Inc. Proximity detection using audio pulse
US10051399B2 (en) 2014-03-17 2018-08-14 Sonos, Inc. Playback device configuration according to distortion threshold
US11696081B2 (en) 2014-03-17 2023-07-04 Sonos, Inc. Audio settings based on environment
US9516419B2 (en) 2014-03-17 2016-12-06 Sonos, Inc. Playback device setting according to threshold(s)
US9521488B2 (en) 2014-03-17 2016-12-13 Sonos, Inc. Playback device setting based on distortion
US9990171B2 (en) * 2014-04-29 2018-06-05 Boe Technology Group Co., Ltd. Audio device and method for automatically adjusting volume thereof
US11907013B2 (en) 2014-05-30 2024-02-20 Apple Inc. Continuity of applications across devices
US11126704B2 (en) 2014-08-15 2021-09-21 Apple Inc. Authenticated device used to unlock another device
US11157143B2 (en) 2014-09-02 2021-10-26 Apple Inc. Music user interface
US10701501B2 (en) 2014-09-09 2020-06-30 Sonos, Inc. Playback device calibration
US10154359B2 (en) 2014-09-09 2018-12-11 Sonos, Inc. Playback device calibration
US9891881B2 (en) 2014-09-09 2018-02-13 Sonos, Inc. Audio processing algorithm database
US10127006B2 (en) 2014-09-09 2018-11-13 Sonos, Inc. Facilitating calibration of an audio playback device
US9910634B2 (en) 2014-09-09 2018-03-06 Sonos, Inc. Microphone calibration
US11029917B2 (en) 2014-09-09 2021-06-08 Sonos, Inc. Audio processing algorithms
US10127008B2 (en) 2014-09-09 2018-11-13 Sonos, Inc. Audio processing algorithm database
US11625219B2 (en) 2014-09-09 2023-04-11 Sonos, Inc. Audio processing algorithms
US9781532B2 (en) 2014-09-09 2017-10-03 Sonos, Inc. Playback device calibration
US9952825B2 (en) 2014-09-09 2018-04-24 Sonos, Inc. Audio processing algorithms
US10271150B2 (en) 2014-09-09 2019-04-23 Sonos, Inc. Playback device calibration
US9706323B2 (en) 2014-09-09 2017-07-11 Sonos, Inc. Playback device calibration
US9936318B2 (en) 2014-09-09 2018-04-03 Sonos, Inc. Playback device calibration
US9749763B2 (en) 2014-09-09 2017-08-29 Sonos, Inc. Playback device calibration
US10599386B2 (en) 2014-09-09 2020-03-24 Sonos, Inc. Audio processing algorithms
US10664224B2 (en) 2015-04-24 2020-05-26 Sonos, Inc. Speaker calibration user interface
US10284983B2 (en) 2015-04-24 2019-05-07 Sonos, Inc. Playback device calibration user interfaces
US9686625B2 (en) * 2015-07-21 2017-06-20 Disney Enterprises, Inc. Systems and methods for delivery of personalized audio
US9538305B2 (en) 2015-07-28 2017-01-03 Sonos, Inc. Calibration error conditions
US10129679B2 (en) 2015-07-28 2018-11-13 Sonos, Inc. Calibration error conditions
US10462592B2 (en) 2015-07-28 2019-10-29 Sonos, Inc. Calibration error conditions
US9781533B2 (en) 2015-07-28 2017-10-03 Sonos, Inc. Calibration error conditions
US9693165B2 (en) 2015-09-17 2017-06-27 Sonos, Inc. Validation of audio calibration using multi-dimensional motion check
US11197112B2 (en) 2015-09-17 2021-12-07 Sonos, Inc. Validation of audio calibration using multi-dimensional motion check
US11099808B2 (en) 2015-09-17 2021-08-24 Sonos, Inc. Facilitating calibration of an audio playback device
US9992597B2 (en) 2015-09-17 2018-06-05 Sonos, Inc. Validation of audio calibration using multi-dimensional motion check
US11803350B2 (en) 2015-09-17 2023-10-31 Sonos, Inc. Facilitating calibration of an audio playback device
US10585639B2 (en) 2015-09-17 2020-03-10 Sonos, Inc. Facilitating calibration of an audio playback device
US10419864B2 (en) 2015-09-17 2019-09-17 Sonos, Inc. Validation of audio calibration using multi-dimensional motion check
US11706579B2 (en) 2015-09-17 2023-07-18 Sonos, Inc. Validation of audio calibration using multi-dimensional motion check
US9858036B2 (en) * 2015-11-10 2018-01-02 Google Llc Automatic audio level adjustment during media item presentation
US10656901B2 (en) 2015-11-10 2020-05-19 Google Llc Automatic audio level adjustment during media item presentation
US20170131966A1 (en) * 2015-11-10 2017-05-11 Google Inc. Automatic Audio Level Adjustment During Media Item Presentation
US11304003B2 (en) 2016-01-04 2022-04-12 Harman Becker Automotive Systems Gmbh Loudspeaker array
US10097944B2 (en) 2016-01-04 2018-10-09 Harman Becker Automotive Systems Gmbh Sound reproduction for a multiplicity of listeners
EP3188505A1 (en) * 2016-01-04 2017-07-05 Harman Becker Automotive Systems GmbH Sound reproduction for a multiplicity of listeners
JP2017123650A (en) * 2016-01-04 2017-07-13 Harman Becker Automotive Systems GmbH Sound reproduction for a very large number of listeners
US10191714B2 (en) 2016-01-14 2019-01-29 Performance Designed Products Llc Gaming peripheral with built-in audio support
US11432089B2 (en) 2016-01-18 2022-08-30 Sonos, Inc. Calibration using multiple recording devices
US10841719B2 (en) 2016-01-18 2020-11-17 Sonos, Inc. Calibration using multiple recording devices
US10063983B2 (en) 2016-01-18 2018-08-28 Sonos, Inc. Calibration using multiple recording devices
US9743207B1 (en) 2016-01-18 2017-08-22 Sonos, Inc. Calibration using multiple recording devices
US10405117B2 (en) 2016-01-18 2019-09-03 Sonos, Inc. Calibration using multiple recording devices
US11800306B2 (en) 2016-01-18 2023-10-24 Sonos, Inc. Calibration using multiple recording devices
US11184726B2 (en) 2016-01-25 2021-11-23 Sonos, Inc. Calibration using listener locations
US11516612B2 (en) 2016-01-25 2022-11-29 Sonos, Inc. Calibration based on audio content
US10735879B2 (en) 2016-01-25 2020-08-04 Sonos, Inc. Calibration based on grouping
US10003899B2 (en) 2016-01-25 2018-06-19 Sonos, Inc. Calibration with particular locations
US10390161B2 (en) 2016-01-25 2019-08-20 Sonos, Inc. Calibration based on audio content type
US11006232B2 (en) 2016-01-25 2021-05-11 Sonos, Inc. Calibration based on audio content
US11106423B2 (en) 2016-01-25 2021-08-31 Sonos, Inc. Evaluating calibration of a playback device
US9860662B2 (en) 2016-04-01 2018-01-02 Sonos, Inc. Updating playback device configuration information based on calibration data
US10880664B2 (en) 2016-04-01 2020-12-29 Sonos, Inc. Updating playback device configuration information based on calibration data
US10405116B2 (en) 2016-04-01 2019-09-03 Sonos, Inc. Updating playback device configuration information based on calibration data
US10402154B2 (en) 2016-04-01 2019-09-03 Sonos, Inc. Playback device calibration based on representative spectral characteristics
US11736877B2 (en) 2016-04-01 2023-08-22 Sonos, Inc. Updating playback device configuration information based on calibration data
US11212629B2 (en) 2016-04-01 2021-12-28 Sonos, Inc. Updating playback device configuration information based on calibration data
US11379179B2 (en) 2016-04-01 2022-07-05 Sonos, Inc. Playback device calibration based on representative spectral characteristics
US9864574B2 (en) 2016-04-01 2018-01-09 Sonos, Inc. Playback device calibration based on representation spectral characteristics
US10884698B2 (en) 2016-04-01 2021-01-05 Sonos, Inc. Playback device calibration based on representative spectral characteristics
US10299054B2 (en) 2016-04-12 2019-05-21 Sonos, Inc. Calibration of audio playback devices
US10750304B2 (en) 2016-04-12 2020-08-18 Sonos, Inc. Calibration of audio playback devices
US11889276B2 (en) 2016-04-12 2024-01-30 Sonos, Inc. Calibration of audio playback devices
US11218827B2 (en) 2016-04-12 2022-01-04 Sonos, Inc. Calibration of audio playback devices
US10045142B2 (en) 2016-04-12 2018-08-07 Sonos, Inc. Calibration of audio playback devices
US9763018B1 (en) 2016-04-12 2017-09-12 Sonos, Inc. Calibration of audio playback devices
US11206309B2 (en) 2016-05-19 2021-12-21 Apple Inc. User interface for remote authorization
US11900372B2 (en) 2016-06-12 2024-02-13 Apple Inc. User interfaces for transactions
US11037150B2 (en) 2016-06-12 2021-06-15 Apple Inc. User interfaces for transactions
US11736878B2 (en) 2016-07-15 2023-08-22 Sonos, Inc. Spatial audio correction
US9794710B1 (en) 2016-07-15 2017-10-17 Sonos, Inc. Spatial audio correction
US11337017B2 (en) 2016-07-15 2022-05-17 Sonos, Inc. Spatial audio correction
US10129678B2 (en) 2016-07-15 2018-11-13 Sonos, Inc. Spatial audio correction
US9860670B1 (en) 2016-07-15 2018-01-02 Sonos, Inc. Spectral correction using spatial calibration
US10448194B2 (en) 2016-07-15 2019-10-15 Sonos, Inc. Spectral correction using spatial calibration
US10750303B2 (en) 2016-07-15 2020-08-18 Sonos, Inc. Spatial audio correction
US11237792B2 (en) 2016-07-22 2022-02-01 Sonos, Inc. Calibration assistance
US10372406B2 (en) 2016-07-22 2019-08-06 Sonos, Inc. Calibration interface
US10853022B2 (en) 2016-07-22 2020-12-01 Sonos, Inc. Calibration interface
US11531514B2 (en) 2016-07-22 2022-12-20 Sonos, Inc. Calibration assistance
US11698770B2 (en) 2016-08-05 2023-07-11 Sonos, Inc. Calibration of a playback device based on an estimated frequency response
US10459684B2 (en) 2016-08-05 2019-10-29 Sonos, Inc. Calibration of a playback device based on an estimated frequency response
US10853027B2 (en) 2016-08-05 2020-12-01 Sonos, Inc. Calibration of a playback device based on an estimated frequency response
US11431836B2 (en) 2017-05-02 2022-08-30 Apple Inc. Methods and interfaces for initiating media playback
US10928980B2 (en) 2017-05-12 2021-02-23 Apple Inc. User interfaces for playing and managing audio items
AU2021203669B2 (en) * 2017-05-16 2022-06-09 Apple Inc. Methods and interfaces for home media control
US11683408B2 (en) 2017-05-16 2023-06-20 Apple Inc. Methods and interfaces for home media control
US11095766B2 (en) 2017-05-16 2021-08-17 Apple Inc. Methods and interfaces for adjusting an audible signal based on a spatial position of a voice command source
US11201961B2 (en) 2017-05-16 2021-12-14 Apple Inc. Methods and interfaces for adjusting the volume of media
US11750734B2 (en) 2017-05-16 2023-09-05 Apple Inc. Methods for initiating output of at least a component of a signal representative of media currently being played back by another device
US11316966B2 (en) 2017-05-16 2022-04-26 Apple Inc. Methods and interfaces for detecting a proximity between devices and initiating playback of media
CN110874204A (en) * 2017-05-16 2020-03-10 Apple Inc. Method and interface for home media control
US11412081B2 (en) 2017-05-16 2022-08-09 Apple Inc. Methods and interfaces for configuring an electronic device to initiate playback of media
US10992795B2 (en) 2017-05-16 2021-04-27 Apple Inc. Methods and interfaces for home media control
US11283916B2 (en) 2017-05-16 2022-03-22 Apple Inc. Methods and interfaces for configuring a device in accordance with an audio tone signal
EP3624460A1 (en) * 2017-05-16 2020-03-18 Apple Inc. Methods and interfaces for home media control
US10097150B1 (en) * 2017-07-13 2018-10-09 Lenovo (Singapore) Pte. Ltd. Systems and methods to increase volume of audio output by a device
US10081335B1 (en) * 2017-08-01 2018-09-25 Ford Global Technologies, Llc Automotive rain detector using psycho-acoustic metrics
US11877139B2 (en) 2018-08-28 2024-01-16 Sonos, Inc. Playback device calibration
US10582326B1 (en) 2018-08-28 2020-03-03 Sonos, Inc. Playback device calibration
US10299061B1 (en) 2018-08-28 2019-05-21 Sonos, Inc. Playback device calibration
US11350233B2 (en) 2018-08-28 2022-05-31 Sonos, Inc. Playback device calibration
US11206484B2 (en) 2018-08-28 2021-12-21 Sonos, Inc. Passive speaker authentication
US10848892B2 (en) 2018-08-28 2020-11-24 Sonos, Inc. Playback device calibration
WO2020173156A1 (en) * 2019-02-27 2020-09-03 Beijing Horizon Robotics Technology Research And Development Co., Ltd. Method, device and electronic device for controlling audio playback of multiple loudspeakers
US11856379B2 (en) 2019-02-27 2023-12-26 Beijing Horizon Robotics Technology Research And Development Co., Ltd. Method, device and electronic device for controlling audio playback of multiple loudspeakers
CN111629301A (en) * 2019-02-27 2020-09-04 Beijing Horizon Robotics Technology Research And Development Co., Ltd. Method and device for controlling multiple loudspeakers to play audio and electronic equipment
US11010121B2 (en) 2019-05-31 2021-05-18 Apple Inc. User interfaces for audio media control
US11157234B2 (en) 2019-05-31 2021-10-26 Apple Inc. Methods and user interfaces for sharing audio
US11080004B2 (en) 2019-05-31 2021-08-03 Apple Inc. Methods and user interfaces for sharing audio
US10904029B2 (en) 2019-05-31 2021-01-26 Apple Inc. User interfaces for managing controllable external devices
US10779085B1 (en) 2019-05-31 2020-09-15 Apple Inc. User interfaces for managing controllable external devices
US11785387B2 (en) 2019-05-31 2023-10-10 Apple Inc. User interfaces for managing controllable external devices
US11620103B2 (en) 2019-05-31 2023-04-04 Apple Inc. User interfaces for audio media control
US11755273B2 (en) 2019-05-31 2023-09-12 Apple Inc. User interfaces for audio media control
US10996917B2 (en) 2019-05-31 2021-05-04 Apple Inc. User interfaces for audio media control
US11714597B2 (en) 2019-05-31 2023-08-01 Apple Inc. Methods and user interfaces for sharing audio
US11853646B2 (en) 2019-05-31 2023-12-26 Apple Inc. User interfaces for audio media control
US11728780B2 (en) 2019-08-12 2023-08-15 Sonos, Inc. Audio calibration of a portable playback device
US11374547B2 (en) 2019-08-12 2022-06-28 Sonos, Inc. Audio calibration of a portable playback device
US10734965B1 (en) 2019-08-12 2020-08-04 Sonos, Inc. Audio calibration of a portable playback device
US11513667B2 (en) 2020-05-11 2022-11-29 Apple Inc. User interface for audio message
US11079913B1 (en) 2020-05-11 2021-08-03 Apple Inc. User interface for status indicators
US11782598B2 (en) 2020-09-25 2023-10-10 Apple Inc. Methods and interfaces for media control with dynamic feedback
US11392291B2 (en) 2020-09-25 2022-07-19 Apple Inc. Methods and interfaces for media control with dynamic feedback
US11847378B2 (en) 2021-06-06 2023-12-19 Apple Inc. User interfaces for audio routing
US11540052B1 (en) 2021-11-09 2022-12-27 Lenovo (United States) Inc. Audio component adjustment based on location
WO2023219413A1 (en) * 2022-05-11 2023-11-16 Samsung Electronics Co., Ltd. Method and system for modifying audio content for listener

Also Published As

Publication number Publication date
US9961471B2 (en) 2018-05-01
US20150086021A1 (en) 2015-03-26

Similar Documents

Publication Publication Date Title
US9961471B2 (en) Techniques for personalizing audio levels
US10178492B2 (en) Apparatus, systems and methods for adjusting output audio volume based on user location
US10484813B2 (en) Systems and methods for delivery of personalized audio
EP2664165B1 (en) Apparatus, systems and methods for controllable sound regions in a media room
US10440496B2 (en) Spatial audio processing emphasizing sound sources close to a focal distance
US9736614B2 (en) Augmenting existing acoustic profiles
US9788114B2 (en) Acoustic device for streaming audio data
US8434006B2 (en) Systems and methods for adjusting volume of combined audio channels
US20150358756A1 (en) An audio apparatus and method therefor
JP2015056905A (en) Reachability of sound
US9747923B2 (en) Voice audio rendering augmentation
US9930469B2 (en) System and method for enhancing virtual audio height perception
US9847767B2 (en) Electronic device capable of adjusting an equalizer according to physiological condition of hearing and adjustment method thereof
KR102531886B1 (en) Electronic apparatus and control method thereof
CN109982209A (en) Car audio system
KR20140090469A (en) Method for operating an apparatus for displaying image
TWI607374B (en) Calibration method and computer readable recording medium
CN111133775A (en) Acoustic signal processing device and acoustic signal processing method
KR20160077284A (en) Audio and Set-Top-Box All-in-One System, and Video Signal and Audio Signal Processing Method therefor
KR20160002319U (en) Audio and Set-Top-Box All-in-One System

Legal Events

Date Code Title Description
AS Assignment

Owner name: SONY ELECTRONICS INC., NEW JERSEY

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:HARDACKER, ROBERT;RICHMAN, STEVEN;REEL/FRAME:021075/0437;SIGNING DATES FROM 20080514 TO 20080516

Owner name: SONY CORPORATION, JAPAN

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:HARDACKER, ROBERT;RICHMAN, STEVEN;REEL/FRAME:021075/0437;SIGNING DATES FROM 20080514 TO 20080516

STCB Information on status: application discontinuation

Free format text: ABANDONED -- FAILURE TO PAY ISSUE FEE