DK201500024A1 - Adaptive System According to User Presence


Info

Publication number
DK201500024A1
Authority
DK
Denmark
Prior art keywords
sound
zone
audio
user
parameters
Application number
DKPA201500024A
Inventor
Søren Borup Jensen
Søren Bech
Original Assignee
Bang & Olufsen As
Application filed by Bang & Olufsen As filed Critical Bang & Olufsen As
Priority to DKPA201500024A priority Critical patent/DK178752B1/en
Priority to EP16020006.9A priority patent/EP3046341B1/en
Publication of DK201500024A1 publication Critical patent/DK201500024A1/en
Application granted granted Critical
Publication of DK178752B1 publication Critical patent/DK178752B1/en

Classifications

    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04S - STEREOPHONIC SYSTEMS
    • H04S7/00 - Indicating arrangements; Control arrangements, e.g. balance control
    • H04S7/30 - Control circuits for electronic adaptation of the sound field
    • H04S7/302 - Electronic adaptation of stereophonic sound system to listener position or orientation
    • H04S7/303 - Tracking of listener position or orientation
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04R - LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R2227/00 - Details of public address [PA] systems covered by H04R27/00 but not provided for in any of its subgroups
    • H04R2227/005 - Audio distribution systems for home, i.e. multi-room use
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04R - LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R27/00 - Public address systems
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04S - STEREOPHONIC SYSTEMS
    • H04S2400/00 - Details of stereophonic systems covered by H04S but not provided for in its groups
    • H04S2400/11 - Positioning of individual sound objects, e.g. moving airplane, within a sound field
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04S - STEREOPHONIC SYSTEMS
    • H04S2420/00 - Techniques used stereophonic systems covered by H04S but not provided for in its groups
    • H04S2420/01 - Enhancing the perception of the sound image or of the spatial distribution using head related transfer functions [HRTF's] or equivalents thereof, e.g. interaural time difference [ITD] or interaural level difference [ILD]
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04S - STEREOPHONIC SYSTEMS
    • H04S7/00 - Indicating arrangements; Control arrangements, e.g. balance control
    • H04S7/30 - Control circuits for electronic adaptation of the sound field
    • H04S7/308 - Electronic adaptation dependent on speaker or headphone connection

Landscapes

  • Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Acoustics & Sound (AREA)
  • Signal Processing (AREA)
  • Stereophonic System (AREA)
  • Circuit For Audible Band Transducer (AREA)

Abstract

The present invention relates to a method and system for dynamically configuring and reconfiguring a multimedia reproduction system comprising two or more sound channels. The configuring is based on the positions of the one or more listeners and of the one or more sound channels in the reproduction system. In addition, the configuring includes portable devices serving as audio/video sources and audio/video rendering equipment. The feature obtained is a distribution of sound into individual sound zones, with optimized sound quality as perceived by one or more users/listeners.

Description

Adaptive System According to User Presence
The present invention relates to a method and system for dynamically configuring and reconfiguring a multimedia reproduction system comprising two or more sound channels. The reconfiguring is based on the positions of the one or more listeners and of the one or more sound channels in the reproduction system.
In addition, the configuring includes portable devices serving as audio/video sources and audio/video rendering equipment.
The feature obtained is a distribution of sound into individual sound zones, with optimized sound quality as perceived by a user/listener.
The system includes a feature for adaptation according to perceptual aspects per user and his/her position in the domain, known as "object based rendering per user". This is an advanced configuration task for rendering object-based audio material in domestic environments, where both the reproduction system and the listener position are dynamic. The positions of the rendering devices and of the user(s) in a domain may be determined via precision GPS means.
The invention includes digital signal processing, sound transducers, filters and amplifier configurations to be applied in surround sound systems and traditional stereophonic systems. The system may be applied in any domain, such as a home, a vehicle, a boat, an airplane, an office or any other private or public domain.
It is a well-known problem in prior art loudspeaker systems operating in closed rooms/spaces that the sound experienced by the user may vary according to the listener's position in the space relative to the loudspeaker system transducers.
Thus, to obtain a certain perceived quality level of a loudspeaker system, e.g. in a car, the individual sound channels incorporating one or more loudspeaker modules must be calibrated and adjusted individually, according to the number of persons in the car and their positions in the space, e.g. their seated positions.
This principle may be applied in any type of room, such as airplanes, boats, theatres, arenas, shopping centres and the like.
A first aspect of the invention is a method for automatically configuring and reconfiguring a multimedia reproduction system in one or more rooms, said room(s) being enabled with audio or video rendering devices or both. Said room(s) are enabled with two or more individual sound zones such that two or more users may simultaneously listen to the same multimedia file or to different multimedia files in each of the sound zones in which the users are present, and where the method:
• determines the physical position of the one or more listeners,
• determines the physical position of the one or more loudspeaker transducer(s),
• applies the physical positions as information to select a set of predefined sound parameters, the set of parameters including FIR settings per loudspeaker transducer,
• applies context information related to the actual media file, said context information being object-based audio metadata,
• applies psycho-acoustical information related to the user's perceptual experience,
• provides the set of parameters per sound channel and/or per loudspeaker transducer accordingly, and applies the above parameters fully or partly as constraints in a constraint solver which, upon execution, finds one or more legal combination(s) among the defined set of legal combinations.
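By way of illustration only, the following Python sketch shows how the position-to-parameter selection of this first aspect could be organised as a lookup of predefined parameter sets; the zone names, positions and values are assumptions made for the example, not taken from the filing.

```python
from dataclasses import dataclass

@dataclass
class ParameterSet:
    gain_db: float          # channel gain adjustment in dB
    delay_ms: float         # channel delay in milliseconds
    fir_taps: list[float]   # FIR equalization coefficients

# Hypothetical presets: (sound zone, listener grid position) -> per-channel parameters.
PRESET_TABLE = {
    ("zone_X", (0, 0)): {"front_left": ParameterSet(0.0, 0.0, [1.0]),
                         "front_right": ParameterSet(-0.5, 2.0, [1.0])},
    ("zone_X", (1, 0)): {"front_left": ParameterSet(-1.0, 3.0, [1.0]),
                         "front_right": ParameterSet(0.0, 0.0, [1.0])},
}

def select_parameters(zone: str, listener_pos: tuple[int, int]) -> dict[str, ParameterSet]:
    """Select the predefined sound parameters for the detected listener position."""
    key = (zone, listener_pos)
    if key not in PRESET_TABLE:
        raise KeyError(f"no preset defined for {key}")
    return PRESET_TABLE[key]

params = select_parameters("zone_X", (1, 0))
```

A constraint solver, as described later in the text, would then keep only the legal combinations among such candidate parameter sets.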
Description
The invention discloses a surround sound system where different sound settings are enabled via digital signal processing means controlling the individual sound channel parameters, e.g. the equalization (filters), the delay and the gain (amplification). The premise for the control is based upon the listener position in the listening room/space.
In a more advanced embodiment of the invention, the control and sound settings may work in a contextual mode of operation to accommodate:
• the position of the one or more listeners;
• a functional mode of operation, e.g. adjusting the sound system settings to be in movie mode;
• the type of music, e.g. rock;
• the time of day, e.g. morning/day/night/Christmas etc.
The invention includes:
• sensor means to detect listener and rendering device positions in a room;
• means to detect spoken commands from user(s);
• a mode of operation related to a user position or a required function;
• information available according to context (place, time and music content);
• a single-channel/multi-channel sound system, e.g. mono, two-channel stereo, or a 5.1 surround sound system;
• a digitally controlled sound system, e.g. digital control of gain, equalization and delay;
• active speakers including amplifiers and filters for each loudspeaker transducer.
The digital control of the sound system is based on standard means for:
• gain: the signal is adjusted by a certain level, i.e. +/- xx dB, e.g. +0.1 dB;
• delay: the signal is delayed by a specific time, i.e. yy ms, e.g. 100 ms;
• EQ: the signal is filtered according to the Finite Impulse Response (FIR) principle or the Infinite Impulse Response (IIR) principle; a number of coefficients are specified, the number of parameters typically ranging from 1 to 1000.
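As a concrete illustration of these three standard means, the sketch below applies a gain in dB, a delay in milliseconds and an FIR filter to one channel buffer; it is a minimal NumPy example, not the signal path of the filing.

```python
import numpy as np

def apply_channel_settings(signal, sample_rate, gain_db, delay_ms, fir_coeffs):
    """Apply gain, delay and FIR equalization to one sound channel block."""
    out = np.asarray(signal, dtype=float) * 10 ** (gain_db / 20.0)  # gain in dB
    delay_samples = int(round(delay_ms * 1e-3 * sample_rate))       # delay in samples
    out = np.concatenate([np.zeros(delay_samples), out])            # prepend silence
    return np.convolve(out, fir_coeffs, mode="full")                # FIR equalization

# Example: +0.1 dB gain, 100 ms delay, trivial single-tap FIR (flat EQ)
processed = apply_channel_settings(np.ones(480), 48000, 0.1, 100.0, [1.0])
```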
The invention may be implemented in different embodiments, in which the alternative configuring procedures are:
• a traditional algorithm in the form of a sequential software program in which the values of the resulting adjustment parameters are embedded in the code itself; the algorithm validates against a specified system structure of an actual loudspeaker configuration;
• a table-based concept in which one or more tables define the attributes to be applied per loudspeaker channel versus the mode of operation, the listener position and other context-related information (e.g. time).
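The two alternatives can be contrasted in a short sketch: the same adjustment expressed once as values embedded in sequential code and once as a reloadable data table. Modes, positions and values are illustrative assumptions.

```python
# Alternative 1: adjustment values embedded directly in sequential code.
def adjustments_algorithmic(mode, listener_pos):
    if mode == "movie" and listener_pos == "sofa":
        return {"center": {"gain_db": 2.0, "delay_ms": 0.0}}
    return {"center": {"gain_db": 0.0, "delay_ms": 0.0}}

# Alternative 2: the same behaviour expressed as a reloadable data table keyed
# on mode of operation and listener position (context such as time could be
# added as a further key).
ADJUSTMENT_TABLE = {
    ("movie", "sofa"): {"center": {"gain_db": 2.0, "delay_ms": 0.0}},
    ("music", "kitchen"): {"center": {"gain_db": 0.0, "delay_ms": 5.0}},
}

def adjustments_table_based(mode, listener_pos):
    return ADJUSTMENT_TABLE.get((mode, listener_pos),
                                {"center": {"gain_db": 0.0, "delay_ms": 0.0}})
```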
The fundamental goal of the invention is to enhance the perceived sound quality from a listener's perspective. This is obtained by adjusting the gain, the delay and the equalization parameters of the individual sound channels in a multichannel sound system, regardless of the physical location of the loudspeaker transducers.
The invention discloses a system concept having the functional features of a "Multimedia - Multi room/domain - Multi user" system concept.
The key aspects included in the invention are:
• Access, distribute and control multimedia information:
  o in a multi-room and multi-domain (zone) environment;
  o enabled with control in a multi-user mode of operation;
• Automatic configuring of the multimedia sources available to user-carried equipment;
• Automatic configuring of the multimedia rendering devices available according to user presence and position in a room/domain;
• Automatic configuring of signal/control paths from data source(s) to rendering device(s).
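A minimal sketch of such automatic path configuration, assuming a simple zone-to-renderer routing table; the zone and device names are invented for this example.

```python
# Hypothetical routing table: which renderers belong to which zone.
RENDERERS_BY_ZONE = {
    "zone_X": ["screen_living_room", "speakers_living_room"],
    "zone_Y": ["speakers_kitchen"],
}

def configure_paths(source_device: str, target_zone: str) -> list[tuple[str, str]]:
    """Return the control/data paths from one source device to every renderer in a zone."""
    return [(source_device, renderer) for renderer in RENDERERS_BY_ZONE.get(target_zone, [])]

paths = configure_paths("tablet_ipad", "zone_X")
# -> [('tablet_ipad', 'screen_living_room'), ('tablet_ipad', 'speakers_living_room')]
```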
The primary keywords of the system concept:
Automatically adaptive according to: a) installed equipment, b) user presence and c) user-carried equipment:
• Configure for use in one or more rooms individually.
• Configure one or more domains/comfort zones in a room.
• Configure a room according to one or more users present in a room.
• Configure a room according to available rendering devices in a room.
• Configure a room according to available source devices in a room.
• Configure a room according to user presence and position in a room and the perceptual attributes related to that position.
The basic means in the system are (see Figure 1):
• Sources, i.e. media files (virtual/physical) located on, and accessible via, the internet, a physical disk or the cloud.
• Sources, i.e. media streams located on, and accessible via, the internet.
• Rendering devices to present media file content, the devices being: display means (screen, projector) and audio means (active loudspeakers).
• Browse/preview and control devices, the devices being: tablet, smartphone, remote terminal.
• System control devices (switch/router/bridge/controller), with wired/wireless data communication, all configured to actual product requirements.
• The system controller (piggyback) is a combined network connection and data switch that automatically configures control/data path(s) from the data sources to the rendering devices. The configuration is made according to, and related to, user requests and user appearance in a specific room/domain.
The video rendering is via a screen (not a TV) and is based on digital input from the internet.
The audio rendering is typically via active loudspeakers, connected wirelessly via digital networks.
The wireless distribution is within domains, in a zone-based concept that might include two individual sound zones in the same room.
A room is a domain with alternative configurations (see Figure 2):
• X: configured with sound rendering devices, e.g. active loudspeakers, and a video rendering device, e.g. a simple screen (not a TV).
• Y: configured with sound rendering devices, e.g. active loudspeakers.
• Z: configured with two domains, each considered a comfort zone, which might include both video and audio; the illustration displays one domain of type X (audio & video) and one domain of type Y (audio).
• Tablet or SmartPhone as user interface.
• Intelligent data router/bridge/signal and control switch connecting source and rendering devices.
A use case example is a tablet applied to browse, preview, select and provide content:
• A user is browsing on the tablet (iPad) and finds an interesting YouTube video.
• The user "pushes" the video/audio, which is provided on the big screen and on the loudspeakers in zone X.
• The user "pushes" the audio, which is provided on the connected loudspeakers in zone Y as well.
• Another user continues browsing on the tablet, finds interesting music on a music service and activates streaming of this music; the stream becomes active on the tablet and the user commands the music to be provided in comfort zone Z.
Specifically, the multi-user feature of the system adapts to the behavior and presence of the users in a domain:
• A user M enters domain P in the room, which becomes active, providing the sound via two loudspeakers sourced from the SmartPhone in his hand.
• Another user W enters domain Q in the room, which becomes active, providing the sound via two loudspeakers sourced from the SmartPhone in her hand, and optionally providing video on the display screen in case the active file includes both audio and video.
• A competing situation may appear if a user enters a domain already occupied by another user; in this situation, the system may automatically give priority to the user having the highest rank. The rank is assigned according to a simple predefined access profile per user. The system may identify the user via fingerprint sensing and/or eye-iris detection, or the user having the highest rank may command the system with a given spoken control command.
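The rank-based resolution of such a competing situation could be sketched as follows; the access profiles and user names are assumptions made for illustration.

```python
# Hypothetical access profiles; a higher value means a higher rank.
ACCESS_PROFILES = {"user_m": 2, "user_w": 1}

def resolve_zone_owner(current_owner: str | None, entering_user: str) -> str:
    """Return the user who controls the zone when a second user enters it."""
    if current_owner is None:
        return entering_user
    if ACCESS_PROFILES.get(entering_user, 0) > ACCESS_PROFILES.get(current_owner, 0):
        return entering_user   # the newcomer outranks the present user
    return current_owner       # the present user keeps control

owner = resolve_zone_owner("user_w", "user_m")  # -> "user_m"
```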
The system includes a feature for adaptation according to perceptual aspects per user and his/her position in the domain, known as "object based rendering per user". This is an advanced configuration task for rendering object-based audio material in domestic environments, where both the reproduction system and the listener position are dynamic. The positions of the rendering devices and of the user(s) in a domain may be determined via precision GPS means.
In another aspect, the reconfiguring process validates the position of the one or more users, and the validation prioritizes the position of the one or more listeners:
• the reconfiguring process executes automatically when a user moves from one position in a sound zone to another position in the same sound zone;
• the reconfiguring process executes automatically when an audio rendering device moves from one position in a sound zone to another position in the same sound zone.
The reconfiguring process includes one or more algorithms to provide the calculation of each of the values of the sound channel adjustment variables.
In the preferred embodiment, the reconfiguring process provides a table with relations that can be accessed by the digital signal processor to provide each of the values of the sound channel adjustment variables.
The saved attributes and key parameters are loaded into the reconfiguring means by electronic means connected wirelessly or by wire to the audio reproduction system.
In a third aspect of the invention, the reconfiguring process provides the settings of the sound parameters (gain, equalization and delay) for one sound channel applied in one sound field zone related to the physical position of a first group of people including one or more listeners.
To accommodate the handling of more zones including different groups of people, the reconfiguring process provides the settings of the sound parameters (gain, equalization and delay) for one sound channel to be applied in one sound field zone related to the physical position of one or more other groups of people including one or more listeners.
A listener position is detected via standard sensor means such as switches, infrared detectors, strain gauges, temperature-sensitive detectors or indoor GPS.
The configuring process executes automatically, controlled by the audio amplifier means, including a digital signal processor, which drives the loudspeaker system. The reconfiguring means are embedded into the controller of an audio reproduction system and/or distributed in one or all of the rendering devices, e.g. sound transducers, displays and TVs.
In a preferred embodiment, the reconfiguring is controlled via a table mapping the mode of operation to adjustment parameters for each speaker in every channel, or just the relevant channels. The adjustment parameters are, e.g. but not limited to: equalization, delay and gain.
The table may be represented as one or more data sets, as most appropriate to the digital controller unit.
E.g., one data set may contain the relations among:
• listener position and user identification (an ID number),
• loudspeaker channel #,
• parameter settings (EQ, delay and gain).
Another data set may contain the functional/mode-related information, such as:
• functional settings (movie, audio only),
• a reference to the media source and related metadata (the object-based audio file),
• loudspeaker channel #,
• parameter settings (EQ, delay and gain).
This table concept is a data-driven control system and enables easy updates of the functional behaviour of a specific system simply by loading an alternative data set into the controller.
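A minimal sketch of the two data sets and a data-driven lookup over them; all field names and values are assumed for illustration.

```python
# Data set 1: listener position and user identification versus channel settings.
position_data_set = [
    {"listener_position": "P1", "user_id": 42, "channel": 1,
     "eq_fir": [1.0], "delay_ms": 0.0, "gain_db": 0.0},
    {"listener_position": "Q1", "user_id": 42, "channel": 2,
     "eq_fir": [1.0], "delay_ms": 4.0, "gain_db": -1.5},
]

# Data set 2: functional/mode-related information versus channel settings.
mode_data_set = [
    {"mode": "movie", "media_metadata": "object_audio_ref", "channel": 1,
     "eq_fir": [1.0], "delay_ms": 0.0, "gain_db": 2.0},
    {"mode": "audio_only", "media_metadata": None, "channel": 1,
     "eq_fir": [1.0], "delay_ms": 0.0, "gain_db": 0.0},
]

def lookup(data_set, **criteria):
    """Return the records matching all given criteria (data-driven control)."""
    return [row for row in data_set
            if all(row.get(k) == v for k, v in criteria.items())]

# Loading an alternative data set changes the behaviour without changing code.
settings = lookup(position_data_set, listener_position="P1", channel=1)
```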
In addition, the invention includes a constraint solver, which comprises a table with digital data representing at least the constraints of the listener positions and the equipment positions and the related acoustical adjustment attributes and corresponding variable values.
The constraint solver processing enables an arbitrary access mode to the information, with no particular order of access required.
According to the invention, the configuration domain table is organized as relations among variables in the general mathematical notation of ‘Disjunctive Form’:
(AttribVariable 1.1 and AttribVariable 1.2 and AttribVariable 1.3 and ... and AttribVariable 1.n)
or (AttribVariable 2.1 and AttribVariable 2.2 and AttribVariable 2.3 and ... and AttribVariable 2.n)
or ...
or (AttribVariable m.1 and AttribVariable m.2 and AttribVariable m.3 and ... and AttribVariable m.n)
For example, AttribVariable 1.1 may define a listener position, AttribVariable 1.2 a speaker transducer unit, AttribVariable 1.3 a speaker system/subsystem and AttribVariable 1.n a gain value for the transducer unit. In another example, AttribVariable 2.n may be a reference to another table.
An alternative definition term is the ‘Conjunctive Form’:
(AttribVariable 1.1 or AttribVariable 1.2 or AttribVariable 1.3 or ... or AttribVariable 1.n)
and (AttribVariable 2.1 or AttribVariable 2.2 or AttribVariable 2.3 or ... or AttribVariable 2.n)
and ...
and (AttribVariable m.1 or AttribVariable m.2 or AttribVariable m.3 or ... or AttribVariable m.n)
With this method of defining the problem domain, the domain becomes a multi-dimensional state space enabling equal and direct access to any point in the defined set of solutions. The term multi-dimensional is used in contrast to a tree-like programming structure, which is two-dimensional.
According to the invention, the product configuration function proceeds by finding the result of the interrogation in the set of allowed and possible combinations in one or more Configuration Constraint Tables. According to the definitions made in the configuration constraint tables, the result might be:
• a list of variables useful in the application, e.g. gain and filter settings, i.e. equalization, for the one or more transducers;
• a list of variables useful in the application, e.g. delay for the one or more transducers;
• a list of variables useful in the application to configure individual sound domains related to a zone of sound targeted to one or more users.
The constraint solver evaluates alternative configurations, the alternatives being one or more of the defined set of legal combinations in the constraint table.
Table entries in a constraint table may combine into legal/illegal combinations such that all the logical operators known from Boolean algebra are included as required. The logical operators are: AND, OR, NOT, XOR, logical implication (->) and logical bi-implication (=).
An example of a table definition is:
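The concrete table from the filing is not reproduced in this text; a hypothetical constraint table consistent with the description, marking combinations of listener position, transducer and gain as legal or illegal, might be sketched as follows (all names, values and the legality rule are assumptions).

```python
# Hypothetical constraint table: each entry marks one combination of listener
# position, transducer and gain setting as legal or illegal.
CONSTRAINT_TABLE = [
    {"listener_position": "P1", "transducer": "woofer_L", "gain_db": 0.0,  "legal": True},
    {"listener_position": "P1", "transducer": "woofer_L", "gain_db": 6.0,  "legal": False},
    {"listener_position": "Q1", "transducer": "woofer_R", "gain_db": -2.0, "legal": True},
]

def legal_combinations(table, **fixed):
    """Return the legal combinations compatible with the fixed variables."""
    return [row for row in table
            if row["legal"] and all(row.get(k) == v for k, v in fixed.items())]

# Interrogation: which settings are allowed when the listener is at P1?
allowed = legal_combinations(CONSTRAINT_TABLE, listener_position="P1")
```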
Thus, in an aspect of the invention, the reconfiguring means are embedded into an audio reproduction system. The system thereby becomes the controller of the reconfiguring process that provides a reconfiguration of the system itself.
In the following, preferred embodiments of the invention are illustrated with reference to the drawings:
Figure 1 displays system concept components including:
• Multimedia sources of information: A/V files, physical files and virtual files residing in the cloud and accessed via the Internet.
• Rendering devices: A/V devices such as screens, TVs and projectors, and audio devices such as active loudspeakers.
• Browse and operate means including: tablets, smartphones and remote terminals enabled with one-way or two-way operation/control. Miscellaneous web-enabled utilities such as refrigerators, clothes and the like are a kind of remote terminal in that sense.
• Network means including: network router and data/signal switch.
A loudspeaker transducer unit is the fundamental means that transforms the electrical signal into the sound waves produced by movements of the membrane of the transducer unit. The characteristics of each loudspeaker transducer unit are determined by measurement and/or from specifications delivered by the supplier of the unit.
In the preferred embodiment, an audio reproduction system comprising active sound transducers is provided, with an amplifier for each transducer unit. This type of amplifier is, e.g., ICEpower technology from Bang & Olufsen, DK.
In a high-quality audio reproduction system, a dedicated filter means, an equalizer, is provided per amplifier. The means provides a frequency-dependent amplification to control the overall gain, which may be regulated up or down as required. Means for down-regulation may be as simple as adjustment of a resistive means serially connected to the loudspeaker module.
To control the sound distribution into individual zones of sound fields, the sound delay among channels must be controlled. In a preferred embodiment, the delay is controlled individually per sound channel.
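One common way to derive such per-channel delays is time alignment of the wavefronts at the listener position; the sketch below illustrates that approach only and is not necessarily the method of the filing.

```python
import math

SPEED_OF_SOUND_M_S = 343.0  # approximate speed of sound in air at room temperature

def channel_delays_ms(listener_pos, speaker_positions):
    """Delay each channel so that all wavefronts arrive at the listener together.

    A common time-alignment approach, shown only to illustrate per-channel
    delay control.
    """
    distances = {ch: math.dist(listener_pos, pos) for ch, pos in speaker_positions.items()}
    farthest = max(distances.values())
    # Channels closer to the listener are delayed so their sound matches the farthest one.
    return {ch: (farthest - d) / SPEED_OF_SOUND_M_S * 1000.0 for ch, d in distances.items()}

delays = channel_delays_ms((1.0, 1.5), {"L": (0.0, 0.0), "R": (4.0, 0.0)})
# -> roughly {'L': 4.5, 'R': 0.0} milliseconds
```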
Figure 2 displays specific alternative configurations including:
• Multimedia sources of information via the Internet or the cloud. An intelligent network device interconnects the sources to the rendering devices, configures the rendering devices and partly controls the rendering devices.
• Rendering devices configured with different functional capabilities:
  o A room (Y) may include audio rendering devices.
  o A room (X) may include video rendering and audio rendering devices.
  o A room (Z) may include two domains configured as comfort zones: one zone enabled for audio rendering (P), and one zone enabled for audio and video rendering (Q).
  o Optionally, a "quiet zone" is configured in any of the rooms/domains. Thus, if zone Q is active in playing audio, zone P is controlled to be quiet.
Figure 3 displays how a media file is provided via loudspeaker means to a user.
An A/V media file with related metadata is provided to a user, e.g. as sound. Physical constraints define the data about the position of the loudspeaker means in a room.
Psycho-acoustical constraints define perceptual model data applied as correction values to optimize the user's listening experience (see Figure 4).
A first user (P1) is in one room, and the FIR settings are according to the loudspeaker position and the user position in that room.
A second user (Q1) is in another room, and the FIR settings are according to the loudspeaker position and the user position in that room.
The second user moves to another position (Q2) in the second room, and the FIR settings are adjusted accordingly.
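An event-driven sketch of this behaviour, assuming a hypothetical preset table and a DSP object exposing a load_fir() call; both are invented for the example, not interfaces defined in the filing.

```python
def on_position_update(room, new_position, preset_table, dsp):
    """Re-select and apply FIR settings when a tracked user changes position.

    `preset_table` maps (room, position) to per-channel FIR coefficient lists,
    and `dsp` is any object exposing a load_fir(channel, coeffs) call.
    """
    settings = preset_table.get((room, new_position))
    if settings is None:
        return  # no predefined parameters for this position; keep the current state
    for channel, fir_coeffs in settings.items():
        dsp.load_fir(channel, fir_coeffs)

# Example call when the second user moves from position Q1 to Q2:
# on_position_update("room_2", "Q2", preset_table, dsp)
```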
Figure 4 illustrates how to optimize a listener experience according to given physical constraints and perceptual constraints.
A given media file has related metadata describing the ideal setup (X) for the rendering system to comply with:
• a playback according to this gives the user the perfect experience (P1);
• a playback on a less optimal system setup (Y) gives the user a less perfect experience (P2).
To enhance the user experience, the goal is that P2 approaches P1; to accommodate this, a perceptual model is applied in the control process as a correction value Δ, with |P1 - P2| as the delta value of the perception.
The correction function relates the physical aspects to the perceptual aspects, as determined via experiments.
The perceptual model includes physical measurements in the environment: S1 for the optimal situation and S2 for the actual situation.
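A minimal sketch of a correction derived from the perceptual gap Δ = |P1 - P2|; the linear scaling is purely an assumption, since the actual mapping is stated to be determined via experiments.

```python
def perceptual_correction(p1_score, p2_score, sensitivity=1.0):
    """Correction derived from the perceptual gap delta = |P1 - P2|.

    The linear scaling is an assumption for illustration; the real mapping
    from the gap to parameter offsets would be found experimentally.
    """
    delta = abs(p1_score - p2_score)
    return sensitivity * delta

# Example: ideal-setup score P1 = 8.5, actual-setup score P2 = 7.0 -> correction 1.5
correction = perceptual_correction(8.5, 7.0)
```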
The reconfiguring process executes automatically when an audio file (sound object) actually being rendered relates to object-based audio metadata that includes information about the position of the sound object over a period of time, said information being relative correction values (+, 0, -) for a set of addressed FIR parameters.
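An illustrative representation of such time-varying object metadata and its lookup during playback; the field names and correction sets are assumptions made for the example.

```python
# Illustrative object-based audio metadata: for each time window, a relative
# correction (+1, 0 or -1) per addressed set of FIR parameters.
object_metadata = [
    {"t_start_s": 0.0, "t_end_s": 2.0, "corrections": {"fir_set_front": +1, "fir_set_rear": 0}},
    {"t_start_s": 2.0, "t_end_s": 5.0, "corrections": {"fir_set_front": 0, "fir_set_rear": -1}},
]

def corrections_at(metadata, t_s):
    """Return the relative FIR corrections that apply at playback time t_s (seconds)."""
    for entry in metadata:
        if entry["t_start_s"] <= t_s < entry["t_end_s"]:
            return entry["corrections"]
    return {}

current = corrections_at(object_metadata, 1.2)  # -> {'fir_set_front': 1, 'fir_set_rear': 0}
```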
Figure 5 displays a preferred embodiment in which the configuring means are embedded into the controller of the audio sound system and/or partly distributed in the rendering devices. Thus, the audio sound system (100) is the master having the digital signal processor means that initiates, controls and applies the reconfiguring process. Alternative CS table definitions (120) may be loaded into the digital signal processor from external means, e.g. a laptop/PC, in addition to the actual settings for the system (115).
The application interfaces with the constraint solver via an input/output list of variables, which are referred to in the constraint definitions (120).
The invention is used to automatically adjust an audio sound system to a maximum level of quality, taking into account the positions of individual listeners in dedicated zones of sound fields.
The invention includes sound channel configurations applied in surround sound systems and traditional stereophonic systems, and may be applied in any domain such as a home, a vehicle, an airplane, a boat, an office or any other public domain and the like.

Claims (10)

1. A method for automatically configuring and reconfiguring a multimedia reproduction system in one or more rooms, said room(s) being enabled with audio or video rendering devices or both, and said room(s) being enabled with one or more individual sound zones such that two or more users, simultaneously, may listen to the same multimedia file or may listen to different multimedia files in each of the sound zones in which the users are present, the method being characterized by:
• determining the physical position of the one or more listeners,
• determining the physical position of the one or more loudspeaker transducer(s),
• applying the physical position as information to select a set of predefined sound parameters, the set of parameters including FIR settings per loudspeaker transducer,
• applying context information related to an actual media file, said context information being object-based audio metadata,
• applying psycho-acoustical information related to a user perceptual experience,
• providing the set of parameters per sound channel and/or per loudspeaker transducer accordingly, and applying the above parameters fully or partly as constraints in a constraint solver which, upon execution, finds one or more legal combination(s) among all the defined set of legal combinations.
2. A method according to claim 1, where the reconfiguring process executes automatically, when a user moves from one position in a sound zone to another position in the same sound zone.
3. A method according to claim 2, where the reconfiguring process executes automatically, when an audio rendering device moves from one position in a sound zone to another position in the same sound zone.
4. A method according to claim 3, where the reconfiguring process executes automatically, when an audio file (sound object) actually being rendered relates to object based audio metadata that includes information about the position of the sound object versus a period of time, said information being relative correction values +, 0, -, for a set of FIR parameters being addressed.
5. A method according to claim 4, where a quiet zone in any of the rooms/domains is configured, said quiet zone being relative to an active rendered sound, in a zone in the same room/domain as the quiet zone.
6. A method according to claim 5, where the reconfiguring process executes automatically, when a user carried source device moves from one position in a sound zone to another position in the same sound zone.
7. A method according to claim 6, where the reconfiguring process executes automatically, when a user moves from one position in a sound zone to a position in another sound zone.
8. A method according to claim 7, where the reconfiguring process executes automatically, when a user moves from one position in a sound zone to a position in another sound zone that is occupied by another active user.
9. A method according to all claims, where one or more of the active devices: controllers, audio rendering devices, video rendering devices that constitute an audio/video reproduction system includes a configurator.
10. A method according to all claims, where the saved attributes and key parameters are loaded into the configurator by electronic means connected wirelessly or by wire to the audio reproduction system.
DKPA201500024A 2015-01-14 2015-01-14 Adaptive System According to User Presence DK178752B1 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
DKPA201500024A DK178752B1 (en) 2015-01-14 2015-01-14 Adaptive System According to User Presence
EP16020006.9A EP3046341B1 (en) 2015-01-14 2016-01-06 Adaptive method according to user presence

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
DKPA201500024A DK178752B1 (en) 2015-01-14 2015-01-14 Adaptive System According to User Presence

Publications (2)

Publication Number Publication Date
DK201500024A1 true DK201500024A1 (en) 2016-08-01
DK178752B1 DK178752B1 (en) 2017-01-02

Family

ID=55070831

Family Applications (1)

Application Number Title Priority Date Filing Date
DKPA201500024A DK178752B1 (en) 2015-01-14 2015-01-14 Adaptive System According to User Presence

Country Status (2)

Country Link
EP (1) EP3046341B1 (en)
DK (1) DK178752B1 (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9980076B1 (en) 2017-02-21 2018-05-22 At&T Intellectual Property I, L.P. Audio adjustment and profile system
US10516961B2 (en) 2017-03-17 2019-12-24 Nokia Technologies Oy Preferential rendering of multi-user free-viewpoint audio for improved coverage of interest
US10735885B1 (en) 2019-10-11 2020-08-04 Bose Corporation Managing image audio sources in a virtual acoustic environment
EP4256815A2 (en) * 2020-12-03 2023-10-11 Dolby Laboratories Licensing Corporation Progressive calculation and application of rendering configurations for dynamic applications
GB2616073A (en) * 2022-02-28 2023-08-30 Audioscenic Ltd Loudspeaker control

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20100092005A1 (en) * 2008-10-09 2010-04-15 Manufacturing Resources International, Inc. Multidirectional Multisound Information System
US20130230175A1 (en) * 2012-03-02 2013-09-05 Bang & Olufsen A/S System for optimizing the perceived sound quality in virtual sound zones
US20140064501A1 (en) * 2012-08-29 2014-03-06 Bang & Olufsen A/S Method and a system of providing information to a user
EP2806664A1 (en) * 2013-05-24 2014-11-26 Harman Becker Automotive Systems GmbH Sound system for establishing a sound zone

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP2161950B1 (en) * 2008-09-08 2019-01-23 Harman Becker Gépkocsirendszer Gyártó Korlátolt Felelösségü Társaság Configuring a sound field
US20130294618A1 (en) * 2012-05-06 2013-11-07 Mikhail LYUBACHEV Sound reproducing intellectual system and method of control thereof
US9179232B2 (en) * 2012-09-17 2015-11-03 Nokia Technologies Oy Method and apparatus for associating audio objects with content and geo-location
WO2014122550A1 (en) * 2013-02-05 2014-08-14 Koninklijke Philips N.V. An audio apparatus and method therefor


Also Published As

Publication number Publication date
DK178752B1 (en) 2017-01-02
EP3046341B1 (en) 2019-03-06
EP3046341A1 (en) 2016-07-20

Similar Documents

Publication Publication Date Title
US10536123B2 (en) Volume interactions for connected playback devices
US11909365B2 (en) Zone volume control
US11729568B2 (en) Acoustic signatures in a playback system
EP2161950B1 (en) Configuring a sound field
CN107852562B (en) Correcting state variables
EP3046341B1 (en) Adaptive method according to user presence
EP2867895B1 (en) Modification of audio responsive to proximity detection
JP6161791B2 (en) Private queue for media playback system
CA2842003C (en) Shaping sound responsive to speaker orientation
US9008330B2 (en) Crossover frequency adjustments for audio speakers
KR20130048794A (en) Dynamic adjustment of master and individual volume controls
WO2015108937A1 (en) Software application and zones
JP2016523017A (en) Media playback system playback queue transfer
WO2019165038A1 (en) Content based dynamic audio settings
CN111095191A (en) Display device and control method thereof
DK178063B1 (en) Dynamic Configuring of a Multichannel Sound System
Jackson et al. Object-Based Audio Rendering
US20240111483A1 (en) Dynamic Volume Control
US20230104774A1 (en) Media Content Search In Connection With Multiple Media Content Services