EP3046341A1 - Adaptive system according to user presence - Google Patents
Adaptive system according to user presence
- Publication number
- EP3046341A1 (application EP16020006.9)
- Authority
- EP
- European Patent Office
- Prior art keywords
- sound
- audio
- zone
- user
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04S—STEREOPHONIC SYSTEMS
- H04S7/00—Indicating arrangements; Control arrangements, e.g. balance control
- H04S7/30—Control circuits for electronic adaptation of the sound field
- H04S7/302—Electronic adaptation of stereophonic sound system to listener position or orientation
- H04S7/303—Tracking of listener position or orientation
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04R—LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
- H04R2227/00—Details of public address [PA] systems covered by H04R27/00 but not provided for in any of its subgroups
- H04R2227/005—Audio distribution systems for home, i.e. multi-room use
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04R—LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
- H04R27/00—Public address systems
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04S—STEREOPHONIC SYSTEMS
- H04S2400/00—Details of stereophonic systems covered by H04S but not provided for in its groups
- H04S2400/11—Positioning of individual sound objects, e.g. moving airplane, within a sound field
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04S—STEREOPHONIC SYSTEMS
- H04S2420/00—Techniques used in stereophonic systems covered by H04S but not provided for in its groups
- H04S2420/01—Enhancing the perception of the sound image or of the spatial distribution using head related transfer functions [HRTF's] or equivalents thereof, e.g. interaural time difference [ITD] or interaural level difference [ILD]
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04S—STEREOPHONIC SYSTEMS
- H04S7/00—Indicating arrangements; Control arrangements, e.g. balance control
- H04S7/30—Control circuits for electronic adaptation of the sound field
- H04S7/308—Electronic adaptation dependent on speaker or headphone connection
Landscapes
- Physics & Mathematics (AREA)
- Engineering & Computer Science (AREA)
- Acoustics & Sound (AREA)
- Signal Processing (AREA)
- Circuit For Audible Band Transducer (AREA)
- Stereophonic System (AREA)
Abstract
Furthermore, the configuring applies context information related to an actual media file, said context information being object-based audio metadata.
The feature obtained is a sound distribution into individual sound zones, with optimized sound quality as perceived by one or more users/listeners.
Description
- The present invention relates to a method and system for dynamically configuring and reconfiguring a multimedia reproduction system comprising two or more sound channels. The reconfiguring is based on the position of one or more listeners and of the one or more sound channels in the reproduction system. In addition, the configuring includes portable devices for audio/video sources and audio/video rendering equipment.
The feature obtained is a sound distribution into individual sound zones, with optimized sound quality as perceived by a user/listener. - The system includes a feature for adaptation according to perceptive aspects per user and his/her position in the domain, known as "object based rendering per user". This is an advanced configuration task for rendering object-based audio material in domestic environments, where both the reproduction system and the listener position are dynamic. The position of the rendering devices and the user(s) in a domain may be determined via precision GPS means.
- The invention includes digital signal processing, sound transducers, filters and amplifier configurations to be applied in multichannel sound systems, surround sound systems and traditional stereophonic systems. The system may be applied in any domain, such as a home, a vehicle, a boat, an airplane, an office, or any other private or public domain.
- It is a well-known problem in prior art loudspeaker systems operating in closed rooms/spaces that the sound experienced by the user may vary according to the listener's position in the space relative to the loudspeaker system transducers.
- In prior art US 2013/0230175, a system is disclosed to optimize the perceived sound quality in virtual sound zones, where the system includes a method to establish a threshold of acceptability for an interfering audio programme on a target audio programme.
- Thus, to obtain a certain perceived quality level of a loudspeaker system, e.g. in a car, the individual sound channels incorporating one or more loudspeaker modules must be calibrated and adjusted individually, according to the number of persons in the car and their position in the space, e.g. their seated positions.
- The current invention discloses a supplemental method of applying context information related to an actual media file, said context information being audio metadata, and applying psycho-acoustical information related to the user's perceptual experience.
- This principle may be applied in any type of room, such as in airplanes, boats, theatres, arenas, shopping centres, and the like.
- A first aspect of the invention is:
- A method for automatically configuring and reconfiguring a multimedia reproduction system in one or more rooms, said room(s) being enabled with audio or video rendering devices, or both. Said room(s) are enabled with two or more individual sound zones such that two or more users may simultaneously listen to the same multimedia file or to different multimedia files in each of the sound zones in which the users are present, and where a system controller will:
- determine the physical position of the one or more listeners,
- determine the physical position of the one or more loudspeaker transducer(s),
- apply the physical position as information to select a set of predefined sound parameters, the set of parameters including FIR settings per loudspeaker transducer,
- apply context information related to an actual media file, said context information being object based audio metadata,
- apply psycho-acoustical information related to a user perceptual experience,
- provide the set of parameters per sound channel and/or per loudspeaker transducer accordingly, and apply the above parameters fully or partly as constraints in a constraint solver which, upon execution, finds one or more legal combination(s) among all the defined legal combinations.
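As a rough illustration of how a system controller might tie the steps above together, the following sketch selects a predefined parameter set from a detected listener position and applies a correction derived from the object-based audio metadata. All names and values are hypothetical, not the claimed implementation:

```python
# Illustrative sketch (hypothetical names/values): map a detected listener
# position to a predefined set of per-transducer sound parameters.
# "fir" stands in for the FIR coefficient set named in the method steps.
PRESETS = {
    ("zone_a", "pos_1"): {"gain_db": 0.0, "delay_ms": 0.0, "fir": [1.0]},
    ("zone_a", "pos_2"): {"gain_db": -2.0, "delay_ms": 5.0, "fir": [0.9, 0.1]},
}

def configure(zone, listener_pos, metadata=None):
    """Select per-channel parameters for a listener position.

    `metadata` models the object-based audio context information; here it
    may carry a loudness correction applied on top of the preset.
    """
    params = dict(PRESETS[(zone, listener_pos)])  # copy, keep preset intact
    if metadata and "loudness_correction_db" in metadata:
        params["gain_db"] += metadata["loudness_correction_db"]
    return params
```

In a full system, the returned parameters would then be passed as constraints to the constraint solver described later in this text.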
- The invention discloses a multichannel sound system where different sound settings are enabled via digital signal processing means controlling the individual sound channel parameters, e.g. the equalization (filters), the delay, and the gain (amplification). The premise for the control is the listener position in the listening room/space.
- In a more advanced embodiment of the invention, the control and sound setting may work in a contextual mode of operation to accommodate:
- the position of the one or more listeners;
- a functional mode of operation, e.g. adjust the sound system settings to be in movie mode;
- the type of music, e.g. rock;
- the time of the day; e.g. morning/day/night/Christmas etc.
- The invention includes:
- sensor means to detect a listener and rendering device position in a room;
- detect spoken commands from user(s);
- a mode of operation related to a user position or a required function;
- information available according to context (place, time and music content);
- single channel/multi-channel sound system, e.g. mono, two channels stereo, or a 5.1 surround sound system;
- digital controlled sound system e.g. digital control of gain, equalization and delay;
- active speakers including amplifiers and filters for each loudspeaker transducer.
- The digital control of the sound system is based on standard means for:
- gain: adjust the signal by a certain level, i.e. + /- xx dB, e.g. +0.1 dB;
- delay: the signal is delayed by a specific time, i.e. yy ms, e.g. 100 ms;
- EQ: the signal is filtered according to the Finite Impulse Response (FIR) principle or the Infinite Impulse Response (IIR) principle; a number of coefficients is specified; the number of parameters typically ranges from 1 to 1000.
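The three standard adjustments above can be illustrated with a minimal sketch in plain Python, using lists as sample buffers; a real system would run these on a DSP:

```python
# Illustrative sketch of the three standard digital adjustments named above:
# gain, delay, and FIR equalization. Plain-Python sample buffers.

def apply_gain(samples, gain_db):
    # +/- xx dB maps to a linear factor of 10**(dB/20)
    factor = 10 ** (gain_db / 20.0)
    return [s * factor for s in samples]

def apply_delay(samples, delay_samples):
    # Delay by prepending zeros; delay_samples = delay_ms * sample_rate / 1000
    return [0.0] * delay_samples + samples

def apply_fir(samples, coeffs):
    # Finite Impulse Response filter: y[n] = sum_k b[k] * x[n-k]
    out = []
    for n in range(len(samples)):
        acc = 0.0
        for k, b in enumerate(coeffs):
            if n - k >= 0:
                acc += b * samples[n - k]
        out.append(acc)
    return out
```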
- The invention may be implemented in different embodiments, in which the alternative configuring procedures are:
- a traditional algorithm in terms of a sequential software program (e.g. in a DSP) in which the values of the resulting adjustment parameters are embedded in the code itself; the algorithm validates according to a specified system structure of an actual loudspeaker configuration;
- a table based concept in which one or more tables defines the attributes to be applied per loudspeaker channel versus the mode of operation, the listener position and other context related information (e.g. time).
- The fundamental goal of the invention is to enhance the perceived sound quality from a listener's perspective. This is obtained by adjusting the gain, delay and equalization parameters of the individual sound channels in a multichannel sound system, regardless of the physical location of the loudspeaker transducers.
- The invention discloses a system concept having the functional features of a "Multimedia - Multi room/domain - Multi user" system concept.
- The key aspects included in the invention are:
- Access, distribute and control multimedia information
- o In a multi room and multi domain (zone) environment;
- o Enabled with control in a multi user mode of operation;
- Automatic configuring of multimedia sources available to user carried equipment;
- Automatic configuring of multimedia rendering devices available to user presence and position in a room/domain;
- Automatically adaptive according to: a) installed equipment, b) user presence and c) user carried equipment:
- Configure for use in one - or more rooms individually.
- Configure one - or more domain/comfort zone(s) in a room.
- Configure a room according to one or more users present in a room.
- Configure a room according to available rendering devices in a room.
- Configure a room according to available source devices in a room.
- Configure a room according to user presence and position in a room and the perceptual attributes related to that position.
- The basic means in the system are, see
Figure 1 : - Sources i.e. media files (virtual/physical) located on - and accessible via the internet, physical disk or cloud (101,102,103).
- Sources (104) i.e. media streams located on - and accessible via the internet.
- Rendering devices (105,106,107,110) to provide media file content; the devices being: display means (screen, projector) and audio means (active loudspeakers).
- Browse/preview and control devices (108,109,110), the devices being: tablet, smart phone, remote terminal.
- System control devices (switch/router/bridge/controller), data communication wired/wireless (111,112) and all configured to actual product requirements.
- The system controller (113) is a combined network connection and data switch that automatically configures control/data path(s) from the data sources to the rendering devices (114,115). The configuration is made according to, and related to, user requests and user appearance in a specific room/domain.
- The video rendering is via a screen (not a TV) and based on digital input from the internet.
The audio rendering is typically via active loudspeaker and wirelessly via digital networks.
The wireless distribution is within domains, in a zone-based concept that might include two individual sound zones in the same room. - A room is a domain with alternative configurations, see
Figure 2 : - X (201): Configured with sound rendering devices, e.g. active loudspeakers and a video rendering device, e.g. a simple screen (not a TV).
- Y (202): Configured with sound rendering devices, e.g. active loudspeakers.
- Z (203): Configured with two domains (204,205), each considered as comfort zones and might include both video and audio; the illustration displays one domain of type X (audio & video) and one domain of type Y (audio).
- Tablet, SmartPhone as user interface (206).
- Intelligent data router/bridge/signal and control switch, connecting source- and rendering devices (207).
- A use case example is a tablet applied to browse, preview, select, and providing content:
- A user is browsing on the tablet (iPad) and finds an interesting YouTube video.
- The user "pushes" the video/audio, which is then provided on the big screen and on the loudspeakers in zone X (201).
- The user "pushes" the audio, which is then provided on the connected loudspeakers in zone Y (202) as well.
- Another user continues browsing on the tablet, finds interesting music on a music-service, and activates streaming of this music; the stream becomes active on the tablet and the user commands the music provided in comfort zone Z (203).
- Specifically, the multi-user feature of the system adapts to the behavior and presence of the users in a domain:
- A user M (207) enters domain P (205) in the room, which becomes active, providing the sound via two loudspeakers sourced from the SmartPhone (206) in hand.
- Another user W enters domain Q in the room, which becomes active, providing the sound via two loudspeakers sourced from the SmartPhone (209) in hand, and optionally providing video on the display screen in case the active file includes audio and video.
- A competing situation may appear if a user enters a domain already occupied by another user; in this situation, the system may automatically give priority to the user having the highest rank. The rank follows a simple predefined access profile per user. The system may identify the user via "finger print" sensing and/or "eye iris" detection, or the user having the highest rank may command the system with a given spoken control command.
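The priority rule in the competing situation above might be sketched as follows; user names and ranks are hypothetical, and identification (fingerprint, iris, spoken command) is abstracted away:

```python
# Illustrative sketch (hypothetical names): resolve which user controls a
# domain when a new user enters, using a predefined access profile where a
# higher number means a higher rank.
ACCESS_PROFILE = {"user_m": 2, "user_w": 1}

def resolve_owner(current_owner, entering_user, profile=ACCESS_PROFILE):
    """Return the user controlling the domain after `entering_user` enters."""
    if current_owner is None:
        return entering_user          # empty domain: newcomer takes over
    if profile.get(entering_user, 0) > profile.get(current_owner, 0):
        return entering_user          # higher rank wins the domain
    return current_owner              # occupant keeps control otherwise
```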
- The system includes a feature for adaptation according to perceptive aspects per user and his/her position in the domain, known as "object based rendering per user". This is an advanced configuration task for rendering object-based audio material in domestic environments, where both the reproduction system and the listener position are dynamic. The position of the rendering devices and the user(s) in a domain may be determined via precision GPS means.
- In another aspect, the reconfiguring process validates the position of the one or more users, and the validation prioritizes the position of the one or more listeners:
- the reconfiguring process executes automatically, when a user moves from one position in a sound zone to another position in the same sound zone;
- the reconfiguring process executes automatically, when an audio rendering device moves from one position in a sound zone to another position in the same sound zone.
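One effect of such a move-triggered reconfiguration is on the delay parameter: when the listener-to-speaker distances change, per-channel delays can be recomputed so that all channels stay time-aligned at the new position. This is an illustrative sketch using the speed of sound in air, not the patent's specified algorithm:

```python
# Illustrative sketch: derive the per-channel delay needed to time-align
# sound channels at a (possibly new) listener position, from the
# speaker-to-listener distances. Positions/distances are hypothetical.
SPEED_OF_SOUND = 343.0  # m/s in air, at roughly 20 degrees C

def align_delays_ms(distances_m):
    """Delay nearer speakers so all arrivals coincide with the farthest."""
    travel_ms = [d / SPEED_OF_SOUND * 1000.0 for d in distances_m]
    latest = max(travel_ms)
    return [latest - t for t in travel_ms]
```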
- The reconfiguring process includes one or more algorithms to provide the calculation of each of the values of the sound channel adjustment-variables.
- In the preferred embodiment, the reconfiguring process provides a table with relations enabled for access by the digital signal processor to provide each of the values of the sound channel adjustment-variables.
The saved attributes and key parameters are loaded into the reconfiguring means, supported by electronic means connected wirelessly or wired to the audio reproduction system. - In a third aspect of the invention, the reconfiguring process provides the settings of the sound parameters (gain, equalization and delay) for one sound channel applied in one sound field zone related to the physical position of a first group of people including one or more listeners.
- To accommodate the handling of more zones including different groups of people, the reconfiguring process provides the settings of the sound parameters (gain, equalization and delay) for one sound channel, to be applied in one sound field zone related to the physical position of one or more other groups of people including one or more listeners.
- A listener position is detected via standard sensor means such as switches, infrared detectors, strain gauges, temperature-sensitive detectors, or indoor GPS.
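A minimal sketch of how such heterogeneous sensor means might be combined into one detected position for the reconfiguring process; the sensor names and the priority ordering are assumptions:

```python
# Illustrative sketch (hypothetical sensor names): each sensor reports the
# listener position it covers, or None if it sees nothing. The first active
# sensor in priority order wins.
def detect_position(sensor_readings):
    """Return the position from the first active sensor, or None.

    `sensor_readings` is a list of (sensor_name, position_or_None) tuples,
    ordered by sensor priority.
    """
    for _name, position in sensor_readings:
        if position is not None:
            return position
    return None
```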
- The configuring process executes automatically, controlled by the audio amplifier means, including a digital signal processor, which drives the loudspeaker system. The reconfiguring means are embedded into the controller of an audio reproduction system and/or distributed in one or all of the rendering devices, e.g. sound transducers, displays, and TVs.
- In a preferred embodiment, the reconfiguring is controlled via a table mapping the mode of operation to adjustment parameters for each speaker in every channel or just relevant channels. The adjustment parameters are e.g. but not limited to: equalization, delay and gain.
- The table may be represented as one or more data sets, as most appropriate to the digital controller unit.
E.g., one data set may contain the relations among: - listener position and user identification (an ID number),
- loud speaker channel #,
- parameter settings (EQ, delay, and gain).
- Another data set may contain the functional/mode related information like:
- functional settings (movie, audio only),
- refer to media source and related Metadata (the object based audio file)
- loud speaker channel #,
- parameter settings (EQ, delay, and gain).
- This table concept is a data-driven control system, and enables easy updates of the functional behaviour of a specific system simply by loading an alternative data set into the controller.
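The data-driven idea can be sketched as follows: the controller only looks up rows, so loading an alternative data set changes the system's behaviour without changing any code. Keys and placeholder values are hypothetical, echoing the p/q/r-val notation used in this description:

```python
# Illustrative sketch: rows map (mode or position, speaker channel) to
# EQ/delay/gain settings; swapping the data set reconfigures the system.
TABLE_V1 = {
    ("user1_pos1", 1): {"eq": "p-val-d1", "delay": "q-val-d1", "gain": "r-val-d1"},
    ("user1_pos1", 2): {"eq": "p-val-d2", "delay": "q-val-d2", "gain": "r-val-d2"},
}

class Controller:
    def __init__(self, table):
        self.table = table

    def load(self, table):
        # Functional update: simply load an alternative data set.
        self.table = table

    def settings(self, mode_or_position, channel):
        return self.table[(mode_or_position, channel)]
```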
- In addition, the invention includes a constraint solver, which comprises a table with digital data representing at least the constraints of the listener positions and equipment positions and the related acoustical adjustments, attributes and corresponding variable values.
- The constraint solver processing enables an arbitrary access mode to information, with no order of sequence required.
- The configuration domain table is organized as relations among variables in the general mathematical notation of 'Disjunctive Form':
- For example, AttribVariable 1.1 may define a listener position, AttribVariable 1.2 a speaker transducer unit, AttribVariable 1.3 a speaker system/subsystem, and AttribVariable 1.n a gain value for the transducer unit. In another example, AttribVariable 2.n may be a reference to another table.
- An alternative definition term is the 'Conjunctive Form':
AttribVariable 1.1 or AttribVariable 1.2 or AttribVariable 1.3 or AttribVariable 1.n And AttribVariable 2.1 or AttribVariable 2.2 or AttribVariable 2.3 or AttribVariable 2.n And .... And .... And AttribVariable m.1 or AttribVariable m.2 or AttribVariable m.3 or AttribVariable m.n
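A candidate combination can be checked against such a conjunctive-form table as an AND of OR-groups: the combination is legal when every group contains at least one of the candidate's attribute values. A minimal sketch with hypothetical attribute values:

```python
# Illustrative sketch: a constraint table in 'Conjunctive Form' is a list of
# OR-groups (sets of allowed attribute values), ANDed together.
def is_legal(assignment, cnf_table):
    """assignment: set of chosen attribute values;
    cnf_table: list of OR-groups; legal iff every group is satisfied."""
    return all(group & assignment for group in cnf_table)

# Hypothetical example:
# (pos1 or pos2) AND (speaker1 or speaker2) AND (gain_low or gain_mid)
CNF = [{"pos1", "pos2"}, {"speaker1", "speaker2"}, {"gain_low", "gain_mid"}]
```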
- a list of variables useful in the application e.g. gain, and filter setting i.e. equalization, for the one or more transducers;
- a list of variables useful in the application e.g. delay for the one or more transducers;
- a list of variables useful in the application to configure individual sound domains related to zones of sound targeted to one or more users.
Mode or position | Speaker channel # | Other (e.g. time, content) | Equalization EQ | Delay | Gain |
user 1 pos-1 | 1 | p-val-d1 | q-val-d1 | r-val-d1 | |
2 | p-val-d2 | q-val-d2 | r-val-d2 | ||
..... | ..... | ..... | ..... | ||
n | p-val-dn | q-val-dn | r-val-dn | ||
user 1 pos-2 | 1 | p-val-f1 | q-val-f1 | r-val-f1 | |
2 | p-val-f2 | q-val-f2 | r-val-f2 | ||
.... | ..... | ..... | ..... | ||
n | p-val-fn | q-val-fn | r-val-fn | ||
user 2 pos-1 | 1 | p-val-r1 | q-val-r1 | r-val-r1 | |
2 | p-val-r2 | q-val-r2 | r-val-r2 | ||
..... | ..... | ..... | ..... | ||
n | p-val-rn | q-val-rn | r-val-rn | ||
All | 1 | p-val-a1 | q-val-a1 | r-val-a1 | |
2 | p-val-a2 | q-val-a2 | r-val-a2 | ||
.... | ..... | ..... | ..... | ||
n | p-val-an | q-val-an | r-val-an | ||
device1 pos-1 | 1 | p-val-m1 | q-val-m1 | r-val-m1 | |
2 | p-val-m2 | q-val-m2 | r-val-m2 | ||
..... | ..... | ..... | ..... | ||
n | p-val-mn | q-val-mn | r-val-mn | ||
..... | ..... | ||||
Other | 1 | p-val-o1 | q-val-o1 | r-val-o1 | |
2 | p-val-o2 | q-val-o2 | r-val-o2 | ||
.... | ..... | ..... | ..... | ||
n | p-val-on | q-val-on | r-val-on |
-
Figure 1 displays the system concept (100) components including: - Multi-media sources of information (114): A/V files, physical files and virtual files residing in the Cloud and accessed via the Internet (101,102,103,104).
- Rendering devices (115): A/V devices such as screens, TVs and projectors, and audio devices such as active loudspeakers (105,106,107).
- Browse and operate means include: tablets, smartphones, and remote terminals enabled with one-way or two-way operation/control. Miscellaneous web-enabled utilities, like refrigerators, cloths and the like, are a kind of remote terminal in that sense (108,109,110).
- Network means include: network, router, and data/signal switch (111,112,113).
In the preferred embodiment, an audio reproduction system comprising active sound transducers is provided, including an amplifier for each transducer unit. This type of amplifier is e.g. the ICEpower technology from Bang & Olufsen, DK.
In a high-quality audio reproduction system, a dedicated filter means, an equalizer, is provided per amplifier. This means provides a frequency-dependent amplification to control the overall gain, which may be regulated up or down as required. Means for down-regulation may be as simple as the adjustment of a resistive means serially connected to the loudspeaker module.
To control the sound distribution into individual zones of sound fields the sound delay among channels must be controlled. In a preferred embodiment the delay is controlled individually per sound channel. -
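The per-channel delay control described above can be sketched as follows. This is a minimal illustration, assuming an integer-sample delay and a toy signal; real systems typically implement fractional delays with FIR filters:

```python
def apply_channel_delay(signal, delay_samples, gain=1.0):
    """Delay one channel by an integer number of samples and apply gain.

    Prepends zeros and truncates to the original length, so each sound
    channel can be delayed individually relative to the others.
    """
    delayed = [0.0] * delay_samples + list(signal)
    return [gain * s for s in delayed[:len(signal)]]

# Two channels of the same source, delayed individually so their
# wavefronts combine as intended at a chosen listening position.
src = [1.0] * 8                              # toy signal
ch_left = apply_channel_delay(src, 2, gain=0.8)
ch_right = apply_channel_delay(src, 5, gain=0.8)
```

With different delays per channel, the relative arrival times of the two wavefronts at the listener can be steered, which is the basis for shaping individual sound zones.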
Figure 2 displays specific alternative configurations, including:- Multimedia sources of information accessed via the Internet or the Cloud. An intelligent network device interconnects the sources with the rendering devices, configures the rendering devices, and partly controls them.
- Rendering devices are configured with different functional capabilities:
- ∘ A room Y (202) may include audio rendering devices.
- ∘ A room X (201) may include video rendering and audio rendering devices.
- ∘ A room Z (203) may include two domains configured as comfort zones: one zone enabled for audio rendering, P (205), and one zone enabled for audio and video rendering, Q (204).
- ∘ Optionally, a "quiet zone" is configured in any of the rooms/domains. Thus, if zone Q is actively playing audio, zone P is controlled to be quiet.
-
Figure 3 displays how a media file is provided via loudspeaker means (306,308) to a user.
A/V media file(s) with related metadata are provided to a user, e.g. as sound (301, 302).
Physical constraints define the data about the position of the loudspeaker means in a room (303).
Psycho-acoustical constraints define perceptual model data applied as correction values to optimize the user's listening experience (304); see also Figure 4.
A first user P1 is in one room, and the FIR settings (305) are set according to the loudspeaker (306) position and the user position in that room.
A second user Q1 is in another room, and the FIR settings (307) are set according to the loudspeaker (308) position and the user position in that room.
When the second user moves to another position Q2 in the second room, the FIR settings (307) are adjusted accordingly. -
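The position-dependent selection of FIR settings described for Figure 3 can be sketched as a table lookup keyed by user and position. The table contents, user names, and setting fields below are hypothetical placeholders, not values from the patent:

```python
# Hypothetical lookup table mapping (user, position) to predefined FIR
# settings; in the described system these follow both the loudspeaker
# placement and the user's current position in the room.
FIR_TABLE = {
    ("P1", "pos-1"): {"taps": [0.9, 0.05, 0.05], "delay_ms": 0.0},
    ("Q1", "pos-1"): {"taps": [0.7, 0.2, 0.1], "delay_ms": 1.5},
    ("Q1", "pos-2"): {"taps": [0.6, 0.3, 0.1], "delay_ms": 2.5},
}

def fir_settings(user, position):
    """Select the predefined FIR settings for a user's current position."""
    return FIR_TABLE[(user, position)]

# When user Q1 moves from pos-1 to pos-2, new settings are applied.
settings = fir_settings("Q1", "pos-2")
```

The reconfiguring process then amounts to re-running the lookup whenever a position update is detected.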
Figure 4 illustrates how to optimize the listener's experience according to given physical constraints and perceptual constraints when rendering a media file (400) in two zones.
The perceptual constraints are considered a perceptual model (409), which defines the criteria to be fulfilled to optimize the listening experience for each user in each of the two sound zones.
Thus, to enhance the user experience, the goal is that P1 ≈ P2; to accommodate this, the perceptual model is applied in the control process as a correction value Δ = |P1 − P2|, the delta value of the perception. The correction function relates the physical aspects to the perceptual aspects as determined via experiments.
A sound zone may be characterized as a "personal bubble". One or more bubbles can be configured in a room, typically two bubbles, one per user. A loudspeaker configuration can be a fixed number of loudspeaker transducers placed around a room, or it might be a variable number of transducers, some of them fixed in the room and others being brought into or out of the room as user(s) enter or leave it.
In Figure 4, two sound zones/bubbles are displayed: one as setup X (401), rendering for user 1 in bubble S1 (403), and a second setup Y (402), rendering for user 2 in bubble S2 (404).
Due to the variability in the number and location of loudspeaker transducers, the current number and physical positions of the loudspeaker transducers available to the sound field control system (113) are continuously monitored and determined.
Utilizing knowledge of the location of the loudspeakers and bubbles (204, 205), and of how each loudspeaker radiates sound into a bubble, digital filters (rendering the bubbles) are designed to generate personal sound in each bubble without disturbing users in the other bubbles.
To maximize the user experience in each bubble, a dynamic change of the acoustical parameters (gain, delay, EQ) is executed as a delta regulation according to scenario-specific perceptual models.
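A minimal sketch of such a delta regulation, assuming a single gain parameter per zone and perceived levels P1 and P2 given as plain numbers (the parameter names and step size are illustrative assumptions, not the patent's actual control law):

```python
def delta_regulate(params, perceived_p1, perceived_p2, step=0.5):
    """Nudge the lagging zone's gain so the perceived levels converge.

    The perceptual delta |P1 - P2| drives a bounded correction toward
    the goal P1 ≈ P2; a real system would regulate gain, delay and EQ
    jointly according to a scenario-specific perceptual model.
    """
    delta = abs(perceived_p1 - perceived_p2)
    if delta == 0:
        return params
    adjusted = dict(params)
    if perceived_p1 < perceived_p2:
        adjusted["gain_zone1"] += min(step, delta)
    else:
        adjusted["gain_zone2"] += min(step, delta)
    return adjusted

state = {"gain_zone1": 0.0, "gain_zone2": 0.0}
state = delta_regulate(state, perceived_p1=60.0, perceived_p2=62.0)
```

Repeating this step as positions and content change yields the dynamic, scenario-driven behaviour described above.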
The applicant's patent US 2015/0264507 discloses detailed information about perceptual modelling.
The reconfiguring process executes automatically when an audio file (sound object) actually being rendered relates to object-based audio metadata that includes information about the position of the sound object over a period of time, said information being relative correction values (+, 0, -) for a set of addressed FIR parameters. -
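The metadata-driven reconfiguration above can be sketched as follows. The event schema and step size are assumptions; the point is only that the relative values +, 0, - map to signed increments of the addressed FIR parameters over time:

```python
# Map the relative correction symbols from the object-based metadata
# to signed steps (schema is an illustrative assumption).
STEP = {"+": 1, "0": 0, "-": -1}

def apply_metadata(fir_params, metadata_events, step_size=0.1):
    """Apply timed relative corrections (+/0/-) to addressed FIR parameters."""
    params = dict(fir_params)
    for event in sorted(metadata_events, key=lambda e: e["time"]):
        for name, sign in event["corrections"].items():
            params[name] += STEP[sign] * step_size
    return params

# Sound-object metadata over a period of time: gain trends up,
# delay first holds, then trends down.
events = [
    {"time": 0.0, "corrections": {"gain": "+", "delay": "0"}},
    {"time": 1.0, "corrections": {"gain": "+", "delay": "-"}},
]
result = apply_metadata({"gain": 1.0, "delay": 0.5}, events)
```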
Figure 5 displays a preferred embodiment in which the configuring means is embedded in the controller of the audio sound system and/or distributed partly among the rendering devices. Thus, the audio sound system (500) is the master, having the digital signal processor means that initiates, controls, and applies the reconfiguring process. Alternative CS table definitions (504) may be loaded into the digital signal processor from external means, e.g. a laptop/PC, in addition to the actual settings for the system (502).
The application interfaces with the constraint solver via an input/output list of variables, which are referenced in the constraint definitions (504).
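The constraint-solving step can be sketched as a brute-force search over the input/output variable list: enumerate all combinations of variable values and keep those satisfying every constraint definition. The variable names, domains, and the quiet-zone constraint below are illustrative assumptions:

```python
from itertools import product

def solve(domains, constraints):
    """Enumerate all combinations of the variables and return the legal ones.

    `domains` maps variable names to their possible values; each constraint
    is a predicate over a candidate assignment. A production solver would
    prune the search space instead of enumerating exhaustively.
    """
    names = list(domains)
    legal = []
    for values in product(*domains.values()):
        assignment = dict(zip(names, values))
        if all(c(assignment) for c in constraints):
            legal.append(assignment)
    return legal

# Example: zones P and Q may not both render audio at the same time
# (the quiet-zone rule from the description).
domains = {"zone_P": ["audio", "quiet"], "zone_Q": ["audio", "quiet"]}
constraints = [
    lambda a: not (a["zone_P"] == "audio" and a["zone_Q"] == "audio"),
]
solutions = solve(domains, constraints)
```

Upon execution, the solver finds every legal combination among the defined set, and the system picks one to apply to the rendering devices.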
Claims (10)
- A method for automatically configuring and reconfiguring a multimedia reproduction system in one or more rooms, said room(s) being enabled with audio or video rendering devices, or both, and said room(s) being enabled with one or more individual sound zones such that two or more users may, simultaneously, listen to the same multimedia file or to different multimedia files in each of the sound zones in which the users are present, the method including:
• determining the physical position of the one or more listeners,
• determining the physical position of the one or more loudspeaker transducer(s),
• applying the physical positions as information to select a set of predefined sound parameters, the set of parameters including FIR settings per loudspeaker transducer,
and the method being characterized by:
• applying context information related to an actual media file, said context information being object-based audio metadata,
• applying psycho-acoustical information related to a user's perceptual experience,
• providing the set of parameters per sound channel and/or per loudspeaker transducer accordingly, and applying the above parameters fully or partly as constraints in a constraint solver which, upon execution, finds one or more legal combination(s) among the set of all defined legal combinations.
- A method according to claim 1, where the reconfiguring process executes automatically when a user moves from one position in a sound zone to another position in the same sound zone.
- A method according to claim 2, where the reconfiguring process executes automatically when an audio rendering device moves from one position in a sound zone to another position in the same sound zone.
- A method according to claim 3, where the reconfiguring process executes automatically when an audio file (sound object) actually being rendered relates to object-based audio metadata that includes information about the position of the sound object over a period of time, said information being relative correction values (+, 0, -) for a set of addressed FIR parameters.
- A method according to claim 4, where a quiet zone is configured in any of the rooms/domains, said quiet zone being relative to an actively rendered sound in a zone in the same room/domain as the quiet zone.
- A method according to claim 5, where the reconfiguring process executes automatically when a user-carried source device moves from one position in a sound zone to another position in the same sound zone.
- A method according to claim 6, where the reconfiguring process executes automatically when a user moves from one position in a sound zone to a position in another sound zone.
- A method according to claim 7, where the reconfiguring process executes automatically when a user moves from one position in a sound zone to a position in another sound zone that is occupied by another active user.
- A method according to all claims, where one or more of the active devices (controllers, audio rendering devices, video rendering devices) that constitute an audio/video reproduction system include a configurator.
- A method according to all claims, where the saved attributes and key parameters are loaded into the configurator, supported by electronic means connected wirelessly or by wire to the audio reproduction system.
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
DKPA201500024A DK178752B1 (en) | 2015-01-14 | 2015-01-14 | Adaptive System According to User Presence |
Publications (2)
Publication Number | Publication Date |
---|---|
EP3046341A1 true EP3046341A1 (en) | 2016-07-20 |
EP3046341B1 EP3046341B1 (en) | 2019-03-06 |
Family
ID=55070831
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
EP16020006.9A Active EP3046341B1 (en) | 2015-01-14 | 2016-01-06 | Adaptive method according to user presence |
Country Status (2)
Country | Link |
---|---|
EP (1) | EP3046341B1 (en) |
DK (1) | DK178752B1 (en) |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP2161950A2 (en) * | 2008-09-08 | 2010-03-10 | Bang & Olufsen A/S | Configuring a sound field |
US20130294618A1 (en) * | 2012-05-06 | 2013-11-07 | Mikhail LYUBACHEV | Sound reproducing intellectual system and method of control thereof |
US20140079225A1 (en) * | 2012-09-17 | 2014-03-20 | Navteq, B.V. | Method and apparatus for associating audio objects with content and geo-location |
WO2014122550A1 (en) * | 2013-02-05 | 2014-08-14 | Koninklijke Philips N.V. | An audio apparatus and method therefor |
Family Cites Families (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US8128342B2 (en) * | 2008-10-09 | 2012-03-06 | Manufacturing Resources International, Inc. | Multidirectional multisound information system |
US9277322B2 (en) * | 2012-03-02 | 2016-03-01 | Bang & Olufsen A/S | System for optimizing the perceived sound quality in virtual sound zones |
US9532153B2 (en) * | 2012-08-29 | 2016-12-27 | Bang & Olufsen A/S | Method and a system of providing information to a user |
EP2806664B1 (en) * | 2013-05-24 | 2020-02-26 | Harman Becker Automotive Systems GmbH | Sound system for establishing a sound zone |
-
2015
- 2015-01-14 DK DKPA201500024A patent/DK178752B1/en active
-
2016
- 2016-01-06 EP EP16020006.9A patent/EP3046341B1/en active Active
Non-Patent Citations (1)
Title |
---|
FÜG SIMONE ET AL: "Design, Coding and Processing of Metadata for Object-Based Interactive Audio", AES CONVENTION 137; OCTOBER 2014, AES, 60 EAST 42ND STREET, ROOM 2520 NEW YORK 10165-2520, USA, 8 October 2014 (2014-10-08), XP040639006 * |
Cited By (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US9980076B1 (en) | 2017-02-21 | 2018-05-22 | At&T Intellectual Property I, L.P. | Audio adjustment and profile system |
US10313821B2 (en) | 2017-02-21 | 2019-06-04 | At&T Intellectual Property I, L.P. | Audio adjustment and profile system |
WO2018167363A1 (en) * | 2017-03-17 | 2018-09-20 | Nokia Technologies Oy | Preferential rendering of multi-user free-viewpoint audio for improved coverage of interest |
US10516961B2 (en) | 2017-03-17 | 2019-12-24 | Nokia Technologies Oy | Preferential rendering of multi-user free-viewpoint audio for improved coverage of interest |
US10735885B1 (en) | 2019-10-11 | 2020-08-04 | Bose Corporation | Managing image audio sources in a virtual acoustic environment |
WO2022120091A3 (en) * | 2020-12-03 | 2022-08-25 | Dolby Laboratories Licensing Corporation | Progressive calculation and application of rendering configurations for dynamic applications |
GB2616073A (en) * | 2022-02-28 | 2023-08-30 | Audioscenic Ltd | Loudspeaker control |
Also Published As
Publication number | Publication date |
---|---|
EP3046341B1 (en) | 2019-03-06 |
DK178752B1 (en) | 2017-01-02 |
DK201500024A1 (en) | 2016-08-01 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
EP2161950B1 (en) | Configuring a sound field | |
EP3046341B1 (en) | Adaptive method according to user presence | |
US10536123B2 (en) | Volume interactions for connected playback devices | |
EP2867895B1 (en) | Modification of audio responsive to proximity detection | |
CN112352442B (en) | Phantom center image control | |
WO2015108794A1 (en) | Dynamic calibration of an audio system | |
KR20130048794A (en) | Dynamic adjustment of master and individual volume controls | |
CA2842003A1 (en) | Shaping sound responsive to speaker orientation | |
US11758326B2 (en) | Wearable audio device within a distributed audio playback system | |
US11395087B2 (en) | Level-based audio-object interactions | |
JP2020109968A (en) | Customized audio processing based on user-specific audio information and hardware-specific audio information | |
US20190265943A1 (en) | Content based dynamic audio settings | |
US20150030170A1 (en) | Method and apparatus for programming hearing assistance device using perceptual model | |
US11601757B2 (en) | Audio input prioritization | |
GB2550877A (en) | Object-based audio rendering | |
CN111095191A (en) | Display device and control method thereof | |
TWI842056B (en) | Audio system with dynamic target listening spot and ambient object interference cancelation | |
WO2015185406A1 (en) | Dynamic configuring of a multichannel sound system for power saving | |
US20230104774A1 (en) | Media Content Search In Connection With Multiple Media Content Services | |
US20240111484A1 (en) | Techniques for Intelligent Home Theater Configuration | |
WO2024073521A1 (en) | Dynamic volume control |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PUAI | Public reference made under article 153(3) epc to a published international application that has entered the european phase |
Free format text: ORIGINAL CODE: 0009012 |
|
AK | Designated contracting states |
Kind code of ref document: A1 Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR |
|
AX | Request for extension of the european patent |
Extension state: BA ME |
|
STAA | Information on the status of an ep patent application or granted ep patent |
Free format text: STATUS: REQUEST FOR EXAMINATION WAS MADE |
|
17P | Request for examination filed |
Effective date: 20170119 |
|
RBV | Designated contracting states (corrected) |
Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR |
|
GRAP | Despatch of communication of intention to grant a patent |
Free format text: ORIGINAL CODE: EPIDOSNIGR1 |
|
STAA | Information on the status of an ep patent application or granted ep patent |
Free format text: STATUS: GRANT OF PATENT IS INTENDED |
|
RIC1 | Information provided on ipc code assigned before grant |
Ipc: H04R 27/00 20060101ALN20180910BHEP Ipc: H04S 7/00 20060101AFI20180910BHEP |
|
RIC1 | Information provided on ipc code assigned before grant |
Ipc: H04R 27/00 20060101ALN20180920BHEP Ipc: H04S 7/00 20060101AFI20180920BHEP |
|
INTG | Intention to grant announced |
Effective date: 20181009 |
|
GRAS | Grant fee paid |
Free format text: ORIGINAL CODE: EPIDOSNIGR3 |
|
GRAA | (expected) grant |
Free format text: ORIGINAL CODE: 0009210 |
|
STAA | Information on the status of an ep patent application or granted ep patent |
Free format text: STATUS: THE PATENT HAS BEEN GRANTED |
|
AK | Designated contracting states |
Kind code of ref document: B1 Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR |
|
REG | Reference to a national code |
Ref country code: GB Ref legal event code: FG4D |
|
REG | Reference to a national code |
Ref country code: CH Ref legal event code: EP Ref country code: AT Ref legal event code: REF Ref document number: 1106218 Country of ref document: AT Kind code of ref document: T Effective date: 20190315 |
|
REG | Reference to a national code |
Ref country code: DE Ref legal event code: R096 Ref document number: 602016010531 Country of ref document: DE |
|
REG | Reference to a national code |
Ref country code: IE Ref legal event code: FG4D |
|
REG | Reference to a national code |
Ref country code: CH Ref legal event code: NV Representative=s name: INDUSTRIAL PROPERTY SERVICES GMBH, CH |
|
REG | Reference to a national code |
Ref country code: NL Ref legal event code: MP Effective date: 20190306 |
|
REG | Reference to a national code |
Ref country code: LT Ref legal event code: MG4D |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: LT Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20190306 Ref country code: NO Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20190606 Ref country code: SE Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20190306 Ref country code: FI Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20190306 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: NL Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20190306 Ref country code: GR Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20190607 Ref country code: BG Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20190606 Ref country code: HR Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20190306 Ref country code: RS Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20190306 Ref country code: LV Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20190306 |
|
REG | Reference to a national code |
Ref country code: AT Ref legal event code: MK05 Ref document number: 1106218 Country of ref document: AT Kind code of ref document: T Effective date: 20190306 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: ES Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20190306 Ref country code: PT Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20190706 Ref country code: AL Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20190306 Ref country code: EE Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20190306 Ref country code: IT Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20190306 Ref country code: SK Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20190306 Ref country code: RO Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20190306 Ref country code: CZ Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20190306 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: SM Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20190306 Ref country code: PL Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20190306 |
|
REG | Reference to a national code |
Ref country code: DE Ref legal event code: R097 Ref document number: 602016010531 Country of ref document: DE |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: AT Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20190306 Ref country code: IS Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20190706 |
|
PLBE | No opposition filed within time limit |
Free format text: ORIGINAL CODE: 0009261 |
|
STAA | Information on the status of an ep patent application or granted ep patent |
Free format text: STATUS: NO OPPOSITION FILED WITHIN TIME LIMIT |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: DK Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20190306 |
|
26N | No opposition filed |
Effective date: 20191209 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: SI Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20190306 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: TR Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20190306 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: MC Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20190306 |
|
REG | Reference to a national code |
Ref country code: BE Ref legal event code: MM Effective date: 20200131 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: LU Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES Effective date: 20200106 Ref country code: FR Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES Effective date: 20200131 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: BE Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES Effective date: 20200131 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: IE Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES Effective date: 20200106 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: MT Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20190306 Ref country code: CY Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20190306 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: MK Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20190306 |
|
P01 | Opt-out of the competence of the unified patent court (upc) registered |
Effective date: 20230703 |
|
PGFP | Annual fee paid to national office [announced via postgrant information from national office to epo] |
Ref country code: DE Payment date: 20240123 Year of fee payment: 9 Ref country code: CH Payment date: 20240202 Year of fee payment: 9 Ref country code: GB Payment date: 20240122 Year of fee payment: 9 |