EP3188505B1 - Sound reproduction for a multiplicity of listeners - Google Patents

Sound reproduction for a multiplicity of listeners

Info

Publication number
EP3188505B1
EP3188505B1 (application EP16202689.2A)
Authority
EP
European Patent Office
Prior art keywords
signal
listener
audio
sound
microphone
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
EP16202689.2A
Other languages
German (de)
English (en)
Other versions
EP3188505A1 (fr)
Inventor
Markus Christoph
Craig Gunther
Matthias Kronlachner
Juergen Zollner
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Harman Becker Automotive Systems GmbH
Original Assignee
Harman Becker Automotive Systems GmbH
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from EP16174534.4A external-priority patent/EP3188504B1/fr
Application filed by Harman Becker Automotive Systems GmbH filed Critical Harman Becker Automotive Systems GmbH
Priority to JP2016248968A priority Critical patent/JP6905824B2/ja
Priority to KR1020160183270A priority patent/KR102594086B1/ko
Priority to CN201710003824.8A priority patent/CN106941645B/zh
Priority to US15/398,139 priority patent/US10097944B2/en
Publication of EP3188505A1 publication Critical patent/EP3188505A1/fr
Application granted granted Critical
Publication of EP3188505B1 publication Critical patent/EP3188505B1/fr
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04RLOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R1/00Details of transducers, loudspeakers or microphones
    • H04R1/20Arrangements for obtaining desired frequency or directional characteristics
    • H04R1/32Arrangements for obtaining desired frequency or directional characteristics for obtaining desired directional characteristic only
    • H04R1/40Arrangements for obtaining desired frequency or directional characteristics for obtaining desired directional characteristic only by combining a number of identical transducers
    • H04R1/403Arrangements for obtaining desired frequency or directional characteristics for obtaining desired directional characteristic only by combining a number of identical transducers loud-speakers
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04SSTEREOPHONIC SYSTEMS 
    • H04S7/00Indicating arrangements; Control arrangements, e.g. balance control
    • H04S7/30Control circuits for electronic adaptation of the sound field
    • H04S7/302Electronic adaptation of stereophonic sound system to listener position or orientation
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04RLOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R2203/00Details of circuits for transducers, loudspeakers or microphones covered by H04R3/00 but not provided for in any of its subgroups
    • H04R2203/12Beamforming aspects for stereophonic sound reproduction with loudspeaker arrays

Definitions

  • the disclosure relates to sound reproduction systems and methods.
  • WO 2015/076930 A1 discloses evaluating speech signals from a listener to estimate the listener's location. By examining the location, preferred usage settings, and voice commands from listeners, the generated beam patterns are customized to the explicit and implicit preferences of the listeners.
  • US 6 741 273 B1 discloses a system for adjusting the delivery of sound by way of loudspeakers located in an area. A controller adjusts the delivery of the sound according to the relative positions of the loudspeakers and the listener.
  • US 2009/304205 A1 discloses a system for personalizing audio levels which provides different audio volumes to different locations in a room allowing for two or more users to enjoy the same audio content at different volumes.
  • WO 2007/113718 A1 discloses a data processing device with a detection unit adapted to detect individual reproduction modes indicative of a manner of reproducing the data separately for each of a plurality of human users, and a processing unit adapted to process the data to thereby generate reproducible data separately for each of the plurality of human users in accordance with the detected individual reproduction modes. It is desired to reliably detect a listener's position at any desired time including occasional or continuous detection with little complexity.
  • a sound reproduction system includes a loudspeaker arrangement configured to generate from a customized audio signal an acoustically isolated sound field at a position dependent on a sound field position control signal, and a listener evaluation block configured to provide a listening position signal representing a position of a listener and a listener identification signal representing the identity of the listener.
  • the system further includes an audio control block configured to receive and process the listening position signal, the listener identification signal and an audio signal; the audio control block being further configured to control via the sound field position control signal the position of the sound field dependent on the listening position signal so that the position of the sound field is at the position of the listener, and to process the audio signal according to an audio setting dependent on the identity of the listener to provide the customized audio signal.
  • the system further includes a microphone arrangement that is disposed at the listening position.
  • the loudspeaker arrangement is configured to generate a sound beam that sweeps from one side of an area to the other side, the area including the listening position.
  • the listener evaluation block is wirelessly connected or connected by wire to the microphone arrangement.
  • the microphone arrangement is configured to pick up the sound beam when sweeping the listening position and to provide a corresponding microphone signal.
  • the listener evaluation block is configured to evaluate the microphone signal and a corresponding beam position to provide the listening position signal.
  • a sound reproduction method includes generating from a customized audio signal an acoustically isolated sound field at a position dependent on a sound field position control signal, providing a listening position signal representing a listening position and a listener identification signal representing the identity of the listener, and processing the listening position signal, the listener identification signal and an audio signal.
  • the method further includes controlling via the sound field position control signal the position of the sound field dependent on the listening position signal so that the position of the sound field is at the listening position, and processing the audio signal according to an audio setting dependent on the identity of the listener to provide the customized audio signal.
  • the method further includes generating a sound beam sweeping from one side of an area to the other side, the area including the listening position, picking up the sound beam at the listening position and, when sweeping the listening position, providing a corresponding microphone signal, and evaluating the microphone signal and a corresponding beam position to provide the listening position signal.
  • an exemplary sound reproduction system 100 uses individually customized audio beamforming to perform personalized sound control functions, such as equalization adjustment, volume adjustment and dynamic range compression adjustment, that adjust the loudness for individual listeners located at four listening positions 101-104.
  • Those adjustments, in the following also referred to as audio settings, are "remembered" for future reference so that the next time the system locates the same listener, e.g., in a room, it can automatically engage his/her custom sound field, e.g., a sound zone that sends the individually adjusted audio only to him/her. This is achieved without the use of headphones or earbuds.
  • the exemplary system shown in Figure 1 allows for individual loudness adjustments at the four listening positions 101-104 and includes a loudspeaker arrangement 105 that generates from customized audio signals 106 three acoustically isolated sound fields, e.g., sound zones 107, 108 and 109 at listening positions 101, 103 and 104, respectively, which may be sound beams directed from the loudspeaker arrangement 105 to listening positions 101, 103 and 104, and a general sound zone 110 that includes at least listening position 102.
  • the positions of the sound zones 107-109 may be steered by way of the sound field position control signal 111.
  • the sound reproduction system 100 may include various blocks for performing certain functions, wherein blocks may be hardware, software or a combination thereof.
  • listener evaluation blocks 112, 113 and 114, one per listener with dedicated sound adjustment, provide wireless signals 115 that include listening position signals representing the position of each listener with dedicated sound adjustment and listener identification signals identifying each listener designated for dedicated sound adjustment.
  • the sound reproduction system 100 requires information that allows for determining where a particular listener is seated, e.g., within a room. This may be done by using a tone that sweeps from one side of the room to the other and a microphone close to the individual listeners to identify when the sweep passes by them.
  • Such microphones are wirelessly connected or connected by wire to other system components and may be, for example, wired stand-alone microphones (not shown in Figure 1 ) disposed on or in the vicinity of the listeners, or microphones integrated in smartphones with a wireless Wi-Fi or Bluetooth connection.
  • a particular tone, such as an inaudible tone with a frequency above 16 kHz, sweeps the room using a separate directed sound beam 116; at least one microphone detects when the maximum volume is obtained at the microphone's position, and at that point in time a particular listener can be located.
  • Several listeners can be simultaneously located as long as they have their own clearly recognizable and assignable microphones.
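The sweep-based localization described above can be sketched in a few lines: the direction of the listener is taken to be the beam angle at which his/her microphone recorded the highest level. The function name, the sweep range and the simulated beam lobe below are illustrative assumptions, not taken from the patent.

```python
import numpy as np

def locate_listener(mic_levels, beam_angles):
    """Return the beam angle (degrees) at which the mic level peaked."""
    peak_index = int(np.argmax(mic_levels))
    return beam_angles[peak_index]

# Beam swept from -45 deg to +45 deg in 1-degree steps.
beam_angles = np.arange(-45.0, 46.0, 1.0)

# Simulated mic level: a listener sitting at +20 deg sees the highest
# level when the beam points at 20 deg (Gaussian beam lobe + noise floor).
true_angle = 20.0
mic_levels = np.exp(-0.5 * ((beam_angles - true_angle) / 5.0) ** 2) + 0.01

print(locate_listener(mic_levels, beam_angles))  # → 20.0
```

With one clearly assignable microphone per listener, the same peak-picking step can be run on each microphone signal of a single sweep, which is how several listeners can be located simultaneously.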
  • the listener evaluation blocks 112, 113 and 114 are provided by smartphones with built-in microphones in connection with software applications (apps) that may evaluate signals from the built-in microphones, perform the listener identifications and establish the wireless connections.
  • a remote control with built-in microphone may provide listener identification and control of the individual adjustment of the audio in the individual sound zone.
  • An indoor positioning system is a system that locates objects or people inside a building using radio waves, magnetic fields, acoustic signals, or other sensory information collected by mobile devices. Exemplary techniques include camera based detection, Bluetooth location services, or global positioning system (GPS) location services. Indoor positioning systems may use different technologies, including distance measurement to nearby anchor nodes (i.e., nodes with known positions, e.g., WiFi access points), magnetic positioning, or dead reckoning. They either actively locate mobile devices and tags or provide ambient location or environmental context for devices to be sensed. Indoor positioning systems may make use of various technologies including optical, radio, or even acoustic technologies, i.e., additionally processing information from other systems to cope with physical ambiguities and to enable error compensation.
  • an exemplary audio control block 117 is designed to receive and process the wireless signals 115, particularly the listening position signal and the listener identification signal contained therein, and an audio signal 118 from an audio source 119.
  • the audio control block 117 may then control, via the sound field position control signal 111, the position of the sound field dependent on the listening position signal so that the position of the sound field is at the position of the listener, and process the audio signal 118 according to the adjusted audio settings, each dependent on the identity of the corresponding listener, to provide the customized audio signals 106.
  • one or more fixed sound beams, e.g., related to fixed listening positions, may be employed.
  • the identity of the listener may correspond to the listening position and can be derived therefrom, or may be determined in any other suitable way.
  • Processing the audio signal 118 according to the individual audio settings may include at least one of adjusting the balance between spectral components of the audio signal with a controllable equalizer 120, adjusting the volume of the audio signal with a controllable volume control 121 and adjusting the dynamics of the audio signal 118 with a controllable dynamic range compressor 122.
  • Equalization is the process of adjusting the balance between frequency components within an electronic signal.
  • the term "equalization" (EQ) has come to include the adjustment of frequency responses for practical or aesthetic reasons, often resulting in a net response that is not truly equalized.
  • Volume control (VOL) is used for adjusting the sound level to a predetermined level.
  • Dynamic range compression or simply compression is a signal processing operation that reduces the volume of loud sounds and/or amplifies quiet sounds by narrowing or compressing an audio signal's dynamic range. For example, audio compression may reduce loud sounds that are above a certain threshold while leaving quiet sounds unaffected.
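The threshold-based compression just described can be sketched as a static gain curve: samples above the threshold are scaled down by the ratio, quieter samples pass unchanged. Attack/release smoothing and make-up gain, which a practical compressor would add, are omitted; threshold and ratio values are illustrative assumptions.

```python
import numpy as np

def compress(samples, threshold=0.5, ratio=4.0):
    """Static dynamic range compression: reduce magnitude above threshold."""
    out = np.copy(samples)
    mask = np.abs(out) > threshold
    # Above the threshold, each unit of overshoot becomes 1/ratio of a unit.
    overshoot = np.abs(out[mask]) - threshold
    out[mask] = np.sign(out[mask]) * (threshold + overshoot / ratio)
    return out

x = np.array([0.1, 0.4, 0.8, -1.0])
print(compress(x))  # → [ 0.1    0.4    0.575 -0.625]
```

The quiet samples (0.1, 0.4) are unaffected, while the loud ones are pulled toward the threshold, narrowing the signal's dynamic range.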
  • Customized audio signals 106, which are each the accordingly processed audio signal 118, are supplied to a beamforming (BF) processor 123 that, in turn, supplies beamformed signals 124 to the loudspeaker arrangement 105 to generate the beams for sound zones 107-110 and the sweeping sound beam 116.
  • the exemplary audio control block 117 may further include a control block (CU) 125 that is connected to a memory (M) 126, a wireless transceiver (WT) 127 and a beam sweep tone generator (BS) 128.
  • the memory 126 stores data representing identities of a multiplicity of listeners and the corresponding audio settings and, optionally, beam settings such as the beam position, beam width etc.
  • the control block 125 selects from memory 126, based on the listener identification signals, the corresponding audio settings for processing the audio signal 118 and steers, based on the listening position signals, the direction of the corresponding sound beams.
  • the listening position signals and the listener identification signals are generated by the wireless transceiver 127 from the wireless signals 115.
  • the beam sweep tone generator 128 provides the signal that is used for the sweeping beam 116 to the beamforming processor 123, and is also controlled by the control block 125.
  • Audio control block 117 may further include a video processor (VP) 129 that is connected to a camera 130 and that allows for recognizing gestures of the listeners in connection with the camera 130 and for controlling, according to the recognized gestures, at least one of processing the audio signal 118 and configuring the respective sound zone, e.g., the shape or width of the corresponding sound beam.
  • the camera 130 is directed to an area that may include the positions of the listeners, i.e., the listening positions 101-104. From this interface the listener can use gestures to widen or narrow the sound beam and/or to move the sound beam to the left or right and/or to dynamically track movements of the listeners in the individual zones. Selecting the particular sound beam would allow the user to adjust the sound setting parameters of that sound beam.
  • This interface may also allow a more experienced listener to configure the sound beam and related sound settings for another, less experienced listener who is not familiar with the system. Additionally, a listener may be able to increase the volume within his/her "sound beam" to cover up other ambient noise, or reduce the volume of his/her "sound beam" so that he/she can have a conversation with someone sitting next to him/her, listen to voice mail on the smartphone, etc.
  • the exemplary sound reproduction system may be disposed in a room 131. If a particular listener leaves the room 131, the system may disable the corresponding dedicated sound beam (e.g., one of sound beams 107-109), and the ordinary sound field (e.g., provided by sound beam 110) will replace this listener's beamforming area, so the next listener who occupies that particular seat hears what is heard throughout the rest of the room 131.
  • the ordinary sound field may also be used when no sound zones are desired.
  • When the listener returns, the corresponding dedicated sound beam can be re-enabled. Listeners have the option to adjust the configuration parameters while enjoying a program, and to discard those parameters or save them as their new personal defaults.
  • the listener's configuration information may be stored by the system and identified, e.g., by the listener's user name or face recognition data if a camera is employed. For example, the next time this listener watches a movie on a screen 132 associated with the loudspeaker arrangement 105, he/she can select his/her configuration and immediately restore the associated customized sound beam so that it points at his/her current seating location.
  • the system may identify the listener when he/she enters the room 131, e.g., via an indoor positioning system (IPS) and smartphone proximity, and load the customized configuration automatically.
  • the sound fields may be generated by way of beamforming, e.g., the sound beams 107-110 and 116.
  • Beamforming or spatial filtering is a signal processing technique used in loudspeaker or microphone arrays for directional signal transmission or reception. This is achieved by combining elements in a phased array in such a way that signals at particular angles experience constructive interference while others experience destructive interference.
  • the improvement compared with omnidirectional reception/transmission is known as the directivity of the element.
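The phased-array principle described above can be sketched for a loudspeaker line array: delaying each element so that its wavefront arrives in phase along the steering direction produces constructive interference there (delay-and-sum beamforming). Element count, spacing, speed of sound and steering angle below are illustrative assumptions.

```python
import numpy as np

def steering_delays(num_elements, spacing_m, angle_deg, c=343.0):
    """Per-element delays (seconds) steering a line array to angle_deg."""
    positions = np.arange(num_elements) * spacing_m
    # Far-field path-length difference per element, converted to time.
    delays = positions * np.sin(np.radians(angle_deg)) / c
    return delays - delays.min()  # make all delays non-negative

d = steering_delays(num_elements=8, spacing_m=0.05, angle_deg=30.0)
print(np.round(d * 1e6, 1))  # delays in microseconds, linearly increasing
```

Feeding each loudspeaker the same signal delayed by its entry in `d` steers the main lobe 30° off the array's broadside direction.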
  • Sound fields may also be realized using a sound field description with a technique called higher-order Ambisonics.
  • Ambisonics is a full-sphere surround sound technique which may cover, in addition to the horizontal plane, sound sources above and below the listener. Unlike other multichannel surround formats, its transmission channels do not carry loudspeaker signals. Instead, they contain a loudspeaker-independent representation of a sound field, which is then decoded to the listener's loudspeaker setup. This offers the listener a considerable degree of flexibility as to the layout and number of loudspeakers used for playback. Ambisonics can be understood as a three-dimensional extension of mid/side (M/S) stereo, adding different additional channels for height and depth.
  • In first-order Ambisonics, the resulting signal set is called B-format.
  • the spatial resolution of first-order Ambisonics is quite low. In practice, this translates to slightly blurry sources, and also to a comparably small usable listening area (also referred to as sweet spot or sweet area).
  • the resolution can be increased and the desired sound field (also referred to as sound zone) enlarged by adding groups of more selective directional components to the B-format.
  • these more selective components no longer correspond to conventional microphone polar patterns, but look like, e.g., clover leaves.
  • the resulting signal set is then called second-order, third-order, or collectively, higher-order Ambisonics (HOA).
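The gain in spatial resolution with increasing order can be illustrated numerically: a simple two-dimensional (circular-harmonic) beam pattern of order N sums cosine terms, and its main lobe narrows as N grows. This is a generic textbook pattern used only to show the trend, not the specific weighting used in the system described here.

```python
import numpy as np

def pattern(theta, order):
    """Normalized 2D axisymmetric beam pattern of the given order."""
    g = np.ones_like(theta)
    for m in range(1, order + 1):
        g = g + 2.0 * np.cos(m * theta)
    return g / (2 * order + 1)  # normalized so pattern(0) == 1

theta = np.linspace(-np.pi, np.pi, 721)
for n in (1, 2, 3):
    # Width of the angular region where the pattern stays above half level.
    lobe = theta[np.abs(pattern(theta, n)) > 0.5]
    print(n, round(np.degrees(lobe.max() - lobe.min()), 1))
```

The printed half-level beamwidth shrinks monotonically from order 1 to order 3, which is exactly the "slightly blurry sources become sharper" effect of moving from first-order to higher-order Ambisonics.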
  • FIGS 2 and 3 illustrate a sound reproduction system 200 which includes three (or, if appropriate, only two) closely spaced steerable (higher-order) loudspeaker assemblies 201, 202, 203, here arranged, for example, in a horizontal linear array (which is referred to herein as higher-order soundbar). Loudspeaker assemblies with omnidirectional directivity characteristics, dipole directivity characteristics and/or any higher order polar responses are herein referred to also as higher-order loudspeakers. Each higher-order loudspeaker 201, 202, 203 has adjustable, controllable or steerable directivity characteristics (polar responses) as outlined further below.
  • Each higher-order loudspeaker 201, 202, 203 may include a horizontal circular array of lower-order loudspeakers (e.g., omni-directional loudspeakers).
  • the circular arrays may each include, e.g., four lower-order loudspeakers 211 to 214, 221 to 224, 231 to 234 (such as common loudspeakers and, thus, also referred to as loudspeakers), the four lower-order loudspeakers 211 to 214, 221 to 224, 231 to 234 each being directed in one of four perpendicular directions in a radial plane in this example.
  • the array of higher-order loudspeakers 201, 202, 203 may be disposed on an optional base plate 204 and may have an optional top plate 301 on top (e.g., to carry a flat screen television set).
  • Instead of four lower-order loudspeakers, only three lower-order loudspeakers per higher-order loudspeaker assembly can be employed to create a two-dimensional first-order higher-order loudspeaker using Ambisonics technology.
  • Alternatively, using multiple-input multiple-output technology instead of Ambisonics technology allows for creating a two-dimensional first-order higher-order loudspeaker with only two lower-order loudspeakers.
  • Other options include the creation of three-dimensional higher-order loudspeakers with four lower-order loudspeakers that are regularly distributed on a sphere (e.g., mounted at the centers of the four faces of a tetrahedron, the first of the five Platonic solids) using Ambisonics technology, or with four lower-order loudspeakers regularly distributed on a sphere using multiple-input multiple-output technology.
  • the higher-order loudspeaker assemblies may be arranged other than in a straight line, e.g., on an arbitrary curve in a logarithmically changing distance from each other or in a completely arbitrary, three-dimensional arrangement in a room.
  • the four lower-order loudspeakers 211 to 214, 221 to 224, 231 to 234 may be substantially the same size and have a peripheral front surface, and an enclosure having a hollow, cylindrical body and end closures.
  • the cylindrical body and end closures may be made of material that is impervious to air.
  • the cylindrical body may include openings therein.
  • the openings may be sized and shaped to correspond with the peripheral front surfaces of the lower-order loudspeakers 211 to 214, 221 to 224, 231 to 234, and have central axes.
  • the central axes of the openings may be contained in one radial plane, and the angles between adjacent axes may be identical.
  • the lower-order loudspeakers 211 to 214, 221 to 224, and 231 to 234 may be disposed in the openings and hermetically secured to the cylindrical body. However, additional loudspeakers may be disposed in more than one such radial plane, e.g., in one or more additional planes above and/or below the radial plane described above.
  • the lower-order loudspeakers 211 to 214, 221 to 224, 231 to 234 may each be operated in a separate, acoustically closed volume 215 to 218, 225 to 228, 235 to 238 in order to reduce or even prevent any acoustic interactions between the lower-order loudspeakers of a particular higher-order loudspeaker assembly.
  • the lower-order loudspeakers 211 to 214, 221 to 224, 231 to 234 may each be arranged in a dent, hole, recess or the like. Additionally or alternatively, a wave guiding structure such as but not limited to a horn, an inverse horn, an acoustic lens etc. may be arranged in front of the lower-order loudspeakers 211 to 214, 221 to 224, 231 to 234.
  • a control block 240 receives, e.g., three ambisonic signals 244, 245, 246, processes them in accordance with steering information 247, and drives and steers the higher-order loudspeakers 201, 202, 203 based on the ambisonic signals 244, 245, 246 so that at least one acoustic sound field is generated at least at one position that is dependent on the steering information.
  • the control block 240 comprises beamformer blocks 241, 242, 243 that drive the lower-order loudspeakers 211 to 214, 221 to 224, 231 to 234. Examples of beamformer blocks are described further below.
  • Figure 4 depicts possibilities of how to use a horizontal linear array of high-order loudspeakers (referred to herein also as horizontal high-order soundbar or just high-order soundbar) in order to realize virtual sound sources in home entertainment.
  • a linear array may be disposed under a television (TV) set for reproducing, e.g., the front channels of 5.1 surround sound, the commonly used layout in home cinema.
  • the front channels of a 5.1 sound system include a front left (Lf) channel, a front right (Rf) channel and a center (C) channel.
  • Arranging a single high-order loudspeaker underneath the TV set instead of the horizontal high-order soundbar would mean that the C channel could be directed to the front of the TV set and the Lf and Rf channels to its sides. The Lf and Rf channels would then not be transferred directly to a listener sitting (at the sweet spot or sweet area) in front of the TV set but only indirectly via the side walls, constituting a transfer path that depends on numerous unknown parameters and, thus, can hardly be controlled.
  • a high-order soundbar with (at least) two high-order loudspeakers arranged in a horizontal line allows for transferring front channels, e.g., the Lf and Rf channels, directly to the sweet area, i.e., the area where the listener should be.
  • a center channel, e.g., the C channel, may be reproduced at the sweet area by way of the two high-order loudspeakers.
  • a third high-order loudspeaker, disposed between the two high-order loudspeakers, may be used to separately direct the Lf and Rf channels and the C channel to the sweet area. Since with three high-order loudspeakers each channel is reproduced by a separate block, the spatial sound impression of a listener at the sweet area can be further improved.
  • With each additional high-order loudspeaker added to the high-order soundbar, a more diffuse sound impression can be realized, and further channels such as, e.g., effect channels may be radiated from the rear side of the high-order soundbar, which in the present example is from the rear side of the TV set to, e.g., the rear wall where the sound provided by the effect channels is diffused.
  • higher-order soundbars provide more options for the positioning of the directional sound sources, e.g., on the side and rear, so that in a common listening environment such as a living room, a directivity characteristic that is almost independent from the spatial direction can be achieved with higher-order soundbars.
  • a common soundbar having fourteen lower-order loudspeakers equidistantly distributed in line over a distance of 70 cm can only generate virtual sound sources in an area of at most ±90° from the front direction, while higher-order soundbars allow for virtual sound sources in an area of ±180°.
  • Figure 4 illustrates an exemplary set-up with a higher-order soundbar including three higher-order loudspeakers 410, 411, 422.
  • An audio control block 401 that receives one or more audio signals 402 and that includes a control block such as control block 240 shown in Figure 2 drives the three higher-order loudspeakers 410, 411, 422 in a target room 413, e.g., a common living room.
  • At a listening position (sweet spot, sweet area), the sound field of at least one desired virtual source can then be generated.
  • a higher-order loudspeaker 424 for a left surround (Ls) channel, a lower-order subwoofer 423 for the low frequency effects (Sub) channel, and a higher-order loudspeaker 412 for a right surround (Rs) channel are also arranged.
  • the target room 413 is acoustically very unfavorable as it includes a window 417 and a French door 418 in the left wall and a door 419 in the right wall in an unbalanced configuration.
  • a sofa 421 is disposed at the right wall and extends approximately to the center of the target room 413 and a table 420 is arranged in front of the sofa 421.
  • a television set 416 is arranged at the front wall (e.g., above the higher order soundbar) and in line of sight of the sofa 421.
  • the front left (Lf) channel higher-order loudspeaker 410 and the front right (Rf) channel higher-order loudspeaker 411 are arranged under the left and right corners of the television set 416 and the center (C) higher-order loudspeaker 422 is arranged under the middle of television set 416.
  • the low frequency effects (Sub) channel loudspeaker 423 is disposed in the corner between the front wall and the right wall.
  • the loudspeaker arrangement on the rear wall, including the left surround (Ls) channel higher-order loudspeaker 424 and the right surround (Rs) channel higher-order loudspeaker 412, does not share the same center line as the loudspeaker arrangement on the front wall, including the front left (Lf) channel loudspeaker 410, the front right (Rf) channel loudspeaker 411, and the low frequency effects (Sub) channel loudspeaker 423.
  • An exemplary sweet area 414 may be on the sofa 421 with the table 420 and the television set 416 in front.
  • the loudspeaker setup shown in Figure 4 is not based on a cylindrical or spherical base configuration and employs no regular distribution.
  • sweet areas 414 and 425 may receive direct sound beams from the soundbar to allow for the preset individual acoustic impressions at those sweet areas 414 and 425.
  • the surround impression can be further enhanced. Furthermore, it has been found that the number of (lower-order) loudspeakers can be significantly reduced.
  • sound fields can be approximated that are similar to those achieved with forty-five lower-order loudspeakers surrounding the sweet area. In the exemplary environment shown in Figure 4, a higher-order soundbar with three higher-order loudspeakers, built from twelve lower-order loudspeakers in total, exhibits a better spatial sound impression than a common soundbar with fourteen lower-order loudspeakers in line at comparable dimensions of the two soundbars.
  • a beamformer block 500 or 600 as depicted in Figure 5 or 6 (e.g., applicable as beamformers 241, 242, 243 in Figures 2 and 3 ) may be employed.
  • the beamforming block 500 may further include a modal weighting sub-block 503, a dynamic wave-field manipulation sub-block 505, a regularization sub-block 509 and a matrixing sub-block 507.
  • the modal weighting sub-block 503 is supplied with the input signal 502 [x(n)], which is weighted with modal weighting coefficients, i.e., filter coefficients C0(ω), C1(ω) ... CN(ω), in the modal weighting sub-block 503 to provide a desired beam pattern, i.e., radiation pattern ΨDes(θ, φ), based on the N spherical harmonics Yn,m(θ, φ), to deliver N weighted ambisonic signals 504, also referred to as Cn,m · Yn,m(θ, φ).
  • the weighted ambisonic signals 504 are transformed by the dynamic wave-field manipulation sub-block 505 using N × 1 weighting coefficients, e.g., to rotate the desired beam pattern ΨDes(θ, φ) to a desired position (θDes, φDes).
  • the N modified and weighted ambisonic signals 506 are then input into the regularization sub-block 509, which applies the regularized radial equalizing filter Wn,m(ω) to account for the susceptibility of the higher-order loudspeaker (HOL) playback device, preventing, e.g., a given white-noise-gain (WNG) threshold from being undercut.
  • Output signals 510, Wn,m(ω) · Cn,m · Yn,m(θDes, φDes), of the regularization sub-block 509 are then transformed; e.g., the Q loudspeaker signals 508 may be generated from the N regularized, modified and weighted ambisonic signals 510 by a multiple-input multiple-output sub-block 601 using an N × Q filter matrix as shown in Figure 6.
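The modal-weighting, steering and matrixing chain above can be sketched for the simpler two-dimensional (circular-harmonic) case. The uniform modal weights and the third order used here are illustrative assumptions, not the actual frequency-dependent filters of sub-block 503:

```python
import numpy as np

def modal_weights(order, phi_des, c=None):
    """Combine per-order modal weights C_n with steering to azimuth phi_des
    (the 2-D analogue of weighting and rotating the spherical harmonics)."""
    if c is None:
        c = np.ones(order + 1)           # assumption: uniform modal weighting
    w = [c[0]]                           # omnidirectional term
    for n in range(1, order + 1):
        w += [c[n] * np.cos(n * phi_des), c[n] * np.sin(n * phi_des)]
    return np.array(w)

def beam_pattern(weights, order, phi):
    """Evaluate the steered pattern over look directions phi (combining the
    2N + 1 harmonic terms into a radiation pattern)."""
    basis = [np.ones_like(phi)]
    for n in range(1, order + 1):
        basis += [np.cos(n * phi), np.sin(n * phi)]
    return np.stack(basis).T @ weights

phi = np.linspace(-np.pi, np.pi, 721)
w = modal_weights(3, phi_des=np.pi / 4)   # steer a third-order beam to 45 degrees
p = beam_pattern(w, 3, phi)               # pattern peaks at the steering direction
```

Replacing the uniform weights with tapered per-order coefficients would trade main-lobe width against side-lobe level; the regularization stage (sub-block 509) would additionally limit the white-noise gain per order.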
  • the systems shown in Figures 5 and 6 may be employed to realize two-dimensional or three-dimensional audio using a sound field description such as Higher-Order Ambisonics.
  • Being omnidirectional, the W channel always delivers the same signal, regardless of the listening angle. So that it has more or less the same average energy as the other channels, W is attenuated by w, i.e., by about 3 dB (precisely, divided by the square root of two).
  • the X, Y and Z channels may produce figure-of-eight polar patterns.
  • the output sums end up in a figure-of-eight radiation pattern now pointing in the desired direction, given by the azimuth θ and elevation φ used in the calculation of the weighting values x, y and z, and having an energy content that matches the W component, weighted by w.
  • the B-format components can be combined to derive virtual radiation patterns corresponding to any first-order polar pattern (omnidirectional, cardioid, hypercardioid, figure-of-eight or anything in between) and pointing in any three-dimensional direction.
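The W/X/Y/Z combination described above can be written out directly. This is a sketch of the ideal, frequency-independent first-order case; `encode_bformat` and `virtual_mic` are hypothetical helper names:

```python
import numpy as np

SQRT2 = np.sqrt(2.0)

def encode_bformat(s, az, el=0.0):
    """Encode a mono signal s arriving from (azimuth az, elevation el) into
    B-format; W carries the signal divided by sqrt(2), i.e., about -3 dB."""
    w = s / SQRT2
    x = s * np.cos(az) * np.cos(el)
    y = s * np.sin(az) * np.cos(el)
    z = s * np.sin(el)
    return w, x, y, z

def virtual_mic(b, az, el=0.0, p=0.5):
    """Combine B-format channels into a first-order virtual microphone aimed
    at (az, el): p = 1 omnidirectional, p = 0.5 cardioid, p = 0 figure-of-eight."""
    w, x, y, z = b
    return p * SQRT2 * w + (1 - p) * (x * np.cos(az) * np.cos(el)
                                      + y * np.sin(az) * np.cos(el)
                                      + z * np.sin(el))

b = encode_bformat(1.0, az=0.0)          # unit signal from straight ahead
front = virtual_mic(b, az=0.0, p=0.5)    # cardioid aimed at the source: full level
rear = virtual_mic(b, az=np.pi, p=0.5)   # cardioid aimed away: null
```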
  • the matrixing block 601 may be implemented as a multiple-input multiple-output system that provides an adjustment of the output signals of the higher-order loudspeakers so that the radiation patterns approximate as closely as possible the desired spherical harmonics.
  • Wave-Domain Adaptive Filtering (WDAF) is an efficient spatio-temporal generalization of the likewise known Frequency-Domain Adaptive Filtering (FDAF).
  • in wave-domain adaptive filtering, the directional characteristics of the higher-order loudspeakers are adaptively determined so that the superposition of the individual sound beams in the sweet area(s) approximates the desired sound field.
  • the sound field needs to be measured and quantified. This may be accomplished by way of an array of microphones (microphone array) and a signal processing block able to decode the given sound field; these may, e.g., form a higher-order ambisonic system to determine the sound field in three dimensions or, which may be sufficient in many cases, in two dimensions, which requires fewer microphones.
  • for a two-dimensional sound field, S microphones are required to measure sound fields up to the Mth order, wherein S ≥ 2M + 1. In contrast, for a three-dimensional sound field, S ≥ (M + 1)² microphones are required. Furthermore, in many cases it is sufficient to dispose the microphones (equidistantly) on a circle line.
  • the microphones may be disposed on a rigid or open sphere or cylinder, and may be operated, if needed, in connection with an ambisonic decoder.
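The microphone counts above reduce to a one-line helper (a sketch; it assumes the standard counts of 2M + 1 circular harmonics and (M + 1)² spherical harmonics up to order M):

```python
def min_microphones(order, three_dimensional=False):
    """Minimum number of microphones to measure a sound field up to the given
    order: 2M + 1 on a circle (2-D), (M + 1)**2 on a sphere (3-D)."""
    return (order + 1) ** 2 if three_dimensional else 2 * order + 1
```

For example, a third-order field needs at least 7 microphones on a circle but 16 on a sphere, which is why the two-dimensional measurement is often sufficient and preferred.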
  • the microphone array at sweet spot 414 may be integrated in one of the higher-order loudspeakers (not shown).
  • a microphone array similar to microphone array at sweet spot 414 may be disposed at a sweet spot 425.
  • the microphones or microphone arrays at sweet spots 414 and 425 may be used for locating listeners at the sweet spots 414 and 425.
  • a camera such as camera 130 shown in Figure 1 may not only serve to recognize gestures of the listeners but also to detect the positions of the listeners and to reposition the sound zones by steering the direction of the higher-order loudspeakers.
  • An exemplary optical detector is shown in Figure 7 .
  • a camera 701 with a lens 702 may be disposed at an appropriate distance above (or below) a mirrored hemisphere 703 with the lens 702 pointing to the curved, mirrored surface of the hemisphere 703, and may provide a 360° view 704 in a horizontal plane.
  • a so-called fisheye lens may be used (as lens 702) that also provides a 360° view in a horizontal plane so that the mirrored hemisphere 703 can be omitted.
  • Figure 8 depicts an exemplary sound reproduction method in which an acoustically isolated sound field is generated from a customized audio signal at a position dependent on a sound field position control signal (procedure 801).
  • a listening position signal representing a position of a listener and a listener identification signal representing the identity of the listener is provided (procedure 802).
  • the listening position signal, the listener identification signal and an audio signal are processed to provide the customized audio signal (procedure 803), namely to control, via the sound field position control signal, the position of the sound field dependent on the listening position signal so that the position of the sound field is at the position of the listener (procedure 804), and to process the audio signal according to an audio setting dependent on the identity of the listener to provide the customized audio signal (procedure 805).
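Procedures 801-805 can be sketched as a small control routine. The profile store, the setting fields and all names below are hypothetical illustrations of an "audio setting dependent on the identity of the listener":

```python
from dataclasses import dataclass

@dataclass
class AudioSetting:
    """Hypothetical per-listener audio setting (procedure 805)."""
    volume_db: float = 0.0   # defaults double as the setting for unknown listeners
    bass_db: float = 0.0

# hypothetical stored profiles keyed by listener identity
PROFILES = {"listener_a": AudioSetting(volume_db=-6.0, bass_db=3.0)}

def reproduce(audio_frame, listener_id, listener_position):
    """Select the identified listener's setting, customize the audio and steer
    the sound field to the reported listening position (procedures 803-805)."""
    setting = PROFILES.get(listener_id, AudioSetting())
    customized = {"frame": audio_frame,
                  "volume_db": setting.volume_db,
                  "bass_db": setting.bass_db}
    sound_field_position = listener_position   # the sound field follows the listener
    return customized, sound_field_position
```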
  • using an array of higher-order loudspeakers (e.g., in the form of a higher-order soundbar), each of them having versatile directivity, arbitrary sound fields can be approximated, even in reflective venues such as living rooms where home audio systems are typically installed.
  • This is possible because, due to the use of higher-order loudspeakers, versatile directivities can be created, radiating the sound only in directions where no reflective surfaces exist, or deliberately making use of certain reflections if those turn out to contribute positively to the creation of the desired, enveloping sound field to be approximated.
  • the approximation of the desired sound field at a desired position within the target room (e.g., a certain region at the couch in the living room) can be achieved by using adaptive methods, such as an adaptive multiple-input multiple-output (MIMO) system given, e.g., by the multiple filtered-x least mean squares (multiple-FXLMS) algorithm, which can operate not only in the time or spectral domain, but also in the so-called wave domain (wave-domain adaptive filters, WDAF).
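The FXLMS principle can be illustrated for a single loudspeaker/microphone path (the multiple-FXLMS case stacks such adaptive filters across all paths of the MIMO system, and the wave-domain variant runs the same adaptation on transformed signals). The secondary path `s`, the tap count and the step size below are illustrative assumptions:

```python
import numpy as np

def fxlms(x, d, s, taps=16, mu=0.05):
    """Filtered-x LMS sketch: adapt w so that the sound arriving at the
    microphone, s * (w * x), tracks the desired signal d; s is the
    assumed-known secondary path (loudspeaker to sweet-area microphone).
    Returns the residual error at the microphone."""
    w = np.zeros(taps)
    xbuf = np.zeros(taps)        # recent input samples x(n), x(n-1), ...
    xfbuf = np.zeros(taps)       # recent filtered-x samples
    sbuf = np.zeros(len(s))      # recent loudspeaker-feed samples
    err = np.empty(len(x))
    for n in range(len(x)):
        xbuf = np.roll(xbuf, 1); xbuf[0] = x[n]
        y = w @ xbuf                                   # loudspeaker feed
        sbuf = np.roll(sbuf, 1); sbuf[0] = y
        e = d[n] - s @ sbuf                            # error in the sweet area
        xfbuf = np.roll(xfbuf, 1); xfbuf[0] = s @ xbuf[:len(s)]
        w += mu * e * xfbuf / (xfbuf @ xfbuf + 1e-8)   # normalized FXLMS update
        err[n] = e
    return err
```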
  • the recording device fulfills certain requirements.
  • circular (for 2D) or spherical (for 3D) microphone arrays, equipped with regularly or quasi-regularly distributed microphones on their surface, may be used to record the sound field; the minimum number of microphones has to be chosen according to the desired order in which the sound field is to be recorded or reproduced.
  • if the beamforming filters are calculated using, e.g., a MIMO system, arbitrary microphone arrays having different shapes and microphone distributions can be used as well to measure the sound field, leading to high flexibility in the recording device.
  • the recording device can be integrated in a main block of the complete new acoustic system.
  • it can be used not only for the already mentioned recording task, but also for other purposes, such as enabling speech control of the acoustic system to verbally control, e.g., the volume, the switching of tracks, and so on.
  • the main block to which the microphone array is attached could also be used as a stand-alone device, e.g., as a teleconferencing hub or as a portable music device with the ability to adjust the acoustics depending on the position of the listener relative to the device, which is only possible if a video camera is integrated in the main block as well.
  • Loudspeaker arrangements with adjustable, controllable or steerable directivity characteristics include at least two identical or similar loudspeakers, which may be arranged in one, two or more loudspeaker assemblies, e.g. one loudspeaker assembly with two loudspeakers or two loudspeaker assemblies with one loudspeaker each.
  • the loudspeaker assemblies may be distributed somewhere around the display(s), e.g., in a room.
  • with arrays of higher-order loudspeakers it is possible to create sound fields of the same quality, but using fewer devices as compared with ordinary loudspeakers.
  • An array of higher-order loudspeakers can be used to create an arbitrary sound field in real, e.g., reflective environments.
  • the system shown in Figure 1 can be altered to use, as an alternative to (or in addition to) the camera 130, a microphone array 901 that is positioned, e.g., at the loudspeaker arrangement 105 and is able to detect the acoustic direction of arrival (DOA).
  • the loudspeaker arrangement 105 may have a multiplicity of directional microphones and/or may include a (microphone) beamforming functionality.
  • the smartphones 112, 113 and 114 may have loudspeakers that are able to send non-audible tones 902, 903 and 904 which are picked up by a microphone array 901.
  • the microphone array 901 may be part of a far field microphone system and, in connection with a DOA processing block 905 (which substitutes the wireless transceiver 127 shown in Figure 1), identifies the directions from which the tones originate.
  • the tones may further include information that allows for identifying the listener associated with the particular smartphone. For example, different frequencies of the tones may be associated with different listeners. Instead of smartphones, accordingly adapted remote control blocks may be used as well.
  • the tones may also include information about the specific sound settings of the associated listener, or instructions to alter the corresponding sound settings. If coupled with a speech recognition block 906, microphone array 901 allows for detecting individual listeners or listening positions when a listener talks at one of the listening positions. By utilizing different keywords, e.g., the name of the user, individually adjusted audio is thereby available at any sound zone within the room 131. Speech recognition can further be utilized to alter the corresponding sound settings.
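As a sketch of tone-based identification, each listener's smartphone could be assigned a distinct near-ultrasonic pilot frequency, and the receiver could measure each candidate frequency with the Goertzel algorithm. The frequencies and listener names below are hypothetical:

```python
import numpy as np

FS = 48_000
LISTENER_TONES = {19_000: "listener_A", 20_000: "listener_B"}  # hypothetical mapping

def goertzel_power(x, f, fs=FS):
    """Signal power at a single frequency bin (Goertzel algorithm)."""
    k = round(len(x) * f / fs)                   # nearest DFT bin for frequency f
    coeff = 2.0 * np.cos(2.0 * np.pi * k / len(x))
    s_prev = s_prev2 = 0.0
    for sample in x:
        s = sample + coeff * s_prev - s_prev2
        s_prev2, s_prev = s_prev, s
    return s_prev ** 2 + s_prev2 ** 2 - coeff * s_prev * s_prev2

def identify_listener(x):
    """Return the listener whose pilot tone dominates the microphone signal."""
    powers = {lid: goertzel_power(x, f) for f, lid in LISTENER_TONES.items()}
    return max(powers, key=powers.get)

t = np.arange(4800) / FS                    # 100 ms of microphone signal
mic = np.sin(2 * np.pi * 19_000 * t)        # pilot tone of listener A's phone
```

In practice the tone could additionally be modulated to carry the listener's sound settings or control instructions, as described above.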
  • the far field microphone system shown in Figure 10 further includes an acoustic echo cancellation (AEC) block 1002, a subsequent fixed beamformer (FB) block 1003, a subsequent beam steering (BS) block 1004, a subsequent adaptive blocking filter (ABF) block 1005, a subsequent adaptive interference canceller (AIC) block 1006, and a subsequent adaptive post filter (PF) block 1010.
  • N source signals, filtered by the room impulse responses (RIRs) h1, ..., hM and possibly overlaid by noise, serve as input to the AEC block 1002.
  • Each signal from the fixed beamformer block 1003 is taken from a different room direction and may have a different SNR level.
  • the BS block 1004 delivers an output signal b(n), which represents the signal of the fixed beamformer block 1003 pointing into the room direction with the best/highest current SNR value (referred to as the positive beam), and a signal bn(n), representing the current signal of the fixed beamformer block 1003 with the least/lowest SNR value (referred to as the negative beam).
  • the adaptive blocking filter (ABF) block 1005 calculates an output signal e(n) which ideally solely contains the current noise signal, but no useful signal parts anymore.
  • the expression “adaptive blocking filter” comes from its purpose to block, in an adaptive way, useful signal parts still contained in the signal of the negative beam b n (n).
  • the output signal e(n) enters the AIC block 1006 together with the signal representative of the positive beam, b(n−Δ), optionally delayed by delay (D) line 1008; from a structural perspective, the AIC block 1006 also includes a subtractor block 1009.
  • Based on these two input signals e(n) and b(n−Δ), the AIC block 1006 generates an output signal which, on the one hand, acts as an input signal to a successive adaptive post filter (PF) block 1010 and, on the other hand, is fed back to the AIC block 1006, thereby acting as an error signal for the adaptation process employed by the AIC block 1006.
  • the purpose of this adaptation process is to generate a signal which, if subtracted from the delayed positive beam signal, removes mainly harmonic noise signals from it.
  • the AIC block 1006 also generates time-varying filter coefficients for the adaptive PF block 1010, which is designed to remove mainly statistical noise components from the output signal of subtractor block 1009 and finally generates a total output signal.
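The AIC stage can be sketched as a normalized LMS filter that shapes the noise-only signal e(n) and subtracts it from the delayed positive beam b(n − Δ), with the subtractor output serving both as system output and as adaptation error. Filter length, step size and delay below are illustrative assumptions:

```python
import numpy as np

def aic_nlms(b_pos, noise_ref, taps=16, mu=0.5, delay=8):
    """Adaptive interference canceller sketch: estimate the noise contained in
    the delayed positive beam from the noise reference and subtract it; the
    residual is both the output and the error driving the NLMS adaptation."""
    h = np.zeros(taps)
    buf = np.zeros(taps)
    out = np.empty(len(b_pos))
    b_delayed = np.concatenate([np.zeros(delay), b_pos])[:len(b_pos)]
    for n in range(len(b_pos)):
        buf = np.roll(buf, 1); buf[0] = noise_ref[n]
        y = h @ buf                                # estimated noise in the beam
        e = b_delayed[n] - y                       # subtractor output = error
        h += mu * e * buf / (buf @ buf + 1e-8)     # NLMS update
        out[n] = e
    return out

n = np.arange(5000)
hum = np.sin(0.3 * n)              # harmonic noise picked up by both beams
residual = aic_nlms(hum, hum)      # noise reference equals the interference
```

With a purely harmonic interference the filter converges quickly and the residual decays toward zero, matching the description that mainly harmonic noise is removed at this stage.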
  • a signal flow chart may describe a system, method or software implementing the method, dependent on the type of realization, e.g., as hardware, software or a combination thereof.
  • a block may be implemented as hardware, software or a combination thereof.

Landscapes

  • Health & Medical Sciences (AREA)
  • Otolaryngology (AREA)
  • Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Acoustics & Sound (AREA)
  • Signal Processing (AREA)
  • Stereophonic System (AREA)
  • Circuit For Audible Band Transducer (AREA)

Claims (13)

  1. Sound reproduction system comprising:
    a loudspeaker arrangement (105, 200, 410, 411, 412, 422, 424, 501) configured to generate, from a customized audio signal, an acoustically isolated sound field at a position that depends on a sound field position control signal;
    a listener evaluation block (112, 113, 114) configured to provide a listening position signal representing a position of a listener and a listener identification signal representing the identity of the listener; and
    an audio control block (117, 401) configured to receive and process the listening position signal, the listener identification signal and an audio signal; the audio control block (117, 401) being further configured to control, via the sound field position control signal, the position of the sound field dependent on the listening position signal so that the position of the sound field is at the position of the listener, and to process the audio signal according to an audio setting dependent on the identity of the listener to provide the customized audio signal;
    characterized by
    a microphone arrangement disposed at the listening position, wherein:
    the loudspeaker arrangement is configured to generate a sound beam that sweeps from one side of an area to the other side, the area including the listening position;
    the listener evaluation block (112, 113, 114) is connected wirelessly or by wire to the microphone arrangement;
    the microphone arrangement is configured to pick up the sound beam when it sweeps over the listening position and to provide a corresponding microphone signal; and
    the listener evaluation block (112, 113, 114) is configured to evaluate the microphone signal and a corresponding beam position in order to provide the listening position signal.
  2. System according to claim 1, wherein processing the audio signal according to the audio setting includes at least one of adjusting the balance between spectral components of the audio signal, adjusting the volume of the audio signal, and adjusting the dynamics of the audio signal.
  3. System according to claim 1 or 2, wherein
    the microphone arrangement is further configured to provide a microphone identification signal corresponding to the identity of a specific listener; and
    the listener evaluation block (112, 113, 114) is further configured to identify the specific listener from the microphone identification signal and to generate the corresponding listener identification signal.
  4. System according to any one of claims 1 to 3, further comprising a memory (126) configured to store data representing the identities of a multiplicity of listeners and the corresponding audio settings, wherein the audio control block (117, 401) is further configured to select, based on the listener identification signal, the corresponding audio setting for processing the audio signal.
  5. System according to any one of claims 1 to 4, further comprising a default audio setting and a default sound zone that are used if no known listener is identified.
  6. System according to any one of claims 1 to 5, further comprising a camera (130, 701) connected to the audio control block and directed at an area including the listening position, the audio control block (117, 401) being further configured to recognize gestures of the listener via the camera (130, 701) and to control, according to the recognized gestures, at least one of the processing of the audio signal and the configuration of the sound zone.
  7. System according to claim 1 or 2, wherein the microphone arrangement further comprises a microphone array having a multiplicity of microphones, wherein:
    the microphone array is configured to pick up sound from the listening position and to provide a corresponding microphone signal; and
    the listener evaluation block (112, 113, 114) is connected to the microphone array, the listener evaluation block being configured to evaluate the microphone signal in order to evaluate the direction of the listening position.
  8. Sound reproduction method comprising:
    generating, from a customized audio signal, an acoustically isolated sound field at a position that depends on a sound field position control signal;
    providing a listening position signal representing a listening position and a listener identification signal representing the identity of the listener;
    processing the listening position signal, the listener identification signal and an audio signal;
    controlling, via the sound field position control signal, the position of the sound field dependent on the listening position signal so that the position of the sound field is at the listening position; and
    processing the audio signal according to an audio setting dependent on the identity of the listener to provide the customized audio signal; characterized by
    generating a sound beam sweeping from one side of an area to the other side, the area including the listening position;
    picking up the sound beam at the listening position and, when it sweeps over the listening position, providing a corresponding microphone signal; and
    evaluating the microphone signal and a corresponding beam position to provide the listening position signal.
  9. Method according to claim 8, wherein processing the audio signal according to the audio setting includes at least one of adjusting the balance between spectral components of the audio signal, adjusting the volume of the audio signal, and adjusting the dynamics of the audio signal.
  10. Method according to claim 9, further comprising:
    providing, with the microphone signal, a microphone identification signal corresponding to the identity of a specific listener;
    identifying the specific listener from the microphone identification signal; and generating the corresponding listener identification signal.
  11. Method according to any one of claims 8 to 10, further comprising:
    storing data representing the identities of a multiplicity of listeners and the corresponding audio settings; and
    selecting, based on the listener identification signal, the corresponding audio settings for processing the audio signal.
  12. Method according to any one of claims 8 to 10, further comprising
    picking up sound from the listening position and providing a corresponding microphone signal; and
    evaluating the microphone signal to evaluate the direction of the listening position.
  13. Method according to any one of claims 8 to 12, further comprising:
    recognizing, with a camera directed at an area including the listening position, gestures of the listener; and
    controlling, according to the recognized gestures, at least one of the processing of the audio signal and the configuration of the sound field.
EP16202689.2A 2016-01-04 2016-12-07 Sound reproduction for a multiplicity of listeners Active EP3188505B1 (fr)

Priority Applications (4)

Application Number Priority Date Filing Date Title
JP2016248968A JP6905824B2 (ja) 2016-01-04 2016-12-22 Sound reproduction for a very large number of listeners
KR1020160183270A KR102594086B1 (ko) 2016-01-04 2016-12-30 Sound reproduction for multiple listeners
CN201710003824.8A CN106941645B (zh) 2016-01-04 2017-01-04 System and method for sound reproduction for a large audience
US15/398,139 US10097944B2 (en) 2016-01-04 2017-01-04 Sound reproduction for a multiplicity of listeners

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
EP16150043 2016-01-04
EP16174534.4A EP3188504B1 (fr) 2016-01-04 2016-06-15 Reproduction multimédia pour une pluralité de destinataires
EP16199773 2016-11-21

Publications (2)

Publication Number Publication Date
EP3188505A1 EP3188505A1 (fr) 2017-07-05
EP3188505B1 true EP3188505B1 (fr) 2020-04-01

Family

ID=57517801

Family Applications (1)

Application Number Title Priority Date Filing Date
EP16202689.2A Active EP3188505B1 (fr) Sound reproduction for a multiplicity of listeners

Country Status (1)

Country Link
EP (1) EP3188505B1 (fr)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2022117480A1 (fr) * 2020-12-03 2022-06-09 Method and device for audio pointing using gesture recognition

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20130259238A1 (en) * 2012-04-02 2013-10-03 Qualcomm Incorporated Systems, methods, apparatus, and computer-readable media for gestural manipulation of a sound field

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6741273B1 (en) * 1999-08-04 2004-05-25 Mitsubishi Electric Research Laboratories Inc Video camera controlled surround sound
ES2381765T3 (es) * 2006-03-31 2012-05-31 Koninklijke Philips Electronics N.V. Dispositivo y método para procesar datos
US20090304205A1 (en) * 2008-06-10 2009-12-10 Sony Corporation Of Japan Techniques for personalizing audio levels
KR102012612B1 (ko) * 2013-11-22 2019-08-20 애플 인크. 핸즈프리 빔 패턴 구성

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20130259238A1 (en) * 2012-04-02 2013-10-03 Qualcomm Incorporated Systems, methods, apparatus, and computer-readable media for gestural manipulation of a sound field

Also Published As

Publication number Publication date
EP3188505A1 (fr) 2017-07-05

Similar Documents

Publication Publication Date Title
US10097944B2 (en) Sound reproduction for a multiplicity of listeners
CN109637528B (zh) 使用多个语音命令装置的设备和方法
US20210243522A1 (en) Microphone Array System
AU2018200212B2 (en) Handsfree beam pattern configuration
US9769552B2 (en) Method and apparatus for estimating talker distance
US11304003B2 (en) Loudspeaker array
JP6193468B2 (ja) スピーカアレイを用いた堅牢なクロストークキャンセル
Coleman et al. Personal audio with a planar bright zone
CN112335261A (zh) 图案形成麦克风阵列
Kyriakakis et al. Surrounded by sound
CN108370470A (zh) 具有麦克风阵列系统的会议系统以及会议系统中的语音获取方法
CN114051738A (zh) 可操纵扬声器阵列、系统及其方法
EP3188505B1 (fr) Reproduction sonore pour une multiplicité d'auditeurs
US20200267490A1 (en) Sound wave field generation
Jeong et al. Global active noise control using collocation of noise source and control speakers
Tsakalides Surrounded by Sound-Acquisition and Rendering
Kyriakakis et al. Array signal processing addresses two major aspects of spatial filtering, namely localization of a signal of interest, and adaptation of the spatial response of an array of sensors to achieve steering in a desired direction. The achieved spatial focusing in the direction of interest makes array signal processing

Legal Events

Date Code Title Description
PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: THE APPLICATION HAS BEEN PUBLISHED

AK Designated contracting states

Kind code of ref document: A1

Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR

AX Request for extension of the european patent

Extension state: BA ME

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: REQUEST FOR EXAMINATION WAS MADE

17P Request for examination filed

Effective date: 20180104

RBV Designated contracting states (corrected)

Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: EXAMINATION IS IN PROGRESS

17Q First examination report despatched

Effective date: 20180417

GRAP Despatch of communication of intention to grant a patent

Free format text: ORIGINAL CODE: EPIDOSNIGR1

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: GRANT OF PATENT IS INTENDED

GRAS Grant fee paid

Free format text: ORIGINAL CODE: EPIDOSNIGR3

INTG Intention to grant announced

Effective date: 20200129

GRAA (expected) grant

Free format text: ORIGINAL CODE: 0009210

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: THE PATENT HAS BEEN GRANTED

AK Designated contracting states

Kind code of ref document: B1

Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR

REG Reference to a national code

Ref country code: GB

Ref legal event code: FG4D

REG Reference to a national code

Ref country code: AT

Ref legal event code: REF

Ref document number: 1252907

Country of ref document: AT

Kind code of ref document: T

Effective date: 20200415

Ref country code: CH

Ref legal event code: EP

REG Reference to a national code

Ref country code: DE

Ref legal event code: R096

Ref document number: 602016032933

Country of ref document: DE

REG Reference to a national code

Ref country code: IE

Ref legal event code: FG4D

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: BG

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20200701

REG Reference to a national code

Ref country code: NL

Ref legal event code: MP

Effective date: 20200401

REG Reference to a national code

Ref country code: LT

Ref legal event code: MG4D

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: PT

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20200817

Ref country code: CZ

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20200401

Ref country code: LT

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20200401

Ref country code: SE

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20200401

Ref country code: NL

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20200401

Ref country code: NO

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20200701

Ref country code: GR

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20200702

Ref country code: IS

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20200801

Ref country code: FI

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20200401

REG Reference to a national code

Ref country code: AT

Ref legal event code: MK05

Ref document number: 1252907

Country of ref document: AT

Kind code of ref document: T

Effective date: 20200401

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: HR

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20200401

Ref country code: LV

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20200401

Ref country code: RS

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20200401

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: AL

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20200401

REG Reference to a national code

Ref country code: DE

Ref legal event code: R097

Ref document number: 602016032933

Country of ref document: DE

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: RO

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20200401

Ref country code: IT

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20200401

Ref country code: SM

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20200401

Ref country code: DK

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20200401

Ref country code: AT

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20200401

Ref country code: EE

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20200401

Ref country code: ES

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20200401

PLBE No opposition filed within time limit

Free format text: ORIGINAL CODE: 0009261

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: NO OPPOSITION FILED WITHIN TIME LIMIT

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: SK

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20200401

Ref country code: PL

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20200401

26N No opposition filed

Effective date: 20210112

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: SI

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20200401

REG Reference to a national code

Ref country code: CH

Ref legal event code: PL

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: MC

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20200401

REG Reference to a national code

Ref country code: BE

Ref legal event code: MM

Effective date: 20201231

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: IE

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20201207

Ref country code: FR

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20201231

Ref country code: LU

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20201207

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: CH

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20201231

Ref country code: LI

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20201231

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: TR

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20200401

Ref country code: MT

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20200401

Ref country code: CY

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20200401

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: MK

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20200401

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: BE

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20201231

P01 Opt-out of the competence of the unified patent court (upc) registered

Effective date: 20230526

PGFP Annual fee paid to national office [announced via postgrant information from national office to epo]

Ref country code: GB

Payment date: 20231121

Year of fee payment: 8

PGFP Annual fee paid to national office [announced via postgrant information from national office to epo]

Ref country code: DE

Payment date: 20231121

Year of fee payment: 8