EP3188505B1 - Sound reproduction for a multiplicity of listeners - Google Patents


Info

Publication number
EP3188505B1
Authority
EP
European Patent Office
Prior art keywords
signal
listener
audio
sound
microphone
Prior art date
Legal status
Active
Application number
EP16202689.2A
Other languages
German (de)
French (fr)
Other versions
EP3188505A1 (en)
Inventor
Markus Christoph
Craig Gunther
Matthias Kronlachner
Juergen Zollner
Current Assignee
Harman Becker Automotive Systems GmbH
Original Assignee
Harman Becker Automotive Systems GmbH
Priority date
Filing date
Publication date
Priority claimed from EP16174534.4A external-priority patent/EP3188504B1/en
Application filed by Harman Becker Automotive Systems GmbH
Priority to JP2016248968A (patent JP6905824B2)
Priority to KR1020160183270A (patent KR102594086B1)
Priority to US15/398,139 (patent US10097944B2)
Priority to CN201710003824.8A (patent CN106941645B)
Publication of EP3188505A1
Application granted
Publication of EP3188505B1
Legal status: Active
Anticipated expiration

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04R: LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R 1/00: Details of transducers, loudspeakers or microphones
    • H04R 1/20: Arrangements for obtaining desired frequency or directional characteristics
    • H04R 1/32: Arrangements for obtaining desired frequency or directional characteristics for obtaining desired directional characteristic only
    • H04R 1/40: Arrangements for obtaining desired directional characteristic only by combining a number of identical transducers
    • H04R 1/403: Arrangements for obtaining desired directional characteristic only by combining a number of identical transducers (loudspeakers)
    • H04S: STEREOPHONIC SYSTEMS
    • H04S 7/00: Indicating arrangements; Control arrangements, e.g. balance control
    • H04S 7/30: Control circuits for electronic adaptation of the sound field
    • H04S 7/302: Electronic adaptation of stereophonic sound system to listener position or orientation
    • H04R 2203/00: Details of circuits for transducers, loudspeakers or microphones covered by H04R 3/00 but not provided for in any of its subgroups
    • H04R 2203/12: Beamforming aspects for stereophonic sound reproduction with loudspeaker arrays

Definitions

  • the disclosure relates to sound reproduction systems and methods.
  • WO 2015/076930 A1 discloses evaluating speech signals from a listener to estimate the listener's location. By examining the location, preferred usage settings, and voice commands from listeners, the generated beam patterns are customized to the explicit and implicit preferences of the listeners.
  • US 6 741 273 B1 discloses a system for adjusting the delivery of sound by way of loudspeakers located in an area. A controller adjusts the delivery of the sound according to the relative positions of the loudspeakers and the listener.
  • US 2009/304205 A1 discloses a system for personalizing audio levels which provides different audio volumes to different locations in a room allowing for two or more users to enjoy the same audio content at different volumes.
  • WO 2007/113718 A1 discloses a data processing device with a detection unit adapted to detect individual reproduction modes indicative of a manner of reproducing the data separately for each of a plurality of human users, and a processing unit adapted to process the data to thereby generate reproducible data separately for each of the plurality of human users in accordance with the detected individual reproduction modes. It is desired to reliably detect a listener's position at any desired time including occasional or continuous detection with little complexity.
  • a sound reproduction system includes a loudspeaker arrangement configured to generate from a customized audio signal an acoustically isolated sound field at a position dependent on a sound field position control signal, and a listener evaluation block configured to provide a listening position signal representing a position of a listener and a listener identification signal representing the identity of the listener.
  • the system further includes an audio control block configured to receive and process the listening position signal, the listener identification signal and an audio signal; the audio control block being further configured to control via the sound field position control signal the position of the sound field dependent on the listening position signal so that the position of the sound field is at the position of the listener, and to process the audio signal according to an audio setting dependent on the identity of the listener to provide the customized audio signal.
  • the system further includes a microphone arrangement that is disposed at the listening position.
  • the loudspeaker arrangement is configured to generate a sound beam that sweeps from one side of an area to the other side, the area including the listening position.
  • the listener evaluation block is wirelessly connected or connected by wire to the microphone arrangement.
  • the microphone arrangement is configured to pick up the sound beam when sweeping the listening position and to provide a corresponding microphone signal.
  • the listener evaluation block is configured to evaluate the microphone signal and a corresponding beam position to provide the listening position signal.
  • a sound reproduction method includes generating from a customized audio signal an acoustically isolated sound field at a position dependent on a sound field position control signal, providing a listening position signal representing a listening position and a listener identification signal representing the identity of the listener, and processing the listening position signal, the listener identification signal and an audio signal.
  • the method further includes controlling via the sound field position control signal the position of the sound field dependent on the listening position signal so that the position of the sound field is at the listening position, and processing the audio signal according to an audio setting dependent on the identity of the listener to provide the customized audio signal.
  • the method further includes generating a sound beam sweeping from one side of an area to the other side, the area including the listening position, picking up the sound beam at the listening position and, when sweeping the listening position, providing a corresponding microphone signal, and evaluating the microphone signal and a corresponding beam position to provide the listening position signal.
  • an exemplary sound reproduction system 100 uses individually customized audio beamforming to perform personalized sound control functions such as, for example, one or more of equalization adjustment, volume adjustment, dynamic range compression adjustment etc., that adjust the loudness for individual listeners located at four listening positions 101-104.
  • Those adjustments, in the following also referred to as audio settings, are "remembered" for future reference, so that the next time the system can locate the same listener, e.g., in a room, it automatically engages his/her custom sound field, e.g., a sound zone which sends the individually adjusted audio only to him/her. This is achieved without the use of headphones or earbuds.
  • the exemplary system shown in Figure 1 allows for individual loudness adjustments at the four listening positions 101-104 and includes a loudspeaker arrangement 105 that generates from customized audio signals 106 three acoustically isolated sound fields, e.g., sound zones 107, 108 and 109 at listening positions 101, 103 and 104, respectively, which may be sound beams directed from the loudspeaker arrangement 105 to listening positions 101, 103 and 104, and a general sound zone 110 that includes at least listening position 102.
  • the position of the sound zones 107-110 may be steered by way of a sound zone position control signal 111.
  • the sound reproduction system 100 may include various blocks for performing certain functions, wherein blocks may be hardware, software or a combination thereof.
  • listener evaluation blocks 112, 113 and 114, one per listener with dedicated sound adjustment, provide wireless signals 115 that include listening position signals representing the position of each listener with dedicated sound adjustment and a listener identification signal identifying each listener designated for dedicated sound adjustment.
  • the sound reproduction system 100 requires information that allows for determining where a particular listener is seated, e.g., within a room. This may be done by using a tone that sweeps from one side of the room to the other and a microphone close to the individual listeners to identify when the sweep passes by them.
  • Such microphones are wirelessly connected or connected by wire to other system components and may be, for example, wired stand-alone microphones (not shown in Figure 1 ) disposed on or in the vicinity of the listeners, or microphones integrated in smartphones with a wireless Wi-Fi or Bluetooth connection.
  • While a particular tone, such as an inaudible tone with a frequency above 16 kHz, sweeps the room as a separate directed sound beam 116, at least one microphone detects when the maximum volume is obtained at the microphone's position; at that point in time the particular listener can be located.
  • Several listeners can be simultaneously located as long as they have their own clearly recognizable and assignable microphones.
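The sweep-based localization described above can be sketched in a few lines. This is a minimal simulation, not the patent's implementation: the Gaussian microphone envelope, the 1° sweep grid and all names are illustrative assumptions.

```python
import numpy as np

def locate_listener(mic_envelope, beam_angles):
    """Return the beam angle at which the microphone picked up the
    sweeping beam at maximum level, i.e. the listener's direction."""
    idx = int(np.argmax(mic_envelope))
    return beam_angles[idx]

# Simulated sweep: the beam moves from -60 deg to +60 deg in 1-degree steps.
angles = np.arange(-60.0, 61.0, 1.0)
# Hypothetical mic response: loudest when the beam passes the listener at +20 deg.
listener_angle = 20.0
envelope = np.exp(-0.5 * ((angles - listener_angle) / 5.0) ** 2)

print(locate_listener(envelope, angles))  # -> 20.0
```

With one clearly assignable microphone (or smartphone) per listener, the same envelope evaluation can run once per microphone signal to locate several listeners in a single sweep.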
  • the listener evaluation blocks 112, 113 and 114 are provided by smartphones with built-in microphones in connection with software applications (apps) that may evaluate signals from the built-in microphones, perform the listener identifications and establish the wireless connections.
  • a remote control with built-in microphone may provide listener identification and control of the individual adjustment of the audio in the individual sound zone.
  • An indoor positioning system is a system that locates objects or people inside a building using radio waves, magnetic fields, acoustic signals, or other sensory information collected by mobile devices. Exemplary techniques include camera-based detection, Bluetooth location services, and global positioning system (GPS) location services. Indoor positioning systems may use different technologies, including distance measurement to nearby anchor nodes (i.e., nodes with known positions, e.g., WiFi access points), magnetic positioning, and dead reckoning. They either actively locate mobile devices and tags or provide ambient location or environmental context for devices to be sensed, additionally processing information from other systems to cope with physical ambiguities and to enable error compensation.
  • an exemplary audio control block 117 is designed to receive and process the wireless signals 115, particularly the listening position signal and the listener identification signal contained therein, and an audio signal 118 from an audio source 119.
  • the audio control block 117 may then control via the sound field position control signal 111 the position of the sound field dependent on the listening position signal, so that the position of the sound field is at the position of the listener, and process the audio signal 118 according to the adjusted audio settings, each dependent on the identity of the corresponding listener, to provide the customized audio signals 106.
  • one or more fixed sound beams, e.g., related to fixed listening positions, may be employed.
  • the identity of the listener may correspond to the listening position and can be derived therefrom, or may be determined in any other suitable way.
  • Processing the audio signal 118 according to the individual audio settings may include at least one of adjusting the balance between spectral components of the audio signal with a controllable equalizer 120, adjusting the volume of the audio signal with a controllable volume control 121 and adjusting the dynamics of the audio signal 118 with a controllable dynamic range compressor 122.
  • Equalization is the process of adjusting the balance between frequency components within an electronic signal.
  • the term "equalization" (EQ) has come to include the adjustment of frequency responses for practical or aesthetic reasons, often resulting in a net response that is not truly equalized.
  • Volume control (VOL) is used for adjusting the sound level to a predetermined level.
  • Dynamic range compression or simply compression is a signal processing operation that reduces the volume of loud sounds and/or amplifies quiet sounds by narrowing or compressing an audio signal's dynamic range. For example, audio compression may reduce loud sounds that are above a certain threshold while leaving quiet sounds unaffected.
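The compression behaviour described above can be sketched as a static, sample-wise gain computer. This is an illustrative model, not the patent's compressor 122; the threshold and ratio values are assumptions.

```python
import numpy as np

def compress(x, threshold_db=-20.0, ratio=4.0):
    """Static dynamic range compression: content above the threshold is
    reduced by the given ratio; quieter content passes unchanged."""
    eps = 1e-12                                  # avoid log10(0)
    level_db = 20.0 * np.log10(np.abs(x) + eps)  # instantaneous level
    over = np.maximum(level_db - threshold_db, 0.0)
    gain_db = -over * (1.0 - 1.0 / ratio)        # 20 dB over -> 5 dB over
    return x * 10.0 ** (gain_db / 20.0)

y = compress(np.array([1.0, 0.05]))
# The loud sample (0 dBFS) is pulled down by 15 dB; the quiet one is untouched.
```

A real compressor would add attack/release smoothing of the gain; the static curve is enough to show how loud sounds are reduced while quiet sounds stay unaffected.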
  • Customized audio signals 106 which are each the accordingly processed audio signal 118, are supplied to a beamforming (BF) processor 123 that, in turn, supplies beamformed signals 124 to the loudspeaker arrangement 105 to generate the beams for sound zones 107-110 and the sweeping beam that is sound field 116.
  • the exemplary audio control block 117 may further include a control block (CU) 125 that is connected to a memory (M) 126, a wireless transceiver (WT) 127 and a beam sweep tone generator (BS) 128.
  • the memory 126 stores data representing identities of a multiplicity of listeners and the corresponding audio settings and, optionally, beam settings such as the beam position, beam width etc.
  • the control block 125 selects from memory 126, based on the listener identification signals, the corresponding audio settings for processing the audio signal 118 and steers, based on the listening position signals, the direction of the corresponding sound beams.
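The lookup performed by control block 125 against memory 126 can be sketched as a keyed profile store. The dictionary layout, listener IDs and setting names below are illustrative assumptions, not the patent's data format.

```python
# Hypothetical in-memory profile store mirroring memory 126: audio settings
# and optional beam settings keyed by listener identity.
profiles = {
    "listener_a": {
        "eq": {"bass_db": 3.0, "treble_db": -1.0},
        "volume_db": -6.0,
        "drc_ratio": 2.0,
        "beam": {"width_deg": 20.0},
    },
}

def settings_for(listener_id, default=None):
    """Select the stored audio settings for an identified listener,
    falling back to a default profile for unknown listeners."""
    return profiles.get(listener_id, default or {"volume_db": 0.0})

print(settings_for("listener_a")["volume_db"])  # -> -6.0
```

On identification, the selected settings would parameterize the equalizer 120, volume control 121 and compressor 122, while the stored beam settings steer the listener's sound beam.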
  • the listening position signals and the listener identification signals are generated by the wireless transceiver 127 from the wireless signals 115.
  • the beam sweep tone generator 128 provides the signal that is used for the sweeping beam 116 to the beamforming processor 123, and is also controlled by the control block 125.
  • Audio control block 117 may further include a video processor (VP) 129 that is connected to a camera 130 and that allows for recognizing gestures of the listeners in connection with the camera 130 and for controlling, according to the recognized gestures, at least one of processing the audio signal 118 and configuring the respective sound zone, e.g., the shape or width of the corresponding sound beam.
  • the camera 130 is directed to an area that may include the positions of the listeners, i.e., the listening positions 101-104. From this interface the listener can use gestures to widen or narrow the sound beam and/or to move the sound beam to the left or right and/or to dynamically track movements of the listeners in the individual zones. Selecting the particular sound beam would allow the user to adjust the sound setting parameters of that sound beam.
  • This interface may also allow a more experienced listener to configure the sound beam and related sound settings for another, less experienced listener who is not familiar with the system. Additionally, a listener may be able to increase the volume within his/her "sound beam" to cover up other ambient noise, or reduce the volume of his/her "sound beam" so that he/she can have a conversation with someone sitting next to him/her, listen to voice mail on the smartphone, etc.
  • the exemplary sound reproduction system may be disposed in a room 131. If a particular listener leaves the room 131, the system may disable the corresponding dedicated sound beam (e.g., one of sound beams 107-109), and the ordinary sound field (e.g., provided by sound beam 110) replaces this listener's beamforming area, so the next listener who occupies that particular seat hears what is heard throughout the rest of the room 131.
  • the ordinary sound field may also be used when no sound zones are desired.
  • When the listener returns, the corresponding dedicated sound beam can be re-enabled. Listeners have the option to adjust the configuration parameters while enjoying a program, and to discard those parameters or save them as their new personal defaults.
  • the listener's configuration information may be stored by the system and identified, e.g., by the listener's user name or face recognition data if a camera is employed. For example, the next time this listener watches a movie on a screen 132 associated with the loudspeaker arrangement 105 he/she can select his/her configuration and restore the associated customized sound beam immediately to now point at his/her current seating location.
  • the system may identify the listener when he/she enters the room 131, e.g., via an indoor positioning system (IPS) and smartphone proximity, and load the customized configuration automatically.
  • the sound fields may be generated by way of beamforming e.g., the sound beams 107-110 and 116.
  • Beamforming or spatial filtering is a signal processing technique used in loudspeaker or microphone arrays for directional signal transmission or reception. This is achieved by combining elements in a phased array in such a way that signals at particular angles experience constructive interference while others experience destructive interference.
  • the improvement compared with omnidirectional reception/transmission is known as the directivity of the array.
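The phased-array principle above can be illustrated with the simplest beamformer, delay-and-sum steering of a uniform linear array. This is a generic textbook sketch, not the patent's beamforming processor 123; array geometry and names are assumptions.

```python
import numpy as np

def steering_delays(num_elements, spacing_m, angle_deg, c=343.0):
    """Per-element delays (seconds) that steer a uniform linear array
    toward angle_deg (0 deg = broadside). Applying these delays makes the
    element signals add constructively in the steered direction and
    (partially) destructively elsewhere."""
    # Element positions centred around the array midpoint.
    positions = (np.arange(num_elements) - (num_elements - 1) / 2) * spacing_m
    delays = positions * np.sin(np.radians(angle_deg)) / c
    return delays - delays.min()  # shift so all delays are non-negative

# Four loudspeakers, 10 cm apart, beam steered 30 deg off broadside.
d = steering_delays(4, 0.1, 30.0)
```

Each loudspeaker feed would then be the audio signal delayed by its entry of `d`; real systems add per-element gain shading and frequency-dependent filtering on top.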
  • Sound fields may also be realized using a sound field description with a technique called higher-order Ambisonics.
  • Ambisonics is a full-sphere surround sound technique which may cover, in addition to the horizontal plane, sound sources above and below the listener. Unlike other multichannel surround formats, its transmission channels do not carry loudspeaker signals. Instead, they contain a loudspeaker-independent representation of a sound field, which is then decoded to the listener's loudspeaker setup. This offers the listener a considerable degree of flexibility as to the layout and number of loudspeakers used for playback. Ambisonics can be understood as a three-dimensional extension of mid/side (M/S) stereo, adding different additional channels for height and depth.
  • In first-order Ambisonics, the resulting signal set is called B-format.
  • the spatial resolution of first-order Ambisonics is quite low. In practice, this translates to slightly blurry sources and to a comparably small usable listening area (also referred to as sweet spot or sweet area).
  • the resolution can be increased and the desired sound field (also referred to as sound zone) enlarged by adding groups of more selective directional components to the B-format.
  • these no longer correspond to conventional microphone polar patterns, but look like, e.g., clover leaves.
  • the resulting signal set is then called second-order, third-order, or collectively, higher-order Ambisonics (HOA).
  • FIGS 2 and 3 illustrate a sound reproduction system 200 which includes three (or, if appropriate, only two) closely spaced steerable (higher-order) loudspeaker assemblies 201, 202, 203, here arranged, for example, in a horizontal linear array (which is referred to herein as higher-order soundbar). Loudspeaker assemblies with omnidirectional directivity characteristics, dipole directivity characteristics and/or any higher order polar responses are herein referred to also as higher-order loudspeakers. Each higher-order loudspeaker 201, 202, 203 has adjustable, controllable or steerable directivity characteristics (polar responses) as outlined further below.
  • Each higher-order loudspeaker 201, 202, 203 may include a horizontal circular array of lower-order loudspeakers (e.g., omni-directional loudspeakers).
  • the circular arrays may each include, e.g., four lower-order loudspeakers 211 to 214, 221 to 224, 231 to 234 (such as common loudspeakers and, thus, also referred to simply as loudspeakers), each being directed in one of four perpendicular directions in a radial plane in this example.
  • the array of higher-order loudspeakers 201, 202, 203 may be disposed on an optional base plate 204 and may have an optional top plate 301 on top (e.g., to carry a flat screen television set).
  • instead of four lower-order loudspeakers only three lower-order loudspeakers per higher-order loudspeaker assembly can be employed to create a two-dimensional higher-order loudspeaker of the first order using Ambisonics technology.
  • Alternative use of the multiple-input multiple-output technology instead of the Ambisonics technology allows for creating a two-dimensional higher-order loudspeaker of the first order even with only two lower-order loudspeakers.
  • Other options include the creation of three-dimensional higher-order loudspeakers with four lower-order loudspeakers that are regularly distributed on a sphere (e.g., mounted at the centers of the four faces of a tetrahedron, the first of the five Platonic solids) using the Ambisonics technology, or with four lower-order loudspeakers regularly distributed on a sphere using the multiple-input multiple-output technology.
  • the higher-order loudspeaker assemblies may be arranged other than in a straight line, e.g., on an arbitrary curve in a logarithmically changing distance from each other or in a completely arbitrary, three-dimensional arrangement in a room.
  • the four lower-order loudspeakers 211 to 214, 221 to 224, 231 to 234 may be substantially the same size and have a peripheral front surface, and an enclosure having a hollow, cylindrical body and end closures.
  • the cylindrical body and end closures may be made of material that is impervious to air.
  • the cylindrical body may include openings therein.
  • the openings may be sized and shaped to correspond with the peripheral front surfaces of the lower-order loudspeakers 211 to 214, 221 to 224, 231 to 234, and have central axes.
  • the central axes of the openings may be contained in one radial plane, and the angles between adjacent axes may be identical.
  • the lower-order loudspeakers 211 to 214, 221 to 224, and 231 to 234 may be disposed in the openings and hermetically secured to the cylindrical body. However, additional loudspeakers may be disposed in more than one such radial plane, e.g., in one or more additional planes above and/or below the radial plane described above.
  • the lower-order loudspeakers 211 to 214, 221 to 224, 231 to 234 may each be operated in a separate, acoustically closed volume 215 to 218, 225 to 228, 235 to 238 in order to reduce or even prevent any acoustic interactions between the lower-order loudspeakers of a particular higher-order loudspeaker assembly.
  • the lower-order loudspeakers 211 to 214, 221 to 224, 231 to 234 may each be arranged in a dent, hole, recess or the like. Additionally or alternatively, a wave guiding structure such as but not limited to a horn, an inverse horn, an acoustic lens etc. may be arranged in front of the lower-order loudspeakers 211 to 214, 221 to 224, 231 to 234.
  • a control block 240 receives, e.g., three ambisonic signals 244, 245, 246 to process the ambisonic signals 244, 245, 246 in accordance with steering information 247, and to drive and steer the higher-order loudspeakers 201, 202, 203 based on the ambisonic signals 244, 245, 246 so that at least one acoustic sound field is generated at least at one position that is dependent on the steering information.
  • the control block 240 comprises beamformer blocks 241, 242, 243 that drive the lower-order loudspeakers 211 to 214, 221 to 224, 231 to 234. Examples of beamformer blocks are described further below.
  • Figure 4 depicts possibilities of how to use a horizontal linear array of high-order loudspeakers (referred to herein also as horizontal high-order soundbar or just high-order soundbar) in order to realize virtual sound sources in home entertainment.
  • a linear array may be disposed under a television (TV) set for reproducing e.g. the front channels of the commonly used layout in home cinema, the 5.1 surround sound.
  • the front channels of a 5.1 sound system include a front left (Lf) channel, a front right (Rf) channel and a center (C) channel.
  • Arranging a single high-order loudspeaker underneath the TV set instead of the horizontal high-order soundbar would mean that the C channel could be directed to the front of the TV set and the Lf and Rf channels to its sides, so that the Lf and Rf channels would not be transferred directly to a listener sitting (at the sweet spot or sweet area) in front of the TV set but only indirectly via the side walls, constituting a transfer path which depends on numerous unknown parameters and, thus, can hardly be controlled.
  • a high-order soundbar with (at least) two high-order loudspeakers that are arranged in a horizontal line allows for transferring front channels, e.g., the Lf and Rf channels, directly to the sweet area, i.e., the area where the listener should be.
  • a center channel, e.g., the C channel, may be reproduced at the sweet area by way of two high-order loudspeakers.
  • a third high-order loudspeaker disposed between the two high-order loudspeakers, may be used to separately direct the Lf and Rf channels and the C channel to the sweet area. Since with three high-order loudspeakers each channel is reproduced by a separate block, the spatial sound impression of a listener at the sweet area can be further improved.
  • With each additional high-order loudspeaker added to the high-order soundbar, a more diffuse sound impression can be realized, and further channels such as, e.g., effect channels may be radiated from the rear side of the high-order soundbar, which is in the present example from the rear side of the TV set, e.g., to the rear wall where the sound provided by the effect channels is diffused.
  • higher-order soundbars provide more options for the positioning of the directional sound sources, e.g., on the side and rear, so that in a common listening environment such as a living room, a directivity characteristic that is almost independent from the spatial direction can be achieved with higher-order soundbars.
  • a common soundbar having fourteen lower-order loudspeakers equidistantly distributed inline over a distance of 70 cm can only generate virtual sound sources in an area of at most ±90° from the front direction, while higher-order soundbars allow for virtual sound sources in an area of ±180°.
  • Figure 4 illustrates an exemplary set-up with a higher-order soundbar including three higher-order loudspeakers 410, 411, 422.
  • An audio control block 401 that receives one or more audio signals 402 and that includes a control block such as control block 240 shown in Figure 2 drives the three higher-order loudspeakers 410, 411, 422 in a target room 413, e.g., a common living room.
  • At a listening position (sweet spot, sweet area) the sound field of at least one desired virtual source can then be generated.
  • A higher-order loudspeaker 424 for a left surround (Ls) channel, a lower-order subwoofer 423 for the low frequency effects (Sub) channel, and a higher-order loudspeaker 412 for a right surround (Rs) channel are also arranged in the target room 413.
  • the target room 413 is acoustically very unfavorable as it includes a window 417 and a French door 418 in the left wall and a door 419 in the right wall in an unbalanced configuration.
  • a sofa 421 is disposed at the right wall and extends approximately to the center of the target room 413 and a table 420 is arranged in front of the sofa 421.
  • a television set 416 is arranged at the front wall (e.g., above the higher order soundbar) and in line of sight of the sofa 421.
  • the front left (Lf) channel higher-order loudspeaker 410 and the front right (Rf) channel higher-order loudspeaker 411 are arranged under the left and right corners of the television set 416 and the center (C) higher-order loudspeaker 422 is arranged under the middle of television set 416.
  • the low frequency effects (Sub) channel loudspeaker 423 is disposed in the corner between the front wall and the right wall.
  • the loudspeaker arrangement on the rear wall, including the left surround (Ls) channel higher-order loudspeaker 424 and the right surround (Rs) channel higher-order loudspeaker 412, does not share the same center line as the loudspeaker arrangement on the front wall, including the front left (Lf) channel loudspeaker 410, the front right (Rf) channel loudspeaker 411, and the low frequency effects (Sub) channel loudspeaker 423.
  • An exemplary sweet area 414 may be on the sofa 421 with the table 420 and the television set 416 in front.
  • the loudspeaker setup shown in Figure 4 is not based on a cylindrical or spherical base configuration and employs no regular distribution.
  • sweet areas 414 and 425 may receive direct sound beams from the soundbar to allow for the preset individual acoustic impressions at those sweet areas 414 and 425.
  • the surround impression can be further enhanced. Furthermore, it has been found that the number of (lower-order) loudspeakers can be significantly reduced.
  • With a higher-order soundbar, sound fields can be approximated that are similar to those achieved with forty-five lower-order loudspeakers surrounding the sweet area. In the exemplary environment shown in Figure 4, a higher-order soundbar with three higher-order loudspeakers, built from twelve lower-order loudspeakers in total, exhibits a better spatial sound impression than the common soundbar with fourteen lower-order loudspeakers in line, at comparable dimensions of the two soundbars.
  • a beamformer block 500 or 600 as depicted in Figure 5 or 6 (e.g., applicable as beamformers 241, 242, 243 in Figures 2 and 3 ) may be employed.
  • the beamforming block 500 may further include a modal weighting sub-block 503, a dynamic wave-field manipulation sub-block 505, a regularization sub-block 509 and a matrixing sub-block 507.
  • the modal weighting sub-block 503 is supplied with the input signal 502 [x(n)], which is weighted with modal weighting coefficients, i.e., filter coefficients C0(ω), C1(ω) ... CN(ω), in the modal weighting sub-block 503 to provide a desired beam pattern, i.e., radiation pattern ΨDes(Θ, Φ), based on the N spherical harmonics Yn,m(Θ, Φ), delivering N weighted ambisonic signals 504, also referred to as Cn,m·Yn,m(Θ, Φ).
  • the weighted ambisonic signals 504 are transformed by the dynamic wave-field manipulation sub-block 505 using N×1 weighting coefficients, e.g., to rotate the desired beam pattern ΨDes(Θ, Φ) to a desired position ΘDes, ΦDes.
  • the N modified and weighted ambisonic signals 506 are then input into the regularization sub-block 509, which includes the regularized radial equalizing filter Wn,m(ω) that accounts for the susceptibility of the higher-order loudspeaker (HOL) playback device, preventing, e.g., a given white noise gain (WNG) threshold from being undercut.
  • Output signals 510, Wn,m(ω)·Cn,m·Yn,m(ΘDes, ΦDes), of the regularization sub-block 509 are then transformed, e.g., by the matrixing sub-block 507, into Q loudspeaker signals 508.
  • the Q loudspeaker signals 508 may alternatively be generated from the N regularized, modified and weighted ambisonic signals 510 by a multiple-input multiple-output sub-block 601 using an N×Q filter matrix as shown in Figure 6.
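The modal weighting and rotation steps above can be sketched for the two-dimensional (circular-harmonic) case. This is a minimal illustration, not the patent's implementation: the modal weights are assumed uniform (a rectangular modal window) and the regularization filter Wn,m(ω) is omitted.

```python
import cmath
import math

def steered_pattern(order, phi_des, phi):
    """Magnitude of a 2D (circular-harmonic) modal beam pattern of the
    given order, steered to phi_des and evaluated at angle phi."""
    total = 0.0 + 0.0j
    for n in range(-order, order + 1):
        c_n = 1.0                                # modal weighting coefficient (assumed uniform)
        rotation = cmath.exp(-1j * n * phi_des)  # rotate the pattern to the desired position
        total += c_n * rotation * cmath.exp(1j * n * phi)
    return abs(total) / (2 * order + 1)          # normalize so the main lobe peaks at 1
```

Evaluating the pattern at the steering angle yields the main-lobe maximum of 1; away from it, the circular harmonics interfere destructively.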
  • the systems shown in Figures 5 and 6 may be employed to realize two-dimensional or three-dimensional audio using a sound field description such as Higher-Order Ambisonics.
  • Being omnidirectional, the W channel always delivers the same signal, regardless of the listening angle. In order that it has more or less the same average energy as the other channels, W is attenuated by w, i.e., by about 3 dB (precisely, divided by the square root of two).
  • X, Y and Z may produce figure-of-eight polar patterns.
  • the output sums end up in a figure-of-eight radiation pattern pointing in the desired direction, given by the azimuth Θ and elevation Φ used in the calculation of the weighting values x, y and z, and having an energy content comparable to that of the W component, weighted by w.
  • the B-format components can be combined to derive virtual radiation patterns that match any first-order polar pattern (omnidirectional, cardioid, hypercardioid, figure-of-eight or anything in between) and point in any three-dimensional direction.
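The combination of B-format components into a steerable first-order virtual pattern can be sketched as follows. The function name `virtual_mic` and the `p` parameterization (p = 1 omni, p = 0.5 cardioid, p = 0 figure-of-eight) follow one common convention and are illustrative, not fixed by the text.

```python
import math

def virtual_mic(w, x, y, z, azimuth, elevation, p):
    """Combine B-format samples into a first-order virtual microphone
    pointing at (azimuth, elevation); p blends omni and figure-of-eight."""
    dx = math.cos(azimuth) * math.cos(elevation)  # look-direction cosines
    dy = math.sin(azimuth) * math.cos(elevation)
    dz = math.sin(elevation)
    # sqrt(2) undoes the ~3 dB attenuation applied to W at encoding
    return p * math.sqrt(2.0) * w + (1.0 - p) * (x * dx + y * dy + z * dz)

# A unit plane wave from horizontal azimuth a encodes (in this convention) as:
# W = 1/sqrt(2), X = cos(a), Y = sin(a), Z = 0.
```

For a source at azimuth 0, a figure-of-eight (p = 0) looking at azimuth 0 returns full level, while the same pattern rotated to azimuth 90° rejects it.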
  • the matrixing block 601 may be implemented as a multiple-input multiple-output system that provides an adjustment of the output signals of the higher-order loudspeakers so that the radiation patterns approximate as closely as possible the desired spherical harmonics.
  • WDAF is a known, efficient spatio-temporal generalization of the likewise known frequency-domain adaptive filtering (FDAF).
  • with wave-domain adaptive filtering, the directional characteristics of the higher-order loudspeakers are adaptively determined so that the superposition of the individual sound beams in the sweet area(s) approximates the desired sound field.
  • the sound field needs to be measured and quantified. This may be accomplished by way of an array of microphones (microphone array) and a signal processing block able to decode the given sound field, which, e.g., form a higher-order Ambisonics system that determines the sound field in three dimensions or, which may be sufficient in many cases, in two dimensions, which requires fewer microphones.
  • For a two-dimensional sound field, S microphones are required to measure sound fields up to the Mth order, wherein S ≥ 2M + 1. In contrast, for a three-dimensional sound field, S ≥ (M + 1)² microphones are required. Furthermore, in many cases it is sufficient to dispose the microphones (e.g., equidistantly) on a circle.
  • the microphones may be disposed on a rigid or open sphere or cylinder, and may be operated, if needed, in connection with an ambisonic decoder.
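The microphone counts stated above can be wrapped in a small helper. The 3D figure used here is the standard Ambisonics count of (M + 1)² components; the helper and its name are illustrative.

```python
def min_microphones(order, three_dimensional=False):
    """Minimum number of microphones to capture a sound field up to the
    given order: 2M + 1 on a circle in 2D, (M + 1)**2 on a sphere in 3D
    (the standard Ambisonics channel counts)."""
    if three_dimensional:
        return (order + 1) ** 2
    return 2 * order + 1
```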
  • the microphone array at sweet spot 414 may be integrated in one of the higher-order loudspeakers (not shown).
  • a microphone array similar to the microphone array at sweet spot 414 may be disposed at sweet spot 425.
  • the microphones or microphone arrays at sweet spots 414 and 425 may be used for locating listeners at the sweet spots 414 and 425.
  • a camera such as camera 130 shown in Figure 1 may not only serve to recognize gestures of the listeners but also to detect the positions of the listeners and to reposition the sound zones by steering the directions of the higher-order loudspeakers.
  • An exemplary optical detector is shown in Figure 7 .
  • a camera 701 with a lens 702 may be disposed at an appropriate distance above (or below) a mirrored hemisphere 703 with the lens 702 pointing to the curved, mirrored surface of the hemisphere 703, and may provide a 360° view 704 in a horizontal plane.
  • a so-called fisheye lens may be used (as lens 702) that also provides a 360° view in a horizontal plane so that the mirrored hemisphere 703 can be omitted.
  • Figure 8 depicts an exemplary sound reproduction method in which an acoustically isolated sound field is generated from a customized audio signal at a position dependent on a sound field position control signal (procedure 801).
  • a listening position signal representing a position of a listener and a listener identification signal representing the identity of the listener are provided (procedure 802).
  • the listening position signal, the listener identification signal and an audio signal are processed to provide the customized audio signal (procedure 803). The position of the sound field is controlled via the sound field position control signal dependent on the listening position signal so that the position of the sound field is at the position of the listener (procedure 804), and the audio signal is processed according to an audio setting dependent on the identity of the listener to provide the customized audio signal (procedure 805).
  • with an array of higher-order loudspeakers, e.g., in the form of a higher-order soundbar, each of them having versatile directivity, arbitrary sound fields can be approximated, even in reflective venues such as living rooms where home audio systems are typically installed.
  • This is possible because, due to the use of higher-order loudspeakers, versatile directivities can be created, radiating the sound only in directions where no reflective surfaces exist, or deliberately making use of certain reflections if those turn out to contribute positively to the creation of a desired, enveloping sound field.
  • the approximation of the desired sound field at a desired position within the target room, e.g., a certain region at the couch in the living room, can be achieved by using adaptive methods, such as an adaptive multiple-input multiple-output (MIMO) system, given, e.g., by the multiple filtered-input least mean squares (multiple-FXLMS) algorithm, which could operate not just in the time or spectral domain, but also in the so-called wave domain.
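As a rough illustration of this adaptive approach, a single-channel filtered-x LMS (FXLMS) sketch is shown below; the patent's multiple-FXLMS is the MIMO generalization of this. The function name is hypothetical, and the secondary path `s` (from adaptive output to the measurement point) is assumed known.

```python
import math

def fxlms(x, d, s, n_taps, mu):
    """Single-channel filtered-x LMS sketch: adapt FIR weights w so that
    the adaptive filter output, after passing through the secondary
    path s, tracks the desired signal d."""
    w = [0.0] * n_taps
    y = [0.0] * len(x)          # adaptive filter output history
    e = [0.0] * len(x)          # error signal
    for n in range(len(x)):
        # filtered-x signals: input filtered through the secondary-path model
        xs = [sum(s[j] * x[n - k - j] for j in range(len(s)) if n - k - j >= 0)
              for k in range(n_taps)]
        y[n] = sum(w[k] * x[n - k] for k in range(n_taps) if n - k >= 0)
        ys = sum(s[j] * y[n - j] for j in range(len(s)) if n - j >= 0)
        e[n] = d[n] - ys
        for k in range(n_taps):
            w[k] += mu * e[n] * xs[k]   # LMS update with the filtered-x signal
    return w, e
```

With a trivial secondary path the algorithm reduces to plain LMS and converges to the FIR filter relating `x` and `d`.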
  • to record the sound field, the recording device has to fulfill certain requirements.
  • circular (for 2D) or spherical (for 3D) microphone arrays, equipped with regularly or quasi-regularly distributed microphones at the surface, may be used to record the sound field; depending on the desired order in which the sound field is to be recorded or reproduced, a minimum number of microphones has to be chosen accordingly.
  • if the beamforming filters are calculated using, e.g., a MIMO system, arbitrary microphone arrays having different shapes and microphone distributions can be used as well to measure the sound field, leading to high flexibility in the recording device.
  • the recording device can be integrated in a main block of the complete new acoustic system.
  • it can be used not only for the already mentioned recording task, but also for other purposes, such as enabling speech control of the acoustic system to verbally control, e.g., the volume, the switching of titles, and so on.
  • the main block to which the microphone array is attached could also be used as a stand-alone device, e.g., as a teleconferencing hub or as a portable music device with the ability to adjust the acoustics in dependence on the relative position of the listener to the device, which is only possible if a video camera is integrated in the main block as well.
  • Loudspeaker arrangements with adjustable, controllable or steerable directivity characteristics include at least two identical or similar loudspeakers, which may be arranged in one, two or more loudspeaker assemblies, e.g. one loudspeaker assembly with two loudspeakers or two loudspeaker assemblies with one loudspeaker each.
  • the loudspeaker assemblies may be distributed somewhere around the display(s), e.g., in a room.
  • with arrays of higher-order loudspeakers it is possible to create sound fields of the same quality, but using fewer devices as compared with ordinary loudspeakers.
  • An array of higher-order loudspeakers can be used to create an arbitrary sound field in real, e.g., reflective environments.
  • the system shown in Figure 1 can be altered to use, as an alternative to (or in addition to) the camera 130, a microphone array 901 that is positioned, e.g., at the loudspeaker arrangement 105 and is able to detect the acoustic direction-of-arrival (DOA).
  • the loudspeaker arrangement 105 may have a multiplicity of directional microphones and/or may include a (microphone) beamforming functionality.
  • the smartphones 112, 113 and 114 may have loudspeakers that are able to emit non-audible tones 902, 903 and 904, which are picked up by the microphone array 901.
  • the microphone array 901 may be part of a far field microphone system and, in connection with a DOA processing block 905, which substitutes the wireless transceiver 127 shown in Figure 1, identifies the directions from which the tones originate.
  • the tones may further include information that allows for identifying the listener associated with the particular smartphone. For example, different frequencies of the tones may be associated with different listeners. Instead of smartphones, accordingly adapted remote control blocks may be used as well.
  • the tones may also include information about the specific sound settings of the associated listener or instructions to alter the corresponding sound settings. If coupled with a speech recognition block 906, the microphone array 901 allows for detecting individual listeners or listening positions when a listener talks at one of the listening positions. Thereby, by utilizing different keywords, e.g., the name of the user, individually adjusted audio is available at any sound zone within the room 131. Speech recognition can further be utilized to alter the corresponding sound settings.
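Detecting which listener's identification tone is present in the microphone signal could, for instance, use the Goertzel algorithm to measure the power at each listener's assigned frequency. The function `identify_listener` and the tone-frequency mapping are hypothetical.

```python
import math

def goertzel_power(samples, sample_rate, freq):
    """Signal power at `freq` computed with the Goertzel algorithm."""
    coeff = 2.0 * math.cos(2.0 * math.pi * freq / sample_rate)
    s1 = s2 = 0.0
    for x in samples:
        s1, s2 = x + coeff * s1 - s2, s1
    return s1 * s1 + s2 * s2 - coeff * s1 * s2

def identify_listener(samples, sample_rate, listener_tones):
    """Return the listener whose assigned (inaudible, >16 kHz) tone
    has the highest power in the microphone signal."""
    return max(listener_tones,
               key=lambda lid: goertzel_power(samples, sample_rate, listener_tones[lid]))
```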
  • the far field microphone system shown in Figure 10 further includes an acoustic echo cancellation (AEC) block 1002, a subsequent fixed beamformer (FB) block 1003, a subsequent beam steering (BS) block 1004, a subsequent adaptive blocking filter (ABF) block 1005, a subsequent adaptive interference canceller (AIC) block 1006, and a subsequent adaptive post filter block 1010.
  • N source signals filtered by the room impulse responses (RIRs) (h1, ..., hM), and possibly overlaid by noise, serve as an input to the AEC block 1002.
  • Each signal from the fixed beamformer block 1003 is taken from a different room direction and may have a different signal-to-noise ratio (SNR) level.
  • the BS block 1004 delivers an output signal b(n), which represents the signal of the fixed beamformer block 1003 pointing into the room direction with the best/highest current SNR value, referred to as the positive beam, and a signal bn(n), representing the current signal of the fixed beamformer block 1003 with the least/lowest SNR value, referred to as the negative beam.
  • the adaptive blocking filter (ABF) block 1005 calculates an output signal e(n) which ideally contains solely the current noise signal and no useful signal parts anymore.
  • the expression “adaptive blocking filter” comes from its purpose to block, in an adaptive way, useful signal parts still contained in the negative beam signal bn(n).
  • the output signal e(n) enters the AIC block 1006, together with the signal representative of the positive beam, b(n−Δ), optionally delayed by a delay (D) line 1008. From a structural perspective, the AIC block 1006 also includes a subtractor block 1009.
  • based on these two input signals e(n) and b(n−Δ), the AIC block 1006 generates an output signal which, on the one hand, acts as an input signal to a successive adaptive post filter (PF) block 1010 and, on the other hand, is fed back to the AIC block 1006, thereby acting as an error signal for the adaptation process.
  • the purpose of this adaptation process is to generate a signal which, if subtracted from the delayed positive beam signal, removes mainly harmonic noise signals therefrom.
  • the AIC block 1006 also generates time-varying filter coefficients for the adaptive PF block 1010, which is designed to remove mainly statistical noise components from the output signal of the subtractor block 1009 and finally generates a total output signal.
  • a signal flow chart may describe a system, a method or software implementing the method, dependent on the type of realization, e.g., as hardware, software or a combination thereof.
  • a block may be implemented as hardware, software or a combination thereof.

Description

    TECHNICAL FIELD
  • The disclosure relates to sound reproduction systems and methods.
  • BACKGROUND
  • People with hearing impairments often miss out, for example, on the enjoyment of a television program or movie because they cannot understand the dialog in the program material. These impairments may be significant enough to require hearing aids, or they may be less severe and merely entail slight hearing damage or hearing loss associated with age. Regardless of the reason for the hearing loss, the enjoyment of sharing time with others can be dramatically affected. Turning the volume up can make it uncomfortable for others in the same area. Some individuals may prefer a quieter listening experience than others in the room. Turning the volume down for a single individual may not be acceptable to the rest of the people watching the movie. Therefore, a personalized sound reproduction for a multiplicity of listeners is desirable. WO 2015/076930 A1 discloses evaluating speech signals from a listener to estimate the listener's location. By examining the location, preferred usage settings, and voice commands from listeners, the generated beam patterns are customized to the explicit and implicit preferences of the listeners. US 6 741 273 B1 discloses a system for adjusting the delivery of sound by way of loudspeakers located in an area. A controller adjusts the delivery of the sound according to the relative positions of the loudspeakers and the listener. US 2009/304205 A1 discloses a system for personalizing audio levels which provides different audio volumes to different locations in a room allowing for two or more users to enjoy the same audio content at different volumes. 
WO 2007/113718 A1 discloses a data processing device with a detection unit adapted to detect individual reproduction modes indicative of a manner of reproducing the data separately for each of a plurality of human users, and a processing unit adapted to process the data to thereby generate reproducible data separately for each of the plurality of human users in accordance with the detected individual reproduction modes. It is desired to reliably detect a listener's position at any desired time including occasional or continuous detection with little complexity.
  • SUMMARY
  • A sound reproduction system includes a loudspeaker arrangement configured to generate from a customized audio signal an acoustically isolated sound field at a position dependent on a sound field position control signal, and a listener evaluation block configured to provide a listening position signal representing a position of a listener and a listener identification signal representing the identity of the listener. The system further includes an audio control block configured to receive and process the listening position signal, the listener identification signal and an audio signal; the audio control block being further configured to control via the sound field position control signal the position of the sound field dependent on the listening position signal so that the position of the sound field is at the position of the listener, and to process the audio signal according to an audio setting dependent on the identity of the listener to provide the customized audio signal. The system further includes a microphone arrangement that is disposed at the listening position. The loudspeaker arrangement is configured to generate a sound beam that sweeps from one side of an area to the other side, the area including the listening position. The listener evaluation block is wirelessly connected or connected by wire to the microphone arrangement. The microphone arrangement is configured to pick up the sound beam when sweeping the listening position and to provide a corresponding microphone signal. The listener evaluation block is configured to evaluate the microphone signal and a corresponding beam position to provide the listening position signal.
  • A sound reproduction method includes generating from a customized audio signal an acoustically isolated sound field at a position dependent on a sound field position control signal, providing a listening position signal representing a listening position and a listener identification signal representing the identity of the listener, and processing the listening position signal, the listener identification signal and an audio signal. The method further includes controlling via the sound field position control signal the position of the sound field dependent on the listening position signal so that the position of the sound field is at the listening position, and processing the audio signal according to an audio setting dependent on the identity of the listener to provide the customized audio signal. The method further includes generating a sound beam sweeping from one side of an area to the other side, the area including the listening position, picking up the sound beam at the listening position and, when sweeping the listening position, providing a corresponding microphone signal, and evaluating the microphone signal and a corresponding beam position to provide the listening position signal.
  • Other systems, methods, features and advantages will be, or will become, apparent to one with skill in the art upon examination of the following figures and detailed description. It is intended that all such additional systems, methods, features and advantages be included within this description and be protected by the following claims.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • The systems and methods may be better understood with reference to the following drawings and description. The components in the figures are not necessarily to scale, emphasis instead being placed upon illustrating the principles of the invention. Moreover, in the figures, like referenced numerals designate corresponding parts throughout the different views.
    • Figure 1 is a schematic diagram illustrating an exemplary listening environment with four listening positions and a sound reproduction system that provides personalized sound reproduction for listeners located at these positions.
    • Figure 2 is a schematic top view illustrating an exemplary soundbar based on three higher-order loudspeaker assemblies for creating a two-dimensional acoustic sound field at a desired position in a room.
    • Figure 3 is a schematic side view illustrating the soundbar shown in Figure 2.
    • Figure 4 is a schematic diagram illustrating another exemplary listening environment with two listening positions and a sound reproduction system that provides personalized sound reproduction for listeners located at these positions.
    • Figure 5 is a signal flow chart illustrating an exemplary modal beamformer employing a weighting matrix for matrixing.
    • Figure 6 is a signal flow chart illustrating an exemplary modal beamformer employing a multiple-input multiple-output block for matrixing.
    • Figure 7 is a schematic diagram illustrating an exemplary optical detector for gesture evaluation and optional listening position evaluation.
    • Figure 8 is a diagram illustrating an exemplary sound reproduction method that provides personalized sound reproduction for a multiplicity of listeners.
    • Figure 9 is a schematic diagram illustrating modifications of the exemplary listening environment shown in Figure 1; and
    • Figure 10 is a schematic diagram illustrating an exemplary far field microphone system.
    DETAILED DESCRIPTION
  • Referring to Figure 1, an exemplary sound reproduction system 100 uses individually customized audio beamforming to perform personalized sound control functions such as, for example, one or more of equalization adjustment, volume adjustment, dynamic range compression adjustment, etc., that adjust the loudness for individual listeners located at four listening positions 101-104. Those adjustments, in the following also referred to as audio settings, are "remembered" for future reference so that the next time the system can locate the same listener, e.g., in a room, and automatically engage his/her custom sound field, e.g., a sound zone which sends the individually adjusted audio only to him/her. This is achieved without the use of headphones or earbuds. The exemplary system shown in Figure 1 allows for individual loudness adjustments at the four listening positions 101-104 and includes a loudspeaker arrangement 105 that generates from customized audio signals 106 three acoustically isolated sound fields, e.g., sound zones 107, 108 and 109 at listening positions 101, 103 and 104, respectively, which may be sound beams directed from the loudspeaker arrangement 105 to listening positions 101, 103 and 104, and a general sound zone 110 that includes at least listening position 102. The positions of the sound zones 107-109 may be steered by way of a sound zone position control signal 111.
  • The sound reproduction system 100 may include various blocks for performing certain functions, wherein blocks may be hardware, software or a combination thereof. For example, listener evaluation blocks 112, 113 and 114, one per listener with dedicated sound adjustment, provide wireless signals 115 that include listening position signals representing a position of each listener with dedicated sound adjustment and a listener identification signal identifying each listener designated for dedicated sound adjustment. The sound reproduction system 100 requires information that allows for determining where a particular listener is seated, e.g., within a room. This may be done by using a tone that sweeps from one side of the room to the other and a microphone close to the individual listeners to identify when the sweep passes by them. Such microphones are wirelessly connected or connected by wire to other system components and may be, for example, wired stand-alone microphones (not shown in Figure 1) disposed on or in the vicinity of the listeners, or microphones integrated in smartphones with a wireless Wi-Fi or Bluetooth connection. As a particular tone, such as an inaudible tone with a frequency of >16 kHz, sweeps the room using a separate directed sound beam 116, at least one microphone detects when the maximum volume is obtained at the microphone's position; at that point in time a particular listener can be located. Several listeners can be simultaneously located as long as they have their own clearly recognizable and assignable microphones. In the exemplary sound reproduction system shown in Figure 1, the listener evaluation blocks 112, 113 and 114 are provided by smartphones with built-in microphones in connection with software applications (apps) that may evaluate signals from the built-in microphones, perform the listener identifications and establish the wireless connections. 
In another option, a remote control with built-in microphone may provide listener identification and control of the individual adjustment of the audio in the individual sound zone.
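The sweep-based localization described above reduces to finding the beam position at which the listener's microphone registered the highest level; a minimal sketch with hypothetical names:

```python
def locate_by_sweep(beam_angles_deg, mic_levels):
    """Estimate a listener's direction: the sweep-beam angle at which the
    listener's microphone picked up the (inaudible) sweep tone loudest."""
    best = max(range(len(mic_levels)), key=mic_levels.__getitem__)
    return beam_angles_deg[best]
```

For example, with five sweep positions and the loudest pickup at the center beam, the listener is located straight ahead. Several listeners can be located in the same sweep as long as each microphone's levels are tracked separately.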
  • However, any other positioning systems such as indoor positioning systems (IPS) may be applied. An indoor positioning system is a system that locates objects or people inside a building using radio waves, magnetic fields, acoustic signals, or other sensory information collected by mobile devices. Exemplary techniques include camera based detection, Bluetooth location services, or global positioning system (GPS) location services. Indoor positioning systems may use different technologies, including distance measurement to nearby anchor nodes (i.e., nodes with known positions, e.g., WiFi access points), magnetic positioning, or dead reckoning. They either actively locate mobile devices and tags or provide ambient location or environmental context for devices to be sensed. Indoor positioning systems may make use of various technologies including optical, radio, or even acoustic technologies, i.e., additionally processing information from other systems to cope with physical ambiguities and to enable error compensation.
  • Once located by the listener evaluation blocks 112, 113 and 114, the listeners can then configure the audio settings to their particular preferences. This configuration may be done with manual controls, or a remote control for the sound reproduction system 100, or with an application on a smart phone, tablet or computer. The listeners may also configure the width of the sound beam to cover the area they are seated in. The configuration can then be "remembered" by the sound reproduction system 100 and associated with the users' names or some other type of identification. In the sound reproduction system shown in Figure 1, an exemplary audio control block 117 is designed to receive and process the wireless signals 115, particularly the listening position signal and the listener identification signal contained therein, and an audio signal 118 from an audio source 119. The audio control block 117 may then control via the sound field position control signal 111 the position of the sound field dependent on the listening position signal so that the position of the sound field is at the position of the listener, and process the audio signal 118 according to the adjusted audio settings, each dependent on the identity of the corresponding listener, to provide the customized audio signals 106. However, instead of tracking the listener(s) and repositioning the sound beam(s), one or more fixed sound beams, e.g., related to fixed listening positions, may be employed. The identity of the listener may correspond to the listening position and can be derived therefrom, or may be determined in any other suitable way.
  • Processing the audio signal 118 according to the individual audio settings may include at least one of adjusting the balance between spectral components of the audio signal with a controllable equalizer 120, adjusting the volume of the audio signal with a controllable volume control 121 and adjusting the dynamics of the audio signal 118 with a controllable dynamic range compressor 122. Equalization is the process of adjusting the balance between frequency components within an electronic signal. However, the term "equalization" (EQ) has come to include the adjustment of frequency responses for practical or aesthetic reasons, often resulting in a net response that is not truly equalized. Volume control (VOL) is used for adjusting the sound level to a predetermined level. Dynamic range compression (DRC) or simply compression is a signal processing operation that reduces the volume of loud sounds and/or amplifies quiet sounds by narrowing or compressing an audio signal's dynamic range. For example, audio compression may reduce loud sounds that are above a certain threshold while leaving quiet sounds unaffected. Customized audio signals 106, which are each the accordingly processed audio signal 118, are supplied to a beamforming (BF) processor 123 that, in turn, supplies beamformed signals 124 to the loudspeaker arrangement 105 to generate the beams for sound zones 107-110 and the sweeping beam that is sound field 116.
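A static downward compressor of the kind performed by the dynamic range compressor 122 might look as follows; this is a sketch without attack/release smoothing, and the names are illustrative.

```python
import math

def compress(sample, threshold_db, ratio):
    """Static downward compressor: instantaneous levels above
    threshold_db are reduced by `ratio`; quieter samples pass unchanged."""
    if sample == 0.0:
        return 0.0
    level_db = 20.0 * math.log10(abs(sample))
    if level_db <= threshold_db:
        return sample                        # quiet sounds are left unaffected
    out_db = threshold_db + (level_db - threshold_db) / ratio
    return math.copysign(10.0 ** (out_db / 20.0), sample)
```

With a -20 dB threshold and a 2:1 ratio, a full-scale sample (0 dB) is reduced to -10 dB, while a sample at -26 dB passes through untouched.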
  • The exemplary audio control block 117 may further include a control block (CU) 125 that is connected to a memory (M) 126, a wireless transceiver (WT) 127 and a beam sweep tone generator (BS) 128. The memory 126 stores data representing identities of a multiplicity of listeners and the corresponding audio settings and, optionally, beam settings such as the beam position, beam width etc. The control block 125 selects from memory 126, based on the listener identification signals, the corresponding audio settings for processing the audio signal 118 and steers, based on the listening position signals, the direction of the corresponding sound beams. The listening position signals and the listener identification signals are generated by the wireless transceiver 127 from the wireless signals 115. The beam sweep tone generator 128 provides the signal that is used for the sweeping beam 116 to the beamforming processor 123, and is also controlled by the control block 125.
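The role of memory 126 and control block 125 in selecting per-listener audio and beam settings can be sketched as a simple keyed store; all field names and values here are illustrative assumptions, not the patent's data layout.

```python
# Hypothetical general defaults, used for listeners without stored settings.
DEFAULTS = {"eq_db": [0, 0, 0, 0, 0], "volume_db": 0.0,
            "drc_ratio": 1.0, "beam_width_deg": 60.0}

def settings_for(listener_id, store):
    """Select the stored audio/beam settings for an identified listener,
    falling back to the general defaults for unknown listeners."""
    return store.get(listener_id, DEFAULTS)

# Hypothetical contents of a settings memory keyed by listener identity.
store = {"listener_a": {"eq_db": [2, 1, 0, -1, -2], "volume_db": -6.0,
                        "drc_ratio": 2.0, "beam_width_deg": 30.0}}
```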
  • Audio control block 117 may further include a video processor (VP) 129 that is connected to a camera 130 and that allows for recognizing gestures of the listeners in connection with the camera 130 and for controlling, according to the recognized gestures, at least one of processing the audio signal 118 and configuring the respective sound zone, e.g., the shape or width of the corresponding sound beam. The camera 130 is directed to an area that may include the positions of the listeners, i.e., the listening positions 101-104. From this interface the listener can use gestures to widen or narrow the sound beam and/or to move the sound beam to the left or right and/or to dynamically track movements of the listeners in the individual zones. Selecting the particular sound beam would allow the user to adjust the sound setting parameters of that sound beam. This interface may also allow a more experienced listener to configure the sound beam and related sound settings for another less experienced listener that is not familiar with the system. Additionally, a listener may be able to increase the volume within his/her "sound beam" to cover up other ambient noise, or reduce the volume of his/her "sound beam" so that he/she can have a conversation with someone sitting next to him/her, listen to voice mail on the smartphone, etc.
  • The exemplary sound reproduction system may be disposed in a room 131. If a particular listener leaves the room 131, the system may disable the corresponding dedicated sound beam (e.g., one of sound beams 107-109) and the ordinary sound field (e.g., provided by sound beam 110) will replace this listener's beamforming area so the next listener that occupies that particular seat would hear what is heard throughout the rest of the room 131. The ordinary sound field may also be used when no sound zones are desired. When the particular listener reenters the room 131, the corresponding dedicated sound beam can be re-enabled. Listeners have the option to adjust the configuration parameters while enjoying a program, and to discard those parameters or save them as their new personal defaults. The listener's configuration information may be stored by the system and identified, e.g., by the listener's user name or face recognition data if a camera is employed. For example, the next time this listener watches a movie on a screen 132 associated with the loudspeaker arrangement 105 he/she can select his/her configuration and restore the associated customized sound beam immediately to now point at his/her current seating location. The system may identify the listener when he/she enters the room 131, e.g., via an indoor positioning system (IPS) and smartphone proximity, and load the customized configuration automatically.
  • As already mentioned, the sound fields, e.g., the sound beams 107-110 and 116, may be generated by way of beamforming. Beamforming or spatial filtering is a signal processing technique used in loudspeaker or microphone arrays for directional signal transmission or reception. This is achieved by combining elements in a phased array in such a way that signals at particular angles experience constructive interference while others experience destructive interference. The improvement compared with omnidirectional reception/transmission is known as the directivity of the array.
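The phased-array principle described above can be sketched numerically. The following is a minimal delay-and-sum beamformer for a uniform linear array; the element count, spacing, frequency and speed of sound are illustrative assumptions, not values taken from this document.

```python
import numpy as np

def array_response(n_elems, spacing_m, steer_deg, look_deg, freq_hz, c=343.0):
    """Normalized response of a delay-and-sum beamformer on a uniform linear array.

    The per-element phase weights compensate the propagation delay toward
    steer_deg, so plane waves from that direction add constructively
    (response 1.0) while other directions partially cancel.
    """
    k = 2.0 * np.pi * freq_hz / c                    # wavenumber
    pos = np.arange(n_elems) * spacing_m             # element positions on a line
    weights = np.exp(-1j * k * pos * np.sin(np.radians(steer_deg))) / n_elems
    plane_wave = np.exp(1j * k * pos * np.sin(np.radians(look_deg)))
    return abs(np.sum(weights * plane_wave))
```

Steering an eight-element array to 30° gives a response of exactly 1.0 in the steered direction and a reduced response elsewhere, which is the constructive/destructive interference the paragraph describes.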
  • Sound fields may also be realized using a sound field description with a technique called higher-order Ambisonics. Ambisonics is a full-sphere surround sound technique which may cover, in addition to the horizontal plane, sound sources above and below the listener. Unlike other multichannel surround formats, its transmission channels do not carry loudspeaker signals. Instead, they contain a loudspeaker-independent representation of a sound field, which is then decoded to the listener's loudspeaker setup. This offers the listener a considerable degree of flexibility as to the layout and number of loudspeakers used for playback. Ambisonics can be understood as a three-dimensional extension of mid/side (M/S) stereo, adding additional channels for height and depth. In terms of first-order Ambisonics, the resulting signal set is called B-format. The spatial resolution of first-order Ambisonics is quite low. In practice, this translates to slightly blurry sources, and also to a comparably small usable listening area (also referred to as sweet spot or sweet area).
  • The resolution can be increased and the desired sound field (also referred to as sound zone) enlarged by adding groups of more selective directional components to the B-format. In terms of second-order Ambisonics, these no longer correspond to conventional microphone polar patterns, but look like, e.g., clover leaves. The resulting signal set is then called second-order, third-order, or collectively, higher-order Ambisonics (HOA). However, common applications of the HOA technique require specific spatial configurations, dependent on whether a two-dimensional (2D) or a three-dimensional (3D) sound field is processed and regardless of whether the sound field is measured (encoded) or reproduced (decoded): Processing of 2D sound fields requires cylindrical configurations and processing of 3D sound fields requires spherical configurations, each with a regular or quasi-regular distribution of the microphones or loudspeakers, in order to keep the number of sensors necessary to realize a certain order as low as possible.
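The number of ambisonic components grows with the order M; consistent with the channel counts stated further below (N2D = 2M+1 for 2D, N3D = (M+1)² for 3D), a small helper can make the counts explicit. The function name is an illustrative choice:

```python
def hoa_channel_count(order: int, dims: int = 3) -> int:
    """Number of ambisonic components for a given order M.

    2D (circular harmonics):  N = 2M + 1
    3D (spherical harmonics): N = (M + 1)^2
    """
    if dims == 2:
        return 2 * order + 1
    if dims == 3:
        return (order + 1) ** 2
    raise ValueError("dims must be 2 or 3")
```

First order in 3D gives the four B-format channels (W, X, Y, Z); second order already needs nine.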
  • Figures 2 and 3 illustrate a sound reproduction system 200 which includes three (or, if appropriate, only two) closely spaced steerable (higher-order) loudspeaker assemblies 201, 202, 203, here arranged, for example, in a horizontal linear array (which is referred to herein as higher-order soundbar). Loudspeaker assemblies with omnidirectional directivity characteristics, dipole directivity characteristics and/or any higher order polar responses are herein referred to also as higher-order loudspeakers. Each higher-order loudspeaker 201, 202, 203 has adjustable, controllable or steerable directivity characteristics (polar responses) as outlined further below. Each higher-order loudspeaker 201, 202, 203 may include a horizontal circular array of lower-order loudspeakers (e.g., omni-directional loudspeakers). For example, the circular arrays may each include, e.g., four lower-order loudspeakers 211 to 214, 221 to 224, 231 to 234 (such as common loudspeakers and, thus, also referred to as loudspeakers), the four lower-order loudspeakers 211 to 214, 221 to 224, 231 to 234 each being directed in one of four perpendicular directions in a radial plane in this example. The array of higher-order loudspeakers 201, 202, 203 may be disposed on an optional base plate 204 and may have an optional top plate 301 on top (e.g., to carry a flat screen television set). Alternatively, instead of four lower-order loudspeakers, only three lower-order loudspeakers per higher-order loudspeaker assembly can be employed to create a two-dimensional higher-order loudspeaker of the first order using Ambisonics technology.
  • Alternative use of the multiple-input multiple-output technology instead of the Ambisonics technology allows for creating a two-dimensional higher-order loudspeaker of the first order even with only two lower-order loudspeakers. Other options include the creation of three-dimensional higher-order loudspeakers with four lower-order loudspeakers that are regularly distributed on a sphere (e.g., mounted at the centers of the four faces of a tetrahedron, the first of the five Platonic solids) using the Ambisonics technology and with four lower-order loudspeakers that are regularly distributed on a sphere using the multiple-input multiple-output technology. Furthermore, the higher-order loudspeaker assemblies may be arranged other than in a straight line, e.g., on an arbitrary curve at logarithmically changing distances from each other or in a completely arbitrary, three-dimensional arrangement in a room.
  • The four lower-order loudspeakers 211 to 214, 221 to 224, 231 to 234 may be substantially the same size and have a peripheral front surface, and an enclosure having a hollow, cylindrical body and end closures. The cylindrical body and end closures may be made of material that is impervious to air. The cylindrical body may include openings therein. The openings may be sized and shaped to correspond with the peripheral front surfaces of the lower-order loudspeakers 211 to 214, 221 to 224, 231 to 234, and have central axes. The central axes of the openings may be contained in one radial plane, and the angles between adjacent axes may be identical. The lower-order loudspeakers 211 to 214, 221 to 224, and 231 to 234 may be disposed in the openings and hermetically secured to the cylindrical body. However, additional loudspeakers may be disposed in more than one such radial plane, e.g., in one or more additional planes above and/or below the radial plane described above. Optionally, the lower-order loudspeakers 211 to 214, 221 to 224, 231 to 234 may each be operated in a separate, acoustically closed volume 215 to 218, 225 to 228, 235 to 238 in order to reduce or even prevent any acoustic interactions between the lower-order loudspeakers of a particular higher-order loudspeaker assembly. Further, the lower-order loudspeakers 211 to 214, 221 to 224, 231 to 234 may each be arranged in a dent, hole, recess or the like. Additionally or alternatively, a wave guiding structure such as but not limited to a horn, an inverse horn, an acoustic lens etc. may be arranged in front of the lower-order loudspeakers 211 to 214, 221 to 224, 231 to 234.
  • A control block 240 receives, e.g., three ambisonic signals 244, 245, 246 to process the ambisonic signals 244, 245, 246 in accordance with steering information 247, and to drive and steer the higher-order loudspeakers 201, 202, 203 based on the ambisonic signals 244, 245, 246 so that at least one acoustic sound field is generated at least at one position that is dependent on the steering information. The control block 240 comprises beamformer blocks 241, 242, 243 that drive the lower-order loudspeakers 211 to 214, 221 to 224, 231 to 234. Examples of beamformer blocks are described further below.
  • Figure 4 depicts possibilities of how to use a horizontal linear array of high-order loudspeakers (referred to herein also as horizontal high-order soundbar or just high-order soundbar) in order to realize virtual sound sources in home entertainment. For example, such a linear array may be disposed under a television (TV) set for reproducing, e.g., the front channels of the commonly used layout in home cinema, the 5.1 surround sound. The front channels of a 5.1 sound system include a front left (Lf) channel, a front right (Rf) channel and a center (C) channel. Arranging a single high-order loudspeaker underneath the TV set instead of the horizontal high-order soundbar would mean that the C channel could be directed to the front of the TV set and the Lf and Rf channels to its sides, so that the Lf and Rf channels would not be transferred directly to a listener sitting (at the sweet spot or sweet area) in front of the TV set but only indirectly via the side walls, constituting a transfer path which depends on numerous unknown parameters and, thus, can hardly be controlled. Therefore, in a multichannel system with at least two channels to be reproduced, a high-order soundbar with (at least) two high-order loudspeakers that are arranged in a horizontal line allows for transferring front channels, e.g., the Lf and Rf channels, directly to the sweet area, i.e., the area where the listener should be.
  • Furthermore, a center channel, e.g., the C channel, may be reproduced at the sweet area by way of two high-order loudspeakers. Alternatively, a third high-order loudspeaker, disposed between the two high-order loudspeakers, may be used to separately direct the Lf and Rf channels and the C channel to the sweet area. Since with three high-order loudspeakers each channel is reproduced by a separate block, the spatial sound impression of a listener at the sweet area can be further improved. Furthermore, with each additional high-order loudspeaker added to the high-order soundbar a more diffuse sound impression can be realized and further channels such as, e.g., effect channels may be radiated from the rear side of the high-order soundbar, which is in the present example from the rear side of the TV set to, e.g., the rear wall where the sound provided by the effect channels is diffused.
  • In contrast to common soundbars, in which the lower-order loudspeakers are arranged in line, higher-order soundbars provide more options for the positioning of the directional sound sources, e.g., on the side and rear, so that in a common listening environment such as a living room, a directivity characteristic that is almost independent from the spatial direction can be achieved with higher-order soundbars. For example, a common soundbar having fourteen lower-order loudspeakers equidistantly distributed in line over a distance of 70 cm can only generate virtual sound sources within ±90° of the front direction, while higher-order soundbars allow for virtual sound sources within ±180°.
  • Figure 4 illustrates an exemplary set-up with a higher-order soundbar including three higher-order loudspeakers 410, 411, 422. An audio control block 401 that receives one or more audio signals 402 and that includes a control block such as control block 240 shown in Figure 2 drives the three higher-order loudspeakers 410, 411, 422 in a target room 413, e.g., a common living room. At a listening position (sweet spot, sweet area) represented by a microphone array at sweet spot 414, the sound field of at least one desired virtual source can then be generated. In the target room 413, further higher-order loudspeakers, e.g., a higher-order loudspeaker 424 for a left surround (Ls) channel, a lower-order sub-woofer 423 for the low frequency effects (Sub) channel, and a higher-order loudspeaker 412 for a right surround (Rs) channel, are arranged. The target room 413 is acoustically very unfavorable as it includes a window 417 and a French door 418 in the left wall and a door 419 in the right wall in an unbalanced configuration. Furthermore, a sofa 421 is disposed at the right wall and extends approximately to the center of the target room 413, and a table 420 is arranged in front of the sofa 421.
  • A television set 416 is arranged at the front wall (e.g., above the higher-order soundbar) and in line of sight of the sofa 421. The front left (Lf) channel higher-order loudspeaker 410 and the front right (Rf) channel higher-order loudspeaker 411 are arranged under the left and right corners of the television set 416 and the center (C) higher-order loudspeaker 422 is arranged under the middle of television set 416. The low frequency effects (Sub) channel loudspeaker 423 is disposed in the corner between the front wall and the right wall. The loudspeaker arrangement on the rear wall, including the left surround (Ls) channel higher-order loudspeaker 424 and the right surround (Rs) channel higher-order loudspeaker 412, does not share the same center line as the loudspeaker arrangement on the front wall including the front left (Lf) channel loudspeaker 410, the front right (Rf) channel loudspeaker 411, and the low frequency effects (Sub) channel loudspeaker 423. An exemplary sweet area 414 may be on the sofa 421 with the table 420 and the television set 416 in front. As can be seen, the loudspeaker setup shown in Figure 4 is not based on a cylindrical or spherical base configuration and employs no regular distribution. In the exemplary setup shown in Figure 4, sweet areas 414 and 425 may receive direct sound beams from the soundbar to allow for the preset individual acoustic impressions at those sweet areas 414 and 425.
  • If further (higher-order) loudspeakers are used, e.g., for the surround channels Ls and Rs, behind the sweet area and in front of the rear wall, or somewhere above (not shown) the level of the soundbar, the surround impression can be further enhanced. Furthermore, it has been found that the number of (lower-order) loudspeakers can be significantly reduced. For example, with five virtual sources of 4th order surrounding the sweet area, sound fields can be approximated similar to those achieved with forty-five lower-order loudspeakers surrounding the sweet area. In the exemplary environment shown in Figure 4, a higher-order soundbar with three higher-order loudspeakers, built from twelve lower-order loudspeakers in total, exhibits a better spatial sound impression than the common soundbar with fourteen lower-order loudspeakers in line at comparable dimensions of the two soundbars.
  • For each of the higher-order loudspeakers of the soundbar (and the other higher-order loudspeakers) a beamformer block 500 or 600 as depicted in Figure 5 or 6 (e.g., applicable as beamformers 241, 242, 243 in Figures 2 and 3) may be employed. The beamforming block 500 shown in Figure 5 controls a loudspeaker assembly with Q loudspeakers 501 (or Q groups of loudspeakers each with a multiplicity of loudspeakers such as tweeters, mid-frequency range loudspeakers and/or woofers) dependent on N (Ambisonics) input signals 502, also referred to as input signals x(n) or ambisonic signals Y^σ_n,m(θ,φ), wherein for two dimensions N = N2D = 2M+1 and for three dimensions N = N3D = (M+1)². The beamforming block 500 may further include a modal weighting sub-block 503, a dynamic wave-field manipulation sub-block 505, a regularization sub-block 509 and a matrixing sub-block 507. The modal weighting sub-block 503 is supplied with the input signal 502 [x(n)], which is weighted with modal weighting coefficients, i.e., filter coefficients C0(ω), C1(ω) ... CN(ω), in the modal weighting sub-block 503 to provide a desired beam pattern, i.e., radiation pattern ψDes(θ,φ), based on the N spherical harmonics Y^σ_n,m(θ,φ), to deliver N weighted ambisonic signals 504, also referred to as C^σ_n,m·Y^σ_n,m(θ,φ). The weighted ambisonic signals 504 are transformed by the dynamic wave-field manipulation sub-block 505 using N×1 weighting coefficients, e.g., to rotate the desired beam pattern ψDes(θ,φ) to a desired position (θDes, φDes). Thus N modified (e.g., rotated, focused and/or zoomed) and weighted ambisonic signals 506, also referred to as C^σ_n,m·Y^σ_n,m(θDes, φDes), are output by the dynamic wave-field manipulation sub-block 505.
  • The N modified and weighted ambisonic signals 506 are then input into the regularization sub-block 509, which includes the regularized radial equalizing filter W^σ_n,m(ω) that accounts for the susceptibility of the playback device, the higher-order loudspeaker (HOL), preventing, e.g., a given white noise gain (WNG) threshold from being undercut. Output signals 510, W^σ_n,m(ω)·C^σ_n,m·Y^σ_n,m(θDes, φDes), of the regularization sub-block 509 are then transformed into Q loudspeaker signals 508 [y1(n), ..., yQ(n)] by the matrixing sub-block 507 using an N×Q weighting matrix as shown in Figure 5, e.g., by the pseudo-inverse Y⁺ = (YᵀY)⁻¹Yᵀ, which simplifies to Y⁺ = (1/Q)Yᵀ if the Q lower-order loudspeakers are arranged at the body of the higher-order loudspeaker in a regular fashion. Alternatively, the Q loudspeaker signals 508 may be generated from the N regularized, modified and weighted ambisonic signals 510 by a multiple-input multiple-output sub-block 601 using an N×Q filter matrix as shown in Figure 6. The systems shown in Figures 5 and 6 may be employed to realize two-dimensional or three-dimensional audio using a sound field description such as higher-order Ambisonics.
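The matrixing step can be illustrated with a toy first-order 2D example. The four-loudspeaker layout and the unnormalized circular harmonics below are illustrative assumptions; note that the closed form Y⁺ = (1/Q)Yᵀ holds only for suitably normalized harmonics on a regular layout, so the general pseudo-inverse is used here instead.

```python
import numpy as np

# Q = 4 lower-order loudspeakers regularly placed at 0°, 90°, 180°, 270°.
angles = np.radians([0.0, 90.0, 180.0, 270.0])

# N = 3 first-order 2D harmonics (1, cos, sin) evaluated at each loudspeaker
# angle; rows are harmonics, columns are loudspeakers (N x Q).
Y = np.vstack([np.ones_like(angles), np.cos(angles), np.sin(angles)])

# Matrixing: map the N harmonic signals to Q driver signals with the
# Moore-Penrose pseudo-inverse (np.linalg.pinv computes it via SVD).
decode = np.linalg.pinv(Y)                     # Q x N decoding matrix

# Sanity check: re-encoding the decoded driver signals recovers the harmonics.
assert np.allclose(Y @ decode, np.eye(3))
```

The same construction extends to any loudspeaker count Q ≥ N as long as the sampled harmonics matrix keeps full row rank.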
  • An example of a simple ambisonic panner (or encoder) takes an input signal, e.g., a source signal S, and two parameters, the horizontal angle θ and the elevation angle φ. It positions the source at the desired angle by distributing the signal over the ambisonic components with different gains for the corresponding ambisonic signals W ≙ Y^+1_0,0(θ,φ), X ≙ Y^+1_1,1(θ,φ), Y ≙ Y^-1_1,1(θ,φ) and Z ≙ Y^+1_1,0(θ,φ): W = S·(1/√2), X = S·cos θ·cos φ, Y = S·sin θ·cos φ, and Z = S·sin φ.
    Being omnidirectional, the W channel always delivers the same signal, regardless of the listening angle. In order that it may have more-or-less the same average energy as the other channels, W is attenuated by about 3 dB (precisely, divided by the square root of two). The terms for X, Y, Z produce figure-of-eight polar patterns. Taking their weighting values x, y and z at the angles θ and φ, and multiplying them with the corresponding ambisonic signals (X, Y, Z), the summed outputs end up in a figure-of-eight radiation pattern now pointing in the desired direction, given by the azimuth θ and elevation φ utilized in the calculation of the weighting values x, y and z, and having an energy content matching the W component, weighted by w. The B-format components can be combined to derive virtual radiation patterns that correspond to any first-order polar pattern (omnidirectional, cardioid, hypercardioid, figure-of-eight or anything in between) and point in any three-dimensional direction. Several such beam patterns with different parameters can be derived at the same time to create coincident stereo pairs or surround arrays. Higher-order loudspeakers or loudspeaker assemblies like those described above in connection with Figures 2 to 4, including beamformer blocks such as those shown in Figures 5 and 6, allow for approximating any desired directivity characteristic by superimposing the basis functions, i.e., the spherical harmonics.
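The panning equations above translate directly into code; a minimal sketch (the function name and the use of radians are illustrative choices):

```python
import numpy as np

def bformat_encode(s, azimuth, elevation):
    """First-order (B-format) panning gains for a mono source s.

    Angles in radians: azimuth = horizontal angle theta,
    elevation = vertical angle phi.
    """
    w = s * (1.0 / np.sqrt(2.0))                  # omnidirectional, ~3 dB down
    x = s * np.cos(azimuth) * np.cos(elevation)   # front/back figure-of-eight
    y = s * np.sin(azimuth) * np.cos(elevation)   # left/right figure-of-eight
    z = s * np.sin(elevation)                     # up/down figure-of-eight
    return w, x, y, z
```

A source panned straight ahead (azimuth 0, elevation 0) lands entirely in the X channel apart from the constant W component; panned to the left (azimuth π/2), it lands in Y.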
  • The matrixing block 601 may be implemented as a multiple-input multiple-output system that provides an adjustment of the output signals of the higher-order loudspeakers so that the radiation patterns approximate as closely as possible the desired spherical harmonics. To generate a desired sound field at a certain position or area in the room utilizing several higher-order loudspeakers, it may be sufficient in the adaptation process to adapt only the modal weights C^σ_n,m of the individual higher-order loudspeakers employed, i.e., to run the adaptation directly in the wave domain. Because of this adaptation in the sound field (wave field) domain, such a process is called Wave-Domain Adaptive Filtering (WDAF). WDAF is a known efficient spatio-temporal generalization of the also known Frequency-Domain Adaptive Filtering (FDAF). Through incorporation of the mathematical fundamentals of sound fields, WDAF is suitable even for massive multiple-input multiple-output systems with highly cross-correlated broadband input signals. With wave domain adaptive filtering, the directional characteristics of the higher-order loudspeakers are adaptively determined so that the superposition of the individual sound beams in the sweet area(s) approximates the desired sound field.
  • To adjust or (singularly or permanently) adapt the sound reproduced by the soundbar to the specific room conditions and the specific requirements of the sweet area of the loudspeaker set-up, which includes the high-order soundbar and, possibly, other (high-order) loudspeakers, the sound field needs to be measured and quantified. This may be accomplished by way of an array of microphones (microphone array) and a signal processing block able to decode the given sound field, which, e.g., together form a higher-order ambisonic system to determine the sound field in three dimensions or, which may be sufficient in many cases, in two dimensions, which requires fewer microphones. For the measurement of a two-dimensional sound field, S microphones are required to measure sound fields up to the Mth order, wherein S ≥ 2M + 1. In contrast, for a three-dimensional sound field, S ≥ (M + 1)² microphones are required. Furthermore, in many cases it is sufficient to dispose the microphones (equidistantly) on a circle line. The microphones may be disposed on a rigid or open sphere or cylinder, and may be operated, if needed, in connection with an ambisonic decoder. In an alternative example, the microphone array at sweet spot 414 may be integrated in one of the higher-order loudspeakers (not shown). A microphone array similar to the microphone array at sweet spot 414 may be disposed at a sweet spot 425. The microphones or microphone arrays at sweet spots 414 and 425 may be used for locating listeners at the sweet spots 414 and 425.
  • A camera such as camera 130 shown in Figure 1 may not only serve to recognize gestures of the listeners but also to detect the positions of the listener and to reposition the sound zones by steering the direction of the higher-order loudspeakers. An exemplary optical detector is shown in Figure 7. As shown, a camera 701 with a lens 702 may be disposed at an appropriate distance above (or below) a mirrored hemisphere 703 with the lens 702 pointing to the curved, mirrored surface of the hemisphere 703, and may provide a 360° view 704 in a horizontal plane. For example, when such a detector is mounted, e.g., on the ceiling of the room, the position of the listener can be spotted everywhere in the room. Alternatively, a so-called fisheye lens may be used (as lens 702) that also provides a 360° view in a horizontal plane so that the mirrored hemisphere 703 can be omitted.
  • Figure 8 depicts an exemplary sound reproduction method in which an acoustically isolated sound field is generated from a customized audio signal at a position dependent on a sound field position control signal (procedure 801). A listening position signal representing a position of a listener and a listener identification signal representing the identity of the listener are provided (procedure 802). The listening position signal, the listener identification signal and an audio signal are processed to provide the customized audio signal (procedure 803), to control via the sound field position control signal the position of the sound field dependent on the listening position signal so that the position of the sound field is at the position of the listener (procedure 804), and to process the audio signal according to an audio setting dependent on the identity of the listener to provide the customized audio signal (procedure 805).
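The control flow of procedures 801-805 can be summarized in a short sketch; all names here (settings_db, the gain-only setting) are hypothetical placeholders, not part of the described method.

```python
def reproduce_for_listener(audio, position_signal, listener_id, settings_db):
    """Sketch of procedures 802-805: look up the identified listener's audio
    settings, customize the audio, and return the sound field position to
    which the loudspeaker arrangement should be steered (procedures 801/804).
    """
    # Procedure 805: customize the audio according to the listener's settings.
    settings = settings_db.get(listener_id, {"gain": 1.0})  # default settings
    customized = [sample * settings["gain"] for sample in audio]
    # Procedure 804: the sound field position follows the listening position.
    field_position = position_signal
    return customized, field_position
```

For an identified listener with a stored gain of 0.5, two input samples [1.0, 2.0] become [0.5, 1.0], while an unknown listener receives the unmodified audio.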
  • The techniques described above use individually customized audio beamforming to perform (basic) sound setting functions that adjust, e.g., the loudness of certain frequencies for individual listeners. Those adjustments would be "remembered" for future reference so that the next time that individual sits down the system can locate that person in the room and automatically engage their customized "sound beam" that sends the adjusted audio to only that listener. This can all be achieved without the use of headphones or earbuds.
  • By using, e.g., an array of higher-order loudspeakers (e.g., in the form of a higher-order soundbar), each of them having versatile directivity, arbitrary sound fields can be approximated, even in reflective venues such as living rooms where home audio systems are typically installed. This is possible because, due to the use of higher-order loudspeakers, versatile directivities can be created, radiating the sound only in directions where no reflective surfaces exist, or deliberately making use of certain reflections if those turn out to positively contribute to the creation of a desired, enveloping sound field to be approximated. Thus, the approximation of the desired sound field at a desired position within the target room (e.g., a certain region at the couch in the living room) can be achieved by using adaptive methods, such as an adaptive multiple-input multiple-output (MIMO) system, given, e.g., by the multiple filtered-input least mean squares (multiple-FXLMS) algorithm, which could operate not just in the time or spectral domain, but also in the so-called wave domain.
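The FXLMS idea can be illustrated in a deliberately reduced single-channel form. The delay-free scalar secondary path and the synthetic signals below are simplifying assumptions; a real system uses a measured secondary-path model and operates MIMO or in the wave domain.

```python
import numpy as np

def fxlms_scalar_path(x, d, s_gain=0.5, n_taps=4, mu=0.5, eps=1e-8):
    """FXLMS sketch with a delay-free scalar secondary path (gain s_gain).

    The adaptive filter w is driven toward making the secondary path's
    output match the desired signal d; the normalized update uses the
    reference filtered through the secondary-path model ("filtered-x"),
    which here reduces to a scaling.
    """
    w = np.zeros(n_taps)
    e_out = np.zeros(len(x))
    for n in range(n_taps, len(x)):
        x_buf = x[n - n_taps + 1:n + 1][::-1]    # newest sample first
        xf_buf = s_gain * x_buf                  # filtered-x reference
        e = d[n] - s_gain * (w @ x_buf)          # residual at the error sensor
        w += mu * e * xf_buf / (xf_buf @ xf_buf + eps)
        e_out[n] = e
    return e_out, w

# Hypothetical plant: the desired field is x filtered by [0.6, 0.2].
rng = np.random.default_rng(1)
x = rng.standard_normal(3000)
d = np.convolve(x, [0.6, 0.2])[:3000]
e, w = fxlms_scalar_path(x, d)
```

Because the plant is exactly modelable, the residual decays toward zero and the filter converges to the plant divided by the secondary-path gain, i.e., w ≈ [1.2, 0.4, 0, 0].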
  • Utilizing wave domain adaptive filters (WDAF) is of special interest, since this promises very good results in the approximation of the desired sound field. WDAF can be used if the recording device fulfills certain requirements. For example, circular (for 2D) or spherical (for 3D) microphone arrays, equipped with regularly or quasi-regularly distributed microphones at the surface, may be used to record the sound field, with a minimum number of microphones chosen according to the desired order in which the sound field is to be recorded or reproduced. However, if beamforming filters are calculated using, e.g., a MIMO system, arbitrary microphone arrays having different shapes and microphone distributions can be used as well to measure the sound field, leading to high flexibility in the recording device. The recording device can be integrated in a main block of the complete new acoustic system. Thus it can be used not only for the already mentioned recording task, but also for other purposes, such as enabling speech control of the acoustic system to verbally control, e.g., the volume, the switching of titles, and so on. Furthermore, the main block to which the microphone array is attached could also be used as a stand-alone device, e.g., as a teleconferencing hub or as a portable music device with the ability to adjust the acoustics depending on the relative position of the listener to the device, which is only possible if a video camera is integrated in the main block as well.
  • Loudspeaker arrangements with adjustable, controllable or steerable directivity characteristics include at least two identical or similar loudspeakers, which may be arranged in one, two or more loudspeaker assemblies, e.g. one loudspeaker assembly with two loudspeakers or two loudspeaker assemblies with one loudspeaker each. The loudspeaker assemblies may be distributed somewhere around the display(s), e.g., in a room. With the help of arrays of higher-order loudspeakers, it is possible to create sound fields of the same quality, but using fewer devices as compared with ordinary loudspeakers. An array of higher-order loudspeakers can be used to create an arbitrary sound field in real, e.g., reflective environments.
  • Referring to Figure 9, the system shown in Figure 1 can be altered to use, as an alternative to (or in addition to) the camera 130, a microphone array 901 that is positioned, e.g., at the loudspeaker arrangement 105 and is able to detect the acoustic direction-of-arrival (DOA). The loudspeaker arrangement 105 may have a multiplicity of directional microphones and/or may include a (microphone) beamforming functionality. The smartphones 112, 113 and 114 may have loudspeakers that are able to send non-audible tones 902, 903 and 904, which are picked up by the microphone array 901. The microphone array 901 may be part of a far field microphone system and identifies, in connection with a DOA processing block 905, which substitutes for the wireless transceiver 127 shown in Figure 1, the directions from which the tones originate. The tones may further include information that allows for identifying the listener associated with the particular smartphone. For example, different frequencies of the tones may be associated with different listeners. Instead of smartphones, accordingly adapted remote control blocks may be used as well. Furthermore, the tones may also include information about the specific sound settings of the associated listener or instructions to alter the corresponding sound settings. If coupled with a speech recognition block 906, the microphone array 901 allows for detecting individual listeners or listening positions if a listener talks at one of the listening positions. Thereby, by utilizing different keywords, e.g., the name of the user, individually adjusted audio is available at any sound zone within the room 131. Speech recognition can further be utilized to alter the corresponding sound settings.
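The idea of identifying listeners by the frequency of their pilot tones can be sketched as a simple spectral detector. The sampling rate, the near-ultrasonic tone frequencies and the listener mapping below are hypothetical choices for illustration only.

```python
import numpy as np

FS = 48_000                                    # sampling rate (assumption)
LISTENER_TONES = {"listener_1": 19_000.0,      # hypothetical pilot frequencies
                  "listener_2": 20_000.0}

def identify_listener(block):
    """Return the listener whose pilot tone dominates the block's spectrum."""
    windowed = block * np.hanning(len(block))   # reduce spectral leakage
    spectrum = np.abs(np.fft.rfft(windowed))
    freqs = np.fft.rfftfreq(len(block), 1.0 / FS)

    def tone_level(f0):
        # Magnitude at the FFT bin closest to the pilot frequency f0.
        return spectrum[np.argmin(np.abs(freqs - f0))]

    return max(LISTENER_TONES, key=lambda name: tone_level(LISTENER_TONES[name]))
```

With a 4096-sample block at 48 kHz the bin spacing is about 11.7 Hz, so pilot tones 1 kHz apart are trivially separable.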
  • Referring to Figure 10, in an exemplary far field microphone system applicable in the system shown in Figure 9, sound from a desired sound source 1007 is radiated via one loudspeaker or a plurality of loudspeakers, travels through the room, where it is filtered with the corresponding room impulse responses (RIRs) 1001, and may possibly be corrupted by noise before the corresponding signals are picked up by M microphones 1011 of the far field microphone system. The far field microphone system shown in Figure 10 further includes an acoustic echo cancellation (AEC) block 1002, a subsequent fixed beamformer (FB) block 1003, a subsequent beam steering block 1004, a subsequent adaptive blocking filter (ABF) block 1005, a subsequent adaptive interference canceller (AIC) block 1006, and a subsequent adaptive post filter block 1010. As can be seen from Figure 10, N source signals, filtered by the RIRs (h1, ..., hM) and possibly overlaid by noise, serve as an input to the AEC block 1002. The output signals of the fixed delay-and-sum (DS) beamformer block 1003 serve as an input bi(n), wherein i = 1, 2, ..., B, to the beam steering (BS) block 1004. Each signal from the fixed beamformer block 1003 is taken from a different room direction and may have a different SNR level.
  • The BS block 1004 delivers an output signal b(n), which represents the signal of the fixed beamformer block 1003 pointing into the room direction with the best/highest current SNR value, referred to as positive beam, and a signal bn(n), representing the current signal of the fixed beamformer block 1003 with the least/lowest SNR value, referred to as negative beam. Based on these two signals b(n) and bn(n), the adaptive blocking filter (ABF) block 1005 calculates an output signal e(n) which ideally solely contains the current noise signal, but no useful signal parts anymore. The expression "adaptive blocking filter" comes from its purpose to block, in an adaptive way, useful signal parts still contained in the signal of the negative beam bn(n). The output signal e(n) enters the AIC block 1006, together with the signal representative of the positive beam, optionally delayed by the delay (D) line 1008, b(n-γ); from a structural perspective, the AIC block 1006 also includes a subtractor block 1009. Based on these two input signals e(n) and b(n-γ), the AIC block 1006 generates an output signal which, on the one hand, acts as an input signal to a successive adaptive post filter (PF) block 1010 and, on the other hand, is fed back to the AIC block 1006, thereby acting as an error signal for the adaptation process employed by the AIC block 1006. The purpose of this adaptation process is to generate a signal which, when subtracted from the delayed positive beam signal, removes mainly harmonic noise signals therefrom. In addition, the AIC block 1006 also generates time-varying filter coefficients for the adaptive PF block 1010, which is designed to remove mainly statistical noise components from the output signal of the subtractor block 1009 and eventually generates a total output signal y(n).
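The AIC's adaptation can be sketched as a normalized LMS canceller that subtracts from the positive-beam signal whatever is linearly predictable from the noise reference. The filter length, step size and synthetic signals below are illustrative assumptions, not values from the described system.

```python
import numpy as np

def nlms_cancel(primary, reference, n_taps=8, mu=0.1, eps=1e-8):
    """NLMS interference canceller.

    Subtracts from `primary` the part that is linearly predictable from
    `reference` (the negative-beam noise estimate) and returns the cleaned
    signal; the cleaned sample doubles as the adaptation error.
    """
    w = np.zeros(n_taps)
    out = np.zeros_like(primary)
    for n in range(len(primary)):
        x = reference[max(0, n - n_taps + 1):n + 1][::-1]
        x = np.pad(x, (0, n_taps - len(x)))      # zero-fill the oldest taps
        y = w @ x                                 # noise estimate
        e = primary[n] - y                        # cleaned output sample
        w += mu * e * x / (x @ x + eps)           # normalized LMS update
        out[n] = e
    return out

# Synthetic scenario: positive beam = useful signal + filtered noise, while
# the negative beam ideally carries the noise reference alone.
rng = np.random.default_rng(0)
noise = rng.standard_normal(4000)
signal = np.sin(2 * np.pi * 0.01 * np.arange(4000))
primary = signal + np.convolve(noise, [0.8, 0.3])[:4000]
cleaned = nlms_cancel(primary, noise)
```

After convergence the residual noise in the cleaned signal is far below the noise remaining in the raw positive beam, because the noise path [0.8, 0.3] is exactly modelable by the eight-tap filter.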
  • The description of embodiments has been presented for purposes of illustration and description. Suitable modifications and variations to the embodiments may be performed in light of the above description. The described assemblies, systems and methods are exemplary in nature, and may include additional elements or steps and/or omit elements or steps. As used in this application, an element or step recited in the singular and preceded by the word "a" or "an" should be understood as not excluding plural of said elements or steps, unless such exclusion is stated. Furthermore, references to "one embodiment" or "one example" of the present disclosure are not intended to be interpreted as excluding the existence of additional embodiments that also incorporate the recited features. The terms "first," "second," and "third," etc. are used merely as labels, and are not intended to impose numerical requirements or a particular positional order on their objects. A signal flow chart may describe a system, method or software implementing the method, depending on the type of realization, e.g., as hardware, software or a combination thereof. A block may be implemented as hardware, software or a combination thereof.

Claims (13)

  1. A sound reproduction system comprising:
    a loudspeaker arrangement (105, 200, 410, 411, 412,422, 424, 501) configured to generate from a customized audio signal an acoustically isolated sound field at a position dependent on a sound field position control signal;
    a listener evaluation block (112, 113, 114) configured to provide a listening position signal representing a position of a listener and a listener identification signal representing the identity of the listener; and
    an audio control block (117, 401) configured to receive and process the listening position signal, the listener identification signal and an audio signal; the audio control block (117, 401) being further configured to control via the sound field position control signal the position of the sound field dependent on the listening position signal so that the position of the sound field is at the position of the listener, and to process the audio signal according to an audio setting dependent on the identity of the listener to provide the customized audio signal; characterized by
    a microphone arrangement disposed at the listening position, wherein:
    the loudspeaker arrangement is configured to generate a sound beam that sweeps from one side of an area to the other side, the area including the listening position;
    the listener evaluation block (112, 113, 114) is wirelessly connected or connected by wire to the microphone arrangement;
    the microphone arrangement is configured to pick up the sound beam when sweeping the listening position and to provide a corresponding microphone signal; and
    the listener evaluation block (112, 113, 114) is configured to evaluate the microphone signal and a corresponding beam position to provide the listening position signal.
  2. The system of claim 1, wherein processing the audio signal according to the audio setting includes at least one of adjusting the balance between spectral components of the audio signal, adjusting the volume of the audio signal and adjusting the dynamics of the audio signal.
  3. The system of claim 1 or 2, wherein
    the microphone arrangement is further configured to provide a microphone identification signal corresponding to the identity of a specific listener; and
    the listener evaluation block (112, 113, 114) is further configured to identify the specific listener from the microphone identification signal and to generate the corresponding listener identification signal.
  4. The system of any of claims 1 to 3, further comprising a memory (126) configured to store data representing identities of a multiplicity of listeners and corresponding audio settings, wherein the audio control block (117, 401) is further configured to select based on the listener identification signal the corresponding audio setting for processing the audio signal.
  5. The system of any of claims 1 to 4, further comprising a default audio setup and a default sound zone that are employed if no known listener is identified.
  6. The system of any of claims 1 to 5, further comprising a camera (130, 701) connected to the audio control block and directed to an area including the listening position, the audio control block (117, 401) being further configured to recognize gestures of the listener via the camera (130, 701) and to control according to the recognized gestures at least one of processing the audio signal and configuring the sound zone.
  7. The system of claim 1 or 2, wherein the microphone arrangement further comprises a microphone array that has a multiplicity of microphones, wherein:
    the microphone array is configured to pick up sound from the listening position and to provide a corresponding microphone signal; and
    the listener evaluation block (112, 113, 114) is connected to the microphone array, the listener evaluation block being configured to evaluate the microphone signal to evaluate the direction of the listening position.
  8. A sound reproduction method comprising:
    generating from a customized audio signal an acoustically isolated sound field at a position dependent on a sound field position control signal;
    providing a listening position signal representing a listening position and a listener identification signal representing the identity of the listener;
    processing the listening position signal, the listener identification signal and an audio signal;
    controlling via the sound field position control signal the position of the sound field dependent on the listening position signal so that the position of the sound field is at the listening position; and
    processing the audio signal according to an audio setting dependent on the identity of the listener to provide the customized audio signal; characterized by
    generating a sound beam sweeping from one side of an area to the other side, the area including the listening position;
    picking up the sound beam at the listening position and, when sweeping the listening position, providing a corresponding microphone signal; and
    evaluating the microphone signal and a corresponding beam position to provide the listening position signal.
  9. The method of claim 8, wherein processing the audio signal according to the audio setting includes at least one of adjusting the balance between spectral components of the audio signal, adjusting the volume of the audio signal and adjusting the dynamics of the audio signal.
  10. The method of claim 9, further comprising:
    providing with the microphone signal a microphone identification signal corresponding to the identity of a specific listener;
    identifying the specific listener from the microphone identification signal; and generating the corresponding listener identification signal.
  11. The method of any of claims 8 to 10 further comprising:
    storing data representing identities of a multiplicity of listeners and corresponding audio settings; and
    selecting based on the listener identification signal the corresponding audio settings for processing the audio signal.
  12. The method of any of claims 8 to 10, further comprising
    picking up sound from the listening position and providing a corresponding microphone signal; and
    evaluating the microphone signal to evaluate the direction of the listening position.
  13. The method of any of claims 8 to 12, further comprising:
    recognizing with a camera directed to an area including the listening position, gestures of the listener; and
    controlling according to the recognized gestures at least one of processing the audio signal and configuring the sound field.
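The sweep-and-detect localization recited in claims 1 and 8 can be illustrated with a minimal sketch: the beam steps through a set of directions, the microphone at the listening position reports a pickup level for each step, and the beam position with the peak level is taken as the listening position. All names and values below are hypothetical:

```python
import numpy as np

def locate_listener(mic_levels, beam_angles):
    """Return the beam angle at which the listener's microphone picked up
    the swept beam most strongly - taken as the listener's direction."""
    return beam_angles[int(np.argmax(mic_levels))]

# Hypothetical sweep: the beam steps through 7 angles; the microphone at the
# listening position records the highest RMS level when the beam passes it.
angles = np.linspace(-60, 60, 7)                        # degrees
levels = np.array([0.1, 0.2, 0.5, 0.9, 0.4, 0.2, 0.1])  # measured per step
print(locate_listener(levels, angles))                  # peak at 0.0 degrees
```

In practice the reported levels would be correlated with the known beam schedule, so the evaluation block can map the time of the microphone's peak response back to the beam position that produced it.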
EP16202689.2A 2016-01-04 2016-12-07 Sound reproduction for a multiplicity of listeners Active EP3188505B1 (en)

Priority Applications (4)

Application Number Priority Date Filing Date Title
JP2016248968A JP6905824B2 (en) 2016-01-04 2016-12-22 Sound reproduction for a large number of listeners
KR1020160183270A KR102594086B1 (en) 2016-01-04 2016-12-30 Sound reproduction for a multiplicity of listeners
US15/398,139 US10097944B2 (en) 2016-01-04 2017-01-04 Sound reproduction for a multiplicity of listeners
CN201710003824.8A CN106941645B (en) 2016-01-04 2017-01-04 System and method for sound reproduction of a large audience

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
EP16150043 2016-01-04
EP16174534.4A EP3188504B1 (en) 2016-01-04 2016-06-15 Multi-media reproduction for a multiplicity of recipients
EP16199773 2016-11-21

Publications (2)

Publication Number Publication Date
EP3188505A1 EP3188505A1 (en) 2017-07-05
EP3188505B1 true EP3188505B1 (en) 2020-04-01

Family

ID=57517801

Family Applications (1)

Application Number Title Priority Date Filing Date
EP16202689.2A Active EP3188505B1 (en) 2016-01-04 2016-12-07 Sound reproduction for a multiplicity of listeners

Country Status (1)

Country Link
EP (1) EP3188505B1 (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116547977A (en) * 2020-12-03 2023-08-04 交互数字Ce专利控股有限公司 Method and apparatus for audio guidance using gesture recognition

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20130259238A1 (en) * 2012-04-02 2013-10-03 Qualcomm Incorporated Systems, methods, apparatus, and computer-readable media for gestural manipulation of a sound field

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6741273B1 (en) * 1999-08-04 2004-05-25 Mitsubishi Electric Research Laboratories Inc Video camera controlled surround sound
EP2005414B1 (en) * 2006-03-31 2012-02-22 Koninklijke Philips Electronics N.V. A device for and a method of processing data
US20090304205A1 (en) * 2008-06-10 2009-12-10 Sony Corporation Of Japan Techniques for personalizing audio levels
CN105794231B (en) * 2013-11-22 2018-11-06 苹果公司 Hands-free beam pattern configuration


Also Published As

Publication number Publication date
EP3188505A1 (en) 2017-07-05

Similar Documents

Publication Publication Date Title
US10097944B2 (en) Sound reproduction for a multiplicity of listeners
JP6615300B2 (en) Hands-free beam pattern configuration
CN108370470B (en) Conference system and voice acquisition method in conference system
US9769552B2 (en) Method and apparatus for estimating talker distance
JP6193468B2 (en) Robust crosstalk cancellation using speaker array
US11304003B2 (en) Loudspeaker array
Coleman et al. Personal audio with a planar bright zone
KR20190039646A (en) Apparatus and Method Using Multiple Voice Command Devices
Kyriakakis et al. Surrounded by sound
JP2008543143A (en) Acoustic transducer assembly, system and method
CN114051738A (en) Steerable speaker array, system and method thereof
EP3188505B1 (en) Sound reproduction for a multiplicity of listeners
US20200267490A1 (en) Sound wave field generation
Tsakalides Surrounded by Sound-Acquisition and Rendering
Kyriakakis et al. Array processing addresses two major aspects of spatial filtering, namely localization of a signal of interest, and adaptation of the spatial response of an array of sensors to achieve steering in a desired direction. The achieved spatial focusing in the direction of interest makes array signal processing

Legal Events

Date Code Title Description
PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: THE APPLICATION HAS BEEN PUBLISHED

AK Designated contracting states

Kind code of ref document: A1

Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR

AX Request for extension of the european patent

Extension state: BA ME

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: REQUEST FOR EXAMINATION WAS MADE

17P Request for examination filed

Effective date: 20180104

RBV Designated contracting states (corrected)

Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: EXAMINATION IS IN PROGRESS

17Q First examination report despatched

Effective date: 20180417

GRAP Despatch of communication of intention to grant a patent

Free format text: ORIGINAL CODE: EPIDOSNIGR1

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: GRANT OF PATENT IS INTENDED

GRAS Grant fee paid

Free format text: ORIGINAL CODE: EPIDOSNIGR3

INTG Intention to grant announced

Effective date: 20200129

GRAA (expected) grant

Free format text: ORIGINAL CODE: 0009210

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: THE PATENT HAS BEEN GRANTED

AK Designated contracting states

Kind code of ref document: B1

Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR

REG Reference to a national code

Ref country code: GB

Ref legal event code: FG4D

REG Reference to a national code

Ref country code: AT

Ref legal event code: REF

Ref document number: 1252907

Country of ref document: AT

Kind code of ref document: T

Effective date: 20200415

Ref country code: CH

Ref legal event code: EP

REG Reference to a national code

Ref country code: DE

Ref legal event code: R096

Ref document number: 602016032933

Country of ref document: DE

REG Reference to a national code

Ref country code: IE

Ref legal event code: FG4D

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: BG

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20200701

REG Reference to a national code

Ref country code: NL

Ref legal event code: MP

Effective date: 20200401

REG Reference to a national code

Ref country code: LT

Ref legal event code: MG4D

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: PT

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20200817

Ref country code: CZ

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20200401

Ref country code: LT

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20200401

Ref country code: SE

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20200401

Ref country code: NL

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20200401

Ref country code: NO

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20200701

Ref country code: GR

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20200702

Ref country code: IS

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20200801

Ref country code: FI

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20200401

REG Reference to a national code

Ref country code: AT

Ref legal event code: MK05

Ref document number: 1252907

Country of ref document: AT

Kind code of ref document: T

Effective date: 20200401

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: HR

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20200401

Ref country code: LV

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20200401

Ref country code: RS

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20200401

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: AL

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20200401

REG Reference to a national code

Ref country code: DE

Ref legal event code: R097

Ref document number: 602016032933

Country of ref document: DE

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: RO

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20200401

Ref country code: IT

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20200401

Ref country code: SM

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20200401

Ref country code: DK

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20200401

Ref country code: AT

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20200401

Ref country code: EE

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20200401

Ref country code: ES

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20200401

PLBE No opposition filed within time limit

Free format text: ORIGINAL CODE: 0009261

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: NO OPPOSITION FILED WITHIN TIME LIMIT

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: SK

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20200401

Ref country code: PL

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20200401

26N No opposition filed

Effective date: 20210112

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: SI

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20200401

REG Reference to a national code

Ref country code: CH

Ref legal event code: PL

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: MC

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20200401

REG Reference to a national code

Ref country code: BE

Ref legal event code: MM

Effective date: 20201231

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: IE

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20201207

Ref country code: FR

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20201231

Ref country code: LU

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20201207

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: CH

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20201231

Ref country code: LI

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20201231

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: TR

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20200401

Ref country code: MT

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20200401

Ref country code: CY

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20200401

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: MK

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20200401

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: BE

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20201231

P01 Opt-out of the competence of the unified patent court (upc) registered

Effective date: 20230526

PGFP Annual fee paid to national office [announced via postgrant information from national office to epo]

Ref country code: GB

Payment date: 20231121

Year of fee payment: 8

PGFP Annual fee paid to national office [announced via postgrant information from national office to epo]

Ref country code: DE

Payment date: 20231121

Year of fee payment: 8