EP2898707B1 - Optimierte kalibrierung eines klangwiedergabesystems mit mehreren lautsprechern - Google Patents

Optimierte kalibrierung eines klangwiedergabesystems mit mehreren lautsprechern

Info

Publication number
EP2898707B1
Authority
EP
European Patent Office
Prior art keywords
reflections
impulse responses
signal
amplitude
direct wave
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
EP13774728.3A
Other languages
English (en)
French (fr)
Other versions
EP2898707A1 (de)
Inventor
Romain DEPREZ
Rozenn Nicol
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Orange SA
Original Assignee
Orange SA
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Orange SA filed Critical Orange SA
Publication of EP2898707A1 publication Critical patent/EP2898707A1/de
Application granted granted Critical
Publication of EP2898707B1 publication Critical patent/EP2898707B1/de
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04S STEREOPHONIC SYSTEMS
    • H04S7/00 Indicating arrangements; Control arrangements, e.g. balance control
    • H04S7/30 Control circuits for electronic adaptation of the sound field
    • H04S7/301 Automatic calibration of stereophonic sound system, e.g. with test microphone
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L19/00 Speech or audio signals analysis-synthesis techniques for redundancy reduction, e.g. in vocoders; Coding or decoding of speech or audio signals, using source filter models or psychoacoustic analysis
    • G10L19/008 Multichannel audio signal coding or decoding using interchannel correlation to reduce redundancy, e.g. joint-stereo, intensity-coding or matrixing
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04S STEREOPHONIC SYSTEMS
    • H04S2420/00 Techniques used in stereophonic systems covered by H04S but not provided for in its groups
    • H04S2420/11 Application of ambisonics in stereophonic audio systems

Definitions

  • the present invention relates to a method and a device for calibrating a sound reproduction system comprising a plurality of loudspeakers or sound reproduction elements.
  • The calibration optimizes the listening quality of the reproduction system taken as a whole, that is, all of the reproduction elements, including the loudspeaker setup and the listening room.
  • The reproduction systems primarily concerned are multichannel systems (5.1, 7.1, 10.2, 22.2, etc.) and ambisonic systems (Ambisonics, in particular Higher Order Ambisonics (HOA)).
  • Current devices for calibrating the acoustics of the listening environment rely on a general "multichannel equalization" approach: the impulse response of each loudspeaker of the reproduction unit is measured using one or more microphones at one or more points of the listening location, and a frequency-equalization filter is applied to each loudspeaker independently by inverting all or part of the impulse response measured for that loudspeaker.
  • The inversion aims to correct the loudspeaker response so that it approaches as closely as possible a "target" curve, generally defined in the frequency domain, in order to improve the rendering of the timbre of the sound sources.
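  • As a rough illustration of this prior-art approach (not taken from the patent: the function name, FFT size and regularization constant below are arbitrary), a per-loudspeaker correction filter can be obtained by regularized inversion of the measured response toward a target magnitude curve:

```python
# Hypothetical sketch of prior-art per-loudspeaker equalization:
# invert the measured impulse response toward a target magnitude curve.
import numpy as np

def equalization_filter(h, target_mag, n_fft=4096, reg=1e-3):
    """h: measured impulse response (1-D array);
    target_mag: desired magnitude response, length n_fft // 2 + 1;
    reg: regularization that avoids boosting deep notches."""
    H = np.fft.rfft(h, n_fft)
    # Regularized inversion: C ~ target / H, kept bounded where |H| is small.
    C = target_mag * np.conj(H) / (np.abs(H) ** 2 + reg)
    return np.fft.irfft(C, n_fft)
```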
  • This type of calibration focuses on correcting the frequency response of the reproduction system in the listening location, without exploiting temporal information such as reflections, and in particular the first reflections of the sound signals.
  • The analysis of the impulse responses carried out in existing calibration methods is monophonic, that is, it does not take into account the spatial information of the reflections, such as their directions of incidence.
  • State-of-the-art techniques apply correction filters to each channel of the multi-channel signal, that is, each loudspeaker of the reproduction system is corrected individually without taking the loudspeaker array as a whole into account.
  • the present invention improves the situation.
  • The effect of the first reflections of the sound waves emitted by the reproduction system on the auditory perception of the direct waves is evaluated and taken into account in order to adapt the processing applied to the channels of the multi-channel signal according to the specific perceptual effect associated with each reflection.
  • the filtering of the channels of the multi-channel signal thus takes into account exclusively the reflections which have an impact on the auditory perception of direct waves.
  • The constraints on the correction are relaxed by the fact that it operates on perceptual impulse responses instead of raw impulse responses.
  • Some of the non-perceptible reflections eliminated from the impulse responses correspond to components of the impulse response that are precisely at the origin of processing instabilities (in particular non-minimum-phase components). With perceptual impulse responses, the risk of instabilities and artefacts that can arise when the processing takes all of the reflections into account is thus reduced.
  • The error signal thus determined makes it possible to take into account, in the calculation of the filtering matrix, only the reflections which have an impact on the auditory perception of the direct wave. Indeed, only the reflections which are not perceptible are removed for the determination of the error signal.
  • the perceptibility threshold can be obtained from characteristics determined by the step of analyzing the multidirectional impulse responses of the loudspeakers.
  • The perceptibility threshold is determined as a function of the direction of incidence of the direct wave and/or its amplitude, and of the directions of incidence of the first reflections and/or their arrival times relative to the direct wave.
  • The effect of a reflection on the perception of the direct wave generally depends on five parameters in total: on the one hand, two characteristics of the direct wave, its amplitude and its direction; on the other hand, three characteristics of the reflection, its amplitude, its time of arrival and its direction of incidence.
  • When one of these characteristics is not available, the perceptual effect of the reflection can still be evaluated by fixing the missing characteristic at an arbitrary value, for example the value corresponding to the most unfavourable case, so as to maximize perceptibility.
  • the direction value can be fixed and the perceptibility threshold determined only according to the value of the time of arrival.
  • the target response signal corresponds to the response of the direct wave alone without any reflection.
  • the target response signal corresponds to the response of a direct wave associated with reflections representative of a predetermined listening location.
  • The reference response can then be deliberately chosen to correspond to a desired listening location in which the sound has the desired quality.
  • the target response signal corresponds to the response of a direct wave associated with reflections representative of a different restitution set.
  • The reference response is here chosen as a function of a chosen reference reproduction system, in which the number and positions of the loudspeakers may differ from those of the reproduction system being corrected.
  • This device has the same advantages as the method described above, which it implements.
  • the invention also relates to an audio decoder comprising a calibration device as described.
  • The invention also relates to a computer program comprising code instructions for implementing the steps of the calibration method as described, when these instructions are executed by a processor.
  • The invention also relates to a storage medium, readable by a processor, whether or not integrated into the calibration device and possibly removable, storing a computer program implementing the calibration method as described above.
  • Figure 1 illustrates an example of a sound reproduction system in which the calibration method according to one embodiment of the invention is implemented.
  • This system includes a processing device 100 comprising a calibration device E according to an embodiment of the invention, which controls a rendering unit 180 comprising a plurality of rendering elements (speakers, loudspeakers, etc.) represented here by loudspeakers HP1, HP2, HP3, HPi and HPN.
  • These loudspeakers are arranged in a listening location in which a microphone or a set of microphones MA is also provided.
  • The processing device 100 can be a decoder, such as a living-room "set-top box" decoder for reading or broadcasting audio or video content, a processing server capable of processing audio and video content and retransmitting it to the reproduction unit, a conference bridge capable of processing the audio signals from different conference locations, or any multi-channel audio signal processing device.
  • The processing device 100 comprises a calibration device E according to an embodiment of the invention and a filtering matrix 170 composed of a plurality of processing filters, which are determined by the calibration device according to a calibration method such as that illustrated later with reference to figure 2.
  • This filtering matrix receives as input a multi-channel signal Si and outputs the signals SC1, SC2, SCi, SCN, which can be reproduced by the reproduction unit 180.
  • The calibration device E comprises a reception and transmission module 110 capable, on the one hand, of transmitting reference audio signals (Sref) to the various loudspeakers of the reproduction unit 180 and, on the other hand, of receiving, via the microphone or set of microphones MA, the multidirectional impulse responses (RIs) of these loudspeakers corresponding to the playback of these reference signals.
  • a multidirectional impulse response contains temporal and spatial information relating to all of the sound waves induced by the loudspeaker considered in the reproduction room.
  • The reference signals are, for example, signals whose frequency increases logarithmically over time, known as "chirps" or "logarithmic sweeps".
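  • A minimal sketch of such a measurement (the spectral-division deconvolution and all parameter values are assumptions, not taken from the patent): the sweep is played by one loudspeaker, recorded by the microphone, and the impulse response is recovered by deconvolution.

```python
# Impulse-response measurement with a logarithmic sweep ("chirp").
# Plain spectral division; windowing and distortion handling are omitted.
import numpy as np

def log_sweep(f1, f2, duration, fs):
    """Exponential sine sweep from f1 to f2 (Hz), duration in seconds."""
    t = np.arange(int(duration * fs)) / fs
    r = np.log(f2 / f1)
    return np.sin(2 * np.pi * f1 * duration / r * (np.exp(t * r / duration) - 1))

def impulse_response(recording, sweep):
    """Recover the impulse response by dividing the spectra (recorded / sweep)."""
    n = len(recording) + len(sweep) - 1
    return np.fft.irfft(np.fft.rfft(recording, n) / (np.fft.rfft(sweep, n) + 1e-12), n)
```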
  • The microphone capable of measuring the multidirectional impulse responses of the loudspeakers is an HOA-type microphone placed at a point of the listening location, for example at the center of the loudspeakers of the reproduction unit.
  • This microphone will receive, for each speaker reproducing a reference audio signal, the sound reproduced in several directions.
  • The HOA microphone is made up of a plurality of microphones, so the spatial information of the different sounds picked up can be extracted.
  • For more details on this type of microphone, one can refer to the document entitled "Study and realization of advanced spatial encoding tools for the sound spatialization technique Higher Order Ambisonics: 3D microphone and distance control" by S. Moreau, PhD thesis, Université du Maine, 2006.
  • The HOA microphone thus provides the multidirectional impulse responses of each of the loudspeakers and transmits them to the calibration device, or stores them in a local or remote memory space.
  • The analysis module 120 of the device E performs a joint analysis of the impulse responses obtained, which makes it possible to obtain these characteristics, and in particular the characteristics of the first reflections of the reproduced signals.
  • The multidirectional impulse responses are obtained in a spatio-temporal representation in which the spatial information is described on the basis of the spherical harmonics, which makes it possible to identify the directions of incidence of the different sound components.
  • The analysis of the impulse responses is carried out over a predetermined time window that includes the arrival times of the first reflections.
  • This time window is between 50 and 100 ms long, which corresponds to the time scale of the arrival times of the first reflections.
  • The embodiment thus described is adapted to the spherical-harmonic representation domain, but it is entirely possible to carry out the same steps in a WFS ("Wave Field Synthesis") representation domain or in the plane-wave domain.
  • The means for capturing the signals reproduced by the loudspeakers must then be adapted to these representation domains in order to obtain multidirectional impulse responses, without departing from the scope of the invention.
  • the calibration device E also includes a module 130 for comparing and identifying non-perceptible reflections.
  • This module implements a step of comparing the amplitudes of the reflections, obtained by the analysis module 120, with a predetermined perceptibility threshold Se.
  • This perceptibility threshold is determined by the module 140 from a predefined table of values stored in a memory space.
  • A step of identifying these "non-perceptible" reflections is then implemented by the module 130. From these identified reflections, the module 150 implements a step of determining perceptual impulse responses, which are deduced from the impulse responses obtained by module 110 by suppressing the reflections judged non-perceptible.
  • Figure 2 illustrates, in the form of a flowchart, the main steps implemented in one embodiment of the calibration method according to the invention.
  • In step E201, the multidirectional impulse responses of the various loudspeakers of the reproduction unit, as described with reference to figure 1, are obtained. They are obtained by the calibration device either by simply reading them from memory, if they were saved beforehand, or by receiving them from the microphone or set of microphones that carried out the measurement.
  • These multidirectional impulse responses are the responses of each loudspeaker following the reproduction of a reference signal, as described with reference to figure 1.
  • a step E202 of analysis of the multidirectional impulse responses thus obtained is then implemented.
  • This analysis is carried out in a space-time representation domain.
  • The spatial information can, for example, be described in the spherical-harmonic representation domain.
  • Each point then has as spherical coordinates a distance r from the origin O, an azimuth angle θ (orientation in the horizontal plane) and an elevation angle δ (orientation in the vertical plane).
  • An acoustic wave is fully described if the acoustic pressure p(r, θ, δ, t) is defined at every point and at every time t; its temporal Fourier transform is denoted P(r, θ, δ, f), where f denotes the temporal frequency.
  • The spatial components are the ambisonic components B_mn^σ, which correspond to the decomposition of the sound pressure wave p on the basis of the spherical harmonics Y_mn^σ(θ, δ).
  • For a plane wave carrying a signal S(t) and arriving from direction (θ, δ), these components reduce to B_mn^σ = S(t)·Y_mn^σ(θ, δ).
  • The spherical harmonics Y_mn^σ(θ, δ) are built from the associated Legendre functions P_mn(sin δ).
  • An illustration of the spherical harmonic functions is given in figure 3b, showing:
  • the omnidirectional component Y_00^+1 (designated as the "W" component in ambisonic terminology), corresponding to order 0,
  • the bidirectional components Y_10^+1, Y_11^+1 and Y_11^-1 (designated respectively as the "Z", "X" and "Y" components in ambisonic terminology), corresponding to order 1, and the components of higher orders.
  • Decomposition on the basis of spherical harmonics can be considered as the dual transform between spatial coordinates and spatial frequencies.
  • The components B_mn^σ therefore define a spatial spectrum.
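  • For reference, the spherical-harmonic (Fourier-Bessel) decomposition underlying these ambisonic components can be written as below; this is the standard HOA formalism of the literature cited in this description (e.g. Daniel, Moreau), not a formula quoted verbatim from the patent. Here $j_m$ denotes the spherical Bessel function of order $m$ and $k$ the wavenumber.

$$
P(r,\theta,\delta,f) \;=\; \sum_{m=0}^{\infty} i^{m}\, j_{m}(kr) \sum_{0 \le n \le m,\ \sigma=\pm1} B_{mn}^{\sigma}\, Y_{mn}^{\sigma}(\theta,\delta)
$$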
  • a multidirectional impulse response is obtained which is made up of K impulse responses corresponding to the K components of the chosen spatial representation.
  • The multidirectional impulse response associated with loudspeaker j thus consists of K elementary responses H_jl(t), where the index l identifies the spatial component and t is the time sample.
  • the reproduction system comprises a total of N loudspeakers
  • The set of multidirectional impulse responses measured for the N loudspeakers and the K spatial components defines a matrix H of size K×N, in which the j-th column corresponds to the multidirectional impulse response associated with the j-th loudspeaker.
  • The K spatial components contained in the vector h_j(t) represent the spatial spectrum of the sounds picked up by the microphone.
  • This inverse transformation is carried out by reconstructing the pressure wave p(r, θ, δ, t) as a linear combination of spherical harmonics, each harmonic being weighted by the amplitude of the component associated with it.
  • This spatial decoding step is described, for example, in the document entitled "Ambisonics encoding of other audio formats for multiple listening conditions" by Jérôme Daniel, Jean-Bernard Rault and Jean-Dominique Polack, AES 105th Convention, September 1998.
  • This transformation from the spatial frequencies (ambisonic components) to the spatial coordinates is carried out by multiplying, for each loudspeaker and each time sample t, the vector h_j(t) by a decoding matrix D.
  • Each column of this matrix consists of the values of the K spherical harmonics for a given virtual loudspeaker.
  • the precision of estimation of these characteristics therefore depends on the number P of virtual loudspeakers used for this analysis.
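  • A hedged sketch of this spatial decoding toward P analysis directions (virtual loudspeakers). The pseudo-inverse construction of D and the use of scipy's complex spherical harmonics (taking their real part) are simplifications of mine; the patent's formalism uses the real-valued harmonics Y_mn^σ.

```python
# Project the K ambisonic components of a multidirectional impulse response
# onto P analysis directions (virtual loudspeakers) with a decoding matrix D.
import numpy as np
from scipy.special import sph_harm

def decoding_matrix(directions, order):
    """directions: list of (azimuth, elevation) in radians; returns D of size P x K."""
    Y = []
    for az, el in directions:
        row = []
        for m in range(order + 1):          # degree
            for n in range(-m, m + 1):      # azimuthal order
                row.append(sph_harm(n, m, az, np.pi / 2 - el).real)
        Y.append(row)
    Y = np.asarray(Y)                       # P x K encoding matrix
    return np.linalg.pinv(Y.T)              # D = pinv(Y^T), size P x K

def virtual_loudspeaker_irs(h_j, D):
    """h_j: K x T multidirectional IR of one loudspeaker; returns P x T plane-wave IRs."""
    return D @ h_j
```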
  • On the one hand, the characteristics of the direct wave are determined, such as its amplitude A_D(j), its time of arrival at the microphone T_D(j) and its direction of incidence.
  • On the other hand, the characteristics of the reflections are determined, such as their amplitudes A_Ri(j), their times of arrival at the microphone T_Ri(j) and their directions of incidence C_Ri(j).
  • The first reflections of a reproduced audio signal depend on the listening location in which the reproduction unit is placed. Generally, these first reflections arrive within 50 to 100 ms after the direct wave.
  • the analysis time window of step E202 will, in an adapted embodiment, be between 50 and 100 ms.
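  • An illustrative sketch of this analysis (the peak-picking heuristic, the dictionary fields and the -40 dB floor are assumptions, not the patent's exact procedure): the direct wave is taken as the global maximum, and candidate first reflections are the later peaks found within the analysis window in each analysis direction.

```python
# Locate the direct wave and candidate first reflections in the P x T
# plane-wave impulse responses; keep amplitude, arrival delay and direction.
import numpy as np
from scipy.signal import find_peaks

def analyse_first_reflections(pw_irs, directions, fs, window_ms=100.0, floor_db=-40.0):
    env = np.abs(pw_irs)
    p_d, t_d = np.unravel_index(np.argmax(env), env.shape)   # direct wave = global maximum
    a_d = env[p_d, t_d]
    direct = {"amplitude": a_d, "time_s": t_d / fs, "direction": directions[p_d]}
    reflections = []
    t_end = t_d + 1 + int(window_ms * 1e-3 * fs)
    for p in range(env.shape[0]):
        seg = env[p, t_d + 1:t_end]
        peaks, props = find_peaks(seg, height=a_d * 10 ** (floor_db / 20))
        for k, a in zip(peaks, props["peak_heights"]):
            reflections.append({"amplitude": a,
                                "delay_ms": (k + 1) * 1e3 / fs,   # relative to the direct wave
                                "direction": directions[p],
                                "direction_index": p,
                                "sample_index": t_d + 1 + k})
    return direct, reflections
```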
  • Step E203 compares the amplitudes obtained by the analysis step with a perceptibility threshold Se of the reflections which has been defined beforehand and stored in memory.
  • Step E204 makes it possible to find the predefined threshold value as a function of characteristics of each reflection and of the associated direct wave, obtained in the analysis step E202.
  • The value of the arrival-time characteristic of the reflection can be fixed, for example at the most critical value (the one giving maximum perceptibility), and the perceptibility threshold then determined only as a function of the direction value.
  • Alternatively, the direction value can be fixed, for example at the most critical value (the one giving maximum perceptibility), and the perceptibility threshold determined as a function of the arrival time.
  • the threshold value can be determined, with better precision, as a function of these two characteristics.
  • an array of perceptibility threshold values is stored in memory.
  • An example of such a table is illustrated with reference to the figure 4 .
  • the threshold is defined as the relative level of the reflection, that is to say it represents the difference between the amplitude values (expressed in dB) of the reflection and of the direct wave considered.
  • This table of values is an example of threshold values defined from psycho-acoustic experiments carried out by considering different types of sound signal (speech, clicks, music, etc.), different angles of incidence and different arrival times of the reflections and of the direct wave.
  • a threshold of perceptibility of these reflections is defined according to these parameters.
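  • A sketch of such a table lookup (the grid and the threshold values below are placeholders, not the values of figure 4): the threshold, in dB relative to the direct wave, is interpolated as a function of the reflection's arrival delay and of the angle between reflection and direct wave.

```python
# Interpolate the perceptibility threshold Se from a stored table.
import numpy as np
from scipy.interpolate import RegularGridInterpolator

delays_ms = np.array([5.0, 15.0, 30.0, 50.0, 80.0])              # placeholder grid
angles_deg = np.array([0.0, 30.0, 60.0, 90.0, 150.0, 180.0])     # placeholder grid
# Placeholder threshold values in dB (relative level of the reflection).
threshold_table = -np.outer(np.linspace(20.0, 5.0, len(delays_ms)),
                            np.linspace(1.0, 0.5, len(angles_deg)))

_interp = RegularGridInterpolator((delays_ms, angles_deg), threshold_table,
                                  bounds_error=False, fill_value=None)

def perceptibility_threshold(delay_ms, angle_deg):
    """Masked threshold Se (dB re. the direct wave) for one reflection."""
    return float(_interp([[delay_ms, angle_deg]])[0])
```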
  • Figure 5 shows different perceptibility threshold curves expressed in dB (again the relative threshold, i.e. the difference between the level of the reflection and that of the direct wave). These curves correspond to different positions of the direct wave (azimuth of 0° for D1, 60° for D2, 90° for D3 and 150° for D4) and represent the perceptibility thresholds as a function of the direction of the reflection, for a fixed arrival time (here 15 ms).
  • In step E204, the threshold value corresponding to the characteristics obtained in the analysis step is retrieved.
  • This threshold value is compared to the amplitude value of each reflection in step E203.
  • The amplitude of the reflection is referenced to that of the associated direct wave and expressed in dB as 20·log10(AN_Ri(j)), where AN_Ri(j) is the amplitude of the reflection normalized by that of the direct wave.
  • Step E203 thus makes it possible to identify all the reflections which have no impact on the perception of the direct wave, that is, all the reflections whose amplitude is below the perceptibility threshold.
  • Figure 6 shows an example of the impulse response, for a given direction, of one of the loudspeakers of the reproduction unit, compared with the dashed curve representing the perceptibility threshold (RMT, for "Reflection Masked Threshold") obtained from the table described above with reference to figure 4. Reflections whose level is below the threshold curve are thus identified. Note that in the illustrated case the first reflections occurring within the first 15 ms are not perceptible.
  • this operation is carried out for example by a thresholding operation.
  • the value of the perceptibility threshold Se is subtracted from the impulse response signal which was obtained in step E201.
  • the processing can also be applied in the dual domain of space coordinates. In the following, we will describe the operation performed in the case of the spatial spectrum.
  • the thresholding operation consists in comparing for each identified reflection its amplitude with the perceptibility threshold Se associated with its characteristics.
  • the perceptual impulse responses only retain the reflections having a significant impact on the perception of the direct wave.
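  • A hedged sketch of this step (the dictionary fields, the zeroing of a few samples around each reflection, and the "threshold_db" field, assumed to come from the table lookup above, are illustrative choices rather than the patent's exact operation):

```python
# Zero out the reflections identified as imperceptible in the P x T
# plane-wave impulse responses, yielding the perceptual impulse responses.
import numpy as np

def perceptual_impulse_responses(pw_irs, a_direct, reflections, half_width=8):
    out = pw_irs.copy()
    for r in reflections:
        rel_level_db = 20 * np.log10(r["amplitude"] / a_direct)
        if rel_level_db < r["threshold_db"]:              # reflection is not perceptible
            t0 = r["sample_index"]
            out[r["direction_index"], max(0, t0 - half_width):t0 + half_width] = 0.0
    return out
```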
  • In step E206, a filtering matrix is determined from the perceptual impulse responses. This filtering matrix is then used to process the multi-channel audio signal before its sound reproduction by the reproduction unit of the system.
  • A possible embodiment includes a step of determining an error signal, defined as the difference between a predetermined target response signal of the reproduction unit and a response signal reconstructed from the perceptual impulse responses, and a multichannel inversion step that minimizes the error signal thus determined.
  • the error signal thus obtained therefore only takes into account the perceptible reflections since it is calculated from a reconstructed signal based on the perceptual impulse responses.
  • The inversion can be performed by a gradient-descent algorithm or one of its variants.
  • An example of a possible inversion algorithm is one of the ISTA type ("Iterative Shrinkage-Thresholding Algorithm"), as described in the document entitled "A Fast Iterative Shrinkage-Thresholding Algorithm for Linear Inverse Problems" by Amir Beck and Marc Teboulle, SIAM J. Imaging Sciences, Vol. 2, No. 1, pp. 183-202, 2009.
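  • A minimal ISTA sketch after Beck and Teboulle (2009), cited above, solving min over x of 0.5*||A x - b||^2 + lam*||x||_1; how A, b and x map onto the patent's quantities (a convolution matrix built from the perceptual impulse responses, the target-minus-reconstructed error, and the filter coefficients) is an assumption made here for illustration.

```python
# Iterative Shrinkage-Thresholding Algorithm (ISTA) for
# min_x 0.5*||A x - b||^2 + lam*||x||_1.
import numpy as np

def ista(A, b, lam=1e-3, n_iter=500):
    L = np.linalg.norm(A, 2) ** 2        # Lipschitz constant of the quadratic term's gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        z = x - A.T @ (A @ x - b) / L    # gradient step
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)   # soft-thresholding step
    return x
```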
  • The problem to be solved in order to calculate the filters of the processing matrix is the following.
  • The real reproduction system consists of N loudspeakers.
  • the space of spatial representation is of dimension K. Spatial information is therefore described by K coefficients.
  • The objective is to reproduce, with the N-loudspeaker system, a set of V signals defining the input multi-channel audio signal.
  • These V signals are intended for an ideal reproduction system consisting of V loudspeakers.
  • This ideal system defines the V target signals which one wishes to reproduce and which therefore correspond to the responses of a fictitious system of V virtual loudspeakers.
  • the resolution of this operation can be carried out in two stages.
  • The correction filters are calculated by correcting only the room effect of the listening location, that is, by taking into account the actual loudspeaker setup, i.e. the N loudspeakers.
  • The arrangement of the loudspeakers is compensated for in order to adapt the V signals to reproduction over a non-ideal configuration of N loudspeakers.
  • The V signals are distributed by matrixing onto the N channels associated with the real reproduction system in order to emulate a system of V virtual loudspeakers.
  • the elements of the matrix H include the perceptual impulse responses as obtained in step E205.
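  • An illustrative end-to-end sketch of this two-stage processing (the matrixing gains G, the correction filters and the offline fftconvolve filtering are assumptions standing in for the filtering matrix 170; the filters are assumed to have equal length):

```python
# Stage 1: matrix the V input channels onto the N real loudspeakers to emulate
# the V virtual loudspeakers; stage 2: apply the per-loudspeaker correction filters.
import numpy as np
from scipy.signal import fftconvolve

def render(multichannel_in, G, correction_filters):
    """multichannel_in: V x T signals, G: N x V matrixing gains,
    correction_filters: list of N equal-length FIR filters."""
    feeds = G @ multichannel_in
    return np.stack([fftconvolve(feeds[i], correction_filters[i])
                     for i in range(feeds.shape[0])])
```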
  • the target responses may vary depending on the expected sound reproduction result.
  • this target response corresponds to the impulse response given by the direct wave alone without any reflection. This is equivalent to suppressing all the room effect in the expected signal.
  • the target response signal corresponds to the response of a direct wave associated with reflections representative of a predetermined listening location.
  • A characteristic listening location with good listening quality may be desired (for example the listening location in the Pleyel™ hall).
  • the processing filters will be calculated to obtain a sound reproduction close to this listening quality.
  • The target response signal corresponds to the response of a direct wave associated with reflections representative of a reproduction unit different from the one used to reproduce the resulting signal.
  • A desired reproduction system, for example one comprising more loudspeakers, is then taken as a reference in order to obtain a reproduction close to that which would have been obtained with such a system.
  • The implementation of the described method makes it possible to obtain a better listening quality during the reproduction of a multi-channel audio signal, because only the perceptible reflections of the signals reproduced by the reproduction unit in the listening location are taken into account.
  • Figure 7 shows an example of a hardware embodiment of a calibration device according to the invention. This device can be an integral part of an audio/video decoder, a processing server, a conference bridge or any other audio or video playback or broadcasting equipment.
  • This type of device comprises a processor µP cooperating with a memory block MEM comprising a storage and/or working memory.
  • The memory block can advantageously include a computer program comprising code instructions for implementing the steps of the calibration method within the meaning of the invention, when these instructions are executed by the processor, and in particular the steps of: obtaining the multidirectional impulse responses of the loudspeakers of the reproduction unit upon reproduction of a predetermined audio signal; analyzing the multidirectional impulse responses obtained, in a space-time representation domain, over at least one time window including the arrival times of the first reflections of the reproduced audio signal, in order to determine a set of characteristics of the first reflections; comparing the amplitude of each of the reflections with a predetermined perceptibility threshold and identifying the non-perceptible reflections whose amplitude is below the predetermined threshold; modifying the impulse responses obtained, by suppressing the reflections identified as non-perceptible, to obtain perceptual impulse responses; and determining a filtering matrix from the perceptual impulse responses, for application of this filtering matrix to the multi-channel audio signal before sound reproduction.
  • The description of figure 2 reflects the steps of an algorithm of such a computer program.
  • the computer program can also be stored on a memory medium readable by a reader of the device or downloadable in the memory space of the latter.
  • The memory MEM stores a table of perceptibility threshold values as a function of the characteristics of the sound components (the direct wave and the reflections) used in the method according to an embodiment of the invention and, in general, all the data necessary for implementing the method.
  • Such a device comprises an input module I capable of receiving the impulse responses of a reproduction unit, and an output module S capable of transmitting, to a processing module, the calculated filters of a filtering matrix.
  • The device thus described may also include the processing functions themselves, applying the processing matrix upon reception at I of a multi-channel signal Si in order to output processed signals SCi that can be reproduced by the reproduction unit.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Signal Processing (AREA)
  • Acoustics & Sound (AREA)
  • Mathematical Physics (AREA)
  • Computational Linguistics (AREA)
  • Health & Medical Sciences (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Human Computer Interaction (AREA)
  • Multimedia (AREA)
  • Stereophonic System (AREA)

Claims (9)

  1. Method for calibrating an acoustic reproduction unit for a multi-channel acoustic signal, the unit comprising a plurality of loudspeakers, the method comprising the following steps:
    - obtaining (E201) multidirectional impulse responses of the loudspeakers of the reproduction unit upon reproduction of a reference audio signal;
    - analyzing (E202) the multidirectional impulse responses obtained, in a space-time representation domain, over at least one time window including the arrival times of the first reflections of the reproduced reference audio signal, in order to determine a set of characteristics of the direct waves and of the associated first reflections, containing at least the amplitude;
    the method being characterized in that it comprises the following steps:
    - comparing (E203) the amplitude of each of the reflections with a perceptibility threshold (E204) determined as a function of characteristics of the direct wave and of the first reflections of the reference audio signal, and identifying (E203) the non-perceptible reflections whose amplitude is below the determined threshold;
    - modifying (E205) the impulse responses obtained, by suppressing the reflections identified as non-perceptible, in order to obtain perceptual impulse responses;
    - determining (E206) a filtering matrix by the following steps:
    - determining an error signal defined by the difference between a target response signal of a reproduction unit and a response signal reconstructed from the perceptual impulse responses;
    - multichannel inversion by minimizing the error signal thus determined, in order to obtain the filters of the filtering matrix,
    for application of this filtering matrix to the multi-channel audio signal before acoustic reproduction.
  2. Method according to claim 1, characterized in that the perceptibility threshold is determined as a function of the direction of incidence of the direct wave (CD) and/or its amplitude (AD), and of the directions of incidence of the first reflections (CRi) and/or their arrival delays (τRi) relative to the direct wave.
  3. Method according to claim 1, characterized in that the target response signal corresponds to the response of the direct wave alone, without any reflection.
  4. Method according to claim 1, characterized in that the target response signal corresponds to the response of a direct wave associated with reflections representative of a predetermined listening location.
  5. Method according to claim 1, characterized in that the target response signal corresponds to the response of a direct wave associated with reflections representative of a different reproduction unit.
  6. Device for calibrating an acoustic reproduction unit for a multi-channel acoustic signal, the unit comprising a plurality of loudspeakers, the device comprising:
    - a module (110) for obtaining multidirectional impulse responses of the loudspeakers of the reproduction unit upon reproduction of a reference audio signal;
    - a module (120) for analyzing the multidirectional impulse responses obtained, in a space-time representation domain, over at least one time window including the arrival times of the first reflections of the reproduced reference audio signal, in order to determine a set of characteristics of the direct waves and of the associated first reflections, containing at least the amplitude;
    the device being characterized in that it comprises:
    - a module (120) for comparing the amplitude of each of the reflections with a perceptibility threshold (140) determined as a function of characteristics of the direct wave and of the first reflections of the reference audio signal, and for identifying (120) the non-perceptible reflections whose amplitude is below the determined threshold;
    - a module (150) for modifying the impulse responses obtained, by suppressing the reflections identified as non-perceptible by the identification module, in order to obtain perceptual impulse responses;
    - a module (130) for calculating a filtering matrix, capable of carrying out the following steps:
    - determining an error signal defined by the difference between a target response signal of a reproduction unit and a response signal reconstructed from the perceptual impulse responses;
    - multichannel inversion by minimizing the error signal thus determined, in order to obtain the filters of the filtering matrix,
    for application of this filtering matrix to the multi-channel audio signal before acoustic reproduction.
  7. Audio decoder comprising a calibration device according to claim 6.
  8. Computer program comprising code instructions for carrying out the steps of the calibration method according to one of claims 1 to 5, when these instructions are executed by a processor.
  9. Processor-readable storage medium on which is stored a computer program comprising code instructions for executing the steps of the calibration method according to one of claims 1 to 5.
EP13774728.3A 2012-09-18 2013-09-05 Optimierte kalibrierung eines klangwiedergabesystems mit mehreren lautsprechern Active EP2898707B1 (de)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
FR1258760A FR2995754A1 (fr) 2012-09-18 2012-09-18 Calibration optimisee d'un systeme de restitution sonore multi haut-parleurs
PCT/FR2013/052047 WO2014044948A1 (fr) 2012-09-18 2013-09-05 Calibration optimisee d'un systeme de restitution sonore multi haut-parleurs

Publications (2)

Publication Number Publication Date
EP2898707A1 EP2898707A1 (de) 2015-07-29
EP2898707B1 true EP2898707B1 (de) 2020-04-22

Family

ID=47215616

Family Applications (1)

Application Number Title Priority Date Filing Date
EP13774728.3A Active EP2898707B1 (de) 2012-09-18 2013-09-05 Optimierte kalibrierung eines klangwiedergabesystems mit mehreren lautsprechern

Country Status (4)

Country Link
US (1) US9584947B2 (de)
EP (1) EP2898707B1 (de)
FR (1) FR2995754A1 (de)
WO (1) WO2014044948A1 (de)

Families Citing this family (22)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9084058B2 (en) 2011-12-29 2015-07-14 Sonos, Inc. Sound field calibration using listener localization
US9106192B2 (en) 2012-06-28 2015-08-11 Sonos, Inc. System and method for device playback calibration
US9219460B2 (en) 2014-03-17 2015-12-22 Sonos, Inc. Audio settings based on environment
US9565497B2 (en) * 2013-08-01 2017-02-07 Caavo Inc. Enhancing audio using a mobile device
US9264839B2 (en) 2014-03-17 2016-02-16 Sonos, Inc. Playback device configuration based on proximity detection
US9952825B2 (en) 2014-09-09 2018-04-24 Sonos, Inc. Audio processing algorithms
EP3329486B1 (de) * 2015-07-30 2020-07-29 Dolby International AB Verfahren und vorrichtung zur erzeugung einer mezzanin-hoa-signalrepräsentation aus einer hoa-signalrepräsentation
US9779759B2 (en) * 2015-09-17 2017-10-03 Sonos, Inc. Device impairment detection
EP3531714B1 (de) 2015-09-17 2022-02-23 Sonos Inc. Erleichtern der kalibrierung einer audiowiedergabevorrichtung
US9693165B2 (en) 2015-09-17 2017-06-27 Sonos, Inc. Validation of audio calibration using multi-dimensional motion check
US9743207B1 (en) 2016-01-18 2017-08-22 Sonos, Inc. Calibration using multiple recording devices
US10003899B2 (en) 2016-01-25 2018-06-19 Sonos, Inc. Calibration with particular locations
US9860662B2 (en) 2016-04-01 2018-01-02 Sonos, Inc. Updating playback device configuration information based on calibration data
US9864574B2 (en) 2016-04-01 2018-01-09 Sonos, Inc. Playback device calibration based on representation spectral characteristics
US9763018B1 (en) 2016-04-12 2017-09-12 Sonos, Inc. Calibration of audio playback devices
CN112492502B (zh) * 2016-07-15 2022-07-19 搜诺思公司 联网麦克风设备及其方法以及媒体回放系统
US9794710B1 (en) 2016-07-15 2017-10-17 Sonos, Inc. Spatial audio correction
US10372406B2 (en) 2016-07-22 2019-08-06 Sonos, Inc. Calibration interface
US10459684B2 (en) 2016-08-05 2019-10-29 Sonos, Inc. Calibration of a playback device based on an estimated frequency response
EP3520443B1 (de) * 2016-10-19 2021-02-17 Huawei Technologies Co., Ltd. Verfahren und vorrichtung zur kontrolle akustische signale zur aufnahme und/oder wiedergabe durch ein elektroakustisches tonsystem
US10299061B1 (en) 2018-08-28 2019-05-21 Sonos, Inc. Playback device calibration
US10734965B1 (en) 2019-08-12 2020-08-04 Sonos, Inc. Audio calibration of a portable playback device

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE10351793B4 (de) * 2003-11-06 2006-01-12 Herbert Buchner Adaptive Filtervorrichtung und Verfahren zum Verarbeiten eines akustischen Eingangssignals

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
None *

Also Published As

Publication number Publication date
US9584947B2 (en) 2017-02-28
US20150223004A1 (en) 2015-08-06
WO2014044948A1 (fr) 2014-03-27
EP2898707A1 (de) 2015-07-29
FR2995754A1 (fr) 2014-03-21

Similar Documents

Publication Publication Date Title
EP2898707B1 (de) Optimierte kalibrierung eines klangwiedergabesystems mit mehreren lautsprechern
EP2374124B1 (de) Verwaltete codierung von mehrkanaligen digitalen audiosignalen
EP2374123B1 (de) Verbesserte codierung von mehrkanaligen digitalen audiosignalen
EP1836876B1 (de) Verfahren und vorrichtung zur individualisierung von hrtfs durch modellierung
EP1992198B1 (de) Optimierung des binauralen raumklangeffektes durch mehrkanalkodierung
EP1946612B1 (de) Hrtfs-individualisierung durch modellierung mit finiten elementen gekoppelt mit einem korrekturmodell
EP2002424B1 (de) Vorrichtung und verfahren zur skalierbaren kodierung eines mehrkanaligen audiosignals auf der basis einer hauptkomponentenanalyse
EP3807669B1 (de) Ortung von schallquellen in einer bestimmten akustischen umgebung
EP3427260B1 (de) Optimierte codierung und decodierung von verräumlichungsinformationen zur parametrischen codierung und decodierung eines mehrkanaligen audiosignals
EP1479266B1 (de) Verfahren und vorrichtung zur steuerung einer anordnung zur wiedergabe eines schallfeldes
EP1586220B1 (de) Verfahren und einrichtung zur steuerung einer wiedergabeeinheitdurch verwendung eines mehrkanalsignals
FR2899424A1 (fr) Procede de synthese binaurale prenant en compte un effet de salle
EP3706119A1 (de) Räumliche audiocodierung mit interpolation und quantifizierung der drehungen
EP3895446B1 (de) Verfahren zur interpolation eines schallfeldes und zugehöriges computerprogrammprodukt und vorrichtung
EP1652406B1 (de) System und verfahren zur bestimmung einer repräsentation eines akustischen feldes
EP3559947B1 (de) Verarbeitung in subbändern eines aktuellen ambisonic-inhalts zur verbesserten dekodierung
EP3025514B1 (de) Klangverräumlichung mit raumwirkung
WO2018050292A1 (fr) Dispositif et procede de captation et traitement d'un champ acoustique tridimensionnel
EP3384688B1 (de) Aufeinanderfolgende dekompositionen von audiofiltern
EP3934282A1 (de) Verfahren zur umwandlung eines ersten satzes repräsentativer signale eines schallfelds in einen zweiten satz von signalen und entsprechende elektronische vorrichtung
WO2009081002A1 (fr) Traitement d'un flux audio 3d en fonction d'un niveau de presence de composantes spatiales
WO2024126242A1 (fr) Obtention d'une réponse impulsionnelle d'une salle
FR2943867A1 (fr) Traitement d'egalisation de composantes spatiales d'un signal audio 3d
WO2014102199A1 (fr) Dispositif et procede d'interpolation spatiale de sons

Legal Events

Date Code Title Description
PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

17P Request for examination filed

Effective date: 20150417

AK Designated contracting states

Kind code of ref document: A1

Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR

AX Request for extension of the european patent

Extension state: BA ME

RIN1 Information on inventor provided before grant (corrected)

Inventor name: NICOL, ROZENN

Inventor name: DEPREZ, ROMAIN

DAX Request for extension of the european patent (deleted)
STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: EXAMINATION IS IN PROGRESS

17Q First examination report despatched

Effective date: 20180912

GRAP Despatch of communication of intention to grant a patent

Free format text: ORIGINAL CODE: EPIDOSNIGR1

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: GRANT OF PATENT IS INTENDED

INTG Intention to grant announced

Effective date: 20191220

RIN1 Information on inventor provided before grant (corrected)

Inventor name: NICOL, ROZENN

Inventor name: DEPREZ, ROMAIN

GRAS Grant fee paid

Free format text: ORIGINAL CODE: EPIDOSNIGR3

GRAA (expected) grant

Free format text: ORIGINAL CODE: 0009210

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: THE PATENT HAS BEEN GRANTED

AK Designated contracting states

Kind code of ref document: B1

Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR

REG Reference to a national code

Ref country code: GB

Ref legal event code: FG4D

Free format text: NOT ENGLISH

REG Reference to a national code

Ref country code: CH

Ref legal event code: EP

REG Reference to a national code

Ref country code: DE

Ref legal event code: R096

Ref document number: 602013068191

Country of ref document: DE

REG Reference to a national code

Ref country code: IE

Ref legal event code: FG4D

Free format text: LANGUAGE OF EP DOCUMENT: FRENCH

REG Reference to a national code

Ref country code: AT

Ref legal event code: REF

Ref document number: 1261742

Country of ref document: AT

Kind code of ref document: T

Effective date: 20200515

RAP2 Party data changed (patent owner data changed or rights of a patent transferred)

Owner name: ORANGE

REG Reference to a national code

Ref country code: LT

Ref legal event code: MG4D

REG Reference to a national code

Ref country code: NL

Ref legal event code: MP

Effective date: 20200422

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: PT

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20200824

Ref country code: LT

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20200422

Ref country code: NO

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20200722

Ref country code: FI

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20200422

Ref country code: NL

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20200422

Ref country code: SE

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20200422

Ref country code: IS

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20200822

Ref country code: GR

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20200723

REG Reference to a national code

Ref country code: AT

Ref legal event code: MK05

Ref document number: 1261742

Country of ref document: AT

Kind code of ref document: T

Effective date: 20200422

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: RS

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20200422

Ref country code: HR

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20200422

Ref country code: LV

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20200422

Ref country code: BG

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20200722

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: AL

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20200422

REG Reference to a national code

Ref country code: DE

Ref legal event code: R097

Ref document number: 602013068191

Country of ref document: DE

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: CZ

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20200422

Ref country code: ES

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20200422

Ref country code: RO

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20200422

Ref country code: IT

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20200422

Ref country code: DK

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20200422

Ref country code: AT

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20200422

Ref country code: EE

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20200422

Ref country code: SM

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20200422

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: SK

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20200422

Ref country code: PL

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20200422

PLBE No opposition filed within time limit

Free format text: ORIGINAL CODE: 0009261

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: NO OPPOSITION FILED WITHIN TIME LIMIT

26N No opposition filed

Effective date: 20210125

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: MC

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20200422

REG Reference to a national code

Ref country code: CH

Ref legal event code: PL

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: SI

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20200422

REG Reference to a national code

Ref country code: BE

Ref legal event code: MM

Effective date: 20200930

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: LU

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20200905

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: LI

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20200930

Ref country code: IE

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20200905

Ref country code: CH

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20200930

Ref country code: BE

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20200930

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: TR

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20200422

Ref country code: MT

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20200422

Ref country code: CY

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20200422

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: MK

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20200422

PGFP Annual fee paid to national office [announced via postgrant information from national office to epo]

Ref country code: GB

Payment date: 20230823

Year of fee payment: 11

PGFP Annual fee paid to national office [announced via postgrant information from national office to epo]

Ref country code: FR

Payment date: 20230822

Year of fee payment: 11

Ref country code: DE

Payment date: 20230822

Year of fee payment: 11