EP3400599B1 - Improved ambisonic encoder for a sound source with a plurality of reflections - Google Patents


Info

Publication number
EP3400599B1
Authority
EP
European Patent Office
Prior art keywords
reflections
sound
sound wave
ambisonic
logic
Prior art date
Legal status
Active
Application number
EP16808645.2A
Other languages
German (de)
English (en)
French (fr)
Other versions
EP3400599A1 (fr)
Inventor
Pierre Berthet
Current Assignee
Mimi Hearing Technologies GmbH
Original Assignee
Mimi Hearing Technologies GmbH
Priority date
Filing date
Publication date
Application filed by Mimi Hearing Technologies GmbH filed Critical Mimi Hearing Technologies GmbH
Publication of EP3400599A1 publication Critical patent/EP3400599A1/fr
Application granted granted Critical
Publication of EP3400599B1 publication Critical patent/EP3400599B1/fr
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G — PHYSICS
    • G10 — MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L — SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L19/00 — Speech or audio signals analysis-synthesis techniques for redundancy reduction, e.g. in vocoders; Coding or decoding of speech or audio signals, using source filter models or psychoacoustic analysis
    • G10L19/008 — Multichannel audio signal coding or decoding using interchannel correlation to reduce redundancy, e.g. joint-stereo, intensity-coding or matrixing
    • H — ELECTRICITY
    • H04 — ELECTRIC COMMUNICATION TECHNIQUE
    • H04S — STEREOPHONIC SYSTEMS
    • H04S3/00 — Systems employing more than two channels, e.g. quadraphonic
    • H04S3/008 — Systems employing more than two channels, e.g. quadraphonic, in which the audio signals are in digital form, i.e. employing more than two discrete digital channels
    • H04S2400/00 — Details of stereophonic systems covered by H04S but not provided for in its groups
    • H04S2400/01 — Multi-channel, i.e. more than two input channels, sound reproduction with two speakers wherein the multi-channel information is substantially preserved
    • H04S2400/11 — Positioning of individual sound objects, e.g. moving airplane, within a sound field
    • H04S2420/00 — Techniques used in stereophonic systems covered by H04S but not provided for in its groups
    • H04S2420/11 — Application of ambisonics in stereophonic audio systems

Definitions

  • the present invention relates to the ambisonic encoding of sound sources. It relates more specifically to improving the efficiency of this coding, in the case where a sound source is affected by reflections in a sound scene.
  • Spatialized representations of sound bring together techniques for synthesizing and reproducing a sound environment, allowing the listener to be much more immersed in a sound environment. They allow a user in particular to discern a number of sound sources greater than the number of loudspeakers at his disposal, and to locate these sound sources precisely in 3D, even when their direction is not that of a loudspeaker.
  • the applications of spatialized representations of sound are numerous, and include the precise localization of sound sources in 3 dimensions by a user listening over a stereo headset, or the localization of sound sources in 3 dimensions by users in a room, the sound being produced by loudspeakers, for example a 5.1 speaker set.
  • the spatialized representations of sound also allow the creation of new sound effects. For example, they allow the rotation of a sound scene, or the application of reflections to a sound source to simulate the rendering of a given sound environment, for example a cinema hall or a concert hall.
  • spatialized representations of sound are carried out in two main stages: ambisonic encoding, and ambisonic decoding.
  • real-time ambisonic decoding is always necessary.
  • Real-time sound production or processing may also involve real-time ambisonic encoding thereof.
  • Ambisonic encoding being a complex task, real-time ambisonic encoding capabilities may be limited. For example, a given computation capacity will only be able to encode in real time a limited number of sound sources.
  • j_m represents the spherical Bessel function of order m.
  • Y_mn(θ, φ) represents the spherical harmonic of order mn in the direction (θ, φ) defined by the vector r.
  • B_mn(t) denotes the ambisonic coefficients corresponding to the various spherical harmonics, at an instant t.
  • the ambisonic coefficients therefore define, at each instant, the entire sound field surrounding a point.
  • the processing of sound fields in the ambisonic domain has particularly interesting properties. In particular, it is very easy to rotate the entire sound field.
  • HRTF: Head-Related Transfer Functions.
  • HOA decomposition (acronym for Higher Order Ambisonics): higher-order ambisonic decomposition.
  • the ambisonic coefficients describing the sound scene are calculated as the sum of the ambisonic coefficients of each of the sources, each source i having an orientation (θ_si, φ_si): B_mn(t) = Σ_i S_i(t) · Y_mn(θ_si, φ_si), where S_i(t) denotes the signal of source i.
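  • the summation above can be sketched at order 1, assuming real spherical harmonics in ACN channel order (W, Y, Z, X) with an N3D-like normalisation; this convention, the function names, and the sample values are chosen here for illustration and are not prescribed by the patent:

```python
import numpy as np

def sh_first_order(theta, phi):
    # Real spherical harmonics up to order 1, ACN channel order
    # (W, Y, Z, X), N3D-like normalisation (illustrative convention).
    return np.array([
        1.0,
        np.sqrt(3) * np.cos(phi) * np.sin(theta),
        np.sqrt(3) * np.sin(phi),
        np.sqrt(3) * np.cos(phi) * np.cos(theta),
    ])

def encode_scene(samples, directions):
    # B_mn(t) = sum_i S_i(t) * Y_mn(theta_si, phi_si)
    coeffs = np.zeros((4, samples.shape[1]))
    for s_i, (theta, phi) in zip(samples, directions):
        coeffs += np.outer(sh_first_order(theta, phi), s_i)
    return coeffs

# two sources, 4 time samples each, at azimuths 0 and pi/2
s = np.array([[1.0, 0.5, 0.0, -0.5],
              [0.2, 0.2, 0.2, 0.2]])
B = encode_scene(s, [(0.0, 0.0), (np.pi / 2, 0.0)])
```

Note that the W channel is simply the sum of the source signals, while the directional channels weight each signal by the source's orientation.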
  • This problem is even more acute when reflections are calculated in a sound scene.
  • the method of Tsingos makes it possible to reduce the number of sound sources, and therefore the complexity of the overall processing when reverberations are used.
  • this technique nevertheless has several drawbacks. It does not reduce the complexity of processing the reverberations themselves. The problem encountered would therefore arise again if, with a reduced number of sources, one wished to increase the number of reverberations.
  • moreover, the processing for determining the sound power of each source, and for merging the sources into clusters, itself carries a significant computational load.
  • finally, the experiments described are limited to cases where the sound sources are known in advance and their respective powers are pre-calculated. In the case of sound scenes in which several sources of variable intensity are present, and whose powers must be recalculated, the associated computational load would, at least partially, cancel out the gain obtained by limiting the number of sources.
  • the document US 6021206 discloses filtering of virtual sound sources corresponding to reflections including delay and attenuation.
  • the document US 2011/305344 discloses a method of transforming sound tracks before binaural encoding, in order to minimize the need for a “sweet spot”, in particular by converting certain tracks into mono.
  • the invention relates to an ambisonic encoder for a sound wave with a plurality of reflections, comprising: a logic for frequency transformation of the sound wave; a logic for calculating spherical harmonics of the sound wave and of the plurality of reflections from a position of a source of the sound wave and from positions of obstacles to the propagation of the sound wave; a plurality of filtering logics in the frequency domain receiving as input spherical harmonics of the plurality of reflections, each filtering logic consisting of an attenuation and a delay of a reflection, and being parameterized by an acoustic coefficient and a delay of said reflection; a logic for adding the spherical harmonics of the sound wave and the outputs of the filtering logics into a set of spherical harmonics representative both of the sound wave and of the plurality of reflections in the frequency domain; and a logic for multiplying said set of spherical harmonics by the signal of the sound wave in the frequency domain.
  • the logic for calculating spherical harmonics of the sound wave is configured to calculate the spherical harmonics of the sound wave and of the plurality of reflections from a fixed position of the source of the sound wave.
  • the logic for calculating spherical harmonics of the sound wave is configured to iteratively calculate the spherical harmonics of the sound wave and of the plurality of reflections from successive positions of the source of the sound wave.
  • each reflection is characterized by a single acoustic coefficient.
  • each reflection is characterized by an acoustic coefficient for each frequency of said frequency sampling.
  • the reflections are represented by virtual sound sources.
  • the ambisonic encoder further comprises a logic for calculating the acoustic coefficients, the delays and the positions of the virtual sound sources of the reflections, said calculation logic being configured to calculate the acoustic coefficients and the delays of the reflections as a function of estimates of the difference between the distance traveled by the sound from the position of the source of the sound wave to an estimated position of a user, on the one hand, and the distance traveled by the sound from the positions of the virtual sound sources of the reflections to the estimated position of the user, on the other hand.
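  • a minimal sketch of such a computation, assuming a simple 1/r distance attenuation combined with a wall reflection coefficient (the exact weighting and all names are illustrative assumptions, not the patent's formula):

```python
import math

SPEED_OF_SOUND = 343.0  # m/s, dry air at 20 degrees C

def reflection_parameters(source_pos, image_pos, listener_pos, wall_coeff):
    # Delay of a reflection relative to the direct path, and an
    # acoustic coefficient combining the wall's reflection coefficient
    # with 1/r distance attenuation (illustrative weighting).
    d_direct = math.dist(source_pos, listener_pos)
    d_reflect = math.dist(image_pos, listener_pos)
    delay = (d_reflect - d_direct) / SPEED_OF_SOUND  # tau_r, in seconds
    alpha = wall_coeff * d_direct / d_reflect        # alpha_r in [0, 1]
    return delay, alpha

# source at origin, image source mirrored behind a wall, listener 3 m away
tau, alpha = reflection_parameters((0, 0, 0), (0, 4, 0), (3, 0, 0), 0.8)
```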
  • the logic for calculating the acoustic coefficients, the delays and the positions of the virtual sound sources of the reflections is further configured to calculate the acoustic coefficients of the reflections as a function of at least one acoustic coefficient of at least one obstacle to the propagation of sound waves on which the sound is reflected.
  • the logic for calculating spherical harmonics of the sound wave and of the plurality of reflections is further configured to calculate spherical harmonics of the sound wave and of the plurality of reflections at each output frequency of the transformation circuit.
  • said ambisonic encoder further comprises a logic for calculating binaural coefficients of the sound wave, configured to calculate binaural coefficients of the sound wave by multiplying, at each output frequency of the frequency transformation logic of the sound wave, the signal of the sound wave by the spherical harmonics of the sound wave and of the plurality of reflections at this frequency.
  • the logic for calculating the acoustic coefficients, the delays and the positions of the virtual sound sources of the reflections is configured to calculate the acoustic coefficients and the delays of a plurality of late reflections.
  • the invention also relates to a method of ambisonically encoding a sound wave with a plurality of reflections, as defined by claim 12.
  • the invention also relates to a computer program for ambisonic encoding of a sound wave with a plurality of reflections, as defined by claim 13.
  • the ambisonic encoder according to the invention makes it possible to improve the feeling of immersion in a 3D audio scene.
  • the complexity of encoding the reflections of sound sources from an ambisonic encoder according to the invention is less than the complexity of encoding the reflections of sound sources from an ambisonic encoder according to the state of the art.
  • the ambisonic encoder according to the invention makes it possible to encode a greater number of reflections from a sound source in real time.
  • the ambisonic encoder according to the invention makes it possible to reduce the power consumption associated with ambisonic encoding, and to increase the life of a battery of a mobile device used for this application.
  • the figures 1a and 1b show two examples of sound wave listening systems, according to two embodiments of the invention.
  • the figure 1a represents an example of a sound wave listening system, according to one embodiment of the invention.
  • the system 100a comprises a touchscreen tablet 110a and a headset 120a allowing a user 130a to listen to a sound wave.
  • the system 100a comprises, by way of example only, a touchscreen tablet. However, this example is equally applicable to a smartphone, or to any other mobile device having display and sound broadcasting capabilities.
  • the sound wave can for example come from playing a movie or a game.
  • the system 100a can be configured to listen to several sound waves. For example, when the system 100a is configured for playing a movie comprising a 5.1 multichannel sound track, 6 sound waves are listened to simultaneously. Likewise, when system 100a is configured to play a game, many sound waves can be heard simultaneously. For example, in the case of a game involving several characters, a sound wave can be created for each character.
  • Each of the sound waves is associated with a sound source, the position of which is known.
  • the touchscreen tablet 110a comprises an ambisonic encoder 111a according to the invention, a transformation circuit 112a, and an ambisonic decoder 113a.
  • in a set of embodiments, the ambisonic encoder 111a, the transformation circuit 112a and the ambisonic decoder 113a consist of computer code instructions executed on a processor of the tablet. They may for example have been obtained by installing a specific application or software on the tablet.
  • in other embodiments, at least one of the ambisonic encoder 111a, the transformation circuit 112a and the ambisonic decoder 113a is a specialized integrated circuit, for example an ASIC (Application-Specific Integrated Circuit) or an FPGA (Field-Programmable Gate Array).
  • the ambisonic encoder 111a is configured to calculate, in the frequency domain, a set of ambisonic coefficients representative of the whole of a sound scene, from at least one sound wave. It is further configured to apply reflections to at least one sound wave, in order to simulate a listening environment, for example a movie theater of a certain size, or a concert hall.
  • the transformation circuit 112a is configured to perform rotations of the sound scene by modifying the ambisonic coefficients, in order to compensate for the rotation of the user's head, so that, whatever the orientation of his face, the different sound waves seem to come from the same position. For example, if the user turns his head to the left by an angle θ, a rotation of the sound scene to the right by the same angle θ ensures that the sound continues to reach him from the same apparent direction.
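  • at order 1, such a scene rotation about the vertical axis reduces to a 2-D rotation of the two horizontal components; a minimal sketch, assuming ACN channel order (W, Y, Z, X), a convention chosen here for illustration only:

```python
import numpy as np

def rotate_yaw(bformat, angle):
    # Rotate a first-order ambisonic field (channels W, Y, Z, X)
    # around the vertical axis by `angle` radians. W and Z are
    # invariant; Y and X rotate like a 2-D vector.
    w, y, z, x = bformat
    c, s = np.cos(angle), np.sin(angle)
    return np.array([w,
                     c * y + s * x,
                     z,
                     -s * y + c * x])

# source encoded at azimuth 0, then the scene is rotated by pi/2
b = np.array([1.0, 0.0, 0.0, np.sqrt(3)])
b_rot = rotate_yaw(b, np.pi / 2)
```

After the rotation, the field is identical to a source encoded directly at azimuth π/2, which is what makes head-tracking compensation cheap in the ambisonic domain.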
  • the headset 120a is equipped with at least one movement sensor 121a, for example a gyrometer, making it possible to obtain an angle, or the derivative of an angle, of rotation of the head of the user 130a.
  • a signal representative of an angle of rotation, or of a derivative of an angle of rotation, is then sent by the headset 120a to the tablet 110a, so that the transformation circuit 112a performs the corresponding rotation of the sound scene.
  • the ambisonic decoder 113a is configured to reproduce the sound scene on the two stereo channels of the headphones 120a, by converting the transformed ambisonic coefficients into two stereo signals, one for the left channel and the other for the right channel.
  • the ambisonic decoding is carried out using functions called HRTF (Head-Related Transfer Functions), making it possible to reproduce the directions of the different sound sources on two stereo channels.
  • the system 100a thus allows its user to benefit from a particularly immersive experience: during a game or multimedia playback, in addition to the image, this system gives him an impression of immersion in a sound scene. This impression is amplified both by following the orientations of the different sound sources when the user turns his head, and by the application of reflections giving an impression of immersion in a particular listening environment.
  • This system makes it possible, for example, to watch a film or a concert with an audio headset, while having an impression of immersion in a cinema hall or a concert hall. All of these operations are carried out in real time, which makes it possible to constantly adapt the sound perceived by the user to the orientation of his head.
  • the ambisonic encoder 111a makes it possible to encode a greater number of reflections from sound sources, with less complexity compared to an ambisonic encoder of the prior art. It therefore makes it possible to perform all the ambisonic calculations in real time, while increasing the number of reflections from sound sources. This increase in the number of reflections makes it possible to model more precisely the simulated listening environment (concert hall, cinema, etc.) and therefore improve the feeling of immersion in the sound scene.
  • the reduction in the complexity of the ambisonic encoding also makes it possible, by considering an identical number of sound sources, to reduce the electrical consumption of the encoder compared to an encoder of the state of the art, and therefore to increase the battery discharge time of the touch pad 110a. This therefore allows the user to enjoy multimedia content for a longer period of time.
  • the figure 1b represents a second example of a sound wave listening system, according to one embodiment of the invention.
  • the system 100b includes a central unit 110b connected to a screen 114b, a mouse 115b and a keyboard 116b and a headset 120b and is used by a user 130b.
  • the central unit comprises an ambisonic encoder 111b according to the invention, a transformation circuit 112b, and an ambisonic decoder 113b, respectively similar to the ambisonic encoder 111a, transformation circuit 112a, and ambisonic decoder 113a of the system 100a.
  • the ambisonic encoder 111b is configured to encode at least one wave representative of a sound scene by adding reflections thereto.
  • the headset 120b includes at least one movement sensor.
  • the transformation circuit 112b is configured to perform rotations of the sound scene to follow the orientation of the user's head.
  • the ambisonic decoder 113b is configured to output sound on the two stereo channels of the headphones 120b, so that the user 130b has an impression of immersion in a sound scene.
  • the system 100b is suitable for viewing multimedia content, but also for video games. Indeed, in a video game, very many sound waves, coming from different sources, can be present. This is for example the case in a strategy or war game, in which many characters can emit different sounds (footsteps, running, shots, etc.) from various sound sources.
  • an ambisonic encoder 111b can encode all these sources in real time, while adding to them many reflections making the scene more realistic and immersive.
  • the system 100b comprising an ambisonic encoder 111b according to the invention allows an immersive experience in a video game, with a large number of sound sources and reflections.
  • the figure 2 represents an example of a binauralization system comprising one binauralization engine per sound source of an audio scene, according to the state of the art.
  • the binauralization system 200 is configured to transform a set 210 of sound sources of a sound scene into a left channel 240 and a right channel 241 of a stereo listening system, and includes a set 220 of binauralization engines, with one binauralization engine per sound source.
  • the sources can come from any type of sound content (mono, stereo, 5.1, or the multiple sound sources of a video game, for example).
  • Each sound source is associated with an orientation in space, for example defined by angles ( ⁇ , ⁇ ) in a frame of reference, and by a sound wave, itself represented by a set of temporal samples.
  • the possible output channels correspond to the different listening channels: for example, two output channels in a stereo listening system, 6 output channels in a 5.1 listening system, etc.
  • Each binauralization engine produces two outputs (one left and one right), and the system 200 includes an addition circuit 230 for all the left outputs and an addition circuit 231 for all the right outputs of the set 220 of binauralization engines.
  • the outputs of the addition circuits 230 and 231 are respectively the sound wave of the left channel 240 and the sound wave of the right channel 241 of a stereo listening system.
  • the system 200 makes it possible to transform the set of sound sources 210 into two stereo channels, while being able to apply all the transformations allowed by ambisonics, such as rotations.
  • the system 200 however has a major drawback in terms of calculation time: it requires calculations of the ambisonic coefficients of each sound source, calculations for the transformations of each sound source, and calculations for the outputs associated with each sound source.
  • the computational load of the system 200 is therefore proportional to the number of sound sources, and can, for a large number of sound sources, become prohibitive.
  • the figures 3a and 3b represent two examples of engines for binauralization of a 3D scene, respectively in the time domain and the frequency domain according to the state of the art.
  • the figure 3a represents an example of a binauralization engine of a 3D scene, in the time domain according to the state of the art.
  • the binauralization engine 300a comprises a single HOA encoding engine 320a for all of the sources 310 of the sound scene.
  • This encoding engine 320a is configured to calculate, at each time step, the binaural coefficients of each sound source as a function of the intensity and the position of the sound source at said time step, then to sum the binaural coefficients of the different sound sources. This makes it possible to obtain a single set 321a of binaural coefficients representative of the whole of the sound scene.
  • the binauralization engine 300a then comprises a coefficient transformation circuit 330a, configured to transform the set of coefficients 321a representative of the sound scene into a set of transformed coefficients 331a representative of the whole of the sound scene. This makes it possible, for example, to perform a rotation of the whole of the sound scene.
  • the binauralization engine 300a finally comprises a binaural decoder 340a, configured to restore the transformed coefficients 331a into a set of output channels, for example a left channel 341a and a right channel 342a of a stereo system.
  • the binauralization engine 300a therefore makes it possible to reduce the computational complexity necessary for the binaural processing of a sound scene compared to the system 200, by applying the transformation and decoding steps to the whole of the sound scene, rather than to each sound source taken individually.
  • figure 3b represents an example of a binauralization engine of a 3D scene, in the frequency domain according to the state of the art.
  • the 300b binauralization engine is quite similar to the 300a binauralization engine. It comprises a set 311b of frequency transformation logics, the set 311b comprising a frequency transformation logic for each sound source.
  • the frequency transformation logics can for example be configured to apply a fast Fourier transform (FFT), in order to obtain a set 312b of sources in the frequency domain.
  • the application of frequency transforms is well known to those skilled in the art, and is for example described by A. Mertins, Signal Analysis: Wavelets, Filter Banks, Time-Frequency Transforms and Applications (revised English edition), ISBN 9780470841839.
  • the inverse operation, or inverse frequency transformation (known as FFT⁻¹, or inverse fast Fourier transform in the case of a fast Fourier transform), makes it possible to restore the intensities of the sound samples from a frequency sampling.
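  • this round trip between time and frequency domains can be illustrated with NumPy's real FFT (the window values below are arbitrary):

```python
import numpy as np

# Forward FFT turns a windowed time signal into complex amplitudes at
# the frequencies of the sampling; the inverse FFT restores the samples.
window = np.array([0.0, 1.0, 0.0, -1.0, 0.0, 0.5, 0.0, -0.5])
spectrum = np.fft.rfft(window)                     # frequency domain
restored = np.fft.irfft(spectrum, n=len(window))   # back to time domain
```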
  • the binauralization engine 300b then includes an HOA encoder 320b in the frequency domain.
  • the encoder 320b is configured to calculate, for each source and at each frequency of the frequency sampling, the corresponding ambisonic coefficients, then to add the ambisonic coefficients of the different sources, in order to obtain a set 321b of ambisonic samples representative of the whole soundstage, at different frequencies.
  • the binauralization engine 300b then comprises a transformation circuit 330b, similar to the transformation circuit 330a, making it possible to obtain a set 331b of transformed ambisonic coefficients representative of the whole of the sound scene, and a binaural decoder 340b, configured to restore two stereo channels 341b and 342b.
  • the binaural decoder 340b comprises an inverse frequency transformation circuit, in order to restore the stereo channels in the time domain.
  • the properties of the binauralization engine 300b are quite similar to those of the binauralization engine 300a. It also makes it possible to perform binaural processing of a sound scene, with reduced complexity compared to the system 200.
  • the complexity of the binaural processing of the binauralization engines 300a and 300b is mainly due to the calculation of the HOA coefficients by the encoders 320a and 320b. Indeed, the number of coefficients to be calculated is proportional to the number of sources.
  • the transformation circuits 330a and 330b, as well as the binaural decoders 340a and 340b process sets of binaural coefficients representative of the whole of the sound scene, the number of which does not vary according to the number of sources.
  • when reflections are added, the complexity of the binaural encoders 320a and 320b can increase significantly. Indeed, the state-of-the-art solution for processing reflections consists of adding a virtual sound source for each reflection. The complexity of the HOA encoding of these encoders according to the state of the art therefore increases proportionally with the number of reflections per source, and can become problematic when the number of reflections becomes too large.
  • the figure 4 represents an example of an ambisonic encoder of a sound wave with a plurality of reflections, in a set of embodiments of the invention.
  • the ambisonic encoder 400 is configured to encode a sound wave 410 with a plurality of reflections into a set of ambisonic coefficients. To do this, the ambisonic encoder is configured to calculate a set 460 of spherical harmonics representative of the sound wave and of the plurality of reflections.
  • the ambisonic encoder 400 will be described, by way of example, for the encoding of a single sound wave. However, an ambisonic encoder 400 according to the invention can also encode a plurality of sound waves, the elements of the ambisonic encoder being used in the same way for each additional sound wave.
  • the sound wave 410 can correspond for example to a channel of an audio track, or to a dynamically created sound wave, for example a sound wave corresponding to an object of a video game.
  • the sound waves are defined by successive sound intensity samples.
  • the sound waves can for example be sampled at a frequency of 22500 Hz, 12000 Hz, 44100 Hz, 48000 Hz, 88200 Hz, or 96000 Hz, and each of the intensity samples coded on 8, 12, 16, 24 or 32 bits. In the case of a plurality of sound waves, these can be sampled at different frequencies, and the samples can be encoded on different numbers of bits.
  • the ambisonic encoder 400 includes logic 420 for frequency transformation of the sound wave. This is similar to the logic 311b of frequency transformation of the sound waves of the binauralization system 300b according to the state of the art.
  • the encoder 400 includes frequency transformation logic for each sound wave.
  • a sound wave 421 is thus defined, for a time window, by a set of intensities at different frequencies of a frequency sampling.
  • the frequency transformation logic 420 is an application logic of an FFT.
  • the encoder 400 also includes a logic 430 for calculating spherical harmonics of the sound wave and of the plurality of reflections from a position of a source of the sound wave and from positions of obstacles to the propagation of the sound wave.
  • the position of the source of the sound wave is defined by angles (θ_si, φ_si) and a distance from a listening position of the user.
  • the calculation of the spherical harmonics Y_00(θ_si, φ_si), Y_1-1(θ_si, φ_si), Y_10(θ_si, φ_si), Y_11(θ_si, φ_si), ..., Y_MM(θ_si, φ_si) of the sound wave at order M can be carried out according to the methods known in the state of the art, from the angles (θ_si, φ_si) defining the orientation of the source of the sound wave.
  • Logic 430 is also configured to calculate, from the position of the source of the sound wave, a set of spherical harmonics of the plurality of reflections.
  • the logic 430 is configured to calculate, from the position of the source of the sound wave and the positions of obstacles to the propagation of the sound wave, the orientation of a virtual source of a reflection, defined by angles (θ_s,r, φ_s,r), then, from these angles, the spherical harmonics Y_00(θ_s,r, φ_s,r), Y_1-1(θ_s,r, φ_s,r), Y_10(θ_s,r, φ_s,r), Y_11(θ_s,r, φ_s,r), ..., Y_MM(θ_s,r, φ_s,r) of the reflection of the sound wave.
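  • the position of such a virtual source can be obtained, for a first-order reflection on a plane wall, by mirroring the real source across the wall plane (the classical image-source construction; function names and geometry below are illustrative):

```python
import numpy as np

def image_source(source, wall_point, wall_normal):
    # Mirror the real source across the plane defined by a point on
    # the wall and the wall's normal vector; the result is the
    # position of the virtual (image) source of the reflection.
    n = np.asarray(wall_normal, dtype=float)
    n /= np.linalg.norm(n)
    d = np.dot(np.asarray(source, float) - np.asarray(wall_point, float), n)
    return np.asarray(source, float) - 2.0 * d * n

# source at (1, 2, 0), wall in the plane x = 0 with normal (1, 0, 0)
img = image_source((1.0, 2.0, 0.0), (0.0, 0.0, 0.0), (1.0, 0.0, 0.0))
```

The angles (θ_s,r, φ_s,r) of the reflection then follow from the direction of the image source as seen from the listening position.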
  • the ambisonic encoder 400 also includes a plurality 440 of filtering logics in the frequency domain receiving as input the spherical harmonics of the plurality of reflections, each filtering logic being parameterized by an acoustic coefficient and a delay of a reflection.
  • α_r will be called an acoustic coefficient of a reflection, and τ_r a delay of a reflection.
  • the acoustic coefficient α_r can be a reverberation coefficient, representative of a ratio of the intensity of a reflection to the intensity of the sound source, and defined between 0 and 1.
  • a filtering logic 440 is configured to filter the spherical harmonics by applying: α_r e^(−j2πfτ_r) Y_ij(θ_s,r, φ_s,r).
  • in this case, the coefficient α_r is treated as a reverberation coefficient.
  • alternatively, a coefficient α_a can be treated as an attenuation coefficient, and the filtering of the spherical harmonics can for example be performed by applying: (1 − α_a) e^(−j2πfτ_r) Y_ij(θ_s,r, φ_s,r).
  • in the remainder of the description, the coefficient α_r will be considered as a reverberation coefficient.
  • a person skilled in the art could however easily implement the various embodiments of the invention with an attenuation coefficient rather than a reverberation coefficient.
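  • the filter α_r e^(−j2πfτ_r) is a single complex gain per frequency: a pure delay combined with a scalar attenuation. A sketch of its application to the spherical harmonics of one reflection (NumPy; the harmonic values, α_r and τ_r are illustrative):

```python
import numpy as np

def filter_reflection(harmonics, freqs, alpha_r, tau_r):
    # Apply alpha_r * exp(-j*2*pi*f*tau_r) to the spherical harmonics
    # of one reflection, at every frequency of the sampling.
    # Returns one row per harmonic, one column per frequency.
    phase = np.exp(-2j * np.pi * freqs * tau_r)  # pure delay term
    return alpha_r * harmonics[:, None] * phase[None, :]

freqs = np.array([0.0, 1000.0, 2000.0])
Y_refl = np.array([1.0, 0.5, -0.5, 0.25])  # hypothetical Y_ij values
F = filter_reflection(Y_refl, freqs, alpha_r=0.6, tau_r=0.005)
```

Because the delay only changes the phase, the magnitude of each filtered harmonic is simply α_r times the magnitude of the input harmonic.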
  • the ambisonic encoder 400 also includes logic 450 for adding the spherical harmonics of the sound wave and the outputs of the filtering logic.
  • This logic makes it possible to obtain a set Y′00, Y′1-1, Y′10, Y′11, ..., Y′MM of spherical harmonics at order M, representative of both the sound wave and the reflections of the sound wave, in the frequency domain.
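The addition logic 450 can be sketched as a per-channel sum of the direct-wave harmonics and the filtered harmonics of each reflection (illustrative helper, not prescribed by the text):

```python
def sum_harmonics(direct, reflections):
    """Per-channel sum giving Y'_ij: the spherical harmonics of the
    direct sound wave plus the (filtered) harmonics of each reflection.
    `direct` is one list per harmonic channel; `reflections` is a list
    of such lists, one per reflection."""
    summed = list(direct)
    for refl in reflections:
        for k, v in enumerate(refl):
            summed[k] += v
    return summed
```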
  • the number N r of reflections can be predefined.
  • the reflections of the sound wave are preserved according to their acoustic coefficient, the number Nr of reflections then depending on the position of the sound source, on the position of the user, and on the obstacles to the propagation of sound.
  • the acoustic coefficient is defined as a ratio of the intensity of the reflection to the intensity of the sound source, or a reverberation coefficient.
  • the reflections of the sound wave having an acoustic coefficient greater than or equal to a predefined threshold are preserved.
  • the acoustic coefficient is defined as an attenuation coefficient, i.e. a ratio between the sound intensity absorbed by the obstacles to the propagation of the sound wave and by the path through the air, and the intensity of the sound source.
  • the reflections of the sound wave having an acoustic coefficient less than or equal to a predefined threshold are preserved.
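The two selection rules above (keep high reverberation coefficients, or keep low attenuation coefficients) can be sketched as a single threshold test; the helper name and the representation of a reflection by its coefficient alone are illustrative assumptions:

```python
def keep_reflections(coefficients, threshold, mode="reverberation"):
    """Select which reflections to preserve by comparing their acoustic
    coefficient to a predefined threshold: keep coefficients >= threshold
    when they are reverberation coefficients, <= threshold when they are
    attenuation coefficients."""
    if mode == "reverberation":
        return [c for c in coefficients if c >= threshold]
    return [c for c in coefficients if c <= threshold]
```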
  • the ambisonic encoder 400 makes it possible to calculate a set of spherical harmonics Y ' ij representative both of the sound wave and of its reflections.
  • the encoder can include logic for multiplying the spherical harmonics by the sound intensity values of the source at the different frequencies, in order to obtain ambisonic coefficients representative of both the sound wave and its reflections.
  • the encoder 400 includes logic for adding the ambisonic coefficients of the different sound sources and their reflections, making it possible to obtain at the output ambisonic coefficients representative of the entire sound scene.
  • the sound wave spherical harmonics calculation logic 430 is configured to calculate the spherical harmonics of the sound wave and of the plurality of reflections from a fixed position of the source of the sound wave.
  • in this case, the orientations (θsi, φsi) of the sound source and the orientations (θs,r, φs,r) of each of the reflections are constant.
  • the spherical harmonics of the sound wave and of the plurality of reflections then also have a constant value, and can be calculated only once for the sound wave.
  • the sound wave spherical harmonics calculation logic 430 is configured to iteratively calculate the spherical harmonics of the sound wave and of the plurality of reflections from successive positions of the source of the sound wave. According to different embodiments of the invention, different possibilities exist for defining the calculation iterations. In one embodiment of the invention, logic 430 is configured to recalculate the values of the spherical harmonics of the sound wave and of the plurality of reflections each time a change in the position of the source of the sound wave or in the position of the user is detected.
  • logic 430 is configured to recalculate the values of the spherical harmonics of the sound wave and of the plurality of reflections at regular intervals, for example every 10 ms. In another embodiment of the invention, logic 430 is configured to recalculate these values at each of the time windows used by the frequency transformation logic 420 to convert the temporal samples of the sound wave into frequency samples.
  • each reflection is characterized by a single acoustic coefficient αr.
  • each reflection is characterized by an acoustic coefficient for each frequency of said frequency sampling.
  • a reflection at a given frequency can be considered to be zero, as a function of a comparison between the acoustic coefficient αr for this frequency and a predefined threshold.
  • for example, if the coefficient αr represents a reverberation coefficient, the reflection at that frequency is considered to be zero if the coefficient is less than a predefined threshold. Conversely, if it is an attenuation coefficient, the reflection at that frequency is considered to be zero if the coefficient is greater than or equal to a predefined threshold. This makes it possible to further limit the number of multiplications, and therefore the complexity of the ambisonic encoding, while having a minimal impact on the binaural rendering.
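Treating a reflection as zero at some frequencies can be sketched as pruning its per-frequency coefficients against the threshold; shown here for a reverberation coefficient (an attenuation coefficient would invert the comparison), with an illustrative helper name:

```python
def prune_bins(alpha_per_freq, threshold):
    """Zero out the reverberation coefficient of a reflection at the
    frequencies where it falls below the threshold, so the corresponding
    spherical-harmonic multiplications can be skipped entirely."""
    return [a if a >= threshold else 0.0 for a in alpha_per_freq]
```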
  • the ambisonic encoder 400 includes logic for calculating the acoustic coefficients and the delays, and the position of the virtual sound source of the reflections.
  • This calculation logic can for example be configured to calculate the acoustic coefficients and the delays of the reflections as a function of estimates of the difference between the distance traveled by the sound between the position of the source of the sound wave and an estimated position of a user on the one hand, and the distance traveled by the sound between the positions of the virtual sound sources of the reflections and the estimated position of the user on the other hand.
  • the logic for calculating the acoustic coefficients and the delays, and the position of the virtual sound source of the reflections, can therefore be configured to calculate the acoustic coefficient of a reflection of the sound wave as a function of the difference in distance traveled between the sound coming from the sound source in a straight line on the one hand, and the sound having undergone the reflection on the other hand.
  • the logic for calculating the acoustic coefficients and delays, and the position of the virtual sound source of the reflections, is also configured to calculate the acoustic coefficients of the reflections as a function of an acoustic coefficient of at least one obstacle to the propagation of sound waves, on which the sound is reflected.
  • the acoustic coefficient of the obstacle can be a reverberation coefficient or an attenuation coefficient.
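One plausible way to derive a delay and an acoustic coefficient from these distances is sketched below; the speed-of-sound value, the 1/r spreading model and the function name are assumptions for illustration, not prescribed by the text:

```python
SPEED_OF_SOUND = 343.0  # m/s in air (assumed value)

def reflection_parameters(direct_dist, reflected_dist, wall_coeff):
    """Hypothetical sketch: the delay tau_r follows from the extra
    distance traveled by the reflected path; the acoustic coefficient
    combines 1/r spreading relative to the direct path with the wall's
    reverberation coefficient."""
    tau_r = (reflected_dist - direct_dist) / SPEED_OF_SOUND
    alpha_r = wall_coeff * (direct_dist / reflected_dist)
    return alpha_r, tau_r
```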
  • figure 5 represents an example of calculation of a secondary sound source, in one embodiment of the invention.
  • a source of the sound wave has a position 520 in a room 510, and the user has a position 540.
  • the room 510 consists of 4 walls 511, 512, 513 and 514.
  • the logic for calculating the acoustic coefficients and the delays, and the position of the virtual sound source of the reflections, is configured to calculate the position, delay and attenuation of the virtual sound sources of the reflections as follows: for each of the walls 511, 512, 513, 514, the logic is configured to calculate the position of a virtual sound source of a reflection as the mirror image of the position of the sound source with respect to the wall.
  • the calculation logic is thus configured to calculate the positions 521, 522, 523 and 524 of four virtual sound sources of the reflections, respectively with respect to the walls 511, 512, 513 and 514.
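The mirror-image construction can be sketched by reflecting the source position across the plane of a wall; the 2-D setting and the point-plus-unit-normal wall description are illustrative assumptions:

```python
def mirror_source(source, wall_point, wall_normal):
    """Image-source position: mirror `source` across the plane of a
    wall, given a point on the wall and the wall's unit normal
    (2-D coordinates here, matching the room of figure 5)."""
    # Signed distance from the source to the wall plane.
    d = sum((s - w) * n for s, w, n in zip(source, wall_point, wall_normal))
    # Step twice that distance back through the plane.
    return tuple(s - 2 * d * n for s, n in zip(source, wall_normal))
```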
  • the calculation logic is configured to calculate a path of travel of the sound wave, and to deduce therefrom the corresponding acoustic coefficient and the corresponding delay.
  • the sound wave follows the path 530 to point 531 of the wall 512, then the path 532 to the position of the user 540.
  • the distance traveled by the sound along the paths 530 and 532 makes it possible to calculate an acoustic coefficient and a delay for the reflection.
  • the calculation logic is also configured to apply an acoustic coefficient corresponding to the absorption of the wall 512 at point 531. In a set of embodiments of the invention, this coefficient depends on the frequency, and can for example be determined, for each frequency, as a function of the material and/or the thickness of the wall 512.
  • the virtual sound sources 521, 522, 523, 524 are used to calculate secondary virtual sound sources, corresponding to multiple reflections.
  • a secondary virtual source 533 can be calculated as the mirror image of the virtual source 521 with respect to the wall 514.
  • the corresponding sound wave path then comprises the segments 530 up to point 531; 534 between points 531 and 535; 536 between point 535 and position 540 of the user.
  • the acoustic coefficients and the delays can then be calculated from the distance traveled by the sound on segments 530, 534 and 536, and the absorption of the walls at points 531 and 535.
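For a multi-reflection path such as 530–534–536, the delay and coefficient can be sketched as follows; summing segment lengths for the delay and multiplying the wall coefficients along the path are illustrative modelling assumptions:

```python
def path_parameters(segment_lengths, wall_coeffs, speed=343.0):
    """Total delay and acoustic coefficient for a path with several
    reflections: the delay follows from the summed segment lengths,
    and the coefficient is taken as the product of the reverberation
    coefficients of the walls hit along the path (sketch)."""
    tau = sum(segment_lengths) / speed
    alpha = 1.0
    for c in wall_coeffs:
        alpha *= c
    return alpha, tau
```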
  • virtual sound sources corresponding to reflections can be calculated up to a predefined order n. Different embodiments are possible to determine which reflections to keep.
  • the calculation logic is configured to calculate, for each virtual sound source, a higher order virtual sound source for each of the walls, up to a predefined order n.
  • the ambisonic encoder is configured to process a predefined number Nr of reflections per sound source, and keeps the Nr reflections having the lowest attenuation.
  • the virtual sound sources are kept on the basis of a comparison of an acoustic coefficient with a predefined threshold.
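Keeping the Nr reflections with the lowest attenuation can be sketched as a sort on the acoustic coefficient; here each reflection is represented by a hypothetical (coefficient, delay) pair, with the coefficient read as a reverberation coefficient as in the rest of the text, so lowest attenuation means highest coefficient:

```python
def select_reflections(reflections, n_r):
    """Keep the N_r reflections with the lowest attenuation, i.e. the
    highest reverberation coefficient; `reflections` is a list of
    (coefficient, delay) pairs."""
    return sorted(reflections, key=lambda r: r[0], reverse=True)[:n_r]
```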
  • figure 6 represents an example of calculation of early reflections and late reflections, in one embodiment of the invention.
  • Diagram 600 represents the intensity of several reflections of the sound wave, versus time.
  • the axis 601 represents the intensity of a reflection, and the axis 602 the delay between the emission of the sound wave by the source of the sound wave and the perception of a reflection by the user.
  • reflections occurring before a predefined delay 603 are considered early reflections 610, and reflections occurring after the delay 603 are considered late reflections 620.
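The early/late partition at the predefined delay 603 can be sketched directly; the (coefficient, delay) representation is an illustrative assumption:

```python
def split_reflections(reflections, t_split):
    """Partition reflections into early (delay < t_split) and late
    (delay >= t_split); each reflection is a (coefficient, delay) pair,
    with delays in seconds."""
    early = [r for r in reflections if r[1] < t_split]
    late = [r for r in reflections if r[1] >= t_split]
    return early, late
```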
  • early reflections are calculated using a virtual sound source, for example according to the principle described with reference to figure 5.
  • the late reflections are calculated as follows: a set of Nt secondary sound sources is calculated, for example according to the principle described with reference to figure 5.
  • the logic for calculating acoustic coefficients and delays, and the position of the virtual sound source of the reflections is configured to keep a number Nr of reflections less than Nt, according to various embodiments described above.
  • it is further configured to build a list of (Nt - Nr) late reflections, including all non-conserved reflections. This list includes only, for each late reflection, an acoustic coefficient and a delay of the late reflection, but no position of a virtual source.
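Building the list of (Nt - Nr) late reflections can be sketched as stripping the virtual-source position from every non-conserved reflection; the dictionary layout and key names are illustrative assumptions:

```python
def build_late_list(all_reflections, kept_indices):
    """List transmitted to the decoder: the (Nt - Nr) non-conserved
    reflections, keeping only the acoustic coefficient and the delay
    and dropping the virtual-source position."""
    return [(r["alpha"], r["tau"])
            for i, r in enumerate(all_reflections)
            if i not in kept_indices]
```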
  • this list is transmitted by the ambisonic encoder to an ambisonic decoder.
  • the ambisonic decoder is then configured to filter its outputs, for example its stereo output channels, with the acoustic coefficients and the delays of the late reflections, then to add these filtered signals to the output signals. This makes it possible to improve the feeling of immersion in a room or a listening environment, while further limiting the computational complexity of the encoder.
  • the ambisonic encoder is configured to filter the sound wave with the acoustic coefficients and the delays of the late reflections, and to add the signals obtained uniformly to the set of ambisonic coefficients.
  • the late reflections have a low intensity and have no direction information from a sound source. They will therefore be perceived by a user as an “echo” of the sound wave, distributed homogeneously in the sound scene, and representative of a listening environment.
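Since the late reflections carry no direction, applying them reduces, per output channel, to adding delayed and scaled copies of the signal. A time-domain sketch with illustrative names:

```python
def apply_late_reflections(signal, late, sample_rate):
    """Add each late reflection to an output channel as a delayed,
    scaled copy of the signal; with no direction information, the same
    filtering is applied uniformly to every channel (sketch)."""
    out = list(signal)
    for alpha, tau in late:
        d = int(round(tau * sample_rate))  # delay in samples
        for n in range(d, len(signal)):
            out[n] += alpha * signal[n - d]
    return out
```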
  • this calculation is carried out only once, for example on initialization of the sound scene, and the acoustic coefficients and the delays of the late reflections are reused without modification by the ambisonic encoder. This makes it possible to obtain late reflections representative of the listening environment at a lower cost. According to other embodiments of the invention, this calculation is performed iteratively. For example, these acoustic coefficients and delays of the late reflections can be calculated at predefined time intervals, for example every 5 seconds. This makes it possible to permanently conserve acoustic coefficients and delays of late reflections representative of the sound scene and of the relative positions of a source of the sound wave and of the user, while limiting the computational complexity linked to the determination of the late reflections.
  • the acoustic coefficients and delays of the late reflections are calculated when the position of a source of the sound wave or of the user varies significantly, for example when the difference between the position of the user and the position of the user at the time of a previous calculation of the acoustic coefficients and delays of the late reflections is greater than a predefined threshold. This makes it possible to calculate the acoustic coefficients and delays of the late reflections representative of the sound scene only when the position of a source of the sound wave or of the user has varied sufficiently to perceptibly modify the late reflections.
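The movement-triggered recalculation above can be sketched as a simple distance check against the threshold (hypothetical helper name; 2-D positions for brevity):

```python
import math

def needs_update(pos, last_pos, threshold):
    """Recompute the late-reflection parameters only when the user (or
    source) has moved by more than a predefined threshold since the
    previous calculation."""
    return math.dist(pos, last_pos) > threshold
```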
  • figure 7 represents a method of encoding a sound wave with a plurality of reflections, in a set of embodiments of the invention.
  • the method 700 comprises a step 710 of frequency transformation of the sound wave.

EP16808645.2A 2016-01-05 2016-12-08 Encodeur ambisonique ameliore d'une source sonore a pluralite de reflexions Active EP3400599B1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
FR1650062A FR3046489B1 (fr) 2016-01-05 2016-01-05 Encodeur ambisonique ameliore d'une source sonore a pluralite de reflexions
PCT/EP2016/080216 WO2017118519A1 (fr) 2016-01-05 2016-12-08 Encodeur ambisonique ameliore d'une source sonore a pluralite de reflexions

Publications (2)

Publication Number Publication Date
EP3400599A1 EP3400599A1 (fr) 2018-11-14
EP3400599B1 true EP3400599B1 (fr) 2021-06-16

Family

ID=55953194

Family Applications (1)

Application Number Title Priority Date Filing Date
EP16808645.2A Active EP3400599B1 (fr) 2016-01-05 2016-12-08 Encodeur ambisonique ameliore d'une source sonore a pluralite de reflexions

Country Status (5)

Country Link
US (2) US10475458B2 (zh)
EP (1) EP3400599B1 (zh)
CN (1) CN108701461B (zh)
FR (1) FR3046489B1 (zh)
WO (1) WO2017118519A1 (zh)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP3602252A4 (en) * 2017-03-28 2020-12-16 Magic Leap, Inc. SYSTEM OF EXTENDED REALITY WITH SPACIOUS AUDIO TIED TO A USER MANIPULATED VIRTUAL OBJECT
US11004457B2 (en) 2017-10-18 2021-05-11 Htc Corporation Sound reproducing method, apparatus and non-transitory computer readable storage medium thereof
CA3059064C (en) 2018-03-07 2022-01-04 Magic Leap, Inc. Visual tracking of peripheral devices
CN109327795B (zh) * 2018-11-13 2021-09-14 Oppo广东移动通信有限公司 音效处理方法及相关产品

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6021206A (en) * 1996-10-02 2000-02-01 Lake Dsp Pty Ltd Methods and apparatus for processing spatialised audio
US20050069143A1 (en) * 2003-09-30 2005-03-31 Budnikov Dmitry N. Filtering for spatial audio rendering
AU2003301502A1 (en) * 2003-12-15 2005-08-03 France Telecom Method for synthesizing acoustic spatialization
EP2205007B1 (en) * 2008-12-30 2019-01-09 Dolby International AB Method and apparatus for three-dimensional acoustic field encoding and optimal reconstruction
FR3040807B1 (fr) 2015-09-07 2022-10-14 3D Sound Labs Procede et systeme d'elaboration d'une fonction de transfert relative a la tete adaptee a un individu


Also Published As

Publication number Publication date
WO2017118519A1 (fr) 2017-07-13
US20190019520A1 (en) 2019-01-17
US10475458B2 (en) 2019-11-12
US11062714B2 (en) 2021-07-13
CN108701461A (zh) 2018-10-23
EP3400599A1 (fr) 2018-11-14
CN108701461B (zh) 2023-10-27
US20200058312A1 (en) 2020-02-20
FR3046489B1 (fr) 2018-01-12
FR3046489A1 (fr) 2017-07-07


Legal Events

Date Code Title Description
STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: UNKNOWN

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: THE INTERNATIONAL PUBLICATION HAS BEEN MADE

PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: REQUEST FOR EXAMINATION WAS MADE

17P Request for examination filed

Effective date: 20180627

AK Designated contracting states

Kind code of ref document: A1

Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR

AX Request for extension of the european patent

Extension state: BA ME

DAV Request for validation of the european patent (deleted)
DAX Request for extension of the european patent (deleted)
STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: EXAMINATION IS IN PROGRESS

17Q First examination report despatched

Effective date: 20190701

19U Interruption of proceedings before grant

Effective date: 20190116

19W Proceedings resumed before grant after interruption of proceedings

Effective date: 20200302

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: EXAMINATION IS IN PROGRESS

RAP1 Party data changed (applicant data changed or rights of an application transferred)

Owner name: MIMI HEARING TECHNOLOGIES GMBH

GRAP Despatch of communication of intention to grant a patent

Free format text: ORIGINAL CODE: EPIDOSNIGR1

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: GRANT OF PATENT IS INTENDED

INTG Intention to grant announced

Effective date: 20210209

GRAS Grant fee paid

Free format text: ORIGINAL CODE: EPIDOSNIGR3

GRAA (expected) grant

Free format text: ORIGINAL CODE: 0009210

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: THE PATENT HAS BEEN GRANTED

AK Designated contracting states

Kind code of ref document: B1

Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR

REG Reference to a national code

Ref country code: GB

Ref legal event code: FG4D

Free format text: NOT ENGLISH

REG Reference to a national code

Ref country code: CH

Ref legal event code: EP

REG Reference to a national code

Ref country code: DE

Ref legal event code: R096

Ref document number: 602016059445

Country of ref document: DE

REG Reference to a national code

Ref country code: AT

Ref legal event code: REF

Ref document number: 1402987

Country of ref document: AT

Kind code of ref document: T

Effective date: 20210715

REG Reference to a national code

Ref country code: IE

Ref legal event code: FG4D

Free format text: LANGUAGE OF EP DOCUMENT: FRENCH

REG Reference to a national code

Ref country code: LT

Ref legal event code: MG9D

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: FI

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20210616

Ref country code: LT

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20210616

Ref country code: BG

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20210916

Ref country code: HR

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20210616

REG Reference to a national code

Ref country code: AT

Ref legal event code: MK05

Ref document number: 1402987

Country of ref document: AT

Kind code of ref document: T

Effective date: 20210616

REG Reference to a national code

Ref country code: NL

Ref legal event code: MP

Effective date: 20210616

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: SE

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20210616

Ref country code: RS

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20210616

Ref country code: NO

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20210916

Ref country code: LV

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20210616

Ref country code: GR

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20210917

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: PT

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20211018

Ref country code: RO

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20210616

Ref country code: NL

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20210616

Ref country code: ES

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20210616

Ref country code: CZ

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20210616

Ref country code: EE

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20210616

Ref country code: AT

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20210616

Ref country code: SM

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20210616

Ref country code: SK

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20210616

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: PL

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20210616

REG Reference to a national code

Ref country code: DE

Ref legal event code: R097

Ref document number: 602016059445

Country of ref document: DE

PLBE No opposition filed within time limit

Free format text: ORIGINAL CODE: 0009261

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: NO OPPOSITION FILED WITHIN TIME LIMIT

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: DK

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20210616

26N No opposition filed

Effective date: 20220317

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: AL

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20210616

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: MC

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20210616

Ref country code: IT

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20210616

REG Reference to a national code

Ref country code: CH

Ref legal event code: PL

REG Reference to a national code

Ref country code: BE

Ref legal event code: MM

Effective date: 20211231

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: LU

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20211208

Ref country code: IE

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20211208

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: BE

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20211231

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: LI

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20211231

Ref country code: CH

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20211231

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: CY

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20210616

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: HU

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT; INVALID AB INITIO

Effective date: 20161208

PGFP Annual fee paid to national office [announced via postgrant information from national office to epo]

Ref country code: GB

Payment date: 20231130

Year of fee payment: 8

PGFP Annual fee paid to national office [announced via postgrant information from national office to epo]

Ref country code: FR

Payment date: 20231212

Year of fee payment: 8

Ref country code: DE

Payment date: 20231205

Year of fee payment: 8

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: MK

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20210616