EP1836876B1 - Method and device for individualizing HRTFs by modeling - Google Patents
Method and device for individualizing HRTFs by modeling
- Publication number
- EP1836876B1 EP1836876B1 EP06709051.4A EP06709051A EP1836876B1 EP 1836876 B1 EP1836876 B1 EP 1836876B1 EP 06709051 A EP06709051 A EP 06709051A EP 1836876 B1 EP1836876 B1 EP 1836876B1
- Authority
- EP
- European Patent Office
- Prior art keywords
- directions
- hrtfs
- individual
- model
- measurements
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Links
- 238000000034 method Methods 0.000 title claims description 42
- 238000005259 measurement Methods 0.000 claims description 97
- 230000006870 function Effects 0.000 claims description 64
- 238000012546 transfer Methods 0.000 claims description 37
- 238000013528 artificial neural network Methods 0.000 claims description 35
- 238000010200 validation analysis Methods 0.000 claims description 24
- 210000005069 ears Anatomy 0.000 claims description 17
- 238000012545 processing Methods 0.000 claims description 16
- 238000012360 testing method Methods 0.000 claims description 16
- 230000000877 morphologic effect Effects 0.000 claims description 13
- 238000010276 construction Methods 0.000 claims description 9
- 238000004590 computer program Methods 0.000 claims description 7
- 238000009434 installation Methods 0.000 claims description 6
- 230000000694 effects Effects 0.000 claims description 3
- 210000002569 neuron Anatomy 0.000 description 18
- 230000015572 biosynthetic process Effects 0.000 description 7
- 238000013178 mathematical model Methods 0.000 description 7
- 238000003786 synthesis reaction Methods 0.000 description 7
- 230000004913 activation Effects 0.000 description 6
- 230000008901 benefit Effects 0.000 description 6
- 230000008569 process Effects 0.000 description 6
- 238000001228 spectrum Methods 0.000 description 6
- 210000003128 head Anatomy 0.000 description 5
- 238000012549 training Methods 0.000 description 5
- 230000009467 reduction Effects 0.000 description 4
- 238000004364 calculation method Methods 0.000 description 3
- 238000000513 principal component analysis Methods 0.000 description 3
- 238000013179 statistical model Methods 0.000 description 3
- 230000009466 transformation Effects 0.000 description 3
- 238000013459 approach Methods 0.000 description 2
- 238000011161 development Methods 0.000 description 2
- 238000009877 rendering Methods 0.000 description 2
- 230000004044 response Effects 0.000 description 2
- 238000005070 sampling Methods 0.000 description 2
- 238000010420 art technique Methods 0.000 description 1
- 230000006399 behavior Effects 0.000 description 1
- 210000004027 cell Anatomy 0.000 description 1
- 238000001514 detection method Methods 0.000 description 1
- 230000006866 deterioration Effects 0.000 description 1
- 210000000613 ear canal Anatomy 0.000 description 1
- 238000005516 engineering process Methods 0.000 description 1
- 238000001914 filtration Methods 0.000 description 1
- 238000012417 linear regression Methods 0.000 description 1
- 238000007620 mathematical function Methods 0.000 description 1
- 238000012067 mathematical method Methods 0.000 description 1
- 230000001537 neural effect Effects 0.000 description 1
- 230000008447 perception Effects 0.000 description 1
- 230000000717 retained effect Effects 0.000 description 1
- 238000004088 simulation Methods 0.000 description 1
- 230000005236 sound signal Effects 0.000 description 1
- 238000007619 statistical method Methods 0.000 description 1
- 230000002123 temporal effect Effects 0.000 description 1
- 238000000844 transformation Methods 0.000 description 1
- 210000003454 tympanic membrane Anatomy 0.000 description 1
- 230000000007 visual effect Effects 0.000 description 1
Images
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04S—STEREOPHONIC SYSTEMS
- H04S1/00—Two-channel systems
- H04S1/002—Non-adaptive circuits, e.g. manually adjustable or static, for enhancing the sound image or the spatial distribution
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04S—STEREOPHONIC SYSTEMS
- H04S7/00—Indicating arrangements; Control arrangements, e.g. balance control
- H04S7/30—Control circuits for electronic adaptation of the sound field
- H04S7/301—Automatic calibration of stereophonic sound system, e.g. with test microphone
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04S—STEREOPHONIC SYSTEMS
- H04S1/00—Two-channel systems
- H04S1/007—Two-channel systems in which the audio signals are in digital form
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04S—STEREOPHONIC SYSTEMS
- H04S2420/00—Techniques used stereophonic systems covered by H04S but not provided for in its groups
- H04S2420/01—Enhancing the perception of the sound image or of the spatial distribution using head related transfer functions [HRTF's] or equivalents thereof, e.g. interaural time difference [ITD] or interaural level difference [ILD]
Definitions
- the present invention relates to the modeling of an individual's transfer functions, known as HRTFs (for "Head Related Transfer Functions"), which characterize that individual's hearing in three-dimensional space.
- the invention is particularly in the context of telecommunication services offering spatialized sound broadcasting (for example an audio conference between several speakers, a movie trailer).
- the most effective technique for positioning sound sources in space is then binaural synthesis.
- Binaural synthesis is based on the use of so-called "binaural" filters, which reproduce the acoustic transfer functions between the sound source and the listener's ears. These filters simulate the auditory localization cues that allow a listener to locate sound sources in real listening situations. They take into account all the acoustic phenomena (in particular diffraction by the head and reflections on the pinna and the upper torso) that modify the acoustic wave on its path between the source and the listener's ears. These phenomena vary greatly with the position of the sound source (mainly with its direction), and these variations allow the listener to locate the source in space.
- Binaural techniques using binaural filters define the field of binaural synthesis in an advantageous context of the present invention.
- Binaural synthesis is based on binaural filters that model the propagation of the acoustic wave between the source and the listener's two ears. These filters represent acoustic transfer functions, called HRTFs, that model the transformations applied by the listener's torso, head and pinna to the signal coming from a sound source. Each sound source position is associated with a pair of HRTFs (one HRTF for the right ear, one HRTF for the left ear). In addition, HRTFs carry the acoustic fingerprint of the morphology of the individual on whom they were measured.
- HRTFs therefore depend not only on the direction of the sound, but also on the individual. They are thus a function of the frequency f, of the position (ϕ, θ) of the sound source (where the angle ϕ represents the azimuth and the angle θ the elevation), of the ear (left or right) and of the individual.
- HRTFs are obtained by measurement.
- left and right HRTFs are measured by means of microphones inserted at the entrance of the subject's ear canals. The measurement must be performed in an anechoic chamber (or "deaf room").
- for M measured directions we obtain, for a given subject, a database of 2M acoustic transfer functions, one per ear for each position in space.
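As an illustration only (the names and data layout below are hypothetical, not taken from the patent), the 2M-entry database described above can be sketched as a mapping from (ear, direction) pairs to measured transfer functions:

```python
from typing import Callable, Dict, List, Tuple

Direction = Tuple[float, float]  # (azimuth, elevation) in degrees

def build_hrtf_database(directions: List[Direction],
                        measure: Callable[[str, Direction], List[float]]
                        ) -> Dict[Tuple[str, Direction], List[float]]:
    """For M directions, collect 2M transfer functions: one per ear and direction."""
    db = {}
    for d in directions:
        for ear in ("left", "right"):
            db[(ear, d)] = measure(ear, d)  # stand-in for the actual acoustic measurement
    return db

# Dummy "measurement" returning a flat 8-bin spectrum, just to exercise the structure.
dirs = [(0.0, 0.0), (90.0, 0.0), (0.0, 45.0)]
db = build_hrtf_database(dirs, lambda ear, d: [1.0] * 8)
print(len(db))  # 2M = 6 entries for M = 3 directions
```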
- the spatialization effect is based on the use of HRTFs which, for optimal performance, must take into account acoustic propagation phenomena between the source and the ears, but also the individual specificities of the morphology of the listener.
- the experimental measurement of HRTFs directly on an individual is, at present, the most reliable way to obtain binaural filters that are of high quality and truly individualized (taking into account the individual specificities of the subject's morphology). It is recalled that this means measuring the transfer function between a source located at a given position (ϕ1, θ1) and the two ears of the subject, by means of microphones placed at the entrance of that person's ear canals.
- An embodiment of this document provides in particular for enriching the morphological data of an individual, at the input of the model, with a few HRTFs measured on this individual in respective specific directions. Thus, only a small number of measurement directions are needed to obtain the individual's HRTFs in all directions of space.
- the conditions and directions in which the representative functions of the HRTFs are to be measured can be arbitrarily set in the learning step.
- "arbitrarily" refers to the fact that these directions are not necessarily privileged directions for which the model gives better results. It will therefore be understood that these conditions and/or measurement directions can be chosen for reasons independent of the proper functioning of the model. In addition, the measurement conditions are not necessarily optimal. This is why we speak here of "measurements representative of HRTFs" rather than "HRTF measurements".
- the measurement conditions of step c1), applied to any individual, must preferably be reproducible with those used to constitute the model in step b).
- these measurement conditions can be chosen according to criteria that are completely independent of the operation of the model, the essential point being that they are reproducible between the moment when the model is constituted, in step b), and the moment when the measurements are taken on any individual in step c).
- obtaining complete HRTFs for any individual can be achieved by roughly measuring his HRTFs in only a few directions, with a lean measurement procedure (that is, one involving only a reduced number of measurement directions and/or a simplified measuring device).
- the output vector Y of the model consists of coefficients associated with a given representation of an HRTF.
- the vector Y may correspond to the frequency coefficients describing the spectrum modulus of an HRTF, but other representations may be considered (principal component analysis, IIR filter, or others).
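For instance, if an HRTF is represented by the modulus of its spectrum, the coefficients of Y could be obtained from a measured impulse response with a plain DFT. This is a minimal stdlib sketch of that representation, not the patent's actual processing chain:

```python
import cmath

def spectrum_modulus(impulse_response, n_coeffs):
    """Magnitude of the first n_coeffs DFT bins of an impulse response."""
    N = len(impulse_response)
    mags = []
    for k in range(n_coeffs):
        bin_k = sum(x * cmath.exp(-2j * cmath.pi * k * n / N)
                    for n, x in enumerate(impulse_response))
        mags.append(abs(bin_k))
    return mags

# A unit impulse has a flat spectrum: every bin has magnitude 1.
print(spectrum_modulus([1.0, 0.0, 0.0, 0.0], 4))
```

In practice Y could equally hold principal-component weights or IIR filter coefficients, as the text notes; only the choice of representation changes, not the role of Y at the model output.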
- the method of the invention is preferably based on statistical learning algorithms and, in a preferred embodiment, on artificial neural network type algorithms. These algorithms are briefly presented below.
- Statistical learning algorithms are tools for predicting statistical processes. They have been used successfully for the prediction of processes for which several explanatory variables can be identified. Artificial neural networks form a particular category of these algorithms. The interest of neural networks lies in their ability to capture high-level dependencies, that is, dependencies that involve several variables at once. Process prediction takes advantage of the knowledge and exploitation of such high-level dependencies. Neural networks have a wide variety of application domains, notably in finance to predict market fluctuations, in pharmaceuticals, in banking for the detection of credit card fraud, and in marketing to predict consumer behavior. Neural networks are often considered universal predictors, in the sense that they are capable of predicting arbitrary data from any explanatory variables, provided that the number of hidden units is sufficient. In other words, they make it possible to model any mathematical function, provided that the number m of hidden units is sufficient.
- a neural network consists of three layers: an input layer 10, a hidden layer 11 and an output layer 12.
- the input layer 10 corresponds to the explanatory variables, that is to say the input variables (the aforementioned vector X), from which the prediction is made, and which will be described in detail later.
- the output layer 12 defines the predicted values (the above-mentioned vector Y).
- a first step 111 consists in calculating linear combinations of the explanatory variables so as to combine the information coming potentially from several variables.
- a second step 112 consists in applying a non-linear transformation (for example a function of the " hyperbolic tangent " type) to each of the linear combinations in order to obtain the values of the hidden units or neurons that constitute the hidden layer. This nonlinear transformation defines the activation function of the neurons.
- the hidden units are recombined linearly, at step 113, to calculate the value predicted by the neural network.
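The three steps 111-113 above amount to a single forward pass through a one-hidden-layer network. A minimal sketch in plain Python (the weights here are illustrative, not learned):

```python
import math

def mlp_forward(x, w_hidden, b_hidden, w_out, b_out):
    # step 111: linear combinations of the explanatory variables
    pre = [sum(w * xi for w, xi in zip(row, x)) + b
           for row, b in zip(w_hidden, b_hidden)]
    # step 112: non-linear activation ("hyperbolic tangent") -> hidden units
    hidden = [math.tanh(p) for p in pre]
    # step 113: linear recombination of the hidden units -> predicted value
    return sum(w * h for w, h in zip(w_out, hidden)) + b_out

# Toy example: two inputs, two hidden units, one output.
y = mlp_forward([1.0, -1.0],
                w_hidden=[[0.5, 0.5], [1.0, -1.0]],
                b_hidden=[0.0, 0.0],
                w_out=[1.0, 0.5],
                b_out=0.1)
print(y)  # 0.5 * tanh(2.0) + 0.1, since the first hidden unit is tanh(0) = 0
```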
- There are different categories of neural networks, distinguished by their architecture (type of interconnection between neurons, choice of activation functions, and so on) and by the learning mode used.
- Neural networks are not used for prediction purposes only. They are also used for classification and/or clustering, with a view to information reduction. Indeed, a neural network is able, within a data set, to identify common characteristics among the elements of this set and to group them according to their resemblance. Each group thus formed is then associated with an element representative of the information contained in the group, called the "representative". This representative can then be substituted for the entire group. The data set can thus be described by means of a reduced number of elements, which constitutes a data reduction. Kohonen maps, or self-organizing maps (SOM, for "Self Organizing Map"), are neural networks dedicated to this clustering task.
- the method that seemed most immediate was a uniform selection, in which a subset of directions was chosen so as to cover the entire 3D sphere as homogeneously and evenly as possible. This method was based on a regular sampling of the 3D sphere. However, it turned out that HRTFs do not vary uniformly with direction. From this point of view, a uniform selection of HRTFs was not really effective.
- the clustering procedure also provides additional information as to the directions associated with the representative HRTFs, information that makes it possible to define a selection of HRTFs intended to feed the input of the HRTF calculation model. This selection is a priori non-uniform but more efficient, and guarantees a better "representativeness" of the entire 3D sphere.
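As an illustration of the clustering idea (a generic k-means sketch, not the Kohonen-map procedure the text actually refers to), directions can be grouped by similarity and each group summarized by a representative centroid:

```python
def kmeans(points, k, iters=20):
    """Group points (e.g. (azimuth, elevation) pairs) around k representatives."""
    # Deterministic initialization for the sketch: first and last points.
    centroids = [points[0], points[-1]] if k == 2 else points[:k]
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k),
                          key=lambda c: sum((a - b) ** 2
                                            for a, b in zip(p, centroids[c])))
            groups[nearest].append(p)
        centroids = [tuple(sum(coord) / len(g) for coord in zip(*g)) if g
                     else centroids[i] for i, g in enumerate(groups)]
    return centroids, groups

# Two clearly separated bundles of directions.
dirs = [(0, 0), (1, 0), (0, 1), (90, 0), (91, 1), (89, 0)]
centroids, groups = kmeans(dirs, 2)
print(sorted(len(g) for g in groups))  # [3, 3]: one representative per bundle
```

The representative directions found this way are non-uniform over the sphere, which is exactly the property the text contrasts with regular sampling.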
- the present invention proposes using, as input parameters of the model, a selection of HRTFs corresponding to arbitrary directions, in the sense that these directions are not necessarily "representative" (in the sense of the clustering technique described above). These directions nevertheless remain exploitable, in that the model is able to extract from them information specific to each individual.
- the invention uses "artificial neural network" type statistical learning algorithms as a modeling tool for calculating HRTFs (for example with a "Multi Layer Perceptron" or MLP type neural network).
- the input parameters of the neural network are at least the azimuth angle (ϕ1) and the elevation angle (θ1) specifying the direction of an HRTF to be calculated. These parameters are possibly supplemented by "individual" parameters associated with the individual whose HRTFs are to be calculated. These individual parameters include a selection of the individual's HRTFs that have been previously measured. Nevertheless, it is not excluded to add morphological parameters of the individual at the input of the model, to enrich the information provided to it.
- the output parameters of the model are then the coefficients of the vector describing the HRTF for the direction (ϕ1, θ1) and for the individual specified at the input.
- a risk of the learning phase is over-learning, which manifests itself as follows: the neural network learns the training set "by heart" and tries to reproduce variations specific to the training set, even though they do not exist at the global level.
- the validation phase 22 is conducted in conjunction with the learning phase 21. Referring to figure 3, it consists in evaluating the prediction error of the neural network on a validation set (distinct from the training set), which defines the validation error. During learning, the validation error Err_valid begins by decreasing, then starts to grow again when over-learning occurs. The minimum MIN of the validation error therefore determines the end of the learning.
- the test phase is conducted once training is complete and consists in evaluating the prediction error on the test set. This error, called the "test error", describes the final performance of the neural network.
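The learning/validation interplay described above is essentially early stopping: training halts when the validation error reaches its minimum MIN and starts growing again. A hedged sketch (the `patience` mechanism is a common convention, not something stated in the patent):

```python
def train_with_early_stopping(train_step, validation_error,
                              max_epochs=100, patience=5):
    """Train while the validation error keeps improving; report its minimum."""
    best_err, best_epoch, since_best = float("inf"), -1, 0
    for epoch in range(max_epochs):
        train_step(epoch)
        err = validation_error(epoch)
        if err < best_err:
            best_err, best_epoch, since_best = err, epoch, 0
        else:
            since_best += 1
            if since_best >= patience:  # validation error is growing again: stop
                break
    return best_epoch, best_err

# Simulated U-shaped validation error with its minimum at epoch 10.
result = train_with_early_stopping(lambda e: None,
                                   lambda e: (e - 10) ** 2 + 1)
print(result)  # (10, 1)
```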
- an operational neural network is then available, to which it suffices to submit the input parameters to obtain the HRTFs of an individual in a given direction.
- the method in the general sense of the invention therefore comprises a step a) during which a database is formed by measuring a plurality of HRTFs in a multiplicity of directions of space and for a plurality of individuals.
- This measurement step, referenced 40 in figure 4a, consists in collecting HRTF measurements in N directions of space, for several individuals preferably of different morphologies (or "morphotypes"), so as to obtain a database that is complete with respect to the specificities of individuals. More generally, the higher the number of individuals taken into account during learning, the better the performance of the neural network, especially in terms of "universality".
- step b) consists in learning the model using the database 20.
- in step 41, measurements representative of HRTFs in a restricted number n of directions i (with n < N) are arbitrarily selected. This step 41 will be described in detail later, with reference to figure 4c.
- the three phases of learning 21, validation 22 and test 23 are then conducted to build the model in step 44. It will be noted that the limited number n of measurements can be adjusted to avoid the over-learning phenomenon described above. Thus, it is possible to determine an optimum number Nopt of measurements necessary for the proper functioning of the model (step 42) and to adopt this optimum number (step 43) for the definition of the model.
- the neural network 44 for calculating the HRTFs.
- the neural network 44 is then able to calculate the HRTFs of any individual, in any direction, provided that a few HRTFs of that individual are available in the predetermined directions (ϕi mes, θi mes).
- the measurement conditions of step c1) must be substantially reproducible with the measurement conditions used for the HRTFs in the directions i (step 41 of figure 4a).
- the database 20 must be constituted under the most conventional and standard conditions to offer, at the output of the model, quality HRTFs that can be applied to rendering devices by providing satisfactory listening comfort.
- these "degraded" measurements are denoted HRTF(ϕi mes, θi mes) and are carried out at step 48 of figure 4c.
- the model compares these calculated HRTFs with the HRTFs of the database 20 in the same directions (ϕj cal, θj cal). If the deviation is considered too large (arrow n), the learning of model 44b is refined until this difference is reduced to an acceptable error (arrow o): the model then becomes definitive (end step 44).
- the individual IND is placed in a booth CAB that is not necessarily anechoic. He wears a headset CAS with at least one microphone MIC attached to one of his ears. Preferably, the headset CAS is carried by a rigid rod that is telescopic in height (along the y axis). This rod is also attached to a reference mark REP1 of the booth CAB.
- This embodiment makes it possible to hold the individual IND in place (with respect to the other axes x and z) and to position him correctly with respect to the reference mark REP1 and, consequently, with respect to the sound sources S1, S2, … of the booth CAB.
- another mark REP2, such as a visual cue on a mirror, allows the individual to be positioned in height (along the y axis).
- the individual can sit on a height-adjustable seat and adjust its height until his ears coincide with the mark REP2 on the mirror.
- one of the advantages of the implementation of the invention is to avoid the clustering technique and to leave the location of the sound sources S1-Sn free.
- these sources may thus be located elsewhere than at the mirror bearing the mark REP2, or at the level of the base of the rod REP1.
- the source S2 is slightly shifted relative to the reference mark REP1.
- the number of sources S1-Sn to provide depends, in principle, on the number of HRTFs that one wishes to calculate with the model. Typically, to calculate HRTFs throughout 3D space, between 25 and 30 preset measurement directions in the booth CAB are recommended. Nevertheless, for satisfactory listening comfort, about fifteen measurements are sufficient.
- the sources S1 to Sn are not necessarily arranged on the same portion of a sphere.
- the purpose of the measurement protocol of figure 5 is not to obtain HRTFs in the strict sense of the term, but rather transfer functions of an individual that are partially representative of his HRTFs. These transfer functions are intended to be used as input parameters of the model 44.
- the inventors have indeed found that the model is able to extract and use the individual information contained in these transfer functions, even if this information is partial or scrambled. What matters is not the quality of the HRTFs measured according to this protocol, but their reproducibility. It is essentially on this reproducibility that the HRTF model is based.
- One advantage of this measurement protocol is to relax the constraints of the measurement procedure, without affecting the proper functioning of the model.
- the sound sources S1-Sn provided in the booth CAB may be in respective positions belonging to different spherical surfaces.
- the signals measured by the microphone MIC are collected by an interface 51 of a central unit UC (for example an audio acquisition board), which converts them into digital data.
- These data, enriched if necessary by a measurement of the morphological parameter(s) of the individual, are then processed by the model 44 within the meaning of the invention.
- the model 44 may be stored as a computer program product in a memory of the CPU.
- the HRTFs that the model delivers for all directions of space can then be stored in memory 52, recorded on a removable medium (diskette or CD-ROM), or communicated via a network such as the Internet or equivalent.
- the input layer of the neural network comprises a selection of HRTFs of the individual corresponding to arbitrary directions, fixed a priori and obtained under non-ideal conditions.
- these HRTFs are certainly obtained by direct measurement on the individual IND, but under non-ideal conditions, in particular in an environment that is not necessarily anechoic.
- the measurement protocol must be defined beforehand (typically in learning step b)) and must be rigorously followed in step c) of applying the model to any individual.
- the resulting neural network is able to compute the HRTFs of any individual in any direction, provided that it has the measurements in the selected directions (ϕi mes, θi mes), obtained under these predefined conditions.
- a single source can be provided which moves between positions S1 to Sn.
Landscapes
- Physics & Mathematics (AREA)
- Engineering & Computer Science (AREA)
- Acoustics & Sound (AREA)
- Signal Processing (AREA)
- Stereophonic System (AREA)
- Management, Administration, Business Operations System, And Electronic Commerce (AREA)
- Feedback Control In General (AREA)
Claims (10)
- Method for modeling HRTF transfer functions specific to an individual, wherein: a) a database is built comprising a plurality of HRTFs according to a multiplicity of directions of space and for a plurality of individuals, b) by learning on the database, a model is constructed that is able to deliver HRTFs for the multiplicity of directions from a set of measurements representative of HRTFs in directions selected from the multiplicity of directions, and c) for any individual: c1) a set of functions representative of the individual's HRTFs is measured in the selected directions only, c2) the model is applied to the measurements in the selected directions, and c3) the individual's HRTFs are obtained in the whole multiplicity of directions, and wherein: the measurement conditions and directions for obtaining the set of measurements are arbitrarily fixed during the learning step b), and, in step c1), measurement conditions are applied that are substantially reproducible with the measurement conditions of step b).
- Method according to claim 1, wherein in step a), in parallel with the construction of the database for the plurality of individuals, respective sets of functions representative of the HRTFs are also measured on the plurality of individuals under the arbitrarily fixed measurement conditions and directions, and, for the construction of the model in step b): the respective sets are applied at the input of the model, and the database is applied at the output of the model.
- Method according to either of claims 1 and 2, wherein the model is constructed using an artificial neural network.
- Method according to claim 3, wherein step b) comprises: a learning phase, a validation phase carried out in parallel with the learning phase, and a test phase, and wherein, during the validation phase, an optimum number (Nopt) of measurements to be supplied at the input of the model for carrying out step c) is determined, so as to limit an over-learning effect of the model.
- Method according to claim 4, wherein the optimum number (Nopt) is of the order of twenty.
- Method according to any one of the preceding claims, wherein the model also uses at least one morphological parameter characterizing an individual, and wherein, in step c2), a measurement of the morphological parameter is also supplied to the model.
- Method according to any one of the preceding claims, wherein, in step c2), the following are supplied at the input of the model: the set of measurements in the selected directions, and at least one direction (ϕj cal, θj cal) from the multiplicity of directions in which an estimate of HRTFs is desired.
- Installation for estimating HRTF transfer functions specific to an individual, comprising: a booth for measuring transfer functions representative of HRTFs in a set of selected directions, and a processing unit (UC) for retrieving a set of measurements taken on an individual in the selected directions and evaluating the individual's HRTFs in a multiplicity of directions of space including the selected directions, on the basis of a model capable of delivering HRTFs for a multiplicity of directions from a set of measurements representative of HRTFs in only some directions arbitrarily fixed among the multiplicity of directions, the measurement directions in the booth corresponding to the arbitrarily fixed directions, characterized in that, the booth having reference marks (REP1, REP2) on axes (x, y) whose intersection defines a position of the ears of the individual (IND) in the booth (CAB), sound sources (S1-Sn) provided in the booth (CAB) for carrying out the measurements are placed at different distances from said intersection.
- Computer program product intended to be stored in a memory of a processing unit, or on a removable medium able to cooperate with a reader of the processing unit, or to be transmitted from a server to the processing unit, comprising instructions in the form of computer code for constructing a model based on an artificial neural network and capable of delivering the HRTF transfer functions of an individual for a multiplicity of directions from a set of measurements taken on this individual, representative of HRTFs in only some directions arbitrarily fixed among the multiplicity of directions, the program carrying out at least one learning phase on the basis of a database comprising a plurality of HRTFs according to a multiplicity of directions of space and for a plurality of individuals.
- Computer program product intended to be stored in a memory of a processing unit, or on a removable medium able to cooperate with a reader of the processing unit, or to be transmitted from a server to the processing unit, comprising instructions in the form of computer code for applying a model based on an artificial neural network and capable of delivering the HRTF transfer functions of an individual for a multiplicity of directions from a set of measurements taken on this individual, representative of HRTFs in only some directions arbitrarily fixed among the multiplicity of directions, the measurements on the individual being carried out in a booth in which: the measurement directions correspond to the arbitrarily fixed directions, and the booth has reference marks (REP1, REP2) on respective axes (x, y) whose intersection defines a position of the ears of the individual (IND) in the booth (CAB) for carrying out the measurements, sound sources (S1-Sn) provided in the booth (CAB) being placed at different distances from the intersection.
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
FR0500218A FR2880755A1 (fr) | 2005-01-10 | 2005-01-10 | Procede et dispositif d'individualisation de hrtfs par modelisation |
PCT/FR2006/000037 WO2006075077A2 (fr) | 2005-01-10 | 2006-01-09 | Procede et dispositif d’individualisation de hrtfs par modelisation |
Publications (2)
Publication Number | Publication Date |
---|---|
EP1836876A2 EP1836876A2 (de) | 2007-09-26 |
EP1836876B1 true EP1836876B1 (de) | 2018-07-18 |
Family
ID=34953232
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
EP06709051.4A Active EP1836876B1 (de) | 2005-01-10 | 2006-01-09 | Verfahren und vorrichtung zur individualisierung von hrtfs durch modellierung |
Country Status (5)
Country | Link |
---|---|
US (1) | US20080137870A1 (de) |
EP (1) | EP1836876B1 (de) |
JP (1) | JP4718559B2 (de) |
FR (1) | FR2880755A1 (de) |
WO (1) | WO2006075077A2 (de) |
Families Citing this family (36)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2007048900A1 (fr) * | 2005-10-27 | 2007-05-03 | France Telecom | Individualisation de hrtfs utilisant une modelisation par elements finis couplee a un modele correctif |
WO2007101958A2 (fr) * | 2006-03-09 | 2007-09-13 | France Telecom | Optimisation d'une spatialisation sonore binaurale a partir d'un encodage multicanal |
JP4866301B2 (ja) * | 2007-06-18 | 2012-02-01 | 日本放送協会 | 頭部伝達関数補間装置 |
DE102007051308B4 (de) * | 2007-10-26 | 2013-05-16 | Siemens Medical Instruments Pte. Ltd. | Verfahren zum Verarbeiten eines Mehrkanalaudiosignals für ein binaurales Hörgerätesystem und entsprechendes Hörgerätesystem |
WO2009106783A1 (fr) * | 2008-02-29 | 2009-09-03 | France Telecom | Procede et dispositif pour la determination de fonctions de transfert de type hrtf |
JP5346187B2 (ja) * | 2008-08-11 | 2013-11-20 | 日本放送協会 | 頭部音響伝達関数補間装置、そのプログラムおよび方法 |
US8428269B1 (en) * | 2009-05-20 | 2013-04-23 | The United States Of America As Represented By The Secretary Of The Air Force | Head related transfer function (HRTF) enhancement for improved vertical-polar localization in spatial audio systems |
FR2958825B1 (fr) * | 2010-04-12 | 2016-04-01 | Arkamys | Procede de selection de filtres hrtf perceptivement optimale dans une base de donnees a partir de parametres morphologiques |
CN102802111B (zh) * | 2012-07-19 | 2017-06-09 | 新奥特(北京)视频技术有限公司 | 一种输出环绕声的方法和系统 |
AU2012394979B2 (en) | 2012-11-22 | 2016-07-14 | Razer (Asia-Pacific) Pte. Ltd. | Method for outputting a modified audio signal and graphical user interfaces produced by an application program |
US20140355769A1 (en) | 2013-05-29 | 2014-12-04 | Qualcomm Incorporated | Energy preservation for decomposed representations of a sound field |
US9466305B2 (en) | 2013-05-29 | 2016-10-11 | Qualcomm Incorporated | Performing positional analysis to code spherical harmonic coefficients |
US9426589B2 (en) | 2013-07-04 | 2016-08-23 | Gn Resound A/S | Determination of individual HRTFs |
US9922656B2 (en) | 2014-01-30 | 2018-03-20 | Qualcomm Incorporated | Transitioning of ambient higher-order ambisonic coefficients |
US9489955B2 (en) | 2014-01-30 | 2016-11-08 | Qualcomm Incorporated | Indicating frame parameter reusability for coding vectors |
US10770087B2 (en) | 2014-05-16 | 2020-09-08 | Qualcomm Incorporated | Selecting codebooks for coding vectors decomposed from higher-order ambisonic audio signals |
US9852737B2 (en) | 2014-05-16 | 2017-12-26 | Qualcomm Incorporated | Coding vectors decomposed from higher-order ambisonics audio signals |
US9620137B2 (en) | 2014-05-16 | 2017-04-11 | Qualcomm Incorporated | Determining between scalar and vector quantization in higher order ambisonic coefficients |
US9747910B2 (en) | 2014-09-26 | 2017-08-29 | Qualcomm Incorporated | Switching between predictive and non-predictive quantization techniques in a higher order ambisonics (HOA) framework |
US9584942B2 (en) * | 2014-11-17 | 2017-02-28 | Microsoft Technology Licensing, Llc | Determination of head-related transfer function data from user vocalization perception |
US9544706B1 (en) | 2015-03-23 | 2017-01-10 | Amazon Technologies, Inc. | Customized head-related transfer functions |
JP6596896B2 (ja) * | 2015-04-13 | 2019-10-30 | 株式会社Jvcケンウッド | 頭部伝達関数選択装置、頭部伝達関数選択方法、頭部伝達関数選択プログラム、音声再生装置 |
FR3040253B1 (fr) * | 2015-08-21 | 2019-07-12 | Immersive Presonalized Sound | Procede de mesure de filtres phrtf d'un auditeur, cabine pour la mise en oeuvre du procede, et procedes permettant d'aboutir a la restitution d'une bande sonore multicanal personnalisee |
US9967693B1 (en) * | 2016-05-17 | 2018-05-08 | Randy Seamans | Advanced binaural sound imaging |
US10306396B2 (en) | 2017-04-19 | 2019-05-28 | United States Of America As Represented By The Secretary Of The Air Force | Collaborative personalization of head-related transfer function |
US11615339B2 (en) | 2018-06-06 | 2023-03-28 | EmbodyVR, Inc. | Automated versioning and evaluation of machine learning workflows |
WO2020008655A1 (ja) * | 2018-07-03 | 2020-01-09 | 学校法人千葉工業大学 | 頭部伝達関数生成装置、頭部伝達関数生成方法およびプログラム |
US10798513B2 (en) * | 2018-11-30 | 2020-10-06 | Qualcomm Incorporated | Head-related transfer function generation |
US10798515B2 (en) * | 2019-01-30 | 2020-10-06 | Facebook Technologies, Llc | Compensating for effects of headset on head related transfer functions |
JP7206027B2 (ja) * | 2019-04-03 | 2023-01-17 | アルパイン株式会社 | 頭部伝達関数学習装置および頭部伝達関数推論装置 |
GB2584152B (en) * | 2019-05-24 | 2024-02-21 | Sony Interactive Entertainment Inc | Method and system for generating an HRTF for a user |
KR20210008788A (ko) | 2019-07-15 | 2021-01-25 | 삼성전자주식회사 | 전자 장치 및 그 제어 방법 |
WO2021010562A1 (en) * | 2019-07-15 | 2021-01-21 | Samsung Electronics Co., Ltd. | Electronic apparatus and controlling method thereof |
EP4085660A4 (de) | 2019-12-30 | 2024-05-22 | Comhear Inc | Verfahren zum bereitstellen eines räumlichen schallfeldes |
WO2022147208A1 (en) * | 2020-12-31 | 2022-07-07 | Harman International Industries, Incorporated | Method and system for generating a personalized free field audio signal transfer function based on near-field audio signal transfer function data |
JP2024502537A (ja) * | 2020-12-31 | 2024-01-22 | ハーマン インターナショナル インダストリーズ インコーポレイテッド | 自由場オーディオ信号伝達関数データに基づいてパーソナライズされた自由場オーディオ信号伝達関数を生成する方法及びシステム |
Family Cites Families (12)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPH09191500A (ja) * | 1995-09-26 | 1997-07-22 | Nippon Telegr & Teleph Corp <Ntt> | 仮想音像定位用伝達関数表作成方法、その伝達関数表を記録した記憶媒体及びそれを用いた音響信号編集方法 |
WO1997025834A2 (en) * | 1996-01-04 | 1997-07-17 | Virtual Listening Systems, Inc. | Method and device for processing a multi-channel signal for use with a headphone |
US6243476B1 (en) * | 1997-06-18 | 2001-06-05 | Massachusetts Institute Of Technology | Method and apparatus for producing binaural audio for a moving listener |
DE19910372A1 (de) * | 1998-04-20 | 1999-11-04 | Florian M Koenig | Individuelle Außenrohr-Übertragungsfunktions-Bestimmung ohne zugehörige, übliche, akustische Probanden-Vermessung |
JP4226142B2 (ja) * | 1999-05-13 | 2009-02-18 | 三菱電機株式会社 | 音響再生装置 |
AUPQ514000A0 (en) * | 2000-01-17 | 2000-02-10 | University Of Sydney, The | The generation of customised three dimensional sound effects for individuals |
JP3521900B2 (ja) * | 2002-02-04 | 2004-04-26 | ヤマハ株式会社 | バーチャルスピーカアンプ |
CN1685762A (zh) * | 2002-09-23 | 2005-10-19 | 皇家飞利浦电子股份有限公司 | 声音重现系统、程序和数据载体 |
US7430300B2 (en) * | 2002-11-18 | 2008-09-30 | Digisenz Llc | Sound production systems and methods for providing sound inside a headgear unit |
US20090030552A1 (en) * | 2002-12-17 | 2009-01-29 | Japan Science And Technology Agency | Robotics visual and auditory system |
WO2005025270A1 (ja) * | 2003-09-08 | 2005-03-17 | Matsushita Electric Industrial Co., Ltd. | 音像制御装置の設計ツールおよび音像制御装置 |
WO2007048900A1 (fr) * | 2005-10-27 | 2007-05-03 | France Telecom | Individualisation de hrtfs utilisant une modelisation par elements finis couplee a un modele correctif |
- 2005
- 2005-01-10 FR FR0500218A patent/FR2880755A1/fr active Pending
- 2006
- 2006-01-09 WO PCT/FR2006/000037 patent/WO2006075077A2/fr active Application Filing
- 2006-01-09 JP JP2007549938A patent/JP4718559B2/ja active Active
- 2006-01-09 US US11/794,987 patent/US20080137870A1/en not_active Abandoned
- 2006-01-09 EP EP06709051.4A patent/EP1836876B1/de active Active
Non-Patent Citations (1)
Title |
---|
RICK L JENISON ET AL: "A Spherical Basis Function Neural Network for Modeling Auditory Space", NEURAL COMPUTATION., vol. 8, no. 1, 1 January 1996 (1996-01-01), US, pages 115 - 128, XP055356520, ISSN: 0899-7667, DOI: 10.1162/neco.1996.8.1.115 * |
Also Published As
Publication number | Publication date |
---|---|
EP1836876A2 (de) | 2007-09-26 |
WO2006075077A2 (fr) | 2006-07-20 |
FR2880755A1 (fr) | 2006-07-14 |
JP4718559B2 (ja) | 2011-07-06 |
JP2008527821A (ja) | 2008-07-24 |
WO2006075077A3 (fr) | 2006-10-05 |
US20080137870A1 (en) | 2008-06-12 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
EP1836876B1 (de) | Verfahren und vorrichtung zur individualisierung von hrtfs durch modellierung | |
EP1946612B1 (de) | Hrtfs-individualisierung durch modellierung mit finiten elementen gekoppelt mit einem korrekturmodell | |
EP3348079B1 (de) | Verfahren und system zur entwicklung einer an ein individuum angepassten kopfbezogenen übertragungsfunktion | |
EP2898707B1 (de) | Optimierte kalibrierung eines klangwiedergabesystems mit mehreren lautsprechern | |
EP1563485B1 (de) | Verfahren zur verarbeitung von audiodateien und erfassungsvorrichtung zur anwendung davon | |
EP1992198B1 (de) | Optimierung des binauralen raumklangeffektes durch mehrkanalkodierung | |
EP1600042B1 (de) | Verfahren zum bearbeiten komprimierter audiodaten zur räumlichen wiedergabe | |
EP2258119B1 (de) | Verfahren und vorrichtung zur bestimmung von übertragungsfunktionen vom typ hrtf | |
EP2374124B1 (de) | Verwaltete codierung von mehrkanaligen digitalen audiosignalen | |
EP1479266B1 (de) | Verfahren und vorrichtung zur steuerung einer anordnung zur wiedergabe eines schallfeldes | |
EP2901718B1 (de) | Verfahren und vorrichtung zur wiedergabe eines audiosignals | |
EP1586220B1 (de) | Verfahren und einrichtung zur steuerung einer wiedergabeeinheitdurch verwendung eines mehrkanalsignals | |
Yamamoto et al. | Fully perceptual-based 3D spatial sound individualization with an adaptive variational autoencoder | |
FR3065137A1 (fr) | Procede de spatialisation sonore | |
EP3384688B1 (de) | Aufeinanderfolgende dekompositionen von audiofiltern | |
EP3449643B1 (de) | Verfahren und system zum senden eines 360°-audiosignals | |
EP3484185A1 (de) | Modellierung einer menge von akustischen übertragungsfunktionen einer person, 3d-soundkarte und 3d-sound-reproduktionssystem | |
EP3934282A1 (de) | Verfahren zur umwandlung eines ersten satzes repräsentativer signale eines schallfelds in einen zweiten satz von signalen und entsprechende elektronische vorrichtung | |
FR3093264A1 (fr) | Procédé de diffusion d’un signal audio | |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PUAI | Public reference made under article 153(3) epc to a published international application that has entered the european phase |
Free format text: ORIGINAL CODE: 0009012 |
|
17P | Request for examination filed |
Effective date: 20070705 |
|
AK | Designated contracting states |
Kind code of ref document: A2
Designated state(s): AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HU IE IS IT LI LT LU LV MC NL PL PT RO SE SI SK TR |
|
DAX | Request for extension of the european patent (deleted) | ||
RAP1 | Party data changed (applicant data changed or rights of an application transferred) |
Owner name: ORANGE |
|
STAA | Information on the status of an ep patent application or granted ep patent |
Free format text: STATUS: EXAMINATION IS IN PROGRESS |
|
17Q | First examination report despatched |
Effective date: 20170327 |
|
GRAP | Despatch of communication of intention to grant a patent |
Free format text: ORIGINAL CODE: EPIDOSNIGR1 |
|
STAA | Information on the status of an ep patent application or granted ep patent |
Free format text: STATUS: GRANT OF PATENT IS INTENDED |
|
INTG | Intention to grant announced |
Effective date: 20180322 |
|
GRAS | Grant fee paid |
Free format text: ORIGINAL CODE: EPIDOSNIGR3 |
|
GRAA | (expected) grant |
Free format text: ORIGINAL CODE: 0009210 |
|
STAA | Information on the status of an ep patent application or granted ep patent |
Free format text: STATUS: THE PATENT HAS BEEN GRANTED |
|
AK | Designated contracting states |
Kind code of ref document: B1
Designated state(s): AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HU IE IS IT LI LT LU LV MC NL PL PT RO SE SI SK TR |
|
REG | Reference to a national code |
Ref country code: GB
Ref legal event code: FG4D
Free format text: NOT ENGLISH |
|
REG | Reference to a national code |
Ref country code: CH
Ref legal event code: EP |
|
REG | Reference to a national code |
Ref country code: IE
Ref legal event code: FG4D
Free format text: LANGUAGE OF EP DOCUMENT: FRENCH |
|
REG | Reference to a national code |
Ref country code: AT
Ref legal event code: REF
Ref document number: 1020662
Country of ref document: AT
Kind code of ref document: T
Effective date: 20180815 |
|
REG | Reference to a national code |
Ref country code: DE
Ref legal event code: R096
Ref document number: 602006055839
Country of ref document: DE |
|
REG | Reference to a national code |
Ref country code: NL
Ref legal event code: MP
Effective date: 20180718 |
|
REG | Reference to a national code |
Ref country code: LT
Ref legal event code: MG4D |
|
REG | Reference to a national code |
Ref country code: AT
Ref legal event code: MK05
Ref document number: 1020662
Country of ref document: AT
Kind code of ref document: T
Effective date: 20180718 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: NL
Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT
Effective date: 20180718 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: SE
Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT
Effective date: 20180718
Ref country code: AT
Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT
Effective date: 20180718
Ref country code: BG
Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT
Effective date: 20181018
Ref country code: IS
Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT
Effective date: 20181118
Ref country code: FI
Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT
Effective date: 20180718
Ref country code: GR
Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT
Effective date: 20181019
Ref country code: PL
Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT
Effective date: 20180718
Ref country code: LT
Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT
Effective date: 20180718 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: LV
Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT
Effective date: 20180718
Ref country code: ES
Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT
Effective date: 20180718 |
|
REG | Reference to a national code |
Ref country code: DE
Ref legal event code: R097
Ref document number: 602006055839
Country of ref document: DE |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: RO
Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT
Effective date: 20180718
Ref country code: CZ
Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT
Effective date: 20180718
Ref country code: EE
Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT
Effective date: 20180718
Ref country code: IT
Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT
Effective date: 20180718 |
|
PLBE | No opposition filed within time limit |
Free format text: ORIGINAL CODE: 0009261 |
|
STAA | Information on the status of an ep patent application or granted ep patent |
Free format text: STATUS: NO OPPOSITION FILED WITHIN TIME LIMIT |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: SK
Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT
Effective date: 20180718
Ref country code: DK
Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT
Effective date: 20180718 |
|
26N | No opposition filed |
Effective date: 20190423 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: MC
Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT
Effective date: 20180718
Ref country code: SI
Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT
Effective date: 20180718 |
|
REG | Reference to a national code |
Ref country code: CH
Ref legal event code: PL |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: LU
Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES
Effective date: 20190109 |
|
REG | Reference to a national code |
Ref country code: BE
Ref legal event code: MM
Effective date: 20190131 |
|
REG | Reference to a national code |
Ref country code: IE
Ref legal event code: MM4A |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: BE
Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES
Effective date: 20190131 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: CH
Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES
Effective date: 20190131
Ref country code: LI
Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES
Effective date: 20190131 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: IE
Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES
Effective date: 20190109 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: TR
Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT
Effective date: 20180718 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: PT
Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT
Effective date: 20181118 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: CY
Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT
Effective date: 20180718 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: HU
Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT; INVALID AB INITIO
Effective date: 20060109 |
|
PGFP | Annual fee paid to national office [announced via postgrant information from national office to epo] |
Ref country code: GB
Payment date: 20231219
Year of fee payment: 19 |
|
PGFP | Annual fee paid to national office [announced via postgrant information from national office to epo] |
Ref country code: FR
Payment date: 20231219
Year of fee payment: 19 |
|
PGFP | Annual fee paid to national office [announced via postgrant information from national office to epo] |
Ref country code: DE
Payment date: 20231219
Year of fee payment: 19 |