WO2016077514A1 - Ear-centered head related transfer function method and system - Google Patents


Info

Publication number
WO2016077514A1
WO2016077514A1
Authority
WO
WIPO (PCT)
Prior art keywords
ear
hrtf
listener
centered
audio
Prior art date
Application number
PCT/US2015/060259
Other languages
English (en)
Inventor
David S. Mcgrath
Rhonda Wilson
Original Assignee
Dolby Laboratories Licensing Corporation
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Dolby Laboratories Licensing Corporation filed Critical Dolby Laboratories Licensing Corporation
Publication of WO2016077514A1

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04S: STEREOPHONIC SYSTEMS
    • H04S1/00: Two-channel systems
    • H04S1/002: Non-adaptive circuits, e.g. manually adjustable or static, for enhancing the sound image or the spatial distribution
    • H04S1/005: For headphones
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04S: STEREOPHONIC SYSTEMS
    • H04S2420/00: Techniques used in stereophonic systems covered by H04S but not provided for in its groups
    • H04S2420/01: Enhancing the perception of the sound image or of the spatial distribution using head related transfer functions [HRTF's] or equivalents thereof, e.g. interaural time difference [ITD] or interaural level difference [ILD]
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04S: STEREOPHONIC SYSTEMS
    • H04S7/00: Indicating arrangements; Control arrangements, e.g. balance control
    • H04S7/30: Control circuits for electronic adaptation of the sound field

Definitions

  • the present invention relates to the field of audio signal processing and in particular discloses a head related transfer function processing system and method for the spatialization of audio.
  • Audio processing systems for the processing of audio signals for playback over headphones or the like for the purposes of externalising the audio sources to the listener are well-known.
  • HRTF: head related transfer functions
  • the Head Related Transfer Functions are pairs of filters (referred to as a left-right pair) that are intended to mimic the way the sound of an audio object is modified as it propagates from the audio object to the left and right ears of a listener.
  • Fig. 1 illustrates an example coordinate system 1 where the origin 2 is located halfway between the listener's ears.
  • a method of creating a series of head related transfer functions for the playback of audio signals including the steps of: (a) for at least one intended listener's ear of playback, and for at least one externally positioned audio source, formulating at least one normalized ear centered HRTF having substantially invariant characteristics along a radial line from the listener's ear; (b) modifying the normalized ear centered HRTF by a delay factor and an attenuation factor in accordance with a distance measure from at least one of the listener's ears.
  • a method of spatializing an audio input signal so that it has an apparent external position when played back over headphone transducers including the steps of: (a) initially forming a normalised HRTF for an audio input signal located at an external position relative to a listener, the HRTF being substantially invariant along a radial line from the listener's ear; (b) further modifying the normalised HRTF by a delay and attenuation factor in accordance with the distance of the audio source from the listener to produce a resulting ear centered HRTF; (c) utilising the ear centered HRTF to filter the audio input signal to produce an output stream which approximates the effect of projection of the audio to an ear of the listener.
  • the step (c) preferably can include convolution of the ear centered HRTF with the audio signal.
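  • Step (c), filtering the audio input signal with the ear centered HRTF, reduces to one convolution per ear. A minimal sketch in Python/NumPy (the two-tap filters are invented for illustration; real HRTFs are far longer):

```python
import numpy as np

def spatialize(audio, hrtf_left, hrtf_right):
    """Filter a mono signal with a left/right HRTF pair by convolution."""
    return np.convolve(audio, hrtf_left), np.convolve(audio, hrtf_right)

# A unit impulse through the filters simply returns the filters themselves.
audio = np.array([1.0, 0.0, 0.0])
left, right = spatialize(audio, np.array([0.5, 0.25]), np.array([0.4, 0.2]))
```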
  • a method of spatializing an input audio stream to produce a spatialized audio output stream for playback over audio transducers placed near a listener's ears including the steps of: (a) forming a left ear centered intermediate HRTF having substantially invariant characteristics along a radial line centered at an intended listener's left ear; (b) delaying and attenuating the left ear centered intermediate HRTF in accordance with an intended distance measure of the input audio stream from a listener's ear to produce a left scaled HRTF; (c) combining the left scaled HRTF with the input audio stream to produce a left audio output stream signal; (d) forming a right ear centered intermediate HRTF having substantially invariant characteristics along a radial line centered at an intended listener's right ear; (e) delaying and attenuating the right ear centered intermediate HRTF in accordance with an intended distance measure of the input audio stream from a listener's ear to produce a right scaled HRTF; (f) combining the right scaled HRTF with the input audio stream to produce a right audio output stream signal.
  • the steps (c) and (f) of combining can comprise convolving the corresponding HRTF with the input audio stream.
  • a method of creating at least a first HRTF impulse response for a sound emitter at a specified location, for at least a first ear of a virtual listener including the steps of: (a) determining the location of the sound emitter relative to the first ear of the virtual listener; (b) determining a first ear relative direction of arrival, and a first ear relative distance of the sound emitter; (c) determining a first ear centered HRTF for the sound emitter, based on the first ear relative direction of arrival; and (d) forming the first HRTF impulse response from the first ear centered HRTF, including adding a delay to the first ear centered HRTF derived from a first ear relative distance and also including a gain applied to the first ear centered HRTF according to the first ear relative distance
  • the method can optionally calculate the delay and gain by including a first and second ear relative distance.
  • a method of formulating a first ear centered HRTF impulse response for a sound emitter at a predetermined location relative to a first ear including the steps of: (a) determining a first ear relative direction of arrival of the sound emitter, relative to the first ear of the virtual listener; (b) determining an undelayed ear centered HRTF impulse response, as a parameterised function of the first ear relative direction of arrival; (c) determining a head-shadow-delay, as a parameterised function of the first ear relative direction of arrival; (d) forming the first ear centered HRTF impulse response from the undelayed ear centered HRTF impulse response by the addition of the head-shadow-delay.
  • the methods can be applied substantially symmetrically for a first and second ear of a listener.
  • a method of spatializing an audio input signal so that it has an apparent external position when played back over headphone transducers including the steps of: (a) forming a series of prototype normalised HRTFs for an audio input signal located at a series of external positions relative to a listener, the prototype normalised HRTFs being substantially invariant along a radial line from the listener's ear; (b) utilising a series of interpolation functions for interpolating between the series of prototype normalised HRTFs in accordance with an apparent external position relative to the listener, so as to form an undelayed ear centered HRTF; (c) calculating a delay and gain factor from the radial distance to the apparent external position, and applying the delay and gain factor to the undelayed ear centered HRTF to produce an ear centered HRTF; (d) utilising the ear centered HRTF to filter the audio input signal to produce an output stream which approximates the effect of projection of the audio to an ear of the listener.
  • the series of interpolation functions can comprise a series of polynomials.
  • the series of polynomials are preferably defined in terms of a Cartesian coordinate system centered around the listener.
  • the method can be utilized to form both a left and right channel signal for a listener and the same series of prototype normalised HRTFs are preferably used for each ear of the listener.
  • the prototype normalised HRTFs are preferably stored utilising a high sample rate and the utilising step (d) preferably can include subsampling the prototype normalised HRTFs to filter them with the audio signal.
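  • As a rough sketch of the interpolation idea, the weights below are first-order polynomials in the Cartesian components of the unit direction vector; the four prototype directions and two-tap prototype filters are invented for the example, since the patent does not fix the polynomial design:

```python
import numpy as np

# Invented prototype directions (unit vectors) and toy two-tap filters.
protos = {
    ( 1.0,  0.0, 0.0): np.array([1.0, 0.2]),
    (-1.0,  0.0, 0.0): np.array([0.6, 0.4]),
    ( 0.0,  1.0, 0.0): np.array([0.8, 0.3]),
    ( 0.0, -1.0, 0.0): np.array([0.7, 0.1]),
}

def interpolate_hrtf(v):
    """Blend prototype HRTFs with weights that are first-order polynomials
    in the components of the unit direction vector v, normalised to sum 1."""
    w = np.array([max(0.0, np.dot(v, p)) for p in protos])
    w = w / w.sum()
    return sum(wi * h for wi, h in zip(w, protos.values()))

# At a prototype direction, interpolation returns that prototype exactly.
h = interpolate_hrtf(np.array([1.0, 0.0, 0.0]))
```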
  • a method of spatializing a series (M) of audio input signal objects each having an apparent external position so that the signals maintain an apparent external position when played back over headphone transducers including the steps of: (a) for each of the M audio input signal objects: (i) computing a total delay and gain to be applied to a left-ear HRTF; (ii) applying the delay and gain to the input audio signal object to produce a first ear delayed signal for the object; (iii) interpolating a series of polynomials to produce a series (N) of scale factors and scaling the first ear delayed signal for the object, to produce N first ear delayed scaled signals for the object; (b) producing combined first-ear-delayed-scaled signals, such that each of the combined first-ear-delayed-scaled signals is formed by summing together the corresponding first-ear-delayed-scaled signals for the objects; (c) filtering the combined first-ear-delayed-scaled signals.
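  • The pay-off of this structure is that the expensive FIR filtering happens once per scale-factor channel (N) rather than once per object (M). A sketch under assumed simplifications (integer-sample delays, invented prototype filters):

```python
import numpy as np

def render_first_ear(objects, prototypes, out_len):
    """For each object: delay, gain, and N polynomial scale factors; sum the
    scaled copies into N buses; then run only N shared FIR filters instead
    of one long filter per object."""
    N = len(prototypes)
    buses = np.zeros((N, out_len))
    for sig, delay, gain, scales in objects:
        delayed = np.zeros(out_len)
        delayed[delay:delay + len(sig)] = gain * np.asarray(sig)
        for n in range(N):
            buses[n] += scales[n] * delayed
    out = np.zeros(out_len + max(len(p) for p in prototypes) - 1)
    for n in range(N):
        y = np.convolve(buses[n], prototypes[n])
        out[:len(y)] += y
    return out

# One object: unit impulse, 2-sample delay, gain 0.5, routed to bus 0 only.
out = render_first_ear(
    [(np.array([1.0]), 2, 0.5, [1.0, 0.0])],
    [np.array([1.0]), np.array([1.0])],
    out_len=4,
)
```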
  • In accordance with a further aspect of the present invention there is provided an apparatus for implementing the methods described above. In accordance with a further aspect of the present invention there is provided a computer readable storage medium storing a program of instructions that is executable by a device to perform the methods described above.
  • FIG. 1 illustrates schematically the coordinate system with the origin at a point midway between the ears
  • FIG. 2 illustrates schematically the coordinate system with the origin at listener's left ear
  • Fig. 3 illustrates schematically a top view of a listener and audio object
  • Fig. 4 illustrates the process for the generation of Absolute HRTFs
  • Fig. 5 illustrates the process for generation of Normalized HRTFs
  • Fig. 6 illustrates the process for front-end ear-centered processing, computing intermediate coefficients
  • Fig. 7 illustrates the process of back-end ear-centered processing with FIR filters implemented for each object, m;
  • Fig. 8 illustrates the back-end ear-centered processing with FIR filters operating on the results after summation of m objects
  • Fig. 9 illustrates an example rendering system utilising HRTFs generated using the embodiments of the invention.
  • One embodiment provides a system for formulating HRTF transfer functions from the audio object to each ear.
  • the coordinate system of Fig. 1 is not the only coordinate system that may be useful.
  • the position of the audio object may be defined in terms of ear centered coordinates and unit vectors.
  • v_R = (x_R, y_R, z_R): the location of the object, relative to the listener's right ear.
  • Fig. 2 illustrates an alternative coordinate system 20 which is centered 21 on the listener's left ear.
  • in Fig. 3 there is illustrated a top view 30 of a listener 31 and audio object 32. If the distance between the listener's ears (the diameter of the listener's head, measured from ear to ear) is 2d_e, then it follows that:
  • the distance of the audio object from the midpoint between the listener's ears, and from the left ear and the right ear, respectively, can be computed as follows:
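  • With the ears modelled at plus and minus d_e along the inter-aural axis (the axis choice is an assumption; the patent only fixes the ear spacing 2d_e), the three distances can be computed as:

```python
import numpy as np

def ear_distances(v, d_e):
    """Distances from an object at v (head-centre coordinates) to the
    midpoint between the ears and to each ear. Ears are assumed to sit at
    +/- d_e on the y axis."""
    ear_left = np.array([0.0, d_e, 0.0])
    ear_right = np.array([0.0, -d_e, 0.0])
    return (np.linalg.norm(v),
            np.linalg.norm(v - ear_left),
            np.linalg.norm(v - ear_right))

# Object 1 m to the listener's left, with a 9 cm half-head-width.
d, d_l, d_r = ear_distances(np.array([0.0, 1.0, 0.0]), 0.09)
```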
  • HRTF Impulse Responses may be modified in various ways, to suit the requirements of different applications. For example, as an audio object moves closer to the listener, an Absolute HRTF Impulse response will vary in gain and delay, emulating the real-world, wherein a closer object will be louder, and the time delay from the audio object to the listener's ears will be shorter, as a result of the finite speed of sound. It is therefore possible to define a series of terms:
  • Absolute HRTFs: A left-right pair of HRTF filters, representative of an audio object located at some position relative to the listener, that includes a delay that is representative of the time taken for sound to travel from the audio object to the listener, and a gain that is representative of the attenuation incurred as a result of the distance of the audio object from the listener.
  • Delay-normalized HRTFs: A left-right pair of HRTF filters that do not include a delay that is representative of the time taken for sound to travel from the audio object to the listener. It is common, in the art, to normalize the delays of left-right HRTF pairs such that the first non-zero tap of either the left or right impulse response occurs close to time zero.
  • Gain-normalized HRTFs: A left-right pair of HRTF filters that do not include a gain that is representative of the attenuation incurred as a result of the distance of the audio object from the listener. It is common, in the art, to normalize the gains of left-right HRTF pairs such that the average gain of the left-right filters at low frequencies is approximately 1.
  • Normalized HRTFs: A pair of left-right HRTFs that are both Delay-normalized and Gain-normalized.
  • as the audio object 32 moves closer to a listener, along a trajectory 33 that follows a straight line through the midpoint between the listener's ears, the direction-of-arrival unit-vector v' will not change, but the Absolute HRTFs will change.
  • the most dramatic changes in the Absolute HRTFs will be the delay and gain changes, and if these changes are removed (by normalizing the HRTFs), the resulting normalized HRTFs will still exhibit some changes as a function of distance. These changes occur for many reasons, including the following: 1. the direction of the audio object, relative to each of the listener's ears, will vary as the audio object approaches the listener, due to parallax (the change in angular position of an external object, as the viewpoint is shifted). In the embodiment, the generation of the HRTF filters takes account of this parallax.
  • the sound from the audio object, as it is incident on the listener's ears, will vary from being a plane-wave (when the audio object is a large distance from the listener) to a spherical-wave (as the audio object comes closer to the listener)
  • the plane-wave/spherical-wave variation in the HRTFs, which is particularly significant for audio objects very close to the listener (less than 50 cm, say), is also accounted for in an alternative embodiment through the manipulation of a residual filter which is designed to account for the near field effects.
  • the transfer function from this audio object to the listener's left ear will be h_L(v, t), as defined previously. As the distance d_L increases, h_L(v, t) will exhibit a delay that increases with distance, and a gain that varies inversely with distance. By removing these delay and gain artefacts, the embodiment creates a normalized far-field HRTF.
  • a new 'ear centered' filter function, h_EC(v'_L, t), can be defined based on the far-field HRTFs, as follows: The equation above effectively removes the distance-related gain and delay from the far-field HRTFs.
  • the term Normalized ear-centered HRTF is defined to refer to the filter h_EC(v'_L, t). Note that this filter is a function of v'_L (34), the unit vector that indicates the direction of the audio object relative to the listener's left ear.
  • h_L(v, t) ≈ h_EC(v'_L, t − del_L^EC) ⊗ r(v_L, t)   (Equation 15)
  • Equation 15 is effectively saying that the HRTF, h_L(v, t), may be approximated by applying a gain and a time-shift to the Normalized ear-centered HRTF (and likewise for the right ear, as per Equation 16).
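  • A sketch of that distance step: apply a propagation delay and a distance gain to a normalized ear-centered HRTF. The integer-sample delay, the reference distance, and the 1/d gain law below are illustrative simplifications, not the patent's exact expressions:

```python
import numpy as np

SPEED_OF_SOUND = 343.0  # m/s

def absolute_from_ear_centered(h_ec, d_l, fs, ref_dist=1.0):
    """Turn a normalized ear-centered HRTF into an absolute one by adding
    the propagation delay for distance d_l and a 1/d gain."""
    delay = int(round(d_l / SPEED_OF_SOUND * fs))
    gain = ref_dist / d_l
    out = np.zeros(delay + len(h_ec))
    out[delay:] = gain * np.asarray(h_ec)
    return out

# With fs equal to the speed of sound, a 2 m distance gives a 2-sample delay.
h = absolute_from_ear_centered(np.array([1.0, 0.5]), d_l=2.0, fs=343.0)
```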
  • An implementation of a system implementing the method of Equation 15 for both left and right ears is illustrated 40 in Fig. 4. The method utilizes the fact that the Normalized ear-centered HRTF, h_EC(v'_L, t), is simpler to generate because it is a function of the unit vector v'_L, but not of the radial distance d_L.
  • the first step is to add the offset 42 for conversion to left ear coordinates v_L.
  • v_L, v'_L and d_L are calculated 43.
  • the ear-centered HRTF h_EC(v'_L, t), the delay del_L^EC and the gain gain_L^EC can then be formed 44.
  • the conversion of the Normalized ear-centered HRTF h_EC(v'_L, t) to the final HRTF is done by adding 45 the delay del_L^EC and scaling 46 by the gain gain_L^EC. Similar operations can be carried out for the symmetrical right ear case.
  • In order to generate the normalized HRTFs, del_L^EC, del_R^EC, gain_L^EC and gain_R^EC need to be computed by different means, as illustrated 50 in Fig. 5. This processing takes place inside the box labelled "Distance Normalization" 51 in Fig. 5.
  • the delays may be computed according to two primary methods:
  • gain calculations can be computed according to two primary methods:
  • the normalization is performed by taking into account d_N, the distance of the audio object to the nearest ear of the listener.
  • d is the distance of the audio object from the midpoint between the listener's ears. Equations 20 and 21 imply that one HRTF (the near ear) will have zero delay added, whilst the other ear will have the relative delay added, ensuring that the correct inter-aural delay is present in the final HRTF pair.
  • Equations 25 and 26 imply that one HRTF (the near ear) will have unity gain applied, whilst the other ear will have a gain less than unity applied, ensuring that the correct inter-aural gain is present in the final HRTF pair.
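  • The nearest-ear normalization described above can be sketched as follows; the d_near/d_far gain law here is an assumption standing in for the patent's exact Equations 25 and 26:

```python
SPEED_OF_SOUND = 343.0  # m/s

def interaural_delay_gain(d_l, d_r, fs):
    """Per-ear delay (in samples) and gain, normalized so that the nearer
    ear gets zero delay and unity gain; the far ear gets the relative delay
    and a gain less than unity."""
    d_n = min(d_l, d_r)
    delays = ((d_l - d_n) / SPEED_OF_SOUND * fs,
              (d_r - d_n) / SPEED_OF_SOUND * fs)
    gains = (d_n / d_l, d_n / d_r)
    return delays, gains

# Left ear nearer: zero delay and unity gain on the left, reduced on the right.
(del_l, del_r), (gain_l, gain_r) = interaural_delay_gain(1.0, 1.2, fs=48000.0)
```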
  • Fig. 5 includes the blocks referred to as EC_L 52 and EC_R.
  • the EC_L block 52, for example, is responsible for converting from v'_L to h_EC(v'_L, t):
  • a polynomial method for the generation of the ear-centered HRTFs can operate as follows (for the symmetrical case of left-ear HRTF):
  • the ear-centered HRTFs are then specified by the following information:
  • h_L(v'_L, t) = h_L^u(v'_L, t − del_L^u(v'_L))   (48); h_R(v'_R, t) = h_R^u(v'_R, t − del_R^u(v'_R))   (50)
  • the embodiment operates on audio signals as time-sampled digital signals.
  • the impulse responses will also be time-sampled.
  • the examples given previously are described in terms of continuous-time impulse responses, but it will be appreciated by those skilled in the art that equivalent discrete-time impulse responses may be used in place of the continuous-time functions described here.
  • the filter responses may be stored as higher-sample-rate impulse responses, so that time-shifts may be implemented with sub-sample accuracy (with the final HRTF filters being decimated after they have been generated by the algorithms described herein).
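  • The high-sample-rate trick can be sketched as: apply an integer delay at the oversampled rate, then decimate to the output rate, so delays land with sub-sample accuracy at the output rate. A real decimator would low-pass filter before discarding samples; this sketch omits that:

```python
import numpy as np

def delayed_and_decimated(h_hi, delay_hi, factor):
    """Delay an impulse response by an integer number of samples at the
    high (oversampled) rate, then decimate by `factor` to the output rate;
    delay_hi / factor need not be an integer, giving sub-sample accuracy."""
    out = np.zeros(delay_hi + len(h_hi))
    out[delay_hi:] = h_hi
    return out[::factor]

# A 4-sample delay at 4x oversampling becomes a 1-sample delay after decimation.
h = delayed_and_decimated(np.array([1.0, 0.0, 0.0, 0.0]), delay_hi=4, factor=4)
```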
  • Fig. 9 shows one such arrangement 90, where an audio source 91 for a particular location is duplicated for left and right channels. Taking the symmetrical case of the left channel, it is loaded into an FIR filter 92 where it is convolved with the corresponding HRTF 93 calculated as described above.
  • the convolved output forms one spatialized audio source output which is summed 94 with other outputs to produce an overall left speaker output 95.
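  • The Fig. 9 mixing stage, per-object convolution followed by summation into one speaker feed, can be sketched as:

```python
import numpy as np

def mix_left_channel(sources, hrtfs, out_len):
    """Convolve each source with its left-ear HRTF (FIR) and sum the
    results into a single left speaker feed."""
    out = np.zeros(out_len)
    for sig, h in zip(sources, hrtfs):
        y = np.convolve(sig, h)
        n = min(len(y), out_len)
        out[:n] += y[:n]
    return out

left = mix_left_channel(
    [np.array([1.0, 0.0]), np.array([0.0, 1.0])],  # two audio objects
    [np.array([0.5]), np.array([0.25])],           # toy one-tap HRTFs
    out_len=2,
)
```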
  • the arrangement of Fig. 9 can be implemented in a real time or batch manner.
  • the playback can be to a series of headphone transducers, allowing a user to listen to spatialised audio.
  • the audio output 95, 96 can be stored for subsequent playback to a user at a later time, with the playback requiring less onerous computational resources.
  • h_L(v'_L, t) = h_L^u(v'_L, t − del_L^u(v'_L)).
  • any one of the terms comprising, comprised of or which comprises is an open term that means including at least the elements/features that follow, but not excluding others.
  • the term comprising, when used in the claims should not be interpreted as being limitative to the means or elements or steps listed thereafter.
  • the scope of the expression a device comprising A and B should not be limited to devices consisting only of elements A and B.
  • Any one of the terms including or which includes or that includes as used herein is also an open term that also means including at least the elements/features that follow the term, but not excluding others. Thus, including is synonymous with and means comprising.
  • the term "exemplary" is used in the sense of providing examples, as opposed to indicating quality. That is, an "exemplary embodiment" is an embodiment provided as an example, as opposed to necessarily being an embodiment of exemplary quality.
  • some of the embodiments are described herein as a method or combination of elements of a method that can be implemented by a processor of a computer system or by other means of carrying out the function.
  • a processor with the necessary instructions for carrying out such a method or element of a method forms a means for carrying out the method or element of a method.
  • an element described herein of an apparatus embodiment is an example of a means for carrying out the function performed by the element for the purpose of carrying out the invention.
  • Coupled when used in the claims, should not be interpreted as being limited to direct connections only.
  • the terms “coupled” and “connected,” along with their derivatives, may be used. It should be understood that these terms are not intended as synonyms for each other.
  • the scope of the expression a device A coupled to a device B should not be limited to devices or systems wherein an output of device A is directly connected to an input of device B. It means that there exists a path between an output of A and an input of B which may be a path including other devices or means.
  • Coupled may mean that two or more elements are either in direct physical or electrical contact, or that two or more elements are not in direct contact with each other but yet still co-operate or interact with each other.

Landscapes

  • Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Acoustics & Sound (AREA)
  • Signal Processing (AREA)
  • Stereophonic System (AREA)

Abstract

The present invention relates to a method of creating a series of head related transfer functions for the playback of audio signals, the method including the steps of: (a) for at least one intended listener's ear of playback, and for at least one externally positioned audio source, formulating at least one normalized ear centered HRTF having substantially invariant characteristics along a radial line from the listener's ear; (b) modifying the normalized ear centered HRTF by a delay factor and an attenuation factor in accordance with a distance measure from one of the listener's ears.
PCT/US2015/060259 2014-11-14 2015-11-12 Ear-centered head related transfer function method and system WO2016077514A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US201462079648P 2014-11-14 2014-11-14
US62/079,648 2014-11-14

Publications (1)

Publication Number Publication Date
WO2016077514A1 2016-05-19

Family

ID=54704110

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2015/060259 WO2016077514A1 (fr) 2014-11-14 2015-11-12 Ear-centered head related transfer function method and system

Country Status (1)

Country Link
WO (1) WO2016077514A1 (fr)



Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2007045016A1 (fr) * 2005-10-20 2007-04-26 Personal Audio Pty Ltd Spatial audio simulation
WO2013142653A1 (fr) * 2012-03-23 2013-09-26 Dolby Laboratories Licensing Corporation HRTF method and system for head related transfer function generation by linear mixing of head related transfer functions

Non-Patent Citations (4)

Title
"Auralization : Fundamentals of Acoustics, Modelling, Simulation, Algorithms and Acoustic Virtual Reality", 1 January 2008, SPRINGER, ISBN: 978-3-642-08023-4, article MICHAEL VORLÄNDER: "Chapter 9: Convolution and sound synthesis", pages: 137 - 146, XP055243183 *
BROWN ET AL: "A structural model for binaural sound synthesis", IEEE TRANSACTIONS ON SPEECH AND AUDIO PROCESSING, vol. 6, no. 5, 1 January 1998 (1998-01-01), pages 476 - 488, XP055106098, ISSN: 1063-6676, DOI: 10.1109/89.709673 *
JAN-GERRIT RICHTER ET AL: "Spherical harmonics based HRTF datasets: Design, implementation and evaluation for real-time auralization", AIA-DAGA 2013 INTERNATIONAL CONFERENCE ON ACOUSTICS, 1 January 2013 (2013-01-01), pages 612 - 615, XP055243333, ISBN: 978-3-939296-05-8 *
OTANI MAKOTO ET AL: "Numerical study on source-distance dependency of head-related transfer functions", THE JOURNAL OF THE ACOUSTICAL SOCIETY OF AMERICA, AMERICAN INSTITUTE OF PHYSICS FOR THE ACOUSTICAL SOCIETY OF AMERICA, NEW YORK, NY, US, vol. 125, no. 5, 1 May 2009 (2009-05-01), pages 3253 - 3261, XP012123266, ISSN: 0001-4966, DOI: 10.1121/1.3111860 *

Cited By (4)

Publication number Priority date Publication date Assignee Title
CN113170272A (zh) * 2018-10-05 2021-07-23 Magic Leap Near-field audio rendering
CN116249053A (zh) * 2018-10-05 2023-06-09 Magic Leap Interaural time difference crossfader for binaural audio rendering
EP3893523A4 (fr) * 2018-12-29 2022-02-16 Huawei Technologies Co., Ltd. Audio signal processing method and apparatus
US11917391B2 (en) 2018-12-29 2024-02-27 Huawei Technologies Co., Ltd. Audio signal processing method and apparatus

Similar Documents

Publication Publication Date Title
CN107018460B (zh) Binaural headset rendering with head tracking
KR101651419B1 (ko) Method and system for head related transfer function generation by linear mixing of head related transfer functions
EP3229498B1 (fr) Procédé et appareil de traitement de signal audio destiné à un rendu binauriculaire
US10893375B2 (en) Headtracking for parametric binaural output system and method
EP3114859B1 (fr) Structural modeling of the head related impulse response
CN106664485B (zh) System, apparatus and method for consistent acoustic scene reproduction based on adaptive functions
EP2719200B1 (fr) Reducing the data volume of head related transfer functions
US11838742B2 (en) Signal processing device and method, and program
JP2019115042A (ja) Audio signal processing method and apparatus for binaural rendering using phase response characteristics
WO2016077514A1 (fr) Ear-centered head related transfer function method and system
JP2018511213A (ja) Enhancement of spatial audio signals by modulated decorrelation
US8923536B2 (en) Method and apparatus for localizing sound image of input signal in spatial position
US20160100270A1 (en) Audio signal processing apparatus and audio signal processing method
EP3912365A1 (fr) Device and method for rendering a binaural audio signal
JP3581811B2 (ja) Method and apparatus for processing interaural time delay in 3D digital audio
JP2020110007A (ja) Head tracking for a parametric binaural output system and method

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 15801051

Country of ref document: EP

Kind code of ref document: A1

DPE1 Request for preliminary examination filed after expiration of 19th month from priority date (pct application filed from 20040101)
NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 15801051

Country of ref document: EP

Kind code of ref document: A1