EP1844627B1 - Dispositif et procédé pour simuler un système de synthèse de champ d'onde - Google Patents

Dispositif et procédé pour simuler un système de synthèse de champ d'onde (Device and method for simulating a wave field synthesis system)

Info

Publication number
EP1844627B1
EP1844627B1 (application EP06707014A)
Authority
EP
European Patent Office
Prior art keywords
audio
wave field
field synthesis
output condition
simulating
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
EP06707014A
Other languages
German (de)
English (en)
Other versions
EP1844627A1 (fr)
Inventor
Katrin Reichelt
Gabriel Gatzsche
Frank Melchior
Sandra Brix
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Fraunhofer Gesellschaft zur Forderung der Angewandten Forschung eV
Original Assignee
Fraunhofer Gesellschaft zur Forderung der Angewandten Forschung eV
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Fraunhofer Gesellschaft zur Forderung der Angewandten Forschung eV filed Critical Fraunhofer Gesellschaft zur Forderung der Angewandten Forschung eV
Publication of EP1844627A1
Application granted
Publication of EP1844627B1
Legal status: Active
Anticipated expiration

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04S STEREOPHONIC SYSTEMS
    • H04S 3/00 Systems employing more than two channels, e.g. quadraphonic
    • H04S 3/008 Systems employing more than two channels, e.g. quadraphonic, in which the audio signals are in digital form, i.e. employing more than two discrete digital channels
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04S STEREOPHONIC SYSTEMS
    • H04S 2420/00 Techniques used in stereophonic systems covered by H04S but not provided for in its groups
    • H04S 2420/13 Application of wave-field synthesis in stereophonic audio systems

Definitions

  • the present invention relates to the wave field synthesis technique, and more particularly to tools for creating audio scene descriptions and for verifying audio scene descriptions, respectively.
  • Wave Field Synthesis (WFS): applied to acoustics, any shape of an incoming wavefront can be simulated by a large number of loudspeakers arranged side by side (a so-called loudspeaker array).
  • To do so, the audio signal for each loudspeaker must be fed with a time delay and an amplitude scaling such that the radiated sound fields of the individual loudspeakers superimpose correctly (see the sketch below).
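As an illustration only, here is a minimal Python sketch of this delay-and-gain principle for a virtual point source. The function name, the simple 1/r amplitude law, and the assumed speed of sound are illustrative simplifications, not the driving functions actually derived from the Kirchhoff-Helmholtz integral.

    import math

    C = 343.0  # assumed speed of sound in m/s

    def driving_params(source_pos, speaker_pos):
        """Time delay (s) and amplitude scale for one loudspeaker,
        simplified point-source model: delay = distance / c, gain ~ 1/r."""
        r = math.dist(source_pos, speaker_pos)
        return r / C, 1.0 / max(r, 1e-6)

    # Example: a virtual source 3 m behind a small five-element linear array
    source = (0.0, -3.0)
    for pos in [(-1.0, 0.0), (-0.5, 0.0), (0.0, 0.0), (0.5, 0.0), (1.0, 0.0)]:
        delay, gain = driving_params(source, pos)
        print(f"speaker {pos}: delay {delay * 1000:.2f} ms, gain {gain:.3f}")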
  • the contribution to each speaker is calculated separately for each source and the resulting signals added together. If the sources to be reproduced are in a room with reflective walls, reflections must also be reproduced as additional sources via the loudspeaker array. The effort in the calculation therefore depends heavily on the number of sound sources, the reflection characteristics of the recording room and the number of speakers.
  • the advantage of this technique is in particular that a natural spatial sound impression over a large area of the playback room is possible.
  • the direction and distance of sound sources are reproduced very accurately.
  • virtual sound sources can even be positioned between the real speaker array and the listener.
  • While wave field synthesis works well for environments whose characteristics are known, irregularities occur when the characteristics change or when wave field synthesis is performed on the basis of environmental conditions that do not match the actual nature of the environment.
  • An environmental condition can be described by the impulse response of the environment.
  • The procedure for room compensation using wave field synthesis would be to first determine the reflection of a wall, i.e., when a sound signal reflected from the wall will arrive back and what amplitude this reflected sound signal has. If the reflection from this wall is undesirable, wave field synthesis makes it possible to eliminate it by impressing on the loudspeaker, in addition to the original audio signal, a signal of amplitude opposite to the reflection signal, so that the traveling compensation wave extinguishes the reflection wave and the reflection from this wall is eliminated in the environment under consideration. This can be done by first computing the impulse response of the environment and determining the nature and position of the wall based on this impulse response, the wall being interpreted as a mirror source, that is, a sound source reflecting incident sound.
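The mirror-source view lends itself to a compact sketch. The following Python fragment is a hedged illustration, not the patent's method: it assumes a straight wall parallel to the y-axis and a hypothetical frequency-independent reflection coefficient refl.

    import math

    C = 343.0  # assumed speed of sound in m/s

    def image_source(source_pos, wall_x):
        """Mirror the source across a wall parallel to the y-axis at x = wall_x."""
        x, y = source_pos
        return (2.0 * wall_x - x, y)

    def compensation_params(source_pos, speaker_pos, wall_x, refl=0.8):
        """Delay and sign-inverted gain of the compensation wave intended to
        cancel the wall reflection modelled as a mirror source."""
        mx, my = image_source(source_pos, wall_x)
        r = math.hypot(speaker_pos[0] - mx, speaker_pos[1] - my)
        return r / C, -refl / max(r, 1e-6)  # opposite amplitude to the reflection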
  • Wave field synthesis (WFS, or sound field synthesis), as developed at TU Delft in the late 1980s, represents a holographic approach to sound reproduction. Its basis is the Kirchhoff-Helmholtz integral, which states that any sound field within a closed volume can be generated by means of a distribution of monopole and dipole sound sources (loudspeaker arrays) on the surface of this volume.
  • In wave field synthesis, a synthesis signal for each loudspeaker of the loudspeaker array is calculated from an audio signal emitted by a virtual source at a virtual position, the synthesis signals being such, in terms of amplitude and phase, that the wave resulting from the superposition of the individual sound waves output by the loudspeakers of the array corresponds to the wave that would originate from the virtual source at the virtual position if this virtual source were a real source at a real position.
  • multiple virtual sources exist at different virtual locations.
  • The computation of the synthesis signals is performed for each virtual source at each virtual location, so that one virtual source typically results in synthesis signals for multiple loudspeakers. Seen from a loudspeaker, this loudspeaker thus receives several synthesis signals that go back to different virtual sources. A superposition of these signals, which is possible due to the linear superposition principle, then gives the reproduction signal actually emitted by the loudspeaker.
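A short sketch of this per-loudspeaker superposition, continuing the simplified delay-and-gain model from above (integer-sample delays and a 1/r law are simplifying assumptions):

    import math
    import numpy as np

    C = 343.0

    def speaker_signal(sources, speaker_pos, fs=48000):
        """sources: list of (samples, (x, y)) pairs, one per virtual source.
        Each contribution is delayed and scaled, then all are superimposed."""
        parts = []
        for samples, (sx, sy) in sources:
            r = math.hypot(speaker_pos[0] - sx, speaker_pos[1] - sy)
            shift = int(round(r / C * fs))  # integer-sample delay for simplicity
            buf = np.zeros(shift + len(samples))
            buf[shift:] = np.asarray(samples, dtype=float) / max(r, 1e-6)
            parts.append(buf)
        out = np.zeros(max((len(p) for p in parts), default=0))
        for p in parts:
            out[:len(p)] += p  # linear superposition principle
        return out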
  • The finally rendered and digital-to-analog converted reproduction signals for the individual loudspeakers could be transmitted, for example via two-wire lines, from the wave field synthesis central unit to the individual loudspeakers.
  • However, such a wave field synthesis central unit could only ever be built for one particular reproduction room or for reproduction with a fixed number of loudspeakers.
  • The German patent DE 10254404 B4 discloses a system as shown in Fig. 7.
  • One part is the central wave field synthesis module 10.
  • The other part is composed of individual loudspeaker modules 12a, 12b, 12c, 12d, 12e, which are connected to actual physical loudspeakers 14a, 14b, 14c, 14d, 14e, as shown in Fig. 1.
  • In typical applications, the number of loudspeakers 14a-14e is above 50 and typically well above 100. If each loudspeaker is assigned its own loudspeaker module, the corresponding number of loudspeaker modules is also required. Depending on the application, however, it is preferred to address a small group of adjacent loudspeakers from one loudspeaker module.
  • In this context, it is left open whether a loudspeaker module connected to, for example, four loudspeakers feeds the four loudspeakers with the same reproduction signal, or whether different synthesis signals are calculated for the four loudspeakers, so that such a loudspeaker module actually consists of several individual loudspeaker modules that are physically combined in one unit.
  • each transmission link 16a-16e being coupled to the central wave field synthesis module and to a separate loudspeaker module.
  • a serial transmission format providing a high data rate such as a so-called Firewire transmission format or a USB data format is preferred. Data transfer rates in excess of 100 megabits per second are advantageous.
  • the data stream which is transmitted from the wave field synthesis module 10 to a loudspeaker module is thus correspondingly formatted according to the selected data format in the wave field synthesis module and provided with synchronization information which is provided in conventional serial data formats.
  • This synchronization information is extracted from the data stream by the individual loudspeaker modules and used to synchronize the individual loudspeaker modules with respect to their reproduction, that is, ultimately with respect to the digital-to-analog conversion for obtaining the analog loudspeaker signal and the resampling performed for this purpose.
  • the central wave-field synthesis module works as a master and all loudspeaker modules operate as clients, with the individual data streams across the different links 16a-16e all receiving the same synchronization information from the central module 10.
  • The rendering still determines the total capacity of the system. If the central rendering unit is able, for example, to render 32 virtual sources simultaneously, i.e., to compute the synthesis signals for these 32 virtual sources simultaneously, then serious capacity bottlenecks occur as soon as more than 32 sources are active at one time in an audio scene. This is sufficient for simple scenes. For more complex scenes, in particular with immersive sound impressions, for example when it rains and many raindrops represent individual sources, it is immediately obvious that a capacity of a maximum of 32 sources is no longer sufficient. A similar situation also occurs with a large orchestra when, in fact, every orchestra player, or at least each group of instruments, is to be processed as its own source at its own position. Here, 32 virtual sources can quickly become too few.
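Whether a scene ever exceeds such a limit can be checked with a simple sweep over the object start/end times. A hedged sketch (the 32-source figure is the example from the text; the interval representation is an assumption):

    def max_concurrent_sources(intervals):
        """intervals: list of (start_s, end_s), one per audio object.
        Returns the peak number of simultaneously active virtual sources."""
        events = []
        for start, end in intervals:
            events.append((start, +1))
            events.append((end, -1))
        events.sort()  # at equal times -1 sorts before +1, so touching objects do not overlap
        active = peak = 0
        for _, delta in events:
            active += delta
            peak = max(peak, active)
        return peak

    RENDERER_CAPACITY = 32  # example capacity from the text
    scene = [(0.0, 60.0), (10.0, 20.0), (12.0, 45.0)]
    print(max_concurrent_sources(scene) <= RENDERER_CAPACITY)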
  • Therefore, a scene description is used in which the individual audio objects are defined together such that, using the data in the scene description and the audio data for the individual virtual sources, the complete scene can be processed by a renderer or a multi-rendering arrangement.
  • For each audio object, it is exactly defined where the audio object has to start and where it ends. Furthermore, for each audio object, the position of the virtual source is indicated exactly as it is to be entered into the wave field synthesis rendering device, so that the corresponding synthesis signals are generated for each loudspeaker.
  • A disadvantage of the described concept is that it is relatively rigid, in particular when creating the audio scene descriptions. A sound engineer will thus create an audio scene only for one particular wave field synthesis system, whose situation in the playback room he knows exactly, and will design the audio scene description so that it runs smoothly on this well-defined wave field synthesis system known to the producer.
  • the sound engineer will consider the maximum capacity of the wave field synthesis rendering device as well as wave field requirements in the rendering room already when creating the audio scene description. For example, if a renderer has a maximum capacity of 32 audio sources to process, the sound engineer will already be careful to edit the audio scene description so that no more than 32 sources can be processed simultaneously.
  • An audio scene description is thus obtained as a sequence of audio objects, each audio object including a virtual position and a start time and an end time or a duration.
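Such an audio object could be modelled as follows; this Python dataclass is only an illustrative data layout (all field names are assumptions), mirroring the attributes named in the text:

    from dataclasses import dataclass
    from typing import Optional, Tuple

    @dataclass
    class AudioObject:
        """One entry of an audio scene description."""
        audio_file: str                   # audio file, or a reference into a database
        position: Tuple[float, float]     # virtual source position
        start: float                      # seconds
        end: Optional[float] = None       # either an end time ...
        duration: Optional[float] = None  # ... or a duration

    scene_description = [
        AudioObject("guitar.wav", (-50.0, 2.0), start=0.0, duration=180.0),
        AudioObject("bass.wav", (50.0, 2.0), start=0.0, duration=180.0),
    ]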
  • A disadvantage of this concept is that the sound engineer who creates the audio scene description must concentrate on boundary conditions of the wave field synthesis system that actually have nothing to do with the creative side of the audio scene. It would therefore be desirable if the sound engineer could concentrate solely on the creative aspects without having to consider the particular wave field synthesis system on which his audio scene is to run.
  • Another disadvantage of the described concept arises when an audio scene description designed for a wave field synthesis system having a particular first behavior is to be run on another wave field synthesis system having a second behavior for which the audio scene description has not been designed.
  • The audio scene description will then drive the second system only to the performance level of the first system and will not exhaust the additional capacity of the second system.
  • If the second system also refers to, for example, a larger playback room, it can no longer be ensured at certain points that the wavefronts of two virtual sources, such as bass guitar and lead guitar, arrive almost simultaneously.
  • The object of the present invention is to provide a concept for simulating a wave field synthesis system by which an audio scene description can be efficiently examined for a particular wave field synthesis system with regard to potentially occurring errors.
  • The present invention is based on the recognition that, in addition to an audio scene description that defines a temporal sequence of audio objects, output conditions are provided, either within the audio scene description or separately from it, in order then to simulate the behavior of the wave field synthesis system on which the audio scene description is to run. Based on the simulated behavior of the wave field synthesis system and on the output conditions, it can then be checked whether the simulated behavior fulfills the output condition or not.
  • This concept makes it easy to simulate an audio scene description for another wave field synthesis system and to account for system-independent general output conditions for that other system, without the sound engineer or creator of the audio scene description having to deal with such mundane aspects of an actual wave field synthesis.
  • Dealing with the actual boundary conditions of a wave field synthesis system is taken off the sound engineer's hands by the device according to the invention. He can simply write his audio scene description as he would like to, guided by creative thoughts, while protecting the artistic impression through the system-independent output conditions.
  • The inventive concept determines whether the audio scene description, which is universal, i.e., has not been written for a particular system, can run on a specific system and, if applicable, whether and where in the playback room problems occur.
  • the processor can simulate the behavior of the wave field synthesis system almost in real time and verify it on the basis of the given output condition.
  • The output condition may refer to hardware aspects of the wave field synthesis system, such as a maximum processing capability of the renderer device, or to sound-field-specific aspects of the playback room, such as that the wavefronts of two virtual sources must be perceived within a maximum time difference, or that level differences between two virtual sources must lie within a predetermined corridor at all points, or at least at certain points, in the playback room.
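These two families of conditions could be represented, for instance, as small condition objects; the following Python types are hypothetical, with the 10 m path-difference budget taken from the example later in the text:

    from dataclasses import dataclass

    @dataclass
    class MaxArrivalSkew:
        """Sound-field condition: the wavefronts of two named virtual sources
        must arrive within max_dt seconds of each other at every checked point."""
        source_a: str
        source_b: str
        max_dt: float

    @dataclass
    class MaxRendererLoad:
        """Hardware condition, preferably supplied externally, not in the scene."""
        max_sources: int

    conditions = [
        MaxArrivalSkew("guitar", "bass", 10.0 / 343.0),  # 10 m of path difference
        MaxRendererLoad(32),
    ]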
  • For the hardware-specific output conditions, it is preferable not to include them in the audio scene description, due to the flexibility and compatibility requirements, but to provide them externally to the checking device.
  • A creator of an audio scene description thus ensures that at least minimum sound impression requirements are met while some flexibility remains in the wave field synthesis rendering, so that an audio scene description can be played not only with optimal quality on a single wave field synthesis system but on different wave field synthesis systems, the flexibility granted by the author being advantageously exploited through intelligent post-processing of the audio scene description, which is preferably carried out by machine.
  • The present invention serves as a tool to verify whether the output conditions of an audio scene description can be met by a wave field synthesis system. Should violations of output conditions occur, the inventive concept in the preferred embodiment informs the user which virtual sources are problematic, where in the playback room violations of the output conditions occur, and at what time. Thus, it can be judged whether an audio scene description easily runs on a given wave field synthesis system, whether the audio scene description needs to be rewritten due to serious violations of the output conditions, or whether violations occur that are not serious enough to actually require manipulating the audio scene description.
  • Fig. 1a shows a schematic representation of an inventive device for simulating a wave field synthesis system with respect to a playback room in which one or more loudspeaker arrays and a wave field synthesis rendering device coupled to the loudspeaker arrays can be installed.
  • the inventive apparatus comprises means 1 for providing an audio scene description defining a temporal sequence of audio objects, wherein an audio object comprises an audio file for a virtual source or a reference to the audio file and information about a source location of the virtual source.
  • the audio files may either be contained directly in the audio scene description 1 or may be identifiable by references to audio files in an audio file database 2 and fed to a device 3 for simulating the behavior of the wave field synthesis system.
  • The audio files are either requested via a control line 1a or supplied to the simulation device 3 via a line 1b, which also carries the source positions.
  • In this case, a line 3a, indicated by dashed lines in Fig. 1a, will be active.
  • the device 3 for simulating the wave field synthesis system is designed to use information about the wave field synthesis system, and then, on the output side, to supply the simulated behavior of the wave field synthesis system to a device 4 for checking the output condition.
  • the device 4 is designed to check whether the simulated behavior of the wave field synthesis system fulfills the output condition or not.
  • The checking device 4 receives an output condition via an input line 4a, the output condition thus being supplied externally to the device 4.
  • the output condition may also be derived from the audio scene description, as represented by a dashed line 4b.
  • The first case, i.e., where the output condition is supplied externally, is preferred when the output condition is a hardware-related condition of the wave field synthesis system, such as a maximum transfer capacity of a data connection or, as a bottleneck of the overall processing, a maximum computing capacity of a renderer or, in multi-renderer systems, of a single renderer module.
  • Renderers generate synthesis signals from the audio files, using information about the loudspeakers and about the source positions of the virtual sources, that is, a separate signal for each of the many loudspeakers, the synthesis signals having mutually different phase and amplitude ratios so that the many loudspeakers, according to the theory of wave field synthesis, generate a common wavefront that propagates in the playback room.
  • Typical renderer modules are limited in their capacity, for example to a maximum of 32 virtual sources to be processed simultaneously. Such an output condition, namely that a maximum of 32 sources may be processed by a renderer at one time, could be provided, for example, to the device 4 for checking the output condition.
  • output conditions relate to the sound field in the playback room.
  • output conditions define a sound field or property of a sound field in the playback room.
  • The means 3 for simulating the wave field synthesis system is configured to simulate the sound field in the reproduction room using information about an arrangement of the one or more loudspeaker arrays in the reproduction room and using the audio data.
  • the means 4 for checking in this case is arranged to check whether or not the simulated sound field satisfies the output condition in the reproduction room.
  • Typically, the means 4 will be arranged to provide a display, such as an optical display, telling the user whether the output condition is not met, completely fulfilled, or only partially fulfilled.
  • Preferably, the device 4 for checking is also designed to identify problem zones in the playback room, as presented with reference to Fig. 1d, where, for example, a wavefront output condition is not met. Based on this information, a user of the simulation tool can then decide whether he accepts the partial violation, or whether he takes certain measures to achieve a lesser violation of the output conditions, etc.
  • Fig. 1b shows a preferred implementation of the device 3 for simulating a wave field synthesis system.
  • In the preferred embodiment shown in Fig. 1b, the device 3 comprises a wave field synthesis rendering device 3b, which is required for a wave field synthesis system anyway, in order to generate synthesis signals from the scene description, the audio files, the information about loudspeaker positions and possibly further information about, for example, the acoustics of the playback room; these synthesis signals are then supplied to a loudspeaker simulator 3c.
  • The loudspeaker simulator is configured to compute a sound field in the playback room, preferably at each position of interest. Using the procedure described below with reference to Fig. 1c, it can then be determined for each examined point in the playback room whether a problem has occurred or not.
  • In the flow diagram shown in Fig. 1c, a wavefront in the reproduction room is first simulated by the device 3 for a first virtual source (5a). Then a wavefront in the reproduction room is simulated by the device 3 for the second virtual source (5b). Given appropriate computing capacity, the two steps 5a and 5b can of course also be performed in parallel, i.e., simultaneously. In a step 5c, a property to be simulated is then calculated on the basis of the first wavefront for the first virtual source and the second wavefront for the second virtual source. Preferably, this property is one that must be satisfied between two particular virtual sources, such as a level difference, a propagation time difference, etc.
  • Which property is calculated in step 5c depends on the output condition, since of course only information that is to be compared with output conditions needs to be simulated.
  • the actual comparison of the calculated property, ie the result of step 5c, with the output condition takes place in a step 5d.
  • In step 5e, not only can it be indicated whether a condition is not satisfied, but also where in the playback room such a condition is not met. Furthermore, in the implementation shown in Fig. 1c, the problematic virtual sources are also identified (step 5f).
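Steps 5a-5e can be illustrated with a coarse grid simulation of arrival times; this Python sketch assumes a free-field propagation model and a rectangular room, both of which are simplifying assumptions:

    import math

    C = 343.0  # assumed speed of sound in m/s

    def arrival_time(source_pos, point):
        return math.dist(source_pos, point) / C  # steps 5a/5b: wavefront arrival

    def violation_points(src_a, src_b, max_dt, room=(10.0, 10.0), grid=0.5):
        """Steps 5c/5d: compute the per-point propagation time difference and
        compare it with the output condition; returns offending positions."""
        bad = []
        for i in range(int(room[0] / grid) + 1):
            for j in range(int(room[1] / grid) + 1):
                p = (i * grid, j * grid)
                dt = abs(arrival_time(src_a, p) - arrival_time(src_b, p))
                if dt > max_dt:
                    bad.append(p)  # step 5e: report where the condition fails
        return bad

    # Sources far apart, budget of 10 m path difference (the later example)
    print(len(violation_points((-50.0, 5.0), (50.0, 5.0), 10.0 / 343.0)))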
  • Fig. 1d shows a preferred embodiment of the present invention.
  • An output condition considered in Fig. 1d defines a sound propagation time with respect to the audio data.
  • In the reproduction room shown in Fig. 1d, which is surrounded by four loudspeaker arrays LSA1, LSA2, LSA3, LSA4, this condition cannot be fulfilled for every point in the playback room if the sources are positioned very far apart according to the audio scene description.
  • Problem zones identified by the inventive concept are plotted in the playback room in Fig. 1d.
  • In this example, the producer has positioned the guitar and the bass at a distance of 100 m from each other. Further, a maximum transit time difference corresponding to 10 m was set as the output condition for the entire reproduction room, that is, a period of 10 m divided by the speed of sound (about 29 ms).
  • The procedure according to the invention, as described with reference to Fig. 1, identifies the problem areas indicated in Fig. 1d and communicates them to a producer or sound engineer who is examining the audio scene description with regard to the wave field synthesis system shown in Fig. 1d.
  • In this way, performance bottlenecks and quality holes can be predicted. This is achieved by favoring centralized data management, that is to say both the scene description and the audio files are stored in an intelligent database, and furthermore a device 3 for simulating the wave field synthesis system is provided, which supplies a more or less accurate simulation of the wave field synthesis system. This eliminates costly manual testing and avoids artificially limiting system performance to a level considered safe in terms of performance and quality.
  • Furthermore, a relative definition of the audio objects with respect to one another and, in particular, a positioning that is variable within a time span or location span is preferred, as will be described with reference to Fig. 3.
  • The relative positioning or arrangement of audio objects/audio files provides a workable way to define output conditions that preferably relate to a property between two virtual objects, i.e., to something that is itself relative.
  • Furthermore, a database is used so that such assignments/output conditions can be reused.
  • both relative and variable constraints are used to test the violation of certain sound requirements on different systems.
  • A test mechanism then checks the playback area spanned by the wave field synthesis loudspeaker array for positions at which the output condition is violated. Preferably, the author of the sound scene is furthermore informed about such a violation.
  • The simulation device according to the invention can provide a pure indication of the status of the output condition, i.e., whether or not it is violated and, where applicable, where it is violated and where it is not.
  • The simulation device according to the invention is preferably designed not only to identify the problematic virtual sources but also to propose solutions to an editor.
  • For example, the simulation device can use an iterative approach in which the sources are moved closer and closer to one another in a certain step size, in order then to see whether the output condition is now satisfied at previously problematic points in the reproduction room. The "cost function" is thus whether there are fewer output-condition violation points than in the previous iteration run (see the sketch below).
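A possible shape of such an iteration, reusing violation_points() from the sketch above; the move-toward-the-midpoint strategy and the step size are hypothetical choices, not prescribed by the text:

    import math

    def propose_positions(src_a, src_b, max_dt, step=1.0, max_iter=100):
        """Shrink the source spacing until the number of violation points
        (the 'cost function') stops decreasing."""
        cost = len(violation_points(src_a, src_b, max_dt))
        for _ in range(max_iter):
            if cost == 0:
                break
            mid = ((src_a[0] + src_b[0]) / 2.0, (src_a[1] + src_b[1]) / 2.0)

            def toward(p):
                dx, dy = mid[0] - p[0], mid[1] - p[1]
                d = math.hypot(dx, dy) or 1.0
                return (p[0] + step * dx / d, p[1] + step * dy / d)

            cand_a, cand_b = toward(src_a), toward(src_b)
            new_cost = len(violation_points(cand_a, cand_b, max_dt))
            if new_cost >= cost:
                break  # no improvement: keep the previous proposal
            src_a, src_b, cost = cand_a, cand_b, new_cost
        return src_a, src_b, cost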
  • the device according to the invention comprises a device for manipulating an audio object if the audio object violates the output condition.
  • This manipulation can thus consist in an iterative manipulation in order to propose a positioning for the user.
  • In addition, the concept according to the invention with this manipulation device can also be used in wave field synthesis processing in order to create, from a scene description, a schedule adapted to the actual system.
  • This implementation is particularly preferred when the audio objects are not fixed in time and location, but when a time span or location span is given within which the audio object manipulation device is allowed to manipulate the audio objects independently, without asking the sound engineer.
  • Of course, care is taken in such a real-time simulation/processing that the output conditions are not violated even more by a shift within a time span or location span.
  • The apparatus of the invention may also operate off-line by writing, from an audio scene description, by means of audio object manipulation, a schedule file based on the simulation results for different output conditions, which is then rendered in a wave field synthesis system instead of the original audio scene description.
  • The advantage of this implementation is that the schedule file can be written without the intervention of the sound engineer, i.e., without tying up the time and financial resources of a producer.
  • an audio object should specify the audio file that effectively represents the audio content of a virtual source.
  • the audio object does not need to include the audio file, but may have an index pointing to a defined location in a database where the actual audio file is stored.
  • an audio object preferably comprises an identification of the virtual source, which may be, for example, a source number or a meaningful file name, etc.
  • Furthermore, the audio object specifies a start and/or end time of the virtual source, i.e., of the audio file. If only a time span is specified for the start, this means that the actual starting point of the rendering of this file by the renderer can be changed within that time span. In addition, if a time span is specified for the end, the end can also be varied within that time span, which, depending on the implementation, will generally lead to a variation of the audio file also in terms of its length. Any implementations are possible, such as a definition of the start/end time of an audio file in which the starting point may be moved but the length must in no case be changed, so that the end of the audio file is automatically moved as well.
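The fixed-length policy just mentioned could look as follows; a minimal sketch in which the TimeSpan type and the error handling are assumptions:

    from dataclasses import dataclass

    @dataclass
    class TimeSpan:
        earliest: float  # seconds
        latest: float    # seconds

    def shift_start(start, end, span, new_start):
        """Move the start within its permitted span while keeping the length
        fixed, so that the end of the audio object moves automatically."""
        if not (span.earliest <= new_start <= span.latest):
            raise ValueError("start outside the permitted time span")
        return new_start, new_start + (end - start)

    # Example: an object at 10-25 s whose start may float between 8 s and 12 s
    print(shift_start(10.0, 25.0, TimeSpan(8.0, 12.0), 11.5))  # (11.5, 26.5)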
  • However, it is preferred to also keep the end variable, since it is typically not problematic whether, for example, a wind noise starts a little sooner or later, or ends slightly earlier or later.
  • Depending on the implementation, an audio object further comprises a location span for the position. For certain audio objects it will be irrelevant whether they come, for example, from the front left or the front center, or whether they are shifted by a (small) angle with respect to a reference point in the playback room.
  • There are also audio objects, especially from the noise domain, which can be positioned at any position and thus have a maximum location span, which can be specified, for example, by a code for "arbitrary" or by no code (implicitly) in the audio object.
  • An audio object may include further information, such as an indication of the nature of the virtual source, that is, whether the virtual source is to be a point source for sound waves, a source of plane waves, or a source producing wavefronts of arbitrary shape, provided the renderer modules are able to process such information.
  • Fig. 3 shows by way of example a schematic representation of a scene description in which the temporal sequence of different audio objects AO1, ..., AOn+1 is shown.
  • Attention is drawn to the audio object AO3, for which a time span is defined, as shown in Fig. 3.
  • Thus, both the start point and the end point of the audio object AO3 can be shifted by this time span in Fig. 3.
  • The definition of the audio object AO3, however, is that its length must not be changed; yet this can be set variably from audio object to audio object.
  • a scene description is used that has relative indications.
  • The flexibility is increased by no longer giving the beginning of the audio object AO2 at an absolute time, but at a time relative to the audio object AO1.
  • Accordingly, a relative description of the location information is preferred, i.e., not that an audio object is to be arranged at a certain position xy in the playback room, but rather, for example, at a vector offset to another audio object or to a reference object.
  • Thereby, the time span information can be recorded very efficiently, namely simply by setting the time span such that it expresses that the audio object AO3 can begin, for example, in a period between two minutes and two minutes and twenty seconds after the start of the audio object AO1.
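Resolving such a relative start into an absolute time is straightforward; the sketch below simply fixes one concrete offset inside the permitted window (the window bounds are taken from the example above, the function name is hypothetical):

    def resolve_start(anchor_start, offset_window=(120.0, 140.0), offset=120.0):
        """AO3 may begin between 2:00 and 2:20 after the start of AO1; the
        scheduler picks a concrete offset inside that window."""
        lo, hi = offset_window
        assert lo <= offset <= hi, "offset outside the permitted window"
        return anchor_start + offset

    print(resolve_start(0.0))                  # AO1 starts at 0 s -> AO3 starts at 120 s
    print(resolve_start(30.0, offset=135.0))   # -> 165 s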
  • Such a relative definition of the space and time conditions leads to a database-efficient representation in the form of constraints, as described, for example, in "Modeling Output Constraints in Multimedia Database Systems", T. Heimrich, 11th International Multimedia Modelling Conference, IEEE, January 12-14, 2005, Melbourne, which shows the use of constraints in database systems to define consistent database states.
  • temporal constraints are described using Allen relationships and spatial constraints using spatial relationships. From this, favorable output constraints can be defined for synchronization purposes.
  • Such output constraints include a temporal or spatial condition between the objects, a reaction in case of a violation of a constraint, and a verification time, ie when such a constraint needs to be checked.
  • the spatial / temporal output objects of each scene are modeled relative to one another.
  • the audio object manipulation device achieves a translation of these relative and variable definitions into an absolute spatial and temporal order.
  • This order represents the output schedule which defines how, in particular, the renderer module is addressed in the wave-field synthesis system.
  • the schedule is thus an output schedule that arranges the audio data according to the output conditions.
  • Fig. 4 shows a preferred embodiment of such an output schedule.
  • In particular, a data stream according to Fig. 4 is transmitted from left to right, that is, from an audio object manipulation device to one or more wave field synthesis renderers of a wave field synthesis system.
  • In the embodiment shown in Fig. 4, the data stream comprises, for each audio object, first a header H, in which the position information and the time information stand, followed by the audio file for that particular audio object, denoted in Fig. 4 by AO1 for the first audio object, AO2 for the second audio object, etc.
  • A wave field synthesis renderer then receives the data stream and recognizes, for example from existing and agreed synchronization information, that a header is coming. Based on further synchronization information, the renderer then recognizes that the header is over. Alternatively, a fixed length in bits can be agreed for each header.
  • After receiving the header, the audio renderer in the preferred embodiment shown in Fig. 4 automatically knows that the subsequent audio file, e.g., AO1, belongs to the audio object, i.e., to the source position, identified in the header.
  • Fig. 4 shows a serial data transfer to a wave field synthesis renderer.
  • the renderer requires an input buffer preceded by a data stream reader to parse the data stream.
  • The data stream reader will then interpret the header and store the associated audio data accordingly, so that, when an audio object is to be rendered, the renderer reads out the correct audio file and the correct source position from the input buffer.
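A toy version of such a header-plus-payload stream, using a fixed-length header as mentioned above; the field layout (x, y, start, end, payload length) is purely an assumed example, not the actual format of the system:

    import struct

    HEADER = struct.Struct("<ffffI")  # x, y, start, end, payload length

    def pack_object(x, y, start, end, audio_bytes):
        return HEADER.pack(x, y, start, end, len(audio_bytes)) + audio_bytes

    def read_objects(stream):
        """Data stream reader: parse each header, then hand the audio payload
        and its source position on to the renderer's input buffer."""
        while True:
            head = stream.read(HEADER.size)
            if len(head) < HEADER.size:
                return
            x, y, start, end, n = HEADER.unpack(head)
            yield (x, y, start, end), stream.read(n)

    # Round trip through an in-memory stream
    import io
    blob = pack_object(1.0, 2.0, 0.0, 3.5, b"\x00\x01\x02")
    print(list(read_objects(io.BytesIO(blob))))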
  • Other data for the data stream are of course possible.
  • a separate transmission of both the time / location information and the actual audio data may be used.
  • The present invention is thus based on an object-oriented approach, that is to say the individual virtual sources are understood as objects distinguished by an audio file and a virtual position in space, and possibly by the nature of the source, i.e., whether it is a point source for sound waves, a source of plane waves, or a source of wavefronts of other shapes.
  • The calculation of the wave fields is very computation-intensive and bound to the capacities of the hardware used, such as sound cards and computers, in conjunction with the efficiency of the calculation algorithms. Even the best-equipped PC-based solution thus quickly reaches its limits in the calculation of wave field synthesis when many demanding sound events are to be reproduced simultaneously. Thus, the capacity limit of the software and hardware used dictates the limitation on the number of virtual sources in the mixdown and playback.
  • Fig. 6 shows such a known wave field synthesis concept, limited in its capacity, which includes an authoring tool 60, a control renderer module 62 and an audio server 64, the control renderer module being configured to provide a loudspeaker array 66 with data so that the loudspeaker array 66 generates a desired wavefront 68 by superimposing the single waves of the individual loudspeakers 70.
  • The authoring tool 60 allows the user to create and edit scenes and to control the wave-field-synthesis-based system.
  • a scene consists of information about the individual virtual audio sources as well as the audio data.
  • the properties of the audio sources and the references to the audio data are stored in an XML scene file.
  • the audio data itself is stored on the audio server 64 and transmitted from there to the renderer module.
  • the renderer module receives the control data from the authoring tool so that the control renderer module 62, which is centrally executed, can generate the synthesis signals for the individual loudspeakers.
  • The concept shown in Fig. 6 is described in "Authoring System for Wave Field Synthesis Content Production", F. Melchior, T. Röder, S. Brix, S. Wabnik and C. Riegel, AES Convention Paper, 115th AES Convention, October 10, 2003, New York.
  • In this concept, each renderer is supplied with the same audio data, regardless of whether the renderer needs the data for reproduction or not, given the limited number of loudspeakers assigned to it.
  • a reduction of redundant data transfer operations and data processing operations in a wave field synthesis multi-renderer system is achieved, which leads to an increase in the computing capacity or the number of simultaneously computable audio sources.
  • The audio server is extended by the data output device, which is able to determine which renderer needs which audio data and metadata.
  • In a preferred embodiment, the data output device, possibly supported by the data manager, requires several pieces of information: initially the audio data, then the source and position data of the sources, and finally the configuration of the renderers, that is, information about the connected loudspeakers, their positions and their capacity.
  • With the aid of the data output device, an output schedule with a temporal and spatial arrangement of the audio objects is generated. From the spatial arrangement, the time schedule and the renderer configuration, the data management module then calculates which source is relevant for which renderer at a particular time (see the sketch below).
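That relevance computation might be sketched as follows, reusing the AudioObject layout from above; the distance-based "reach" criterion is an assumed stand-in for whatever the real renderer configuration prescribes:

    import math

    def relevant_sources(objects, renderer_positions, t, reach=30.0):
        """For each renderer, keep only the virtual sources that are active at
        time t and within an assumed audible reach of its loudspeaker section."""
        plan = {}
        for rid, rpos in renderer_positions.items():
            plan[rid] = [
                o for o in objects
                if o.start <= t <= (o.end if o.end is not None
                                    else o.start + (o.duration or 0.0))
                and math.dist(o.position, rpos) <= reach
            ]
        return plan

    # Example with the scene_description from the AudioObject sketch
    print(relevant_sources(scene_description, {"R1": (-40.0, 0.0)}, t=5.0))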
  • On the output side, the database 22 is supplemented by the data output device 24, the data output device also being referred to as a scheduler.
  • This scheduler then generates at its outputs 20a, 20b, 20c for the various renderers 50 the renderer input signals in order to power the corresponding loudspeakers of the loudspeaker arrays.
  • the scheduler 24 is still supported by a storage manager 52 in order to configure the database 22 by means of a RAID system and corresponding data organization specifications.
  • On the input side is a data generator 54, which may be, for example, a sound engineer or an audio engineer who is to model or describe an audio scene in an object-oriented manner. In doing so, he provides a scene description that includes corresponding output conditions 56, which are then, after a transformation 58, optionally stored in the database 22 together with the audio data.
  • the audio data may be manipulated and updated using an insert / update tool 59.
  • the method according to the invention can be implemented in hardware or in software.
  • the implementation may be on a digital storage medium, particularly a floppy disk or CD, with electronically readable control signals that may interact with a programmable computer system to perform the method.
  • the invention thus also consists in a computer program product with a program code stored on a machine-readable carrier for carrying out the method when the computer program product runs on a computer.
  • the invention can thus be realized as a computer program with a program code for carrying out the method when the computer program runs on a computer.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Physics & Mathematics (AREA)
  • Acoustics & Sound (AREA)
  • Signal Processing (AREA)
  • Stereophonic System (AREA)
  • Waveguide Switches, Polarizers, And Phase Shifters (AREA)
  • Obtaining Desirable Characteristics In Audible-Bandwidth Transducers (AREA)

Claims (17)

  1. Device for simulating a wave field synthesis system with respect to a reproduction room in which one or more loudspeaker arrays that can be coupled to a wave field synthesis rendering means may be placed, comprising:
    means (1) for providing an audio scene description that defines a temporal sequence of audio objects, an audio object comprising an audio file for a virtual source or a reference to the audio file and information on a position of the virtual source, an output condition for the wave field synthesis system being predetermined;
    means (3) for simulating the behavior of the wave field synthesis system using information on the wave field synthesis system and the audio files; and
    means (4) for checking whether the simulated behavior fulfills the output condition.
  2. Device according to claim 1, wherein the output condition defines a behavior of a sound field in the reproduction room,
    wherein the means for simulating is designed to simulate the sound field in the reproduction room, and
    wherein the means (4) for checking is designed to check whether the simulated sound field fulfills the output condition in the reproduction room.
  3. Device according to claim 1, wherein the means (3) for simulating comprises:
    a wave field synthesis rendering means (3b) designed to generate synthesis signals from the audio scene description and information on the positions of the loudspeakers in the reproduction room; and
    a loudspeaker simulator (3c) for simulating the sound field generated by the loudspeakers on the basis of the synthesis signals.
  4. Device according to one of the preceding claims,
    wherein the means (1) for providing is designed to provide an output condition comprising a defined property of one virtual source with respect to another virtual source,
    wherein the means (3) for simulating is designed to simulate a first sound field in the reproduction room on the basis of the one virtual source without the other virtual source and, furthermore, a second sound field in the reproduction room on the basis of the other virtual source without the one virtual source, and
    wherein the means (4) for checking is designed to check the defined property using the first sound field and the second sound field.
  5. Device according to one of the preceding claims,
    wherein the means (3) for simulating is designed to simulate the sound field for different positions in the reproduction room, and
    wherein the means (4) for checking is designed to check the output condition for the different positions.
  6. Device according to one of the preceding claims, further comprising:
    means for displaying (5e) whether and where in the wave field synthesis system the output condition is fulfilled or not fulfilled.
  7. Device according to one of the preceding claims, further comprising:
    means for identifying (5f) which of a plurality of output conditions is not fulfilled and which virtual source among a plurality of virtual sources causes the output condition to be violated.
  8. Device according to one of the preceding claims, wherein the output condition prescribes that a wavefront based on a first virtual source and a wavefront based on a second virtual source must arrive within a predetermined time span at a point in the reproduction room,
    the means (3) for simulating being designed to calculate a time difference between the arrival of the wavefront based on the first virtual source and the arrival of the wavefront based on the second virtual source; and
    the means (4) for checking being designed to compare the calculated time difference with the output condition.
  9. Device according to one of the preceding claims, further comprising:
    means for manipulating an audio object when the audio object violates the output condition.
  10. Device according to claim 9, wherein the means for manipulating is designed to manipulate a virtual position of the audio object, a start time or an end time, or to mark the audio object in the audio scene as problematic, so that the audio object can be suppressed when the audio scene is reproduced.
  11. Device according to one of the preceding claims, wherein the output condition defines a loudness difference between two virtual sources,
    the means (3) for simulating being designed to determine a loudness difference between the two virtual sources at a location in the reproduction room, and
    the means (4) for checking being designed to compare the determined loudness difference with the output condition.
  12. Device according to one of the preceding claims,
    wherein the output condition is a maximum number of audio objects to be processed simultaneously by a wave field synthesis rendering means,
    wherein the means (3) for simulating is designed to determine a load of the wave field synthesis rendering means, and
    wherein the means (4) for checking is designed to compare a calculated load with the output condition.
  13. Device according to one of the preceding claims, wherein an audio object in the audio scene description defines, for an associated virtual source, a start time or an end time, the audio object comprising, for the virtual source, a time span within which the start or the end must lie, or a location span within which a position of the virtual source must lie.
  14. Device according to claim 13, further comprising:
    an audio object manipulation means for varying an actual start time or end time of an audio object within the time span, or an actual position of the virtual source within the location span, in response to a violation of an output condition.
  15. Device according to claim 14, further designed to examine whether a violation of an output condition can be eliminated by varying the audio object within the time span or the location span.
  16. Method for simulating a wave field synthesis system with respect to a reproduction room in which one or more loudspeaker arrays that can be coupled to a wave field synthesis rendering means may be placed, comprising the steps of:
    providing (1) an audio scene description that defines a temporal sequence of audio objects, an audio object comprising an audio file for a virtual source or a reference to the audio file and information on a position of the virtual source, an output condition for the wave field synthesis system being predetermined;
    simulating (3) the behavior of the wave field synthesis system using information on the wave field synthesis system and the audio files; and
    checking (4) whether the simulated behavior fulfills the output condition.
  17. Computer program with a program code for performing the method for simulating a wave field synthesis system according to claim 16, when the computer program is executed on a computer.
EP06707014A 2005-02-23 2006-02-16 Dispositif et procédé pour simuler un système de synthèse de champ d'onde Active EP1844627B1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
DE102005008369A DE102005008369A1 (de) 2005-02-23 2005-02-23 Vorrichtung und Verfahren zum Simulieren eines Wellenfeldsynthese-Systems
PCT/EP2006/001413 WO2006089683A1 (fr) 2005-02-23 2006-02-16 Dispositif et procede pour simuler un systeme de synthese de champ d'onde

Publications (2)

Publication Number Publication Date
EP1844627A1 EP1844627A1 (fr) 2007-10-17
EP1844627B1 true EP1844627B1 (fr) 2009-01-21

Family

ID=36282944

Family Applications (1)

Application Number Title Priority Date Filing Date
EP06707014A Active EP1844627B1 (fr) 2005-02-23 2006-02-16 Dispositif et procédé pour simuler un système de synthèse de champ d'onde

Country Status (6)

Country Link
US (1) US7809453B2 (fr)
EP (1) EP1844627B1 (fr)
JP (1) JP4700071B2 (fr)
AT (1) ATE421846T1 (fr)
DE (2) DE102005008369A1 (fr)
WO (1) WO2006089683A1 (fr)

Families Citing this family (20)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE102005008342A1 (de) * 2005-02-23 2006-08-24 Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. Vorrichtung und Verfahren zum Speichern von Audiodateien
DE102005033238A1 (de) * 2005-07-15 2007-01-25 Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. Vorrichtung und Verfahren zum Ansteuern einer Mehrzahl von Lautsprechern mittels eines DSP
DE102005033239A1 (de) * 2005-07-15 2007-01-25 Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. Vorrichtung und Verfahren zum Steuern einer Mehrzahl von Lautsprechern mittels einer graphischen Benutzerschnittstelle
MX2009002795A (es) * 2006-09-18 2009-04-01 Koninkl Philips Electronics Nv Codificacion y decodificacion de objetos de audio.
KR101407200B1 (ko) * 2009-11-04 2014-06-12 프라운호퍼 게젤샤프트 쭈르 푀르데룽 데어 안겐반텐 포르슝 에. 베. 가상 소스와 연관된 오디오 신호를 위한 라우드스피커 배열의 라우드스피커들에 대한 구동 계수를 계산하는 장치 및 방법
US9338572B2 (en) 2011-11-10 2016-05-10 Etienne Corteel Method for practical implementation of sound field reproduction based on surface integrals in three dimensions
DE102012200512B4 (de) 2012-01-13 2013-11-14 Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. Vorrichtung und Verfahren zum Berechnen von Lautsprechersignalen für eine Mehrzahl von Lautsprechern unter Verwendung einer Verzögerung im Frequenzbereich
WO2013184215A2 (fr) * 2012-03-22 2013-12-12 The University Of North Carolina At Chapel Hill Procédés, systèmes et supports lisibles par ordinateur permettant de simuler la propagation du son dans des lieux vastes au moyen de sources équivalentes
JP6038312B2 (ja) * 2012-07-27 2016-12-07 フラウンホーファー−ゲゼルシャフト・ツール・フェルデルング・デル・アンゲヴァンテン・フォルシュング・アインゲトラーゲネル・フェライン ラウドスピーカ・エンクロージャ・マイクロホンシステム記述を提供する装置及び方法
CN104019885A (zh) 2013-02-28 2014-09-03 杜比实验室特许公司 声场分析系统
EP3515055A1 (fr) 2013-03-15 2019-07-24 Dolby Laboratories Licensing Corp. Normalisation d'orientations de champ acoustique sur la base d'une analyse de scène auditive
JP6022685B2 (ja) 2013-06-10 2016-11-09 株式会社ソシオネクスト オーディオ再生装置及びその方法
US10679407B2 (en) 2014-06-27 2020-06-09 The University Of North Carolina At Chapel Hill Methods, systems, and computer readable media for modeling interactive diffuse reflections and higher-order diffraction in virtual environment scenes
US9977644B2 (en) 2014-07-29 2018-05-22 The University Of North Carolina At Chapel Hill Methods, systems, and computer readable media for conducting interactive sound propagation and rendering for a plurality of sound sources in a virtual environment scene
MX2017006581A (es) * 2014-11-28 2017-09-01 Sony Corp Dispositivo de transmision, metodo de transmision, dispositivo de recepcion, y metodo de recepcion.
US9949052B2 (en) 2016-03-22 2018-04-17 Dolby Laboratories Licensing Corporation Adaptive panner of audio objects
US10248744B2 (en) 2017-02-16 2019-04-02 The University Of North Carolina At Chapel Hill Methods, systems, and computer readable media for acoustic classification and optimization for multi-modal rendering of real-world scenes
IL311731A (en) 2018-02-15 2024-05-01 Magic Leap Inc Musical instruments in mixed reality
US11337023B2 (en) * 2019-12-20 2022-05-17 Magic Leap, Inc. Physics-based audio and haptic synthesis
CN117044234A (zh) 2020-05-29 2023-11-10 奇跃公司 表面适当碰撞

Family Cites Families (20)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE4327200A1 (de) * 1993-08-13 1995-02-23 Blaupunkt Werke Gmbh Einrichtung zur stereophonen Wiedergabe
US5390138A (en) * 1993-09-13 1995-02-14 Taligent, Inc. Object-oriented audio system
JPH07303148A (ja) 1994-05-10 1995-11-14 Nippon Telegr & Teleph Corp <Ntt> 通信会議装置
JPH08272380A (ja) * 1995-03-30 1996-10-18 Taimuuea:Kk 仮想3次元空間音響の再生方法および装置
JPH10211358A (ja) 1997-01-28 1998-08-11 Sega Enterp Ltd ゲーム装置
JPH1127800A (ja) 1997-07-03 1999-01-29 Fujitsu Ltd 立体音響処理システム
JP2000267675A (ja) * 1999-03-16 2000-09-29 Sega Enterp Ltd 音響信号処理装置
JP2001042865A (ja) * 1999-08-03 2001-02-16 Sony Corp オーディオデータ送信装置および方法、オーディオデータ受信装置および方法、並びに記録媒体
JP2002123262A (ja) * 2000-10-18 2002-04-26 Matsushita Electric Ind Co Ltd 対話型音場シミュレーション装置、並びに、対話形式によって音場をシミュレートする方法およびそのプログラムを記録した記録媒体
JP2002199500A (ja) 2000-12-25 2002-07-12 Sony Corp 仮想音像定位処理装置、仮想音像定位処理方法および記録媒体
JP2003284196A (ja) 2002-03-20 2003-10-03 Sony Corp 音像定位信号処理装置および音像定位信号処理方法
JP2004007211A (ja) * 2002-05-31 2004-01-08 Victor Co Of Japan Ltd 臨場感信号の送受信システム、臨場感信号伝送装置、臨場感信号受信装置、及び臨場感信号受信用プログラム
EP1552724A4 (fr) 2002-10-15 2010-10-20 Korea Electronics Telecomm Procede de generation et d'utilisation de scene audio 3d presentant une spatialite etendue de source sonore
DE10254404B4 (de) 2002-11-21 2004-11-18 Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. Audiowiedergabesystem und Verfahren zum Wiedergeben eines Audiosignals
US7706544B2 (en) 2002-11-21 2010-04-27 Fraunhofer-Geselleschaft Zur Forderung Der Angewandten Forschung E.V. Audio reproduction system and method for reproducing an audio signal
US9002716B2 (en) 2002-12-02 2015-04-07 Thomson Licensing Method for describing the composition of audio signals
JP4601905B2 (ja) 2003-02-24 2010-12-22 ソニー株式会社 デジタル信号処理装置およびデジタル信号処理方法
DE10321980B4 (de) 2003-05-15 2005-10-06 Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. Vorrichtung und Verfahren zum Berechnen eines diskreten Werts einer Komponente in einem Lautsprechersignal
DE10321986B4 (de) * 2003-05-15 2005-07-14 Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. Vorrichtung und Verfahren zum Pegel-Korrigieren in einem Wellenfeldsynthesesystem
DE10328335B4 (de) 2003-06-24 2005-07-21 Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. Wellenfeldsyntesevorrichtung und Verfahren zum Treiben eines Arrays von Lautsprechern

Also Published As

Publication number Publication date
ATE421846T1 (de) 2009-02-15
DE502006002710D1 (de) 2009-03-12
JP4700071B2 (ja) 2011-06-15
US7809453B2 (en) 2010-10-05
DE102005008369A8 (de) 2007-02-01
US20080013746A1 (en) 2008-01-17
WO2006089683A1 (fr) 2006-08-31
EP1844627A1 (fr) 2007-10-17
JP2008532373A (ja) 2008-08-14
DE102005008369A1 (de) 2006-09-07

Similar Documents

Publication Publication Date Title
EP1844627B1 (fr) Dispositif et procédé pour simuler un système de synthèse de champ d'onde
EP1844628B1 (fr) Procede et dispositif d'amorçage d'une installation de moteur de rendu de synthese de front d'onde avec objets audio
EP1723825B1 (fr) Dispositif et procede pour reguler un dispositif de rendu de synthese de champ electromagnetique
EP1851998B1 (fr) Device and method for providing data in a multi-renderer system
EP1652405B1 (fr) Device and method for generating, storing or processing an audio representation of an audio scene
EP1878308B1 (fr) Device and method for generating and processing sound effects in spatial sound reproduction systems by means of a graphical user interface
EP1671516B1 (fr) Method and device for generating a low-frequency channel
DE10254404B4 (de) Audio reproduction system and method for reproducing an audio signal
EP1972181B1 (fr) Device and method for simulating WFS systems and compensating sound-influencing WFS properties
DE102010030534A1 (de) Device for modifying an audio scene and device for generating a direction function
DE102006053919A1 (de) Device and method for generating a number of loudspeaker signals for a loudspeaker array defining a reproduction space
EP1789970B1 (fr) Method and device for storing audio files
DE69935974T2 (de) Method and system for processing directed sound in an acoustic virtual environment
DE102012017296B4 (de) Generation of multichannel sound from stereo audio signals
DE10321980B4 (de) Device and method for calculating a discrete value of a component in a loudspeaker signal
WO2019158750A1 (fr) Device and method for object-based spatial audio mastering
EP2503799B1 (fr) Method and system for calculating HRTF functions by means of virtual local sound field synthesis
DE102010009170A1 (de) Method for processing and/or mixing audio tracks

Legal Events

Date Code Title Description
PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

17P Request for examination filed

Effective date: 20070803

AK Designated contracting states

Kind code of ref document: A1

Designated state(s): AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HU IE IS IT LI LT LU LV MC NL PL PT RO SE SI SK TR

17Q First examination report despatched

Effective date: 20071227

DAX Request for extension of the european patent (deleted)

RIN1 Information on inventor provided before grant (corrected)

Inventor name: MELCHIOR, FRANK

Inventor name: REICHELT, KATRIN

Inventor name: BRIX, SANDRA

Inventor name: GATZSCHE, GABRIEL

GRAP Despatch of communication of intention to grant a patent

Free format text: ORIGINAL CODE: EPIDOSNIGR1

RIN1 Information on inventor provided before grant (corrected)

Inventor name: REICHELT, KATRIN

Inventor name: MELCHIOR, FRANK

Inventor name: BRIX, SANDRA

Inventor name: GATZSCHE, GABRIEL

GRAS Grant fee paid

Free format text: ORIGINAL CODE: EPIDOSNIGR3

GRAA (expected) grant

Free format text: ORIGINAL CODE: 0009210

AK Designated contracting states

Kind code of ref document: B1

Designated state(s): AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HU IE IS IT LI LT LU LV MC NL PL PT RO SE SI SK TR

REG Reference to a national code

Ref country code: GB

Ref legal event code: FG4D

Free format text: NOT ENGLISH

REG Reference to a national code

Ref country code: CH

Ref legal event code: EP

REG Reference to a national code

Ref country code: IE

Ref legal event code: FG4D

Free format text: LANGUAGE OF EP DOCUMENT: GERMAN

REF Corresponds to:

Ref document number: 502006002710

Country of ref document: DE

Date of ref document: 20090312

Kind code of ref document: P

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: SI

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20090121

Ref country code: LT

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20090121

Ref country code: ES

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20090502

Ref country code: FI

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20090121

REG Reference to a national code

Ref country code: IE

Ref legal event code: FD4D

BERE Be: lapsed

Owner name: FRAUNHOFER-GESELLSCHAFT ZUR FORDERUNG DER ANGEWANDTEN FORSCHUNG E.V.

Effective date: 20090228

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: SE

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20090421

Ref country code: LV

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20090121

Ref country code: IS

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20090521

Ref country code: PT

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20090622

Ref country code: PL

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20090121

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: MC

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20090228

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: IE

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20090121

Ref country code: EE

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20090121

Ref country code: DK

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20090121

Ref country code: CZ

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20090121

PLBE No opposition filed within time limit

Free format text: ORIGINAL CODE: 0009261

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: NO OPPOSITION FILED WITHIN TIME LIMIT

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: RO

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20090121

Ref country code: SK

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20090121

26N No opposition filed

Effective date: 20091022

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: BG

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20090421

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: BE

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20090228

REG Reference to a national code

Ref country code: CH

Ref legal event code: PL

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: GR

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20090422

Ref country code: CH

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20100228

Ref country code: LI

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20100228

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: IT

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20090121

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: LU

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20090216

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: HU

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20090722

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: TR

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20090121

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: CY

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20090121

REG Reference to a national code

Ref country code: FR

Ref legal event code: PLFP

Year of fee payment: 11

REG Reference to a national code

Ref country code: FR

Ref legal event code: PLFP

Year of fee payment: 12

REG Reference to a national code

Ref country code: FR

Ref legal event code: PLFP

Year of fee payment: 13

PGFP Annual fee paid to national office [announced via postgrant information from national office to epo]

Ref country code: FR

Payment date: 20230220

Year of fee payment: 18

P01 Opt-out of the competence of the unified patent court (upc) registered

Effective date: 20230524

PGFP Annual fee paid to national office [announced via postgrant information from national office to epo]

Ref country code: NL

Payment date: 20240220

Year of fee payment: 19

PGFP Annual fee paid to national office [announced via postgrant information from national office to epo]

Ref country code: AT

Payment date: 20240216

Year of fee payment: 19

PGFP Annual fee paid to national office [announced via postgrant information from national office to epo]

Ref country code: DE

Payment date: 20240216

Year of fee payment: 19

Ref country code: GB

Payment date: 20240222

Year of fee payment: 19