EP1723825A1 - Device and method for controlling a wave field synthesis rendering device - Google Patents

Device and method for controlling a wave field synthesis rendering device

Info

Publication number
EP1723825A1
EP1723825A1 (application EP06706963A)
Authority
EP
European Patent Office
Prior art keywords
audio object
wave field
field synthesis
audio
rendering device
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
EP06706963A
Other languages
German (de)
English (en)
Other versions
EP1723825B1 (fr)
Inventor
Katrin Reichelt
Gabriel Gatzsche
Thomas Heinrich
Kai-Uwe Sattler
Sandra Brix
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Fraunhofer Gesellschaft zur Forderung der Angewandten Forschung eV
Technische Universitaet Ilmenau
Original Assignee
Fraunhofer Gesellschaft zur Forderung der Angewandten Forschung eV
Technische Universitaet Ilmenau
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Fraunhofer Gesellschaft zur Forderung der Angewandten Forschung eV and Technische Universitaet Ilmenau
Publication of EP1723825A1
Application granted
Publication of EP1723825B1
Legal status: Not-in-force
Anticipated expiration

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04RLOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R1/00Details of transducers, loudspeakers or microphones
    • H04R1/20Arrangements for obtaining desired frequency or directional characteristics
    • H04R1/32Arrangements for obtaining desired frequency or directional characteristics for obtaining desired directional characteristic only
    • H04R1/40Arrangements for obtaining desired frequency or directional characteristics for obtaining desired directional characteristic only by combining a number of identical transducers
    • H04R1/403Arrangements for obtaining desired frequency or directional characteristics for obtaining desired directional characteristic only by combining a number of identical transducers loud-speakers
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04RLOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R3/00Circuits for transducers, loudspeakers or microphones
    • H04R3/12Circuits for transducers, loudspeakers or microphones for distributing signals to two or more loudspeakers
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04SSTEREOPHONIC SYSTEMS 
    • H04S7/00Indicating arrangements; Control arrangements, e.g. balance control
    • H04S7/30Control circuits for electronic adaptation of the sound field
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04SSTEREOPHONIC SYSTEMS 
    • H04S2420/00Techniques used in stereophonic systems covered by H04S but not provided for in its groups
    • H04S2420/13Application of wave-field synthesis in stereophonic audio systems
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04SSTEREOPHONIC SYSTEMS 
    • H04S3/00Systems employing more than two channels, e.g. quadraphonic
    • H04S3/002Non-adaptive circuits, e.g. manually adjustable or static, for enhancing the sound image or the spatial distribution

Definitions

  • the present invention relates to the field of wave field synthesis, and more particularly to the driving of a wave field synthesis rendering device with data to be processed.
  • The present invention relates to wave field synthesis concepts, and more particularly to an efficient wave field synthesis design in conjunction with a multi-renderer system.
  • Applied to acoustics, any shape of an incoming wavefront can be simulated by a large number of loudspeakers arranged side by side (a so-called loudspeaker array).
  • In the case of a single point source to be reproduced and a linear arrangement of the loudspeakers, the audio signals of each loudspeaker must be fed with a time delay and amplitude scaling such that the radiated sound fields of the individual loudspeakers superimpose correctly.
  • the contribution to each speaker is calculated separately for each source and the resulting signals added together.
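The per-loudspeaker computation described above (a delay and an amplitude scale per virtual point source, then summation over all sources) can be sketched as follows. This is a deliberately simplified illustration, not the patent's driving function: the 1/r amplitude law, the two-dimensional geometry, and all function names are assumptions, and real WFS driving functions include additional filtering terms.

```python
import math

SPEED_OF_SOUND = 343.0  # m/s

def synthesis_params(source_pos, speaker_pos):
    """Per-speaker delay (s) and amplitude scale for one virtual point source.

    Simplified: delay from propagation time, amplitude from 1/r
    spherical spreading. Real WFS driving functions also include
    filtering terms omitted here.
    """
    dx = source_pos[0] - speaker_pos[0]
    dy = source_pos[1] - speaker_pos[1]
    r = math.hypot(dx, dy)
    delay = r / SPEED_OF_SOUND
    scale = 1.0 / max(r, 1e-6)  # guard against r == 0
    return delay, scale

def speaker_signal(sources, speaker_pos, t):
    """Superpose the contributions of all virtual sources at time t.

    sources: list of (position, signal) pairs, where signal is a
    callable mapping time (s) -> amplitude.
    """
    total = 0.0
    for pos, signal in sources:
        delay, scale = synthesis_params(pos, speaker_pos)
        total += scale * signal(t - delay)
    return total
```

The separate per-source contributions summed in `speaker_signal` correspond to the statement that the contribution to each speaker is calculated separately for each source and the resulting signals are added together.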
  • the cost of the calculation therefore depends heavily on the number of sound sources, the reflection characteristics of the recording room and the number of speakers.
  • the advantage of this technique is in particular that a natural spatial sound impression over a large area of the playback room is possible.
  • the direction and distance of sound sources are reproduced very accurately.
  • virtual sound sources can even be positioned between the real speaker array and the listener.
  • Although wave field synthesis works well for environments whose characteristics are known, irregularities occur when the nature of the environment changes or when wave field synthesis is performed on the basis of an environmental condition that does not match the actual nature of the environment.
  • An environmental condition can be described by the impulse response of the environment.
  • Wave field synthesis thus allows a correct mapping of virtual sound sources over a large playback area. At the same time, it offers recording and sound engineers new technical and creative potential in the creation of even complex soundscapes.
  • Wave field synthesis (WFS or sound field synthesis), as developed at the TU Delft in the late 1980s, represents a holographic approach to sound reproduction. The basis for this is the Kirchhoff-Helmholtz integral. This means that any sound fields within a closed volume can be generated by means of a distribution of monopole and dipole sound sources (loudspeaker arrays) on the surface of this volume.
  • An audio signal emitted by a virtual source at a virtual position is used to calculate a synthesis signal for each loudspeaker of the loudspeaker array, the synthesis signals being shaped in amplitude and phase such that the wave resulting from the superposition of the individual sound waves output by the loudspeakers of the array corresponds to the wave that would originate from the virtual source at the virtual position if that virtual source were a real source at a real position.
  • multiple virtual sources exist at different virtual locations.
  • the computation of the synthesis signals is performed for each virtual source at each virtual location, typically resulting in one virtual source in multiple speaker synthesis signals. Seen from a loudspeaker, this loudspeaker thus receives several synthesis signals, which go back to different virtual sources. A superimposition of these sources, which is possible due to the linear superposition principle, then gives the reproduced signal actually emitted by the speaker.
  • the finished and analog-to-digital converted display signals for the individual loudspeakers could, for example, be transmitted via two-wire lines from the wave field synthesis central unit to the individual loudspeakers.
  • the wave field synthesis central unit could always be made only for a special reproduction room or for a reproduction with a fixed number of loudspeakers.
  • German Patent DE 10254404 B4 discloses a system as shown in Fig. 7.
  • One part is the central wave field synthesis module 10.
  • The other part is composed of individual loudspeaker modules 12a, 12b, 12c, 12d, 12e, which are connected to actual physical loudspeakers 14a, 14b, 14c, 14d, 14e, as shown in Fig. 7.
  • The number of loudspeakers 14a-14e in typical applications is in the range above 50 and typically even well above 100. If each loudspeaker is assigned its own loudspeaker module, the corresponding number of loudspeaker modules is also required. Depending on the application, however, it is preferred to address a small group of adjacent loudspeakers from one loudspeaker module.
  • A loudspeaker module connected to four loudspeakers, for example, may feed the four loudspeakers with the same playback signal, or corresponding different synthesis signals may be calculated for the four loudspeakers, so that such a loudspeaker module actually consists of several individual loudspeaker modules that are physically combined in one unit.
  • each transmission link 16a-16e being coupled to the central wave field synthesis module and to a separate loudspeaker module.
  • a serial transmission format that provides a high data rate such as a so-called Firewire transmission format or a USB data format.
  • Data transfer rates in excess of 100 megabits per second are advantageous.
  • The data stream transmitted from the wave field synthesis module 10 to a loudspeaker module is accordingly formatted according to the selected data format in the wave field synthesis module and provided with synchronization information as provided in common serial data formats.
  • This synchronization information is extracted from the data stream by the individual loudspeaker modules and used to synchronize the individual loudspeaker modules with regard to their reproduction, that is to say ultimately with regard to the digital-to-analog conversion for obtaining the analog loudspeaker signal and the resampling provided for this purpose.
  • the central wavefield synthesis module operates as a master, and all loudspeaker modules operate as clients, with the individual datastreams receiving the same synchronization information from the central module 10 over the various links 16a-16e.
  • The known wave field synthesis concept uses a scene description in which the individual audio objects are defined together such that, using the data in the scene description and the audio data for the individual virtual sources, the complete scene can be processed by a renderer arrangement.
  • For each audio object, it is exactly defined where the audio object has to start and where it ends. Furthermore, for each audio object, the position of the virtual source is indicated exactly, which is to be entered into the wave field synthesis rendering device so that the corresponding synthesis signals are generated for each loudspeaker.
  • each renderer has limited computing power.
  • a renderer is capable of processing 32 audio sources simultaneously.
  • A transmission path from the audio server to the renderer has a limited transmission bandwidth and thus provides a maximum transmission rate in bits per second.
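A trivial sketch of the load test such a system might apply follows, with the renderer's simultaneous-source limit (e.g. 32 sources) and the link's transmission rate as the two criteria; the thresholds and function name are illustrative assumptions, not values from the patent.

```python
def overloaded(active_sources, max_sources, bitrate, max_bitrate):
    """Return True when the wave field synthesis system would exceed
    either the renderer's simultaneous-source capacity (e.g. 32)
    or the transmission path's maximum rate in bits per second."""
    return active_sources > max_sources or bitrate > max_bitrate
```

A monitor would evaluate such a predicate continuously (or predictively, for future load situations) to decide when audio objects must be shifted within their spans.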
  • Another possibility is to take no account of the actual wave field synthesis conditions when creating the scene description, but to create the scene description just as the scene author wishes.
  • This possibility is advantageous in view of a higher flexibility and portability of scene descriptions under different wave field synthesis systems, as this creates scene descriptions which are not designed for a specific system but are more general.
  • The same scene description, when run on a wave field synthesis system having high-capacity renderers, will result in a better sound impression than in a system having renderers with lower computing power.
  • The second possibility is further advantageous in that a scene description is not prevented from yielding a better sound impression on a higher-capacity wave field synthesis system merely because it was generated for a wave field synthesis system with very limited capacity.
  • A disadvantage of the second possibility is that when the wave field synthesis system is driven to its maximum capacity, performance drops or related problems occur, since a renderer operating at its capacity limit may simply refuse to process additional sources.
  • The object of the present invention is to provide a flexible concept for controlling a wave field synthesis rendering device, by means of which dropouts are at least reduced and at the same time a high degree of flexibility is obtained.
  • The present invention is based on the finding that actual capacity limits can be extended by intercepting processing load peaks occurring in the wave field synthesis system, in that the beginning and/or end of an audio object, or the position of an audio object, can be varied within a time span or location span. This is achieved by specifying appropriate margins rather than fixed times in the scene description: for certain sources, the beginning and/or the end, and even the position, can be variable within a certain range, and then, depending on the load situation in the wave field synthesis system, the actual start and the actual virtual position of an audio object are set within that span.
  • Overload situations are thereby reduced or even completely eliminated by moving audio objects forward or backward within their time span or, in the case of multi-renderer systems, shifting them with respect to their position, so that, due to the changed position, one of the renderers no longer has to generate synthesis signals for this virtual source.
  • Audio objects that lend themselves particularly well to such a time/location span definition are sources that contain ambient sounds, e.g. background chatter, dripping or any other background noise, such as wind noise or the driving noise of a train approaching from far away.
  • Nevertheless, the effects on the described, very dynamically occurring overload situations can be significant.
  • The scheduling of audio sources within the scope of their spatial and temporal spans can turn a very short overload situation into a correspondingly longer situation that can still be processed.
  • This problem is solved in that, for example, the previous audio object, if a corresponding margin was specified, already ends a second earlier, or the later audio object is pushed back within its predetermined span by, for example, one second, so that the audio objects no longer overlap and the entire later audio object, which may be a few minutes in length, does not suffer an unpleasant rejection.
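The shifting just described can be illustrated with a small greedy scheduler that delays each audio object within its permitted time span until the renderer's simultaneous-source capacity is respected. This is an illustrative heuristic under assumed data shapes (dicts with `start_span` and `length`), not the patent's actual algorithm.

```python
def schedule(objects, capacity):
    """Assign each object an actual start inside its (lo, hi) start
    span so that no more than `capacity` sources play at once.

    objects: list of dicts with 'start_span' (lo, hi) and 'length' (s).
    Returns list of (actual_start, actual_end), ordered by earliest
    possible start; raises RuntimeError if a span cannot absorb the load.
    """
    placed = []   # (start, end) of already-scheduled objects
    result = []
    for obj in sorted(objects, key=lambda o: o['start_span'][0]):
        lo, hi = obj['start_span']
        start = lo
        while True:
            end = start + obj['length']
            # objects already placed that overlap [start, end)
            overlapping = [(s, e) for s, e in placed if s < end and start < e]
            if len(overlapping) < capacity:
                break
            # delay to the earliest release point, bounded by the span
            next_free = min(e for s, e in overlapping)
            if next_free > hi:
                raise RuntimeError("cannot schedule object within its span")
            start = next_free
        placed.append((start, end))
        result.append((start, end))
    return result
```

With capacity 1, an object that would otherwise overlap its predecessor is pushed back within its span so that neither object has to be rejected, mirroring the one-second shift in the text.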
  • Fig. 1 is a block diagram of the device according to the invention.
  • FIG. 2 shows an exemplary audio object
  • each audio object is assigned a header with the current time data and position data
  • FIG. 5 shows an embedding of the inventive concept in a wave field synthesis overall system
  • Fig. 6 is a schematic representation of a known wave field synthesis concept
  • FIG. 7 shows a further illustration of a known wave field synthesis concept.
  • FIG. 1 shows a device according to the invention for controlling a wave field synthesis rendering device arranged in a wave field synthesis system 0, wherein the wave field synthesis rendering device is designed to generate synthesis signals for a plurality of loudspeakers within a loudspeaker array from audio objects.
  • An audio object comprises in particular an audio file for a virtual source and at least one source position at which the virtual source is to be arranged inside or outside the playback room, i.e. with respect to the listener.
  • The apparatus according to the invention shown in Fig. 1 comprises a scene description providing means 1, the scene description defining a time sequence of audio data, wherein an audio object for a virtual source, instead of a fixed start time or end time, has a time span within which the audio object is to start or end.
  • the scene description is such that the audio object has a location span in which a position of the virtual source must lie.
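A minimal data structure capturing such an audio object, with span-based rather than fixed start, end and position, might look as follows. The field names and the clamping helper are hypothetical; the patent only specifies that start, end and position may each be given as spans within which the actual values must lie.

```python
from dataclasses import dataclass
from typing import Optional, Tuple

Span = Tuple[float, float]          # (earliest, latest), seconds
Region = Tuple[Tuple[float, float], Tuple[float, float]]  # (min_xy, max_xy)

@dataclass
class AudioObject:
    """Hypothetical audio-object record following the scene description."""
    source_id: str
    audio_file: str                 # or an index into an audio database
    start_span: Span                # actual start must lie in this span
    end_span: Optional[Span] = None         # None: end follows from length
    position_span: Optional[Region] = None  # None: position is fixed
    # A sentinel such as "random" could mark maximal spatial freedom
    # for background noises, as the description suggests.

    def clamp_start(self, wanted: float) -> float:
        """Pick an actual start inside the permitted time span."""
        lo, hi = self.start_span
        return min(max(wanted, lo), hi)
```

The audio object manipulation device would call something like `clamp_start` when moving an object to defuse a load peak, never leaving the span the scene author granted.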
  • The device according to the invention further comprises a monitoring monitor 2, which is designed to monitor a utilization of the wave field synthesis system 0 so as to determine a utilization situation of the wave field synthesis system.
  • Furthermore, an audio object manipulation device 3 is provided, which is designed to vary an actual start point or end point of the audio object to be taken into account by the wave field synthesis rendering device within the time span, or an actual position of the virtual source within the spatial span, depending on the load situation.
  • An audio file server 4 is provided, which can be implemented together with the audio object manipulation device 3 in an intelligent database. Alternatively, it is a simple file server which, depending on a control signal from the audio object manipulation device 3, supplies an audio file either directly via a data connection 5a to the wave field synthesis system and in particular to the wave field synthesis rendering device, or supplies the audio file to the audio object manipulation device 3 via a data connection 5b.
  • In the latter case, the audio object manipulation device 3 feeds a data stream via its output line 6a to the wave field synthesis system 0 and in particular to the individual renderer modules or the single renderer module, which data stream comprises both the actual start points and/or end points of the audio object determined by the manipulation device, or the corresponding position, and also the audio data itself.
  • the audio object manipulation device 3 is supplied with the scene description by the device 1 via an input line 6b, while the load situation of the wave field synthesis system 0 is supplied by the monitoring monitor 2 via a further input line 6c.
  • The monitoring monitor 2 is also connected via a monitoring line 7 to the wave field synthesis system 0 in order, depending on the situation, to check, for example, how many sources are being processed in a renderer module and whether a capacity limit has been reached, or to check the current data rate on line 6a, on data line 5a, or on another line within the wave field synthesis system.
  • the load situation does not necessarily have to be the current utilization situation, but can also be a future utilization situation.
  • This implementation is preferred in that the variability of the individual audio objects relative to one another can then be scheduled or manipulated with a view to avoiding future overload peaks, e.g. when a current variation within a time span helps to avoid an overload peak only at some future point.
  • The efficiency of the concept according to the invention becomes ever greater the more sources exist which do not have fixed starting points or end points, but starting points or end points provided with a time span, or which have no fixed source positions, but source positions provided with a spatial span.
  • The audio object manipulation device 3 would then set the position of this virtual source, whose actual position is insignificant for the audio impression or the audio scene, so that it is processed by a renderer other than the front renderer, i.e. so that the front renderer is not burdened, but instead another renderer that is not operating at its capacity limit anyway.
  • an audio object should specify the audio file that effectively represents the audio content of a virtual source.
  • the audio object does not need to include the audio file, but may have an index pointing to a defined location in a database where the actual audio file is stored.
  • an audio object preferably comprises an identification of the virtual source, which may be, for example, a source number or a meaningful file name, etc.
  • Preferably, the audio object specifies a time span for the beginning and/or the end of the virtual source, that is, of the audio file. Specifying only a time span for the start means that the actual starting point of the rendering of this file by the renderer can be changed within the time span. If a time span is additionally specified for the end, this means that the end can also be varied within the time span, which, depending on the implementation, will generally lead to a variation of the audio file also in terms of its length. Various implementations are possible, such as defining the start/end times of an audio file so that, although the starting point may be shifted, the length must in no case be changed, so that the end of the audio file is automatically shifted as well.
  • It is preferred to keep the end variable, since it is typically not problematic if, for example, a wind noise starts somewhat sooner or later, or if it ends somewhat earlier or later.
  • Further specifications are possible or desired depending on the implementation, such as specifying that the starting point may be varied but not the end point, etc.
  • Preferably, an audio object further comprises a location span for the position. For certain audio objects it is irrelevant whether they come, for example, from the front left or the front center, or whether they are shifted by a (small) angle with respect to a reference point in the playback room.
  • There are also audio objects, especially from the noise domain, which can be positioned at any desired location and thus have a maximal spatial span, which can be coded, for example, by a code for "random" or by no code (implicitly) in the audio object.
  • An audio object may include other information, such as an indication of the type of virtual source, that is, whether the virtual source is a point source for sound waves, a source of plane waves, or a source of arbitrarily shaped wavefronts, provided the renderer modules are able to process such information.
  • FIG. 3 shows, by way of example, a schematic representation of a scene description in which the time sequence of various audio objects AO1,... AOn + 1 is shown.
  • Particular attention is drawn to the audio object AO3, for which a time span is defined, as shown in Fig. 3.
  • Both the start point and the end point of the audio object AO3 in Fig. 3 can be shifted within the time span.
  • For audio object AO3, it is defined that the length may not be changed; however, this can be set variably from audio object to audio object.
  • The audio object AO3 is shifted by the audio object manipulation device 3 so that no capacity is exceeded and thus no suppression of the audio object AO3 takes place any longer.
  • a scene description is used that has relative indications.
  • The flexibility is increased in that the beginning of the audio object AO2 is no longer given as an absolute time, but as a time span relative to the audio object AO1.
  • A relative description of the location information is also preferred: not that an audio object is to be arranged at a certain position xy in the playback room, but, for example, as a vector offset to another audio object or to a reference object.
  • The time span information can be recorded very efficiently, namely simply by setting the time span such that it expresses that the audio object AO3 can begin, for example, in a period between two minutes and two minutes 20 seconds after the start of the audio object AO1.
  • the spatial / temporal output objects of each scene are modeled relative to one another.
  • the audio object manipulation device achieves a transfer of these relative and variable definitions into an absolute spatial and temporal order.
  • This order represents the output schedule obtained at the output 6a of the system shown in FIG. 1 and defines how the renderer module in particular is addressed in the wave field synthesis system.
  • the schedule is thus an output schedule that arranges the audio data according to the output conditions.
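The conversion of relative, span-based definitions into an absolute output schedule can be sketched as a recursive resolution step. The dictionary encoding, the field names and the function name are assumptions; the patent describes the idea of relative spans (e.g. AO3 starting two minutes to two minutes 20 seconds after AO1), not a concrete format.

```python
def to_absolute(scene):
    """Resolve relative start spans into absolute (earliest, latest) starts.

    scene: dict mapping object id -> {'relative_to': id or None,
                                      'offset_span': (lo, hi)}.
    An object with relative_to=None is anchored at the absolute time
    given directly by its offset_span.
    """
    resolved = {}

    def resolve(oid):
        if oid in resolved:
            return resolved[oid]
        obj = scene[oid]
        lo, hi = obj['offset_span']
        parent = obj.get('relative_to')
        if parent is not None:
            plo, phi = resolve(parent)
            # child's span shifts with the parent's possible starts
            lo, hi = plo + lo, phi + hi
        resolved[oid] = (lo, hi)
        return resolved[oid]

    for oid in scene:
        resolve(oid)
    return resolved
```

The absolute spans produced this way are what the audio object manipulation device would then narrow to single actual values when emitting the output schedule on line 6a.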
  • Fig. 4 shows a data stream which is transmitted from left to right, i.e. from the audio object manipulation device 3 of Fig. 1 to one or more wave field synthesis renderers of the wave field synthesis system 0 of Fig. 1.
  • For each audio object, the data stream initially comprises a header H, in which the position information and the time information are located, followed by an audio file for the specific audio object, designated in Fig. 4 as AO1 for the first audio object, AO2 for the second audio object, etc.
  • A wave field synthesis renderer then receives the data stream and detects, e.g. from existing and agreed synchronization information, that a header is now coming. Based on further synchronization information, the renderer then recognizes that the header is over. Alternatively, a fixed length in bits can be agreed for each header.
  • After receiving the header, in the preferred embodiment of the present invention shown in Fig. 4, the audio renderer automatically knows that the subsequent audio file, e.g. AO1, belongs to the audio object, that is, to the source position identified in the header.
  • Fig. 4 shows a serial data transmission to a wave field synthesis renderer.
  • the renderer requires an input buffer preceded by a data stream reader to parse the data stream.
  • the data stream reader will then interpret the header and store the associated audio data so that when an audio object is to render, the renderer reads out the correct audio file and location from the input buffer.
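One possible framing of such a data stream, with a fixed-length header carrying position and time information followed by the audio payload, is sketched below. The exact header layout (`id`, start time, x, y, payload length) is an assumption; the patent only requires that the header fields, or a fixed header length in bits, be agreed between the central device and the renderer.

```python
import struct

# Hypothetical fixed-length header: source id, start time, x, y,
# payload length. Big-endian, no padding.
HEADER_FMT = '>IdddI'
HEADER_LEN = struct.calcsize(HEADER_FMT)

def pack_object(source_id, start, x, y, audio_bytes):
    """Serialize one audio object as header + audio payload."""
    header = struct.pack(HEADER_FMT, source_id, start, x, y,
                         len(audio_bytes))
    return header + audio_bytes

def parse_stream(data):
    """Split a serial data stream back into (header-dict, audio) pairs,
    as a data stream reader in front of the renderer's input buffer
    would do."""
    objects = []
    off = 0
    while off < len(data):
        sid, start, x, y, n = struct.unpack_from(HEADER_FMT, data, off)
        off += HEADER_LEN
        audio = data[off:off + n]
        off += n
        objects.append(({'id': sid, 'start': start, 'pos': (x, y)}, audio))
    return objects
```

A renderer's data stream reader would interpret each header this way and store the associated audio data so that the correct audio file and position can later be read from the input buffer.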
  • Other data for the data stream are of course possible.
  • a separate transmission of both the time / location information and the actual audio data may be used.
  • The present invention is thus based on an object-oriented approach, that is to say that the individual virtual sources are understood as objects which are characterized by an audio file and a virtual position in space, and possibly by the nature of the source, that is, whether it is a point source for sound waves, a source of plane waves, or a source of otherwise shaped wavefronts.
  • the calculation of the wave fields is very computation-intensive and tied to the capacities of the hardware used, such as sound cards and computers, in the interplay with the efficiency of the calculation algorithms.
  • Even the best-equipped PC-based solution quickly reaches its limits in the calculation of wave field synthesis when many demanding sound events are to be displayed simultaneously.
  • the capacity limit of the software and hardware used dictates the limitation on the number of virtual sources in the mixdown and playback.
  • Fig. 6 shows such a limited-capacity known wave field synthesis concept comprising an authoring tool 60, a control renderer module 62, and an audio server 64, wherein the control renderer module is configured to drive a loudspeaker array 66 so that the loudspeaker array 66 generates a desired wavefront 68 by superposition of the individual waves of the individual loudspeakers 70.
  • The authoring tool 60 allows the user to create and edit scenes and to control the wave field synthesis based system.
  • a scene consists of information about the individual virtual audio sources as well as the audio data.
  • the properties of the audio sources and the references to the audio data are stored in an XML scene file.
  • the audio data itself is stored on the audio server 64 and transmitted from there to the renderer module.
  • the renderer module receives the control data from the authoring tool so that the control renderer module 62, which is centrally executed, can generate the synthesis signals for the individual loudspeakers.
  • The concept shown in Fig. 6 is described in "Authoring System for Wave Field Synthesis", F. Melchior, T. Röder, S. Brix, S. Wabnik and C. Riegel, AES Convention Paper, 115th AES Convention, 10 October 2003, New York. If this wave field synthesis system is operated with multiple renderer modules, each renderer is supplied with the same audio data, regardless of whether the renderer needs this data for playback, given the limited number of speakers assigned to it. Since each of the current computers is capable of calculating 32 audio sources, this represents the limit for the system. On the other hand, the number of sources that can be rendered in the overall system should be increased significantly and efficiently. This is one of the essential requirements for complex applications, such as movies, scenes with immersive atmospheres such as rain or applause, or other complex audio scenes.
  • a reduction of redundant data transfer operations and data processing operations in a wave field synthesis multi-renderer system is achieved, which leads to an increase in the computing capacity or the number of simultaneously computable audio sources.
  • the audio server is extended by the data output device, which is able to determine which renderer needs which audio and metadata.
  • The data output device, possibly supported by the data manager, requires several pieces of information in a preferred embodiment: first the audio data, then the source and position data of the sources, and finally the configuration of the renderers, i.e. information about the connected loudspeakers and their positions and their capacity.
  • An output schedule is generated by the data output device with a temporal and spatial arrangement of the audio objects. From the spatial arrangement, the time schedule and the renderer configuration, the data management module then calculates which sources are relevant for which renderers at any given time.
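The relevance computation could, under simplifying assumptions, look like the following sketch, where a source is considered relevant to a renderer if it lies close enough to one of that renderer's loudspeakers. The distance criterion, the threshold and the function name are stand-ins; the real decision would follow from the renderer configuration and the WFS driving functions.

```python
def relevant_renderers(source_pos, renderers, max_distance):
    """Return the renderers that need the audio and metadata for a
    source at source_pos, so redundant transfers to the others can
    be avoided.

    renderers: dict name -> list of (x, y) loudspeaker positions.
    """
    sx, sy = source_pos
    needed = []
    for name, speakers in renderers.items():
        # relevant if any of the renderer's speakers is within range
        if any((sx - x) ** 2 + (sy - y) ** 2 <= max_distance ** 2
               for x, y in speakers):
            needed.append(name)
    return needed
```

Sending each source only to the renderers this test selects is what yields the reduction of redundant data transfer operations described above.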
  • the database 22 is supplemented on the output side by the data output device 24, wherein the data output device is also referred to as a scheduler.
  • This scheduler then generates at its outputs 20a, 20b, 20c for the various renderers 50 the renderer input signals in order to power the corresponding loudspeakers of the loudspeaker arrays.
  • The scheduler 24 is preferably also supported by a storage manager 52 in order to configure the database 22 by means of a RAID system and corresponding data organization specifications.
  • On the input side is a data generator 54, which may be, for example, a sound engineer or an audio engineer who models or describes an audio scene in an object-oriented manner. In this case, he provides a scene description that includes corresponding output conditions 56, which are then optionally stored in the database 22 together with audio data after a transformation 58.
  • the audio data may be manipulated and updated using an insert / update tool 59.
  • the method according to the invention can be implemented in hardware or in software.
  • the implementation may be on a digital storage medium, particularly a floppy disk or CD, with electronically readable control signals that may interact with a programmable computer system to perform the method.
  • The invention thus also consists in a computer program product with program code stored on a machine-readable carrier for carrying out the method when the computer program product runs on a computer.
  • the invention can be realized as a computer program with a program code for carrying out the method when the computer program runs on a computer.
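The relevance computation sketched in the bullets above — deciding, from the object positions, the time schedule and the renderer configuration, which sources each renderer must receive at a given instant — can be illustrated with a small sketch. This is not the patented implementation; the `AudioObject` and `Renderer` structures, the `reach` coverage radius, and the circular coverage test are assumptions made purely for illustration:

```python
from dataclasses import dataclass

@dataclass
class AudioObject:
    source_id: str
    start: float              # start time in seconds
    end: float                # end time in seconds
    position: tuple           # (x, y) source position in metres

@dataclass
class Renderer:
    renderer_id: str
    speaker_positions: list   # [(x, y), ...] of the connected loudspeakers
    reach: float              # assumed coverage radius per loudspeaker (illustrative)

def relevant_renderers(obj, renderers, t):
    """Return the IDs of the renderers for which this source is relevant at time t."""
    if not (obj.start <= t <= obj.end):
        return []                      # object is not active at this instant
    ox, oy = obj.position
    out = []
    for r in renderers:
        # the source is taken to be relevant for a renderer if any of its
        # loudspeakers lies within the assumed coverage radius of the source
        if any((sx - ox) ** 2 + (sy - oy) ** 2 <= r.reach ** 2
               for sx, sy in r.speaker_positions):
            out.append(r.renderer_id)
    return out
```

Evaluating such a predicate for every audio object over the timeline would yield the per-renderer output schedule that the scheduler then turns into renderer input signals.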

Landscapes

  • Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Acoustics & Sound (AREA)
  • Signal Processing (AREA)
  • Health & Medical Sciences (AREA)
  • Otolaryngology (AREA)
  • General Health & Medical Sciences (AREA)
  • Stereophonic System (AREA)
  • Control Of Metal Rolling (AREA)
  • Analysing Materials By The Use Of Radiation (AREA)
  • Paper (AREA)
  • Obtaining Desirable Characteristics In Audible-Bandwidth Transducers (AREA)

Abstract

According to the invention, in order to control a wave field synthesis rendering device arranged in a wave field synthesis system, a scene description is used (1) in which no absolute position or absolute time is defined for a source, but rather a time span or a position span within which the audio object may vary. A monitor (2) is also provided, which observes a utilization situation of the wave field synthesis system. Finally, an audio object manipulator (3) sets, within the time span or position span, the actual start point of the audio object to be observed by the wave field synthesis rendering device, or the actual position of the audio object, in order to avoid bottlenecks on the transmission lines or in the rendering device.
EP06706963A 2005-02-23 2006-02-15 Dispositif et procede pour reguler un dispositif de rendu de synthese de champ electromagnetique Not-in-force EP1723825B1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
DE102005008333A DE102005008333A1 (de) 2005-02-23 2005-02-23 Vorrichtung und Verfahren zum Steuern einer Wellenfeldsynthese-Rendering-Einrichtung
PCT/EP2006/001360 WO2006089667A1 (fr) 2005-02-23 2006-02-15 Dispositif et procede pour reguler un dispositif de rendu de synthese de champ electromagnetique

Publications (2)

Publication Number Publication Date
EP1723825A1 true EP1723825A1 (fr) 2006-11-22
EP1723825B1 EP1723825B1 (fr) 2007-11-07

Family

ID=36169151

Family Applications (1)

Application Number Title Priority Date Filing Date
EP06706963A Not-in-force EP1723825B1 (fr) 2005-02-23 2006-02-15 Dispositif et procede pour reguler un dispositif de rendu de synthese de champ electromagnetique

Country Status (7)

Country Link
US (1) US7668611B2 (fr)
EP (1) EP1723825B1 (fr)
JP (1) JP4547009B2 (fr)
CN (1) CN101129086B (fr)
AT (1) ATE377923T1 (fr)
DE (2) DE102005008333A1 (fr)
WO (1) WO2006089667A1 (fr)

Families Citing this family (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE102005008342A1 (de) * 2005-02-23 2006-08-24 Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. Vorrichtung und Verfahren zum Speichern von Audiodateien
DE102005033239A1 (de) * 2005-07-15 2007-01-25 Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. Vorrichtung und Verfahren zum Steuern einer Mehrzahl von Lautsprechern mittels einer graphischen Benutzerschnittstelle
US20110188342A1 (en) * 2008-03-20 2011-08-04 Fraunhofer-Gesellschaft Zur Foerderung Der Angewandten Forschung E.V. Device and method for acoustic display
US9706324B2 (en) * 2013-05-17 2017-07-11 Nokia Technologies Oy Spatial object oriented audio apparatus
CN106961647B (zh) 2013-06-10 2018-12-14 株式会社索思未来 音频再生装置以及方法
DE102014018858B3 (de) * 2014-12-15 2015-10-15 Alfred-Wegener-Institut Helmholtz-Zentrum für Polar- und Meeresforschung Hochdruckfeste Probenkammer für die Durchlicht-Mikroskopie und Verfahren zu deren Herstellung
EP3317878B1 (fr) 2015-06-30 2020-03-25 Fraunhofer Gesellschaft zur Förderung der Angewand Procédé et dispositif pour créer une base de données
CN105022024A (zh) * 2015-07-02 2015-11-04 哈尔滨工程大学 一种基于Helmholtz积分方程的结构噪声源识别方法
US11212637B2 (en) * 2018-04-12 2021-12-28 Qualcomm Incorporated Complementary virtual audio generation
US10764701B2 (en) * 2018-07-30 2020-09-01 Plantronics, Inc. Spatial audio system for playing location-aware dynamic content
CN113965842A (zh) * 2021-12-01 2022-01-21 费迪曼逊多媒体科技(上海)有限公司 一种基于wfs波场合成技术的可变声学家庭影院音响系统

Family Cites Families (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
NL8800745A (nl) * 1988-03-24 1989-10-16 Augustinus Johannes Berkhout Werkwijze en inrichting voor het creeren van een variabele akoestiek in een ruimte.
JPH07303148A (ja) * 1994-05-10 1995-11-14 Nippon Telegr & Teleph Corp <Ntt> 通信会議装置
EP0700180A1 (fr) * 1994-08-31 1996-03-06 STUDER Professional Audio AG Dispositif pour le traitement de signaux de audio numériques
GB2294854B (en) * 1994-11-03 1999-06-30 Solid State Logic Ltd Audio signal processing
JPH10211358A (ja) * 1997-01-28 1998-08-11 Sega Enterp Ltd ゲーム装置
JPH1127800A (ja) * 1997-07-03 1999-01-29 Fujitsu Ltd 立体音響処理システム
JP2000267675A (ja) * 1999-03-16 2000-09-29 Sega Enterp Ltd 音響信号処理装置
JP2004007211A (ja) * 2002-05-31 2004-01-08 Victor Co Of Japan Ltd 臨場感信号の送受信システム、臨場感信号伝送装置、臨場感信号受信装置、及び臨場感信号受信用プログラム
EP1552724A4 (fr) * 2002-10-15 2010-10-20 Korea Electronics Telecomm Procede de generation et d'utilisation de scene audio 3d presentant une spatialite etendue de source sonore
DE10254404B4 (de) * 2002-11-21 2004-11-18 Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. Audiowiedergabesystem und Verfahren zum Wiedergeben eines Audiosignals
US7706544B2 (en) 2002-11-21 2010-04-27 Fraunhofer-Geselleschaft Zur Forderung Der Angewandten Forschung E.V. Audio reproduction system and method for reproducing an audio signal
JP4601905B2 (ja) * 2003-02-24 2010-12-22 ソニー株式会社 デジタル信号処理装置およびデジタル信号処理方法
DE10321980B4 (de) * 2003-05-15 2005-10-06 Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. Vorrichtung und Verfahren zum Berechnen eines diskreten Werts einer Komponente in einem Lautsprechersignal
DE10321986B4 (de) * 2003-05-15 2005-07-14 Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. Vorrichtung und Verfahren zum Pegel-Korrigieren in einem Wellenfeldsynthesesystem
DE10344638A1 (de) * 2003-08-04 2005-03-10 Fraunhofer Ges Forschung Vorrichtung und Verfahren zum Erzeugen, Speichern oder Bearbeiten einer Audiodarstellung einer Audioszene

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See references of WO2006089667A1 *

Also Published As

Publication number Publication date
US20080008326A1 (en) 2008-01-10
EP1723825B1 (fr) 2007-11-07
CN101129086B (zh) 2011-08-03
ATE377923T1 (de) 2007-11-15
JP2008532372A (ja) 2008-08-14
DE502006000163D1 (de) 2007-12-20
CN101129086A (zh) 2008-02-20
WO2006089667A1 (fr) 2006-08-31
DE102005008333A1 (de) 2006-08-31
US7668611B2 (en) 2010-02-23
JP4547009B2 (ja) 2010-09-22

Similar Documents

Publication Publication Date Title
EP1844628B1 (fr) Procede et dispositif d&#39;amorçage d&#39;une installation de moteur de rendu de synthese de front d&#39;onde avec objets audio
EP1723825B1 (fr) Dispositif et procede pour reguler un dispositif de rendu de synthese de champ electromagnetique
EP1844627B1 (fr) Dispositif et procédé pour simuler un système de synthèse de champ d&#39;onde
EP1851998B1 (fr) Dispositif et procédé pour fournir des données dans un système a dispositifs de rendu multiples
DE10328335B4 (de) Wellenfeldsyntesevorrichtung und Verfahren zum Treiben eines Arrays von Lautsprechern
DE10254404B4 (de) Audiowiedergabesystem und Verfahren zum Wiedergeben eines Audiosignals
EP1652405B1 (fr) Dispositif et procede de production, de mise en memoire ou de traitement d&#39;une representation audio d&#39;une scene audio
EP1671516B1 (fr) Procede et dispositif de production d&#39;un canal a frequences basses
EP1972181B1 (fr) Dispositif et procédé de simulation de systèmes wfs et de compensation de propriétés wfs influençant le son
EP1525776B1 (fr) Dispositif de correction de niveau dans un systeme de synthese de champ d&#39;ondes
EP1789970B1 (fr) Procédé et dispositif pour mémoriser des fichiers audio
EP1606975B1 (fr) Dispositif et procede de calcul d&#39;une valeur discrete dans un signal de haut-parleur
DE102012017296A1 (de) Erzeugung von Mehrkanalton aus Stereo-Audiosignalen

Legal Events

Date Code Title Description
PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

17P Request for examination filed

Effective date: 20060727

AK Designated contracting states

Kind code of ref document: A1

Designated state(s): AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HU IE IS IT LI LT LU LV MC NL PL PT RO SE SI SK TR

AX Request for extension of the european patent

Extension state: AL BA HR MK YU

GRAP Despatch of communication of intention to grant a patent

Free format text: ORIGINAL CODE: EPIDOSNIGR1

GRAS Grant fee paid

Free format text: ORIGINAL CODE: EPIDOSNIGR3

GRAA (expected) grant

Free format text: ORIGINAL CODE: 0009210

AK Designated contracting states

Kind code of ref document: B1

Designated state(s): AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HU IE IS IT LI LT LU LV MC NL PL PT RO SE SI SK TR

REG Reference to a national code

Ref country code: GB

Ref legal event code: FG4D

Free format text: NOT ENGLISH

REG Reference to a national code

Ref country code: IE

Ref legal event code: FG4D

Free format text: LANGUAGE OF EP DOCUMENT: GERMAN

REG Reference to a national code

Ref country code: CH

Ref legal event code: EP

REF Corresponds to:

Ref document number: 502006000163

Country of ref document: DE

Date of ref document: 20071220

Kind code of ref document: P

RIN2 Information on inventor provided after grant (corrected)

Inventor name: SATTLER, KAI-UWE, PROF. DR.,

Inventor name: REICHELT, KATRIN

Inventor name: BRIX, SANDRA

Inventor name: HEINRICH, THOMAS

Inventor name: GATZSCHE, GABRIEL

ET Fr: translation filed
GBT Gb: translation of ep patent filed (gb section 77(6)(a)/1977)

Effective date: 20080307

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: SE

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20080207

Ref country code: NL

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20071107

Ref country code: ES

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20080218

NLV1 Nl: lapsed or annulled due to failure to fulfill the requirements of art. 29p and 29m of the patents act
PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: SI

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20071107

Ref country code: PL

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20071107

Ref country code: BG

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20080207

Ref country code: LV

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20071107

Ref country code: LT

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20071107

Ref country code: IS

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20080307

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: CZ

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20071107

Ref country code: DK

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20071107

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: SK

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20071107

Ref country code: RO

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20071107

BERE Be: lapsed

Owner name: TU ILMENAU

Effective date: 20080228

Owner name: FRAUNHOFER-GESELLSCHAFT ZUR FORDERUNG DER ANGEWAN

Effective date: 20080228

PLBE No opposition filed within time limit

Free format text: ORIGINAL CODE: 0009261

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: NO OPPOSITION FILED WITHIN TIME LIMIT

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: PT

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20080407

REG Reference to a national code

Ref country code: IE

Ref legal event code: FD4D

26N No opposition filed

Effective date: 20080808

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: MC

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20080228

Ref country code: IE

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20071107

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: EE

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20071107

Ref country code: GR

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20080208

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: FI

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20071107

Ref country code: BE

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20080228

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: AT

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20080215

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: CY

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20071107

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: LU

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20080215

Ref country code: HU

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20080508

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: TR

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20071107

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: IT

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20080229

REG Reference to a national code

Ref country code: FR

Ref legal event code: PLFP

Year of fee payment: 11

REG Reference to a national code

Ref country code: FR

Ref legal event code: PLFP

Year of fee payment: 12

REG Reference to a national code

Ref country code: FR

Ref legal event code: PLFP

Year of fee payment: 13

PGFP Annual fee paid to national office [announced via postgrant information from national office to epo]

Ref country code: GB

Payment date: 20220221

Year of fee payment: 17

Ref country code: DE

Payment date: 20220217

Year of fee payment: 17

Ref country code: CH

Payment date: 20220221

Year of fee payment: 17

PGFP Annual fee paid to national office [announced via postgrant information from national office to epo]

Ref country code: FR

Payment date: 20220221

Year of fee payment: 17

REG Reference to a national code

Ref country code: DE

Ref legal event code: R119

Ref document number: 502006000163

Country of ref document: DE

REG Reference to a national code

Ref country code: CH

Ref legal event code: PL

GBPC Gb: european patent ceased through non-payment of renewal fee

Effective date: 20230215

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: LI

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20230228

Ref country code: CH

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20230228

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: GB

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20230215

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: GB

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20230215

Ref country code: FR

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20230228

Ref country code: DE

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20230901