EP1872620B1 - Device and method for controlling a plurality of loudspeakers by means of a graphical user interface - Google Patents

Device and method for controlling a plurality of loudspeakers by means of a graphical user interface

Info

Publication number
EP1872620B1
EP1872620B1 (application EP06762422A)
Authority
EP
European Patent Office
Prior art keywords
directional
source
path
compensation
parameter
Prior art date
Legal status
Not-in-force
Application number
EP06762422A
Other languages
German (de)
English (en)
Other versions
EP1872620A1 (fr)
EP1872620B9 (fr)
Inventor
Michael Strauss
Michael Beckinger
Thomas Röder
Frank Melchior
Gabriel Gatzsche
Katrin Reichelt
Joachim Deguara
Martin Dausel
René RODIGAST
Current Assignee
Fraunhofer Gesellschaft zur Forderung der Angewandten Forschung eV
Original Assignee
Fraunhofer Gesellschaft zur Forderung der Angewandten Forschung eV
Priority date
Filing date
Publication date
Application filed by Fraunhofer Gesellschaft zur Forderung der Angewandten Forschung eV
Publication of EP1872620A1
Application granted
Publication of EP1872620B1
Publication of EP1872620B9
Not-in-force
Anticipated expiration

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04R: LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R3/00: Circuits for transducers, loudspeakers or microphones
    • H04R3/12: Circuits for transducers, loudspeakers or microphones for distributing signals to two or more loudspeakers
    • H04S: STEREOPHONIC SYSTEMS
    • H04S7/00: Indicating arrangements; Control arrangements, e.g. balance control
    • H04S7/40: Visual indication of stereophonic sound image
    • H04R27/00: Public address systems
    • H04S2420/00: Techniques used in stereophonic systems covered by H04S but not provided for in its groups
    • H04S2420/13: Application of wave-field synthesis in stereophonic audio systems
    • H04S3/00: Systems employing more than two channels, e.g. quadraphonic

Definitions

  • The present invention relates to audio engineering, and more particularly to the positioning of sound sources in systems comprising delta stereophonic systems (DSS), wave field synthesis systems, or both.
  • Typical public address systems for supplying a relatively large environment, such as a conference room on the one hand or a concert hall or an open-air venue on the other, all suffer from the problem that, due to the small number of loudspeaker channels commonly used, a faithful spatial reproduction of the sound sources is ruled out from the start. But even if a left channel and a right channel are used in addition to the mono channel, one always faces the problem of level: the back seats, far away from the stage, must be supplied with sound just as well as the seats close to the stage.
  • If, for example, a single monaural loudspeaker is used in a conference room, it does not allow directional perception; it only allows directional perception if the location of the loudspeaker happens to coincide with the direction of the source. This is inherent in there being only one loudspeaker channel. Even with two stereo channels, one can at most pan back and forth between the left and the right channel. This may be adequate if there is only one source. With several sources, however, localization is only roughly possible, and only in a small area of the auditorium. Stereo does give a directional impression, but only in the sweet spot; with several sources this directional impression becomes more and more blurred as the number of sources increases.
  • The loudspeakers in such medium to large auditoriums, which are supplied with stereo or mono mixes, are arranged above the listeners, so that they cannot reproduce any directional information of the source in the first place.
  • In addition, support loudspeakers are used, which are positioned near a sound source, in an attempt to restore the natural localization.
  • These support loudspeakers are normally driven without delay, while the stereo sound is delayed through the supply loudspeakers, so that the support loudspeaker is perceived first and localization is thus possible according to the law of the first wavefront.
  • Support loudspeakers, however, have the problem that they are perceived as a point source. This leads, on the one hand, to a difference from the actual position of the sound emitter and, moreover, there is the danger that everything is too loud for the front listeners while everything is too quiet for the back listeners.
  • Support loudspeakers only allow a real directional perception when the sound source, e.g. a talker, is located directly in the vicinity of the support loudspeaker. This would work if a support loudspeaker were installed in the lectern, a talker always stood at the lectern, and it were impossible in this reproduction room for somebody to stand next to the lectern and perform for the audience.
  • Support loudspeaker setups usually use conventional loudspeakers, which in turn have the acoustic properties of a point source, just like the supply loudspeakers, resulting in an excessive level in the immediate vicinity of the systems which is often perceived as unpleasant.
  • The aim is to create an auditory perception of source positions for public address scenarios such as those in the theater/drama area, whereby conventional sound systems, which are only designed to cover the entire audience area with sufficient loudness, are to be supplemented by directional loudspeaker systems and their control.
  • medium to large auditoriums are supplied with stereo or mono and occasionally with 5.1 surround technology.
  • The loudspeakers are located beside or above the listeners and can reproduce correct directional information of the sources only for a small audience area; most listeners get a wrong directional impression.
  • DD 242954 A3 discloses a large-capacity public address system for larger rooms and areas in which the action or performance rooms and the reception or listening rooms are directly adjacent to each other or identical.
  • The sound reinforcement is carried out according to running-time (delay) principles. Misalignments and jump effects during movements, which are particularly disturbing for important solo sound sources, are avoided by realizing a running-time staggering without restricted source areas and by taking the sound power of the sources into account.
  • A control device, which is connected to the delay or amplification means, controls these analogously to the sound paths between the source and the sound radiator locations. For this purpose, the position of a source is measured and used to adjust the gain and delay of the loudspeakers accordingly.
  • A playback scenario comprises a plurality of mutually delimited loudspeaker groups, each of which is controlled separately.
  • Delta stereophony means that one or more directional loudspeakers are present in the vicinity of the real sound source (e.g. on a stage), which provide localization in large parts of the audience area, so that an almost natural directional perception is possible. The supply loudspeakers are delayed relative to the directional loudspeaker to preserve this localization reference: the direction-giving loudspeaker is always perceived first and localization is thus possible. This relationship is also referred to as the "law of the first wavefront".
  • The support loudspeakers are perceived as a point source. A difference from the actual position of the sound emitter, that is, the original source, arises when e.g. a soloist is not directly in front of or next to the support loudspeaker but is located away from it.
  • Wave field synthesis systems can be used to achieve a real directional reference via virtual sound sources.
  • Wave field synthesis (WFS), applied to acoustics, allows any shape of an incoming wavefront to be simulated by a large number of loudspeakers arranged side by side (a so-called loudspeaker array).
  • For this, the audio signals of each loudspeaker must be fed with a time delay and amplitude scaling such that the radiated sound fields of the individual loudspeakers superimpose correctly.
  • The contribution to each loudspeaker is calculated separately for each source and the resulting signals are added together. If the sources to be reproduced are located in a room with reflective walls, reflections must also be reproduced as additional sources via the loudspeaker array. The computational cost therefore depends heavily on the number of sound sources, the reflection characteristics of the recording room and the number of loudspeakers.
  • The particular advantage of this technique is that a natural spatial sound impression is possible over a large area of the playback room.
  • the direction and distance of sound sources are reproduced very accurately.
  • virtual sound sources can even be positioned between the real speaker array and the listener.
  • While wave field synthesis works well for environments whose characteristics are known, irregularities occur when the nature of the environment changes or when wave field synthesis is performed based on environmental conditions that do not match the actual nature of the environment.
  • An environmental condition can be described by the impulse response of the environment.
  • Space compensation using wave field synthesis would consist in first determining the reflection of a wall, that is, when a sound signal reflected from the wall arrives back at the loudspeaker and what amplitude this reflected sound signal has. If the reflection from this wall is undesirable, wave field synthesis makes it possible to eliminate it by impressing on the loudspeaker, in addition to the original audio signal, a signal of opposite amplitude to the reflection signal, so that the traveling compensating wave cancels the reflection wave and the reflection from this wall is eliminated in the environment under consideration. This can be done by first computing the impulse response of the environment and determining the nature and position of the wall based on it, the wall being interpreted as a mirror source, that is, a sound source reflecting incident sound.
  • Wave field synthesis (WFS, or sound field synthesis), as developed at TU Delft in the late 1980s, represents a holographic approach to sound reproduction. Its basis is the Kirchhoff-Helmholtz integral, which states that any sound field within a closed volume can be generated by means of a distribution of monopole and dipole sound sources (loudspeaker arrays) on the surface of this volume. Details can be found in MM Boone, ENG Verheijen, PF.
  • In wave field synthesis, a synthesis signal for each loudspeaker of the loudspeaker array is calculated from an audio signal emitted by a virtual source at a virtual position, the synthesis signals being designed with respect to amplitude and phase such that the wave resulting from the superposition of the individual sound waves output by the loudspeakers present in the loudspeaker array corresponds to the wave that would originate from the virtual source at the virtual position if this virtual source were a real source at a real position.
  • multiple virtual sources exist at different virtual locations.
  • The computation of the synthesis signals is performed for each virtual source at each virtual position, so that one virtual source typically results in synthesis signals for multiple loudspeakers. Seen from a loudspeaker, this loudspeaker thus receives several synthesis signals going back to different virtual sources. A superposition of these signals, possible due to the linear superposition principle, then yields the reproduction signal actually emitted by the loudspeaker.
  • The quality of the audio reproduction increases with the number of loudspeakers provided: the more loudspeakers are present in the loudspeaker array(s), the better and more realistic the audio reproduction becomes.
  • the finished and analog-to-digital converted reproduction signals for the individual loudspeakers could, for example, be transmitted via two-wire lines from the wave field synthesis central unit to the individual loudspeakers.
  • Moreover, the wave field synthesis central unit could only ever be built for one special reproduction room or for a reproduction with a fixed number of loudspeakers.
  • Delta stereophony is particularly problematic in that position artifacts due to phase and level errors occur when fading between different sound sources, and phase errors and mislocalizations occur at different movement speeds of the sources. Moreover, crossfading from one support loudspeaker to another involves a great deal of programming effort, while at the same time it is difficult to keep an overview of the entire audio scene, especially when multiple sources are faded across different support loudspeakers and when many independently controllable support loudspeakers exist.
  • On the other hand, delta stereophony is much less expensive in terms of calculating the loudspeaker signals than wave field synthesis.
  • Moreover, wave field synthesis arrays cannot be used everywhere because of their space requirements and the requirement for an array of closely spaced loudspeakers.
  • On the other hand, wave field synthesis does not predetermine a fixed grid of supporting loudspeakers; instead, a virtual source can move continuously.
  • A support loudspeaker cannot move. However, the movement of the support loudspeaker can be generated virtually by directional crossfading.
  • In delta stereophony, the number of possible support loudspeakers that can be housed in a stage is limited for reasons of expense (depending on the set) and for reasons of sound management.
  • Each support loudspeaker, if it is to work on the principle of the first wavefront, requires additional loudspeakers that produce the necessary volume. This is precisely the principle of delta stereophony: a relatively small and thus well placed loudspeaker provides the localization, while many other nearby loudspeakers produce the necessary volume for listeners who may sit quite far back in a relatively large audience room.
  • Each directional area has a localization loudspeaker (or a small group of simultaneously controlled localization loudspeakers) which is driven without, or with only a slight, delay, while the other loudspeakers of the directional group receive the same signal, but delayed, so that they produce the required volume after the localization loudspeaker has delivered the well-defined localization.
  • Since each directional area, in addition to the localization loudspeaker, also requires enough loudspeakers to generate sufficient volume, the number of directional areas is limited when a stage space is divided into contiguous, non-overlapping directional areas, each directional area being assigned a localization loudspeaker or a small group of closely adjacent localization loudspeakers.
  • Typical delta stereophonic concepts are based on blending between two locations when a source is to move from one location to another. This concept is problematic when, for example, a programmed set-up has to be intervened in manually, or when an error correction has to take place. If, for example, a singer does not follow the agreed route across the stage but deviates from it, there will be an increasing difference between the perceived position and the actual position of the singer, which is of course not desirable.
  • The object of the present invention is to provide a flexible yet artifact-reduced concept for controlling a plurality of loudspeakers.
  • This object is achieved by a device for controlling a plurality of loudspeakers according to claim 1, a method for controlling a plurality of loudspeakers according to claim 15 or a computer program according to claim 16.
  • the present invention is based on the recognition that an artifact-reduced and rapid manual intervention in the course of the movement of sources is achieved by allowing a compensation path on which a source can move.
  • The compensation path differs from the normal source path in that it does not begin at a directional group position but at a connecting line between two directional groups, at any point on that connecting line, and extends from there to a new destination directional group.
  • The source position must then be described by at least three directional groups. In a preferred embodiment of the present invention, a position description of the source comprises an identification of the three directional groups involved and two blend factors, where the first blend factor indicates at which point the source path was "bent off", and the second blend factor indicates where the source currently is on the compensation path, that is, how far the source has already moved away from the source path or how far it still has to travel to the new target directional group.
  • According to the invention, the calculation of the weighting factors for the loudspeakers of the three directional areas involved takes place based on the source path, the stored value of the source path parameter, and information about the compensation path.
  • the information about the compensation path may include the new target per se or the second blend factor.
  • For the movement on the compensation path, a predefined speed may be used, which may be set by the system, since this movement is typically a compensating movement which does not depend on the audio scene but exists to change or correct something in a preprogrammed scene. For this reason, the velocity of the audio source on the compensation path will typically be relatively high, but not so high that problematic audible artifacts occur.
  • In one embodiment, the means for calculating the weighting factors is configured to calculate weighting factors that depend linearly on the blend factors.
  • Alternative concepts, such as non-linear dependencies in the form of a sine² function or a cosine² function, can also be used.
  • In a preferred embodiment, the device for controlling the plurality of loudspeakers further comprises a jump compensator, which preferably operates hierarchically based on various provided compensation strategies, in order to avoid a hard jump of the source by means of a jump compensation path.
  • A preferred embodiment is based on moving away from strictly adjoining directional areas that define the "raster" of well-localizable motion points on a stage. The requirement that directional areas must not overlap, so that clear control conditions exist, limited the number of directional areas, because each directional area needed, in addition to the localization loudspeaker, a sufficiently large number of loudspeakers to produce sufficient volume alongside the first wavefront generated by the localization loudspeaker.
  • Instead, the stage space is divided into overlapping directional areas, creating the situation that a loudspeaker may belong not only to a single directional area but to a plurality of directional areas, such as at least a first and a second directional area and, if applicable, a third or a fourth directional area.
  • The affiliation of a loudspeaker to a directional area manifests itself in that, if the loudspeaker belongs to a directional area, it is assigned a specific loudspeaker parameter value determined by that directional area.
  • a speaker parameter may be a delay which will be small for the localization speakers of the directional area and will be greater for the other speakers of the directional area.
  • Another parameter can be a scaling or a filter curve, which can be determined by a filter parameter (equalizer parameter).
  • Thus, each loudspeaker on a stage will have its own loudspeaker parameter values, depending on which directional areas it belongs to.
  • These loudspeaker parameter values, which depend on which directional area the loudspeaker belongs to, are typically determined heuristically in a sound check by a sound engineer, partly empirically for a particular room, and are then entered when the system is set up.
  • For a loudspeaker belonging to two directional areas, a loudspeaker parameter therefore has two different values: a loudspeaker belonging to directional area A would have a first delay DA, while the same loudspeaker, insofar as it belongs to directional area B, would have a different delay value DB.
  • Both loudspeaker parameter values are used to calculate the audio signal for this loudspeaker and for the currently considered audio source.
  • The actually irresolvable contradiction, namely that a loudspeaker has two different delay settings, scaling settings or filter settings, is eliminated by using the loudspeaker parameter values of all directional groups involved for calculating the audio signal to be output by the loudspeaker.
  • The calculation of the audio signal depends on the distance measure, that is, on the spatial position of the source between the two directional group positions.
  • The distance measure will typically be a factor between zero and one, where a factor of zero means that the source is at directional group position A, while a factor of one means that the source is at directional group position B.
  • For fast transitions, a crossfade is preferred; it leads to comb filter effects, but these are not or barely audible due to the fast transition.
  • For slow transitions, interpolation is preferred in order to avoid the comb filter effects associated with slow crossfades, which would be clearly audible.
  • The switching of a parameter value is thus not made abruptly, i.e. from one sample to the next; instead, a transition is effected, controlled by a switching parameter, within a fade range comprising multiple samples, based on a fade function that is preferably linear but may also be non-linear, e.g. trigonometric.
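  • As an illustration of this soft switching, a minimal Python sketch follows; the function name and the fade length of 256 samples are illustrative assumptions, not values from the patent:

        import numpy as np

        def ramped_parameter(old_value: float, new_value: float, fade_len: int) -> np.ndarray:
            # Instead of jumping from old_value to new_value between two samples,
            # a linear fade function spreads the transition over fade_len samples.
            t = np.linspace(0.0, 1.0, fade_len)           # switching parameter, 0 -> 1
            return (1.0 - t) * old_value + t * new_value  # per-sample parameter values

        # Example: a scaling factor moves from 0.8 to 0.5 over 256 samples.
        gains = ramped_parameter(0.8, 0.5, 256)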
  • A graphical user interface is provided that graphically illustrates paths of a sound source from one directional area to another directional area.
  • Compensation paths are also taken into account in order to allow rapid changes in the path of a source and to avoid hard jumps of sources, as might occur at scene breaks.
  • The compensation path ensures that the path of a source can be changed not only when the source is at a directional group position, but also when the source is between two directional group positions; a source can thus leave its programmed path between two directional group positions. This is achieved in particular by the fact that the position of a source can be defined by three (adjacent) directional areas, namely by an identification of the three directional areas and the indication of two blend factors.
  • In one embodiment, a wave field synthesis array is mounted in the sound space; it also represents a directional area whose directional group position is given by a virtual position (e.g. in the center of the array).
  • The result is a user-friendly and flexible system which allows a flexible division of a space into directional groups, since directional group overlaps are allowed, loudspeakers in such an overlap zone being supplied with loudspeaker parameters derived from the loudspeaker parameters of the corresponding directional areas; this derivation is preferably done by interpolation or crossfading.
  • Alternatively, a hard decision could be made: for example, when the source is closer to one directional area, take that loudspeaker parameter value, and when the source is closer to the other directional area, take the other loudspeaker parameter value; the hard jump then occurring could simply be smoothed for artifact reduction.
  • However, blending or interpolation controlled by the distance measure is preferred.
  • Fig. 1 shows a schematic representation of a stage space divided into three directional areas RGA, RGB and RGC. Each directional area comprises a geometric region 10a, 10b, 10c of the stage; the exact region boundaries are not critical. What matters is which loudspeakers are located in the various regions shown in Fig. 1. Loudspeakers located in region I belong, in the example shown in Fig. 1, only to directional group A, the position of directional group A being designated 11a.
  • The directional group RGA is assigned the position 11a, at which preferably that loudspeaker of directional group A is located which, according to the law of the first wavefront, has a delay smaller than the delays of all other loudspeakers assigned to directional group A.
  • In region II there are loudspeakers assigned only to the directional group RGB, which by definition has a directional group position 11b at which the support loudspeaker of directional group RGB is located, having a smaller delay than all other loudspeakers of directional group RGB.
  • The same holds for the directional group RGC, which by definition has a position 11c at which the support loudspeaker of directional group RGC is arranged, transmitting with a shorter delay than all other loudspeakers of directional group RGC.
  • The division of the stage space into directional areas in Fig. 1 further shows a region IV in which loudspeakers are arranged that are assigned to both the directional group RGA and the directional group RGB. Correspondingly, a region V exists in which loudspeakers are arranged that are assigned to both the directional group RGA and the directional group RGC.
  • each speaker in a stage setting is assigned a speaker parameter or a plurality of speaker parameters by the sound engineer or sound director.
  • As shown in column 12 of Fig. 2a, loudspeaker parameters include a delay parameter, a scale parameter and an EQ filter parameter.
  • The delay parameter D indicates by how much an audio signal output by this loudspeaker is delayed relative to a reference value (which may hold for another loudspeaker but need not exist physically).
  • The scale parameter indicates by how much an audio signal output by this loudspeaker is amplified or attenuated compared to a reference value.
  • The EQ filter parameter specifies what the frequency response of an audio signal to be output by a loudspeaker should look like.
  • The EQ filter parameter would, for example, indicate a frequency response in which the high frequencies are attenuated with respect to the low frequencies.
  • any frequency response can be set for each speaker via an EQ filter parameter.
  • The audio signal for a loudspeaker in regions I, II and III is simply calculated taking into account the corresponding loudspeaker parameter or parameters.
  • For loudspeakers in the overlap regions, each loudspeaker parameter has two associated parameter values. If only the loudspeakers in the directional group RGA are active, i.e. if a source is located exactly at directional group position A (11a), only the loudspeakers of directional group A will play for this audio source; in this case, the column of parameter values assigned to directional group RGA is used to calculate the audio signal for the loudspeaker.
  • If the source is located between two directional group positions, however, the audio signal is calculated taking both parameter values into account and preferably taking the distance measure into account, as will be explained later.
  • In particular, an interpolation or crossfading between the parameter values Delay and Scale is made.
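  • A minimal Python sketch of this interpolation, assuming the simple linear rule suggested by the distance measure (function and variable names are hypothetical):

        def interpolated_parameter(value_a: float, value_b: float, distance: float) -> float:
            # Blend the two parameter values (e.g. Delay or Scale) assigned to one
            # loudspeaker for directional groups A and B.  distance = 0 means the
            # source is at directional group position A, distance = 1 at position B.
            return (1.0 - distance) * value_a + distance * value_b

        # A loudspeaker with delay 12 ms in group A and 20 ms in group B, for a
        # source halfway between the two directional group positions:
        d_int = interpolated_parameter(12.0, 20.0, 0.5)  # -> 16.0 ms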
  • If the source additionally approaches directional group position C, the loudspeakers of the directional group RGC must also be active.
  • For the loudspeakers located in region VII, the calculation then takes into account the three typically different parameter values for the same loudspeaker parameter, while for region V and region VI the two loudspeaker parameter values applying to one and the same loudspeaker for the directional groups involved are taken into account.
  • Fig. 9a shows the case in which a source moves from the directional area A (11a) to the directional area C (11c).
  • The loudspeaker signal LsA for a loudspeaker in the directional area A is weighted with a factor S1 that, depending on the position of the source between A and C, i.e. on BlendAC in Fig. 9a, decreases linearly from 1 to 0, while at the same time the loudspeaker signal for the directional area C is attenuated less and less:
  • S2 increases linearly from 0 to 1.
  • The blending factors S1, S2 are chosen so that at any time the sum of the two factors equals 1.
  • Alternative transitions, such as non-linear transitions, can also be used.
  • For every BlendAC value, the sum of the blending factors for the loudspeakers concerned is equal to one.
  • Usable non-linear functions are, for example, a cos² function for the factor S1 and a sin² function for the weighting factor S2.
  • Other functions are known in the art.
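  • The following Python sketch shows both fade laws for the factors S1 and S2; in each case S1 + S2 = 1 at every position (the function name and signature are illustrative):

        import math

        def fade_factors(blend_ac: float, law: str = "linear") -> tuple:
            # Fade factors S1 (start area A) and S2 (target area C) for a source
            # at position blend_ac in [0, 1] on the path from A to C.
            if law == "linear":
                s1 = 1.0 - blend_ac
            else:  # trigonometric law: cos^2 for S1, sin^2 for S2
                s1 = math.cos(0.5 * math.pi * blend_ac) ** 2
            return s1, 1.0 - s1  # S1 + S2 == 1 for both laws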
  • Fig. 3a thus provides a complete fading rule for all loudspeakers in the regions I, II, III. It should also be noted that the parameters assigned to a loudspeaker in the table of Fig. 2a for the corresponding regions have already been included in the audio signal AS at the top right in Fig. 3a.
  • Fig. 3b shows, next to the regular case defined in Fig. 9a, in which a source is located on a connecting line between two directional areas and the exact location between the start and target directional areas is described by the blend factor AC, the compensation case, which occurs, for example, when the path of a source is changed while it is moving. The source must then be blended from its current position, which lies between two directional areas and is indicated by BlendAB in Fig. 3b, to a new position. This results in the compensation path designated 15b in Fig. 3b, while the (regular) path originally programmed between the directional areas A and B is referred to as the source path 15a.
  • Fig. 3b thus shows the case where, during a movement of the source from A to B, something changed, and the original programming is therefore modified so that the source now moves not into the area B but into the area C.
  • The equations shown below Fig. 3b represent the three weighting factors g1, g2, g3 which provide the fading behavior for the loudspeakers in the directional areas A, B, C. It should again be pointed out that the directional-area-specific loudspeaker parameters have already been taken into account in the audio signals AS for the individual directional areas. For the regions I, II, III, the audio signals ASA, ASB, ASC are calculated from the original audio signal AS simply by using the loudspeaker parameters stored for the corresponding loudspeaker in column 16a of Fig. 2a, before the final fading weighting with the weighting factors g1, g2, g3 is performed.
  • These weightings do not have to be split into different multiplications but will typically take place in one and the same multiplication, in which case the scale factor Sk is multiplied by the weighting factor g1 in order to obtain a multiplier that is finally multiplied by the audio signal to obtain the loudspeaker signal LSa.
  • For loudspeakers in overlap regions, the same weighting g1, g2, g3 is used, but in order to calculate the underlying audio signals ASa, ASb, ASc, an interpolation/blending of the loudspeaker parameter values specified for one and the same loudspeaker takes place instead, as explained below.
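  • Since the equations of Fig. 3b are not reproduced in this text, the following Python sketch uses one plausible linear rule consistent with the description (the weights sum to 1; the exact formulas in the patent may differ):

        def compensation_weights(blend_ab: float, blend_abc: float) -> tuple:
            # blend_ab fixes where the source left the source path A -> B,
            # blend_abc is its progress on the compensation path towards C.
            g3 = blend_abc                      # target area C fades in
            g1 = (1.0 - blend_ab) * (1.0 - g3)  # start area A fades out
            g2 = blend_ab * (1.0 - g3)          # intermediate area B fades out
            return g1, g2, g3                   # always sums to 1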
  • Fig. 4 shows a device for driving a plurality of loudspeakers, the loudspeakers being grouped into directional groups, a first directional group being assigned a first directional group position and a second directional group a second directional group position, at least one loudspeaker being assigned to both the first and the second directional group, and this loudspeaker being associated with a loudspeaker parameter that has a first parameter value for the first directional group and a second parameter value for the second directional group.
  • The apparatus first comprises means 40 for providing a source position between two directional group positions, for example between the directional group position 11a and the directional group position 11b, as indicated by BlendAB in Fig. 3b.
  • The device further comprises means 42 for calculating a loudspeaker signal for the at least one loudspeaker based on the first parameter value, provided via a first parameter value input 42a and applying to the directional group RGA, and on the second parameter value, provided via a second parameter value input 42b and applying to the directional group RGB. Furthermore, the means 42 for calculating receives the audio signal via an audio signal input 43 and supplies on the output side the loudspeaker signal for the relevant loudspeaker in region IV, V, VI or VII. The output of means 42 at output 44 will be the actual loudspeaker signal if the loudspeaker under consideration is active due to only a single audio source.
  • If the loudspeaker is active due to several audio sources 70a, 70b, 70c, as shown in Fig. 7, a component of the loudspeaker signal of the loudspeaker under consideration is calculated for each source by means of a processor 71, 72 or 73, and the N component signals are finally summed in a summer 74.
  • the temporal synchronization here takes place via a control processor 75, which, like the DSS processors 71, 72, 73, is preferably designed as a DSP (digital signal processor).
  • Instead of a DSP, application-specific hardware may also be used.
  • In Fig. 7, a sample-wise calculation is shown.
  • Summer 74 performs a sample-by-sample summation, the delta stereophony processors 71, 72, 73 likewise output sample by sample, and the audio signal is also provided sample by sample.
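  • A minimal Python sketch of this per-source summation (the function name and the 512-sample block length are illustrative assumptions):

        import numpy as np

        def summer_74(component_signals: list) -> np.ndarray:
            # Each DSS processor delivers, sample by sample, the component of the
            # loudspeaker signal caused by one audio source; the loudspeaker
            # signal is the sample-wise sum of all N components.
            return np.sum(component_signals, axis=0)

        # Three sources contributing to the same loudspeaker (512-sample block):
        components = [np.zeros(512), np.zeros(512), np.zeros(512)]
        speaker_signal = summer_74(components)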
  • However, all processing can be performed in the frequency domain as well, in which case summer 74 adds spectra together.
  • certain processing may be performed in the frequency domain or the time domain, depending on which implementation is more favorable for the particular application.
  • processing can also take place in the filter bank domain, which then requires an analysis filter bank and a synthesis filter bank.
  • With reference to Fig. 5, a more detailed embodiment of the means 42 for calculating a loudspeaker signal from Fig. 4 is explained.
  • The audio signal assigned to an audio source is first supplied to a filter blend block 44 via the audio signal input 43.
  • The filter blend block 44 is configured to take all three filter parameter settings EQ1, EQ2, EQ3 into account when a loudspeaker in region VII is considered.
  • The output of the filter blend block 44 then represents an audio signal which has been filtered in appropriate proportions and which, as will be described later, to some extent carries influences from the filter parameter settings of all three directional areas involved.
  • This audio signal at the output of the filter blend block 44 is then supplied to a delay processing stage 45.
  • The delay processing stage 45 is designed to generate a delayed audio signal whose delay is based on an interpolated delay value or, if no interpolation is possible, whose signal shape depends on the three delays D1, D2, D3.
  • the three delays associated with a loudspeaker for the three directional groups are provided to a delay interpolation block 46 to calculate an interpolated delay value D int , which is then fed to the delay processing block 45.
  • Finally, a scaling 47 is performed, the scaling 47 using a total scaling factor that depends on the three scaling factors associated with the same loudspeaker owing to the loudspeaker belonging to several directional groups.
  • This total scaling factor is calculated in a scaling interpolation block 48.
  • The scaling interpolation block 48 is also fed, as shown by an input 49, the weighting factor that describes the overall fading for the directional areas in connection with Fig. 3b, so that the scaling block 47 outputs the final loudspeaker signal component due to one source for a loudspeaker which, in the embodiment shown in Fig. 5, may belong to three different directional groups.
  • All loudspeakers of the other directional groups, apart from the three directional groups by which a source is defined, will not output signals for that source, but may of course be active for other sources.
  • The weighting factors may be used to interpolate the delay Dint or to interpolate the scaling factor S used for fading, as indicated by the equations set out in Fig. 5 next to blocks 45 and 47, respectively.
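  • A Python sketch of this weighted combination, corresponding to blocks 46 and 48 of Fig. 5, assuming the weights g1, g2, g3 sum to 1 (names and example values are illustrative):

        def interpolate_three(p1: float, p2: float, p3: float,
                              g1: float, g2: float, g3: float) -> float:
            # Combine the three parameter values a loudspeaker carries for the
            # three directional groups involved into one effective value.
            return g1 * p1 + g2 * p2 + g3 * p3

        d_int = interpolate_three(10.0, 14.0, 22.0, 0.5, 0.3, 0.2)  # interpolated delay
        s_tot = interpolate_three(1.0, 0.8, 0.6, 0.5, 0.3, 0.2)     # total scaling factor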
  • Fig. 6 shows a preferred embodiment of the present invention, implemented on a DSP.
  • The audio signal is provided via an audio signal input 43; when the audio signal is in an integer format, an integer/floating-point conversion is first performed in a block 60.
  • Fig. 6 also shows a preferred embodiment of the filter blend block 44 of Fig. 5. In particular, Fig. 6 includes the filters EQ1, EQ2, EQ3, whose transfer functions or impulse responses are controlled by corresponding filter coefficients via a filter coefficient input 440.
  • The filters EQ1, EQ2, EQ3 may be digital filters that convolve an audio signal with the impulse response of the corresponding filter, or there may be transform means, in which case spectral coefficients are weighted by frequency transfer functions.
  • The signals filtered with the equalizer settings EQ1, EQ2, EQ3, all of which go back to one and the same audio signal, as shown by a distribution point 441, are then weighted in respective scaling blocks with the weighting factors g1, g2, g3, and the results of the weightings are summed in a summer whose output feeds the delay processing 45 of Fig. 5.
  • The equalizer parameters EQ1, EQ2, EQ3 are not taken directly from the table of Fig. 2a; instead, an interpolation of the equalizer parameters is preferably carried out, which is done in a block 442.
  • Block 442 receives on the input side the equalizer coefficients associated with a loudspeaker, as shown by a block 443 in Fig. 6.
  • The interpolation task of the filter ramping block effectively performs a low-pass filtering of successive equalizer coefficients to avoid artifacts due to rapidly changing equalizer filter parameters EQ1, EQ2, EQ3.
  • The sources can thus be crossfaded over several directional areas, these directional areas being characterized by different equalizer settings. The crossfade between the different equalizer settings is performed by running all equalizers in parallel, as shown in block 44 of Fig. 6, and superimposing their outputs.
  • The weighting factors g1, g2, g3 used in block 44 for blending the equalizer settings are the weighting factors shown in Fig. 3b.
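  • A minimal Python sketch of this parallel filtering and weighted superposition; the filters are represented here by impulse responses and plain convolution, which is an assumption for illustration, not the DSP implementation of the patent:

        import numpy as np

        def filter_blend(audio: np.ndarray, irs: list, weights: list) -> np.ndarray:
            # Run the same audio signal through EQ1, EQ2, EQ3 in parallel,
            # scale each output with g1, g2, g3 and sum the scaled outputs.
            out = np.zeros(len(audio) + max(len(ir) for ir in irs) - 1)
            for ir, g in zip(irs, weights):
                y = np.convolve(audio, ir)
                out[:len(y)] += g * y
            return out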
  • They are provided by a weighting factor conversion block 61 that converts a position of a source into weighting factors for preferably three surrounding directional areas.
  • Block 61 is preceded by a position interpolator 62, which typically depends on an input of a start position (POS1), a target position (POS2) and the corresponding blend factors BlendAB and BlendABC used in Fig. 3b, and which calculates a current position depending on a motion velocity input at a current time.
  • the position input takes place in a block 63.
  • If current positions are supplied directly, the position interpolator does not have to be provided.
  • The position update rate is arbitrarily adjustable. For example, a new weighting factor could be calculated for each sample; however, this is not preferred. Instead, it has been found that the weighting factor update rate need only be a fraction of the sampling frequency, even with regard to meaningful artifact avoidance.
  • The scaling calculation, illustrated in Fig. 5 by blocks 47 and 48, is only partially shown in Fig. 6.
  • The calculation of the total scaling factor, performed in block 48 of Fig. 5, does not take place in the DSP of Fig. 6 but in an upstream control DSP.
  • The overall scaling factor, as shown by "Scales" 64, is therefore already an input; it is interpolated in a scaling/interpolation block 65 before a final scaling is performed in block 66a, after which the signal, as shown by block 67a, goes to the summer 74 of Fig. 7.
  • The device according to the invention allows two kinds of delay processing.
  • One delay processing is the delay mixing 451, while the other is the delay interpolation performed by an IIR all-pass 452.
  • In the delay mixing, the output of block 44, which has been stored in the ring buffer 450, is provided with three different delays explained below; the delays with which the delay blocks in block 451 are driven are the non-smoothed delays specified for a loudspeaker in the table explained with reference to Fig. 2a.
  • This is also made clear by a block 66b, which indicates that the directional group delays are input here, whereas in a block 67b not the directional group delays are input but, at any one time, only a single delay per loudspeaker, namely the interpolated delay value Dint generated by block 46 in Fig. 5.
  • The audio signal provided with three different delays in block 451 is then weighted, as shown in Fig. 6.
  • The weighting factors used here are preferably not the weighting factors generated by linear blending as shown in Fig. 3b. Instead, it is preferred to perform a loudness correction of the weights in a block 453 in order to achieve a non-linear three-dimensional crossfade. It has been found that the audio quality in the delay mixing then becomes better and freer of artifacts, although the weighting factors g1, g2, g3 could also be used to drive the scalers in the delay mixing block 451. The output signals of the scalers in the delay mixing block are then summed to obtain a delay-mix audio signal at an output 453a.
  • The delay processing of the invention (block 45 in Fig. 5) can also perform a delay interpolation.
  • For this, an audio signal having the (interpolated) delay provided via block 67b, additionally smoothed in the delay ramping block 68, is read from ring buffer 450.
  • The same audio signal, but delayed by one sample less, is also read.
  • These two audio signals, or the currently considered samples of the two audio signals, are then fed to an IIR filter for interpolation in order to obtain, at an output 453b, an audio signal generated by interpolation.
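  • The following Python sketch illustrates the interpolating read from the ring buffer; the patent interpolates the two samples with a first-order IIR all-pass (block 452), while a plain linear blend is shown here as the simplest stand-in:

        import numpy as np

        def read_fractional(ring: np.ndarray, write_pos: int, delay: float) -> float:
            # Read the sample at the next integer delay and the sample delayed by
            # one sample less, then blend them to realize the fractional delay.
            n = int(np.ceil(delay))       # next integer delay
            frac = n - delay              # distance to the ideal fractional delay
            size = len(ring)
            s_n = ring[(write_pos - n) % size]       # delayed by n samples
            s_n1 = ring[(write_pos - n + 1) % size]  # delayed by one sample less
            return (1.0 - frac) * s_n + frac * s_n1  # effective delay == delay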
  • The audio signal at output 453a exhibits slight filter artifacts due to the delay mixing.
  • The audio signal at output 453b, in contrast, is largely free of filter artifacts.
  • However, this audio signal may have frequency shifts. If the delay is interpolated from a long delay value to a short delay value, the frequency shift will be a shift to higher frequencies, while if the delay is interpolated from a short delay to a long delay, the frequency shift will be a shift to lower frequencies.
  • A crossfade block 457, controlled by a control signal that comes from block 65 and whose calculation will be discussed below, switches back and forth between the output 453a and the output 453b.
  • Block 65 thus controls whether block 457 forwards the result of the blending or of the interpolation, or in what proportion the two results are mixed.
  • For this purpose, the smoothed or filtered delay value from block 68 is compared with the non-smoothed one, and the (weighted) switching in block 457 is made depending on which is greater.
  • The block diagram in Fig. 6 also includes a branch for a static source that sits in one directional area and does not need to be crossfaded.
  • the delay for that source is the delay assigned to the speaker for that directional group.
  • The delay calculation algorithm therefore switches over depending on whether movements are slow or fast.
  • Consider the case where the same physical loudspeaker is present in two directional areas with different level and delay settings.
  • For slow movements, the level is crossfaded and the delay is interpolated by means of an all-pass filter, i.e. the signal is taken at the output 453b.
  • This interpolation of the delay results in a pitch change of the signal which, however, is not critical for slow changes.
  • If the speed of the interpolation exceeds a certain value, such as 10 ms per second, these pitch changes can be perceived.
  • The delay is then no longer interpolated; instead, the signals with the two constant, different delays are crossfaded, as shown in block 451. This will indeed cause comb filter artifacts, but these will not be audible due to the high crossfade speed.
  • The switching between the two outputs 453a and 453b thus takes place depending on the movement of the source or, more precisely, depending on the delay value to be interpolated: if much delay has to be interpolated in a certain period of time, the output 453a is switched through by block 457; if little delay has to be interpolated, the output 453b is taken.
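  • A Python sketch of this decision, including the soft fade region around the threshold described below; the 10 ms/s threshold is named in the text, while the width of the fade region is an assumed value:

        def delay_mix_ratio(delay_rate: float, threshold: float = 0.010,
                            fade_width: float = 0.002) -> float:
            # Returns the mix between interpolation output 453b (0.0, slow delay
            # changes) and delay-mix output 453a (1.0, fast delay changes).
            # delay_rate is the delay change per second, e.g. 0.010 = 10 ms/s.
            lo, hi = threshold - fade_width, threshold + fade_width
            if delay_rate <= lo:
                return 0.0                        # pure interpolation (453b)
            if delay_rate >= hi:
                return 1.0                        # pure delay mix (453a)
            return (delay_rate - lo) / (hi - lo)  # smooth transition in between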
  • The switching through block 457 does not take place hard, however.
  • Rather, block 457 is formed such that a fade range exists around the threshold. If the speed of the interpolation is exactly at the threshold, block 457 computes the output sample such that the current sample at output 453a and the current sample at output 453b are added and the result is divided by two. Block 457 therefore makes a smooth transition from output 453b to output 453a, or vice versa, within a fade range around the threshold.
  • This crossfade range can be made arbitrarily large, such that block 457 operates almost continuously in crossfade mode. For harder switching, the crossfade range can be made smaller, so that block 457 mostly switches through only either output 453a or only output 453b to the scaler 66a.
  • The crossfade block 457 is further configured to perform jitter suppression by means of a low-pass filter and a hysteresis of the delay change threshold. Due to the non-guaranteed latency of the control data flow between the configuration system and the DSP systems, jitter may occur in the control data, which may lead to artifacts in the audio signal processing. It is therefore preferred to compensate for this jitter by low-pass filtering the control data stream at the input of the DSP system. This method slows the control response, but very large jitter fluctuations can be compensated. If, however, different threshold values are used for switching from delay interpolation to delay crossfading and from delay crossfading to delay interpolation, the jitter in the control data can be handled as an alternative to low-pass filtering, without slowing the control response.
  • The crossfade block 457 is further configured to perform a control data manipulation when switching from delay interpolation to delay crossfading.
  • For this purpose, the crossfade block 457 is designed to keep the delay control data constant until the complete crossfade has been performed; only then is the delay control data adjusted to the actual value. With the aid of this control data manipulation, fast delay changes can be realized with a short control data reaction time and without audible pitch changes.
  • The drive system further includes a metering device 80 configured to perform a digital (virtual) metering per directional area/audio output. This is explained with reference to Figs. 11a and 11b: Fig. 11a shows an audio matrix 1110 considering the dynamic sources, while Fig. 11b shows the same audio matrix 1110 with special consideration of the static sources.
  • The DSP system, part of which is shown in Fig. 6, calculates for each matrix node of the audio matrix a delay and a level, the level scaling value being designated Amp in Fig. 11a and Fig. 11b, while the delay is designated "dynamic source delay interpolation" or "static source delay".
  • these settings are split into directional areas, and the directional areas are then assigned input signals.
  • Several input signals can also be assigned to a directional area.
  • For the directional areas, a metering is indicated by the block 80, which however is determined "virtually" from the levels at the nodes of the matrix and the corresponding weightings.
  • Metering 80 may also compute the overall level of a single sound source out of several sound sources across all directional areas that are active for that sound source; this result is obtained by summing the matrix nodes of an input source over all outputs. In contrast, the contribution of a directional group to a sound source is obtained by summing only over those outputs belonging to the directional group under consideration, while disregarding the other outputs.
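  • A Python sketch of this virtual metering over the audio matrix; amp[src, out] stands for the level weight at a matrix node, and the array layout and names are assumptions for illustration:

        import numpy as np

        def source_level(amp: np.ndarray, src: int) -> float:
            # Overall level of one input source: sum of its matrix node levels
            # over all outputs.
            return float(np.sum(amp[src, :]))

        def group_level(amp: np.ndarray, src: int, group_outputs: list) -> float:
            # Contribution of one directional group to a source: sum only over
            # the outputs belonging to that group, ignoring the others.
            return float(np.sum(amp[src, group_outputs]))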
  • the concept according to the invention provides a universal operating concept for the representation of sources independently of the reproduction system used.
  • a hierarchy is used.
  • The lowest hierarchy level is the single loudspeaker.
  • The middle hierarchy level is a directional area; loudspeakers may also be present in two different directional areas.
  • The top hierarchy level consists of directional area presets, such that for certain audio objects/applications certain directional areas taken together may be treated as an "over-directional area" on the user interface.
  • The sound source positioning system is divided into main components comprising a system for performing a performance, a system for configuring a performance, a DSP system for calculating the delta stereophony, a DSP system for calculating the wave field synthesis, and a system for emergency interventions.
  • In a preferred embodiment of the present invention, a graphical user interface is used to visually associate the actors with the stage or camera image.
  • The system operator is presented with a two-dimensional image of the 3D space, which can be configured as shown in Fig. 1, but may also be implemented in the way shown in Figs. 9a to 10b for only a small number of directional groups.
  • The user arranges directional areas and loudspeakers from the three-dimensional space in the two-dimensional illustration via a selected symbolism. This is done by a configuration setting.
  • the two-dimensional position of the directional areas on the screen is mapped to the real three-dimensional position of the loudspeakers assigned to the corresponding directional areas.
  • the operator is able to reconstruct the real three-dimensional position of directional areas and to realize an arrangement of sounds in the three-dimensional space.
  • Via the mixer, which may include a DSP according to Fig. 6, the indirect positioning of the sound sources in real three-dimensional space takes place. With the help of this user interface, the user is able to position the sounds in all spatial dimensions without having to change the view, i.e. it is possible to position sounds in height and depth.
  • In the following, the positioning of sound sources, and in particular a concept for flexible compensation of deviations from the programmed stage sequence, is presented with reference to Fig. 8.
  • Fig. 8 shows an apparatus for controlling a plurality of loudspeakers, preferably using a graphical user interface, the loudspeakers being grouped into at least three directional groups, each directional group being associated with a directional group position.
  • the apparatus first comprises means 800 for receiving a source path from a first direction group position to a second direction group position and motion information for the source path.
  • The device of Fig. 8 further comprises means 802 for calculating a source path parameter for different points in time based on the motion information. Alternatively, a source may also move, without a compensation path, directly from the starting point to the new destination; this option is useful if the source has traveled only a short distance on the source path, so that taking a compensation path would bring only a small advantage.
  • Alternative implementations, in which a path change command is taken as an occasion to reverse and go back along the source path without traversing the compensation path, may be appropriate when the compensation path would cross areas of the audience room that should not, for whatever reason, be areas where a sound source is located.
  • A compensation path according to the invention is particularly advantageous compared with a system in which only complete paths between two directional areas can be taken, since the time until a source reaches the new (changed) position is significantly reduced, in particular when directional areas are widely spaced. Furthermore, confusing or artificial-looking paths of a source are avoided. Consider, for example, the case where a source was originally to move on the source path from left to right and is now to go to another position on the far left that is not very far from the originating position: disallowing a compensation path would cause the source to run almost twice across the entire stage, while according to the invention this process is shortened.
  • The compensation path is made possible by no longer defining a position by two directional areas and one factor that indicates the location of an audio source on the source path, but by three directional areas and two factors, so that points other than those on the direct connecting line can also be defined.
  • the inventive apparatus further comprises means 804 for receiving a path change command to define a compensation path to the third directional area.
  • means 806 is provided for storing a value of the source path parameter at a location where the compensation path branches from the source path.
  • A means for calculating a compensation path parameter (BlendAC) indicative of a position of the audio source on the compensation path is shown at 808 in Fig. 8. Both the source path parameter stored by means 806 and the compensation path parameter calculated by means 808 are fed to means 810 for calculating weighting factors for the loudspeakers of the three directional areas.
  • The means 810 for calculating the weighting factors is configured to operate based on the source path, the stored value of the source path parameter, and information about the compensation path, this information comprising either only the new destination, i.e. the directional area C, or additionally a position of the source on the compensation path, that is, the compensation path parameter. It should be noted that the position on the compensation path is not needed as long as the compensation path has not yet been taken and the source is still on the source path.
  • The compensation path parameter, which indicates a position of the source on the compensation path, is likewise not strictly necessary if the source does not take the compensation path but uses the path change as an occasion to reverse on the source path back to the starting point. In this way, not only direct connecting lines between two directional group positions can be "driven" by a source.
  • Fig. 9a shows the regular case, in which a source is located on a connecting line between the starting directional area 11a and the target directional area 11c.
  • the exact position of the source between the start and target directional areas is described by a blend factor BlendAC.
  • the compensation case occurs when the path of a source is changed while the source is moving.
  • the change in the path of a source while it is moving can be represented by changing the destination of the source while the source is on its way to the original destination. The source must then be blended from its current location on the source path 15a in Fig. 3b to its new target, namely the position 11c. This results in the compensation path 15b, on which the source then travels until it has reached the new destination 11c.
  • the compensation path 15b thus leads from the current position of the source directly to the new ideal position of the source.
  • the source position is therefore defined over three directional areas and two blend values.
  • the directional area A, the directional area B and the blend factor BlendAB form the beginning of the compensation path.
  • the directional area C forms the end of the compensation path.
  • the blend factor BlendAbC defines the position of the source between the beginning and the end of the compensation path.
  • the directional area A remains.
  • the directional area C becomes the directional area B, the blend factor BlendAC becomes the blend factor BlendAB, and the new destination area is written into the directional area C.
  • the blend factor BlendAC at the time the direction change is to take place, that is, at the time when the source is to leave the source path and swing onto the compensation path, is stored by means 806 and used as BlendAB for the subsequent calculation.
  • the new destination area is written into directional area C; the sketch below illustrates this re-parameterization.
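  A minimal sketch, assuming illustrative names, of the re-parameterization performed when a path change command arrives (A remains, C becomes B, BlendAC becomes BlendAB, and the new destination is written into C):

```python
class SourcePosition:
    """Position of a source defined by three directional areas and two
    blend factors, as described above (names are illustrative only)."""
    def __init__(self, area_a, area_b, area_c, blend_ab, blend_abc):
        self.area_a, self.area_b, self.area_c = area_a, area_b, area_c
        self.blend_ab = blend_ab    # position on the source path A -> B
        self.blend_abc = blend_abc  # position on the compensation path

def change_path(pos, blend_ac, new_target):
    """Handle a path change command: the directional area A remains,
    C becomes B, the current BlendAC is stored as BlendAB, and the new
    destination area is written into C (cf. the rules above)."""
    pos.area_b = pos.area_c   # old target becomes the B of the new triple
    pos.blend_ab = blend_ac   # stored value of the source path parameter
    pos.area_c = new_target   # new destination area
    pos.blend_abc = 0.0       # source starts at the beginning of the compensation path
    return pos
```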
  • source movements can be programmed so that sources jump, i.e. move very quickly from one place to another. This is the case, for example, when scenes are skipped, when the ChannelHOLD mode is deactivated, or when a source ends up at a different directional area in scene 1 than in scene 2. If source jumps were switched hard, audible artifacts would result. Therefore, according to the invention, a concept for preventing hard source jumps is used. For this purpose, a compensation path is again used, which is selected on the basis of a specific compensation strategy. Generally, a source can be at different locations on a path; depending on whether it is at the beginning or the end, between two or three directional areas, there are different ways in which the source reaches its desired position the fastest.
  • Fig. 9b shows a possible compensation strategy according to which a source located at a point 900 of a compensation path is to be brought to a target position 902.
  • position 900 is the position a source has, for example, when a scene ends. When the new scene starts, the source should be at its initial position there, namely position 906. To get there, an immediate switchover from 900 to 906 is dispensed with according to the invention. Instead, the source first runs to its current target directional area, that is, to the directional area 904, and from there to the initial directional area of the new scene, namely 906. The source is then at the point where it should have been at the start of the scene. However, since the scene has already started and the source has actually already set off, the source to be compensated must run at an increased speed on the programmed path between the directional area 906 and the directional area 908 until it has recovered its nominal position 902; a sketch of this catch-up behaviour follows below.
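  A minimal sketch of the catch-up behaviour described above; the text only states that the compensated source runs at an increased speed on the programmed path, so the concrete speed factor and the progress representation are assumptions.

```python
def playback_speed(real_progress, nominal_progress, base_speed, catch_up_gain=2.0):
    """Return the speed at which a compensated source should currently
    move: as long as its real progress lags behind the nominal position
    of the programmed movement, it runs faster (here by an assumed
    constant factor) until the nominal position is recovered."""
    if real_progress < nominal_progress:
        return base_speed * catch_up_gain  # still behind: increased speed
    return base_speed                      # nominal position recovered
```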
  • a simple compensation strategy is shown in Fig. 9d; it is called "InPathDual".
  • here, the target position of the source is indicated by the same directional areas A, B, C as the source position of the source.
  • a jump compensation means according to the invention is therefore designed to determine that the directional areas defining the start position are identical to the directional areas defining the target position.
  • in the strategy chosen in Fig. 9d, the source simply proceeds on the same source path. If the position to be reached by compensation (the ideal position) lies between the same directional areas as the current position of the source (the real position), the InPath strategies are used. These come in two types, namely InPathDual, as shown in Fig. 9d, and InPathTriple, as shown in Fig. 9e.
  • Fig. 9e also covers the case in which the real and the ideal position of the source lie not between two but between three directional areas.
  • in the compensation strategy used in Fig. 9e, the source is already on a compensation path, and this compensation path is traversed backwards in order to reach a certain point on the source path.
  • the position of a source is defined over a maximum of three directional areas. If the ideal position and the real position have exactly one directional area in common, the Adjacent strategies shown in Fig. 9f are used. There are three types, with the letters "A", "B" and "C" referring to the common directional area.
  • the jump compensation means then determines that the real position and the new ideal position are defined by triples of directional areas sharing a single directional area, which in the case of AdjacentA is the directional area A, in the case of AdjacentB the directional area B, and in the case of AdjacentC the directional area C, as is apparent from Fig. 9f.
  • the Outside strategies are used when the real position and the ideal position have no directional area in common.
  • OutsideC is used when the real position is very close to the position of the directional area C.
  • OutsideM is used when the real position of the source is between two directional areas, or when the position of the source is between three directional areas but very close to the kink of the path; the sketch after this item summarizes the strategy selection.
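  The selection rules described above can be summarized in a sketch like the following; the (A, B, C) triples, the proximity threshold for "very close" and the handling of two shared areas are assumptions, since the text does not specify them.

```python
def select_strategy(real_areas, ideal_areas, real_blend_abc=0.0, near=0.9):
    """Choose a jump compensation strategy from the rules above.

    real_areas / ideal_areas -- (A, B, C) triples of directional areas
    real_blend_abc           -- position on a compensation path, used here
                                both to distinguish InPathDual/InPathTriple
                                and as an assumed proximity measure to C
    """
    if tuple(real_areas) == tuple(ideal_areas):
        # same directional areas: InPath strategies
        return "InPathTriple" if real_blend_abc > 0.0 else "InPathDual"
    common = set(real_areas) & set(ideal_areas)
    if len(common) == 1:
        # exactly one shared directional area: Adjacent strategies
        idx = list(real_areas).index(common.pop())  # 0 -> A, 1 -> B, 2 -> C
        return "Adjacent" + "ABC"[idx]
    if not common:
        # no shared directional area: Outside strategies
        return "OutsideC" if real_blend_abc >= near else "OutsideM"
    return "OutsideM"  # two shared areas: not detailed in the text
```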
  • each directional area may be connected to each other directional area; that is, a source moving from one directional area to another never has to cross a third directional area, since a programmable source path exists from each directional area to every other directional area.
  • the source can also be moved manually, i.e. with a so-called Cader.
  • for this purpose there are Cader strategies that provide different compensation paths. The Cader strategies usually create a compensation path that connects the directional area A and the directional area C of the ideal position of the source. Such a compensation path can be seen in Fig. 9h: the newly assumed real position takes over the directional area C of the ideal position, the compensation path being formed when the directional area C of the real position is changed from the directional area 920 to the directional area 921.
  • there are three Cader strategies in total, which are shown in Fig. 9i.
  • the left strategy in Fig. 9i (Cader) is used when the target directional area C of the real position has been changed. For the remainder of the path, the Cader strategy follows the OutsideM strategy.
  • CaderInverse is used when the starting directional area A of the real position is changed. The resulting compensation path behaves in the same way as in the normal Cader case, but the calculation within the DSP may differ.
  • CaderTriplestart is used when the real position of the source is between three directional areas and a new scene is switched to. In this case, a compensation path must be built from the real position of the source to the starting directional area of the new scene; the sketch below summarizes the choice between the three Cader strategies.
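  A minimal sketch of the choice between the three Cader strategies; the trigger flags are illustrative, since the text above only names the conditions.

```python
def select_cader_strategy(start_area_changed, target_area_changed,
                          between_three_areas, new_scene_switched):
    """Choose among the Cader strategies named above."""
    if between_three_areas and new_scene_switched:
        return "CaderTriplestart"  # path from real position to the new scene's start area
    if start_area_changed:
        return "CaderInverse"      # starting directional area A was changed
    if target_area_changed:
        return "Cader"             # target directional area C was changed
    return None                    # no compensation needed
```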
  • the Cader can also be used to perform an animation of a source. With regard to the calculation of the weighting factors, there is no difference depending on whether the source moves manually or automatically. A principal difference, however, is that the movement of the source is not controlled by a timer but is triggered by a Cader event that is received by the means 804 for receiving a path change command. The Cader event is therefore the path change command.
  • a special case provided by the inventive source animation by means of the Cader is the backward movement of sources. If the position of a source corresponds to the regular case, the source moves either with the Cader or automatically on the intended path; in the compensation case, however, the backward movement of the source is subject to a special treatment.
  • the path of a source is divided into the source path 15a and the compensation path 15b, the default sector representing a part of the source path 15a and the compensation sector in Fig. 10a representing the compensation path.
  • the default sector corresponds to the originally programmed portion of the path of the source.
  • the compensation sector describes the path section that deviates from the programmed movement.
  • moving the source backwards with the Cader has different effects, depending on whether the source is in the compensation sector or in the default sector. Assuming the source is in the compensation sector, moving the Cader to the left results in a backward movement of the source. As long as the source is still in the compensation sector, everything happens as expected. However, once the source leaves the compensation sector and enters the default sector, the following happens: the source moves normally in the default sector, but the compensation sector is recalculated, so that once the Cader is moved back to the right, the source no longer runs along the default sector again but runs directly over the newly calculated compensation sector to the current target directional area. This situation is shown in Fig. 10b. Moving a source backwards and then forward again thus causes a changed compensation sector to be calculated whenever the backward movement shortens the default sector.
  • A, B and C are the directional areas over which the position of a source is defined.
  • A, B and BlendAB describe the starting position of the compensation sector.
  • C and BlendAbC describe the position of the source in the compensation sector.
  • BlendAC describes the location of the source on the overall path.
  • if BlendAC is set to zero, the source should be at the beginning of the path; if BlendAC equals 1, the source should be positioned at the end of the path. Furthermore, the user should not be "bothered" with compensation sectors or default sectors when entering these values. On the other hand, the effect of setting the value for BlendAC depends on whether the source is in the compensation sector or in the default sector. In general, the equation for BlendAC shown in Fig. 10c applies.
  • Fig. 10c also indicates how BlendAB and BlendAbC behave when BlendAC is set: at the point where the compensation sector begins, BlendAbC = 0 and BlendAC = BlendAB / (BlendAB + 1).
  • Fig. 10d shows the determination of the parameters BlendAB and BlendAbC as a function of BlendAC, where points 1 and 2 distinguish whether the source is in the default sector or in the compensation sector, and where point 3 calculates the values for the default sector while point 4 calculates the values for the compensation sector; a sketch of this mapping follows below.
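  A minimal sketch of this mapping, under the normalization suggested by the boundary value BlendAC = BlendAB / (BlendAB + 1) above; treating the compensation sector as having unit length relative to the source path is an assumption.

```python
def blends_from_blend_ac(blend_ac, stored_blend_ab):
    """Map the overall path parameter BlendAC to (BlendAB, BlendAbC),
    distinguishing default sector and compensation sector as in Fig. 10d.

    stored_blend_ab -- value of the source path parameter stored at the
                       point where the compensation sector branches off
    """
    b = stored_blend_ab
    boundary = b / (b + 1.0)  # BlendAC value where the compensation sector begins
    if blend_ac <= boundary:
        # source is in the default sector: move along the source path
        return blend_ac * (b + 1.0), 0.0
    # source is in the compensation sector: BlendAB stays fixed at b
    return b, blend_ac * (b + 1.0) - b
```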
  • the resulting blend factors are then used by the means for calculating the weighting factors of Fig. 3b to finally calculate the weighting factors g1, g2, g3, from which in turn the audio signals, interpolations, etc. can be computed, as has been described with reference to Fig. 6; a sketch of the parameter interpolation follows below.
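  For a loudspeaker that belongs to all three directional areas, the interpolation of a loudspeaker parameter (cf. claim 7 below) can be sketched as follows; the delay values in the example are invented for illustration.

```python
def interpolate_parameter(weights, values):
    """Interpolated loudspeaker parameter value
    Z = g1*a1 + g2*a2 + g3*a3 (cf. claim 7), e.g. for a delay value
    that differs between the three directional areas."""
    g1, g2, g3 = weights
    a1, a2, a3 = values
    return g1 * a1 + g2 * a2 + g3 * a3

# Example: delay values of 10 ms, 14 ms and 20 ms in the three areas,
# weighted with the factors from the earlier example.
print(interpolate_parameter((0.3, 0.2, 0.5), (10.0, 14.0, 20.0)))  # 15.8 ms
```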
  • the inventive concept can be combined particularly well with wave field synthesis.
  • if, for visual reasons, no wave field synthesis loudspeaker arrays can be placed on the stage and delta stereophony with directional groups must instead be used to achieve sound localization, it is typically still possible to install wave field synthesis arrays at least at the sides of the auditorium and at its back. According to the invention, however, a user does not have to worry about whether a source is made audible by a wave field synthesis array or by a directional group.
  • a corresponding mixed scenario is also possible if, for example, no wave field synthesis loudspeaker arrays can be used in a certain area of the stage because they would otherwise disturb the visual impression, while wave field synthesis loudspeaker arrays can be used in another area of the stage. Again, a combination of delta stereophony and wave field synthesis results. In accordance with the present invention, however, the user does not have to worry about how his source is rendered, since the graphical user interface also presents areas where wave field synthesis loudspeaker arrays are located as directional groups.
  • the directional area mechanism for positioning is always provided, such that in a common user interface the assignment of sources to wave field synthesis or to delta-stereophonic directional sonication can take place without user intervention.
  • the concept of the directional areas can thus be applied universally, whereby the user always positions sound sources in the same way. In other words, the user does not see whether he is positioning a sound source in a directional area that comprises a wave field synthesis array, or in a directional area that has an actual support loudspeaker operating on the principle of the first wave front.
  • source movement occurs solely in that the user provides motion paths between directional areas, such a user-set motion path being received by the source path receiving means 800 of Fig. 8. Only on the part of the configuration system is a decision made as to whether a wave field synthesis source or a delta-stereophonic source is to be prepared. In particular, this is decided by examining a property parameter of the directional area.
  • each directional area may in this case contain any number of loudspeakers and always exactly one wave field synthesis source, which is held by its virtual position at a fixed position within the loudspeaker array or with respect to the loudspeaker array and in this respect corresponds to the (real) position of the support loudspeaker in a delta-stereophonic system.
  • the wave field synthesis source then represents a channel of the wave field synthesis system, where, as is known, a separate audio object, i.e. a separate source, can be processed per channel in a wave field synthesis system.
  • the wave field synthesis source is characterized by corresponding wave field synthesis-specific parameters.
  • the movement of the wave field synthesis source can be effected in two ways, depending on the available computing power.
  • in the first variant, the fixedly positioned wave field synthesis sources are driven by a crossfade: as a source moves out of a directional area, the speakers of that area are increasingly attenuated, while the speakers of the directional area into which the source is entering are faded up.
  • alternatively, a new position can be interpolated from the entered fixed positions, which is then actually provided as a virtual position to a wave field synthesis renderer, so that the virtual position is generated without crossfading and true wave field synthesis takes place, which is of course not possible in directional areas that operate on the basis of delta stereophony. A sketch of this variant follows below.
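  A minimal sketch of this second variant; a weighted average of the fixed positions is an assumption, since the interpolation rule is not specified, and the coordinates are invented for illustration.

```python
def virtual_position(weights, fixed_positions):
    """Interpolate a virtual source position from the fixed positions of
    the wave field synthesis sources of the directional areas involved;
    the result is handed to the WFS renderer instead of crossfading."""
    x = sum(g * p[0] for g, p in zip(weights, fixed_positions))
    y = sum(g * p[1] for g, p in zip(weights, fixed_positions))
    return (x, y)

# Example: fixed positions of the WFS sources of three directional areas.
print(virtual_position((0.3, 0.2, 0.5), [(0.0, 0.0), (4.0, 0.0), (2.0, 6.0)]))
```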
  • the present invention is advantageous in that free positioning of sources and assignment to the directional areas are possible; in particular, when overlapping directional areas are present, that is, when loudspeakers belong to multiple directional areas, a large number of directional areas and hence a high resolution of directional location positions can be achieved.
  • each loudspeaker on the stage could represent its own directional area, having loudspeakers arranged around it that emit at a greater delay to meet the volume requirements.
  • these (surrounding) speakers then suddenly become support speakers themselves and are no longer mere "auxiliary speakers".
  • the concept according to the invention is further distinguished by an intuitive user interface which relieves the user as much as possible and therefore enables safe operation even by users who are not familiar with all depths of the system.
  • a combination of wave field synthesis with delta stereophony is achieved via a common user interface, wherein, in preferred embodiments, dynamic filtering via the equalizer parameters is performed during source movements and a switch between two blending algorithms takes place in order to avoid artifact generation in the transition from one directional area to the next.
  • it is further ensured that no level dips occur during the blending between the directional areas; furthermore, dynamic blending is provided to reduce further artifacts.
  • the provision of a compensation path enables live applicability, since there are now opportunities to intervene, for example to track sounds when an actor leaves the specified path that has been programmed.
  • the present invention is particularly advantageous for sound reinforcement in theaters, on musical stages, on open-air stages with mostly larger auditoriums, or in concert halls.
  • the method according to the invention can be implemented in hardware or in software.
  • the implementation may be on a digital storage medium, in particular a floppy disk or CD with electronically readable control signals, which may interact with a programmable computer system such that the method is performed.
  • the invention thus also consists in a computer program product with a program code stored on a machine-readable carrier for carrying out the method according to the invention, when the computer program product runs on a computer.
  • the invention can thus be realized as a computer program with a program code for carrying out the method when the computer program runs on a computer.

Landscapes

  • Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Acoustics & Sound (AREA)
  • Signal Processing (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Otolaryngology (AREA)
  • Stereophonic System (AREA)
  • Obtaining Desirable Characteristics In Audible-Bandwidth Transducers (AREA)
  • Circuit For Audible Band Transducer (AREA)

Claims (16)

  1. Device for controlling a plurality of loudspeakers grouped into at least three directional groups (10a, 10b, 10c), a directional group position (11a, 11b, 11c) being associated with each directional group, comprising:
    means (800) for receiving a source path from a first directional group position (11a) to a second directional group position (11b) and motion information for the source path;
    means (802) for calculating a source path parameter (BlendAB) for different points in time based on the motion information, the source path parameter indicating a position of an audio source on the source path;
    characterized by
    means (804) for receiving a path change command by which a compensation path to a third directional area can be initiated;
    means (806) for storing a value of the source path parameter at a location where the compensation path (15b) deviates from the source path (15a); and
    means (810) for calculating weighting factors for the loudspeakers of the three directional groups based on the source path (15a), the stored value of the source path parameter (BlendAB) and information about the compensation path (15b).
  2. Device according to claim 1, further comprising means (808) for calculating a compensation path parameter (BlendAbC) indicating a position of the audio source on the compensation path (15b), and
    the means (810) for calculating being configured to calculate the weighting factors for the loudspeakers of the three directional groups additionally using the compensation path parameter.
  3. Device according to claim 1 or 2, wherein the means (802) for calculating the source path parameter is configured to calculate the source path parameters for successive points in time such that the source moves on the source path at a speed given by the motion information.
  4. Device according to one of the preceding claims, wherein the means (808) for calculating the compensation path parameter is configured to calculate compensation path parameters for successive points in time such that the source moves on the compensation path at a predefined speed which is higher than a speed of a source moving on the source path.
  5. Device according to one of the preceding claims,
    wherein the means (810) for calculating the weighting factors is configured to calculate the weighting factors as follows:
    g1 = (1 - BlendAbC) · (1 - BlendAB);
    g2 = (1 - BlendAbC) · BlendAB;
    g3 = BlendAbC,
    where g1 is a weighting factor for a loudspeaker of the first directional group, g2 is a weighting factor for a loudspeaker of the second directional group, g3 is a weighting factor for a loudspeaker of the third directional group, BlendAB is the source path parameter stored by the means (806), and BlendAbC is the compensation path parameter.
  6. Device according to one of the preceding claims, wherein the three directional groups are arranged so as to overlap such that there is at least one loudspeaker which is present in all three directional groups and with which, for each directional group, a different parameter value for a loudspeaker parameter is associated, the device further comprising:
    means (42) for calculating a loudspeaker signal for the loudspeaker using the parameter values and the weighting factors.
  7. Device according to claim 6, wherein the means (42) for calculating comprises interpolation means (46, 48) for calculating an interpolated value based on the weighting factors, the interpolation means being configured to perform the following interpolation:
    Z = g1 · a1 + g2 · a2 + g3 · a3,
    where Z is the interpolated loudspeaker parameter value, g1 is a first weighting factor, g2 is a second weighting factor and g3 is a third weighting factor, a1 is a parameter value of the loudspeaker corresponding to a first directional group, a2 is a loudspeaker parameter value corresponding to a second directional group, and a3 is a loudspeaker parameter value corresponding to a third directional group.
  8. Device according to claim 7, wherein the interpolation means is configured to calculate an interpolated delay value or an interpolated modulation value.
  9. Device according to one of the preceding claims, wherein the means (804) for receiving a path change command is configured to receive a manual input from a graphical user interface.
  10. Device according to one of the preceding claims, further comprising:
    jump compensation means for determining a continuous jump compensation path from a first jump position to a second jump position,
    the means (810) for calculating the weighting factors being configured to calculate weighting factors for positions of the audio source on the jump compensation path.
  11. Device according to claim 10, wherein the first jump position is given by three directional groups and the second jump position is given by three directional groups, and
    the jump compensation means being configured to select, when searching for a jump compensation path, a compensation strategy depending on whether the three directional areas defining the first jump position and the three directional areas defining the second jump position have one directional area or several directional areas in common.
  12. Device according to claim 11, wherein the jump compensation means is configured to use an InPathDual compensation strategy or an InPathTriple compensation strategy when the three directional areas of the first jump position and the three directional areas of the second jump position coincide,
    to use an AdjacentA compensation strategy, an AdjacentB compensation strategy or an AdjacentC compensation strategy when at least one directional area of the first jump position is identical to a directional area of the second jump position,
    or to use an OutsideM compensation strategy or an OutsideC compensation strategy when the first jump position and the second jump position have no directional area in common.
  13. Device according to one of the preceding claims, wherein the means (804) for receiving a path change command is configured to receive a position of the source between the first and the third directional group, and
    the means (802) for calculating the source path parameter is configured to determine whether, at times when the path change command is to become active, the source is on a source path or on a compensation path.
  14. Device according to claim 13, wherein the means (802) for calculating the source path parameter or the means (808) for calculating the compensation path parameter are configured to calculate, when the source is on the compensation path, the compensation path parameter based on a first calculation rule, and to calculate, when the source is on the source path, the path parameters based on a second calculation rule.
  15. Method for controlling a plurality of loudspeakers grouped into at least three directional groups (10a, 10b, 10c), a directional group position (11a, 11b, 11c) being associated with each directional group, comprising the steps of:
    receiving (800) a source path from a first directional group position (11a) to a second directional group position (11b) and motion information for the source path;
    calculating (802) a source path parameter (BlendAB) for different points in time based on the motion information, the source path parameter indicating a position of an audio source on the source path;
    characterized by the steps of:
    receiving (804) a path change command by which a compensation path to a third directional area can be initiated;
    storing (806) a value of the source path parameter at a location where the compensation path (15b) deviates from the source path (15a); and
    calculating (810) weighting factors for the loudspeakers of the three directional groups based on the source path (15a), the stored value of the source path parameter (BlendAB) and the information about the compensation path (15b).
  16. Computer program with a program code for performing the method according to claim 15, when the computer program is executed on a computer.
EP06762422A 2005-07-15 2006-07-05 Dispositif et procede pour commander une pluralite de haut-parleurs au moyen d'une interface graphique d'utilisateur Not-in-force EP1872620B9 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
DE102005033239A DE102005033239A1 (de) 2005-07-15 2005-07-15 Vorrichtung und Verfahren zum Steuern einer Mehrzahl von Lautsprechern mittels einer graphischen Benutzerschnittstelle
PCT/EP2006/006562 WO2007009597A1 (fr) 2005-07-15 2006-07-05 Dispositif et procede pour commander une pluralite de haut-parleurs au moyen d'une interface graphique d'utilisateur

Publications (3)

Publication Number Publication Date
EP1872620A1 EP1872620A1 (fr) 2008-01-02
EP1872620B1 true EP1872620B1 (fr) 2009-01-21
EP1872620B9 EP1872620B9 (fr) 2009-08-26

Family

ID=36954107

Family Applications (1)

Application Number Title Priority Date Filing Date
EP06762422A Not-in-force EP1872620B9 (fr) 2005-07-15 2006-07-05 Dispositif et procede pour commander une pluralite de haut-parleurs au moyen d'une interface graphique d'utilisateur

Country Status (7)

Country Link
US (1) US8189824B2 (fr)
EP (1) EP1872620B9 (fr)
JP (1) JP4913140B2 (fr)
CN (1) CN101223817B (fr)
AT (1) ATE421842T1 (fr)
DE (2) DE102005033239A1 (fr)
WO (1) WO2007009597A1 (fr)

Families Citing this family (23)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP4107300B2 (ja) * 2005-03-10 2008-06-25 ヤマハ株式会社 サラウンドシステム
DE102005033238A1 (de) * 2005-07-15 2007-01-25 Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. Vorrichtung und Verfahren zum Ansteuern einer Mehrzahl von Lautsprechern mittels eines DSP
US9202509B2 (en) 2006-09-12 2015-12-01 Sonos, Inc. Controlling and grouping in a multi-zone media system
US8788080B1 (en) 2006-09-12 2014-07-22 Sonos, Inc. Multi-channel pairing in a media system
US8483853B1 (en) 2006-09-12 2013-07-09 Sonos, Inc. Controlling and manipulating groupings in a multi-zone media system
DE102007059597A1 (de) * 2007-09-19 2009-04-02 Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. Eine Vorrichtung und ein Verfahren zur Ermittlung eines Komponentensignals in hoher Genauigkeit
EP2309781A3 (fr) * 2009-09-23 2013-12-18 Iosono GmbH Appareil et procédé pour le calcul de coefficients de filtres pour un agencement de haut-parleurs prédéfini
DE102010030534A1 (de) * 2010-06-25 2011-12-29 Iosono Gmbh Vorrichtung zum Veränderung einer Audio-Szene und Vorrichtung zum Erzeugen einer Richtungsfunktion
US11265652B2 (en) 2011-01-25 2022-03-01 Sonos, Inc. Playback device pairing
US11429343B2 (en) 2011-01-25 2022-08-30 Sonos, Inc. Stereo playback configuration and control
US9729115B2 (en) 2012-04-27 2017-08-08 Sonos, Inc. Intelligently increasing the sound level of player
EP2862370B1 (fr) * 2012-06-19 2017-08-30 Dolby Laboratories Licensing Corporation Représentation et reproduction d'audio spatial utilisant des systèmes audio à la base de canaux
US9008330B2 (en) 2012-09-28 2015-04-14 Sonos, Inc. Crossover frequency adjustments for audio speakers
WO2015017037A1 (fr) * 2013-07-30 2015-02-05 Dolby International Ab Réalisation de panoramique d'objets audio pour des agencements de haut-parleur arbitraires
JP6187131B2 (ja) * 2013-10-17 2017-08-30 ヤマハ株式会社 音像定位装置
US9226073B2 (en) 2014-02-06 2015-12-29 Sonos, Inc. Audio output balancing during synchronized playback
US9226087B2 (en) 2014-02-06 2015-12-29 Sonos, Inc. Audio output balancing during synchronized playback
US9671997B2 (en) 2014-07-23 2017-06-06 Sonos, Inc. Zone grouping
US10209947B2 (en) 2014-07-23 2019-02-19 Sonos, Inc. Device grouping
US10248376B2 (en) 2015-06-11 2019-04-02 Sonos, Inc. Multiple groupings in a playback system
CN105072553B (zh) * 2015-08-31 2018-06-05 三星电子(中国)研发中心 音响设备的扩音方法及装置
US10712997B2 (en) 2016-10-17 2020-07-14 Sonos, Inc. Room association based on name
KR102224216B1 (ko) * 2017-12-22 2021-03-08 주식회사 오드아이앤씨 공연 음악 플랫폼 시스템

Family Cites Families (45)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5412731A (en) * 1982-11-08 1995-05-02 Desper Products, Inc. Automatic stereophonic manipulation system and apparatus for image enhancement
DD242954A3 (de) * 1983-12-14 1987-02-18 Deutsche Post Rfz Grossraumbeschallungssystem
DD292805A5 (de) * 1988-12-22 1991-08-08 Wolfgang Ahnert Verfahren und anordnung fuer eine oertlich sowie zeitlich veraenderliche signalverteilung ueber eine grossbeschallungsanlage, insbesondere fuer audiovisuelle veranstaltungen in auditorien, vorzugsweise kuppelfoermigen raeumen
FR2692425B1 (fr) * 1992-06-12 1997-04-25 Alain Azoulay Dispositif de reproduction sonore par multiamplification active.
JP3158790B2 (ja) 1993-07-14 2001-04-23 株式会社デンソー 位置判別装置
GB9324240D0 (en) * 1993-11-25 1994-01-12 Central Research Lab Ltd Method and apparatus for processing a bonaural pair of signals
JP3370433B2 (ja) 1994-05-13 2003-01-27 株式会社竹中工務店 音像定位システム
US5506908A (en) * 1994-06-30 1996-04-09 At&T Corp. Directional microphone system
DE69637736D1 (de) * 1995-09-08 2008-12-18 Fujitsu Ltd Dreidimensionaler akustischer Prozessor mit Anwendung von linearen prädiktiven Koeffizienten
JP3874855B2 (ja) 1996-10-21 2007-01-31 株式会社竹中工務店 音像定位システム
GB2343347B (en) * 1998-06-20 2002-12-31 Central Research Lab Ltd A method of synthesising an audio signal
GB2374503B (en) * 2001-01-29 2005-04-13 Hewlett Packard Co Audio user interface with audio field orientation indication
US7483540B2 (en) * 2002-03-25 2009-01-27 Bose Corporation Automatic audio system equalizing
JP2004032463A (ja) 2002-06-27 2004-01-29 Kajima Corp 話者移動に追従して音像定位する分散拡声方法及び分散拡声システム
US7333622B2 (en) * 2002-10-18 2008-02-19 The Regents Of The University Of California Dynamic binaural sound capture and reproduction
EP1562403B1 (fr) * 2002-11-15 2012-06-13 Sony Corporation Procédé et dispositif de traitement de signal audio
US7706544B2 (en) * 2002-11-21 2010-04-27 Fraunhofer-Geselleschaft Zur Forderung Der Angewandten Forschung E.V. Audio reproduction system and method for reproducing an audio signal
US7606372B2 (en) * 2003-02-12 2009-10-20 Fraunhofer-Gesellschaft Zur Foerderung Der Angewandten Forschung E.V. Device and method for determining a reproduction position
US7336793B2 (en) * 2003-05-08 2008-02-26 Harman International Industries, Incorporated Loudspeaker system for virtual sound synthesis
DE10321986B4 (de) * 2003-05-15 2005-07-14 Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. Vorrichtung und Verfahren zum Pegel-Korrigieren in einem Wellenfeldsynthesesystem
DE10321980B4 (de) * 2003-05-15 2005-10-06 Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. Vorrichtung und Verfahren zum Berechnen eines diskreten Werts einer Komponente in einem Lautsprechersignal
DE10328335B4 (de) * 2003-06-24 2005-07-21 Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. Wellenfeldsyntesevorrichtung und Verfahren zum Treiben eines Arrays von Lautsprechern
DE10355146A1 (de) * 2003-11-26 2005-07-07 Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. Vorrichtung und Verfahren zum Erzeugen eines Tieftonkanals
JP4251077B2 (ja) * 2004-01-07 2009-04-08 ヤマハ株式会社 スピーカ装置
EP1749420A4 (fr) * 2004-05-25 2008-10-15 Huonlabs Pty Ltd Dispositif et procede audio
JP2006086921A (ja) * 2004-09-17 2006-03-30 Sony Corp オーディオ信号の再生方法およびその再生装置
JP4625671B2 (ja) * 2004-10-12 2011-02-02 ソニー株式会社 オーディオ信号の再生方法およびその再生装置
JP2006115396A (ja) * 2004-10-18 2006-04-27 Sony Corp オーディオ信号の再生方法およびその再生装置
WO2006050353A2 (fr) * 2004-10-28 2006-05-11 Verax Technologies Inc. Systeme et procede de creation d'evenements sonores
JP2006135611A (ja) 2004-11-05 2006-05-25 Matsushita Electric Ind Co Ltd 仮想音像制御装置
DE102004057500B3 (de) * 2004-11-29 2006-06-14 Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. Vorrichtung und Verfahren zur Ansteuerung einer Beschallungsanlage und Beschallungsanlage
DE102005008333A1 (de) * 2005-02-23 2006-08-31 Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. Vorrichtung und Verfahren zum Steuern einer Wellenfeldsynthese-Rendering-Einrichtung
DE102005008369A1 (de) * 2005-02-23 2006-09-07 Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. Vorrichtung und Verfahren zum Simulieren eines Wellenfeldsynthese-Systems
DE102005027978A1 (de) * 2005-06-16 2006-12-28 Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. Vorrichtung und Verfahren zum Erzeugen eines Lautsprechersignals aufgrund einer zufällig auftretenden Audioquelle
DE102005033238A1 (de) * 2005-07-15 2007-01-25 Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. Vorrichtung und Verfahren zum Ansteuern einer Mehrzahl von Lautsprechern mittels eines DSP
DE102005057406A1 (de) * 2005-11-30 2007-06-06 Valenzuela, Carlos Alberto, Dr.-Ing. Verfahren zur Aufnahme einer Tonquelle mit zeitlich variabler Richtcharakteristik und zur Wiedergabe sowie System zur Durchführung des Verfahrens
DE102006010212A1 (de) * 2006-03-06 2007-09-20 Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. Vorrichtung und Verfahren zur Simulation von WFS-Systemen und Kompensation von klangbeeinflussenden WFS-Eigenschaften
EP1858296A1 (fr) * 2006-05-17 2007-11-21 SonicEmotion AG Méthode et système pour produire une impression binaurale en utilisant des haut-parleurs
US20080298610A1 (en) * 2007-05-30 2008-12-04 Nokia Corporation Parameter Space Re-Panning for Spatial Audio
KR101292206B1 (ko) * 2007-10-01 2013-08-01 삼성전자주식회사 어레이 스피커 시스템 및 그 구현 방법
US8509454B2 (en) * 2007-11-01 2013-08-13 Nokia Corporation Focusing on a portion of an audio scene for an audio signal
US8213637B2 (en) * 2009-05-28 2012-07-03 Dirac Research Ab Sound field control in multiple listening regions
US8571192B2 (en) * 2009-06-30 2013-10-29 Alcatel Lucent Method and apparatus for improved matching of auditory space to visual space in video teleconferencing applications using window-based displays
US20100328419A1 (en) * 2009-06-30 2010-12-30 Walter Etter Method and apparatus for improved matching of auditory space to visual space in video viewing applications
EP2309781A3 (fr) * 2009-09-23 2013-12-18 Iosono GmbH Appareil et procédé pour le calcul de coefficients de filtres pour un agencement de haut-parleurs prédéfini

Also Published As

Publication number Publication date
ATE421842T1 (de) 2009-02-15
JP2009501462A (ja) 2009-01-15
EP1872620A1 (fr) 2008-01-02
CN101223817B (zh) 2011-08-17
JP4913140B2 (ja) 2012-04-11
DE102005033239A1 (de) 2007-01-25
EP1872620B9 (fr) 2009-08-26
US20080192965A1 (en) 2008-08-14
CN101223817A (zh) 2008-07-16
US8189824B2 (en) 2012-05-29
DE502006002717D1 (de) 2009-03-12
WO2007009597A1 (fr) 2007-01-25

Similar Documents

Publication Publication Date Title
EP1872620B1 (fr) Dispositif et procede pour commander une pluralite de haut-parleurs au moyen d'une interface graphique d'utilisateur
EP1782658B1 (fr) Dispositif et procede de commande d'une pluralite de haut-parleurs a l'aide d'un dsp
EP1671516B1 (fr) Procede et dispositif de production d'un canal a frequences basses
EP1800517B1 (fr) Dispositif et procede de commande d'une installation de sonorisation et installation de sonorisation correspondante
DE10328335B4 (de) Wellenfeldsyntesevorrichtung und Verfahren zum Treiben eines Arrays von Lautsprechern
DE10254404B4 (de) Audiowiedergabesystem und Verfahren zum Wiedergeben eines Audiosignals
EP1851998B1 (fr) Dispositif et procédé pour fournir des données dans un système a dispositifs de rendu multiples
EP1525776B1 (fr) Dispositif de correction de niveau dans un systeme de synthese de champ d'ondes
DE102006017791A1 (de) Wiedergabegerät und Wiedergabeverfahren
EP1606975B1 (fr) Dispositif et procede de calcul d'une valeur discrete dans un signal de haut-parleur
WO2007101498A1 (fr) Dispositif et procédé de simulation de systèmes wfs et de compensation de propriétés wfs influençant le son
EP2754151B1 (fr) Dispositif, procédé et système électroacoustique de prolongement d'un temps de réverbération

Legal Events

Date Code Title Description
PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

17P Request for examination filed

Effective date: 20070105

AK Designated contracting states

Kind code of ref document: A1

Designated state(s): AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HU IE IS IT LI LT LU LV MC NL PL PT RO SE SI SK TR

DAX Request for extension of the european patent (deleted)
RIN1 Information on inventor provided before grant (corrected)

Inventor name: MELCHIOR, FRANK

Inventor name: ROEDER, THOMAS

Inventor name: REICHELT, KATRIN

Inventor name: RODIGAST, RENE

Inventor name: DAUSEL, MARTIN

Inventor name: DEGUARA, JOACHIM

Inventor name: GATZSCHE, GABRIEL

Inventor name: BECKINGER, MICHAEL

Inventor name: STRAUSS, MICHAEL

GRAP Despatch of communication of intention to grant a patent

Free format text: ORIGINAL CODE: EPIDOSNIGR1

RIN1 Information on inventor provided before grant (corrected)

Inventor name: DEGUARA, JOACHIM

Inventor name: ROEDER, THOMAS

Inventor name: DAUSEL, MARTIN

Inventor name: STRAUSS, MICHAEL

Inventor name: MELCHIOR, FRANK

Inventor name: RODIGAST, RENE

Inventor name: BECKINGER, MICHAEL

Inventor name: REICHELT, KATRIN

Inventor name: GATZSCHE, GABRIEL

GRAS Grant fee paid

Free format text: ORIGINAL CODE: EPIDOSNIGR3

GRAA (expected) grant

Free format text: ORIGINAL CODE: 0009210

AK Designated contracting states

Kind code of ref document: B1

Designated state(s): AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HU IE IS IT LI LT LU LV MC NL PL PT RO SE SI SK TR

REG Reference to a national code

Ref country code: GB

Ref legal event code: FG4D

Free format text: NOT ENGLISH

REG Reference to a national code

Ref country code: CH

Ref legal event code: EP

REG Reference to a national code

Ref country code: IE

Ref legal event code: FG4D

Free format text: LANGUAGE OF EP DOCUMENT: GERMAN

REF Corresponds to:

Ref document number: 502006002717

Country of ref document: DE

Date of ref document: 20090312

Kind code of ref document: P

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: SI

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20090121

Ref country code: LT

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20090121

Ref country code: FI

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20090121

Ref country code: ES

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20090502

REG Reference to a national code

Ref country code: IE

Ref legal event code: FD4D

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: SE

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20090421

Ref country code: PL

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20090121

Ref country code: LV

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20090121

Ref country code: IS

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20090521

Ref country code: PT

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20090622

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: EE

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20090121

Ref country code: CZ

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20090121

Ref country code: DK

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20090121

Ref country code: IE

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20090121

PLBE No opposition filed within time limit

Free format text: ORIGINAL CODE: 0009261

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: NO OPPOSITION FILED WITHIN TIME LIMIT

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: RO

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20090121

Ref country code: SK

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20090121

26N No opposition filed

Effective date: 20091022

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: BG

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20090421

BERE Be: lapsed

Owner name: FRAUNHOFER-GESELLSCHAFT ZUR FORDERUNG DER ANGEWAN

Effective date: 20090731

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: MC

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20090731

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: BE

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20090731

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: GR

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20090422

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: IT

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20090121

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: LU

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20090705

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: HU

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20090722

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: TR

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20090121

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: CY

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20090121

REG Reference to a national code

Ref country code: FR

Ref legal event code: PLFP

Year of fee payment: 11

REG Reference to a national code

Ref country code: FR

Ref legal event code: PLFP

Year of fee payment: 12

REG Reference to a national code

Ref country code: FR

Ref legal event code: PLFP

Year of fee payment: 13

PGFP Annual fee paid to national office [announced via postgrant information from national office to epo]

Ref country code: NL

Payment date: 20200729

Year of fee payment: 15

PGFP Annual fee paid to national office [announced via postgrant information from national office to epo]

Ref country code: FR

Payment date: 20200727

Year of fee payment: 15

Ref country code: GB

Payment date: 20200724

Year of fee payment: 15

PGFP Annual fee paid to national office [announced via postgrant information from national office to epo]

Ref country code: CH

Payment date: 20200724

Year of fee payment: 15

PGFP Annual fee paid to national office [announced via postgrant information from national office to epo]

Ref country code: AT

Payment date: 20210720

Year of fee payment: 16

PGFP Annual fee paid to national office [announced via postgrant information from national office to epo]

Ref country code: DE

Payment date: 20210721

Year of fee payment: 16

REG Reference to a national code

Ref country code: CH

Ref legal event code: PL

REG Reference to a national code

Ref country code: NL

Ref legal event code: MM

Effective date: 20210801

GBPC Gb: european patent ceased through non-payment of renewal fee

Effective date: 20210705

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: LI

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20210731

Ref country code: GB

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20210705

Ref country code: CH

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20210731

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: NL

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20210801

Ref country code: FR

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20210731

REG Reference to a national code

Ref country code: DE

Ref legal event code: R119

Ref document number: 502006002717

Country of ref document: DE

REG Reference to a national code

Ref country code: AT

Ref legal event code: MM01

Ref document number: 421842

Country of ref document: AT

Kind code of ref document: T

Effective date: 20220705

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: AT

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20220705

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: DE

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20230201