EP1671516B1 - Method and device for generating a low-frequency channel - Google Patents

Method and device for generating a low-frequency channel

Info

Publication number
EP1671516B1
Authority
EP
European Patent Office
Prior art keywords
loudspeaker
low
frequency
signal
value
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
EP04797996A
Other languages
German (de)
English (en)
Other versions
EP1671516A1 (fr)
Inventor
Michael Beckinger
Sandra Brix
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Fraunhofer Gesellschaft zur Forderung der Angewandten Forschung eV
Original Assignee
Fraunhofer Gesellschaft zur Forderung der Angewandten Forschung eV
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Fraunhofer Gesellschaft zur Forderung der Angewandten Forschung eV filed Critical Fraunhofer Gesellschaft zur Forderung der Angewandten Forschung eV
Publication of EP1671516A1 publication Critical patent/EP1671516A1/fr
Application granted granted Critical
Publication of EP1671516B1 publication Critical patent/EP1671516B1/fr
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04S STEREOPHONIC SYSTEMS
    • H04S3/00 Systems employing more than two channels, e.g. quadraphonic
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04R LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R2201/00 Details of transducers, loudspeakers or microphones covered by H04R1/00 but not provided for in any of its subgroups
    • H04R2201/40 Details of arrangements for obtaining desired directional characteristic by combining a number of identical transducers covered by H04R1/40 but not provided for in any of its subgroups
    • H04R2201/403 Linear arrays of transducers
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04R LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R3/00 Circuits for transducers, loudspeakers or microphones
    • H04R3/12 Circuits for transducers, loudspeakers or microphones for distributing signals to two or more loudspeakers
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04S STEREOPHONIC SYSTEMS
    • H04S2420/00 Techniques used in stereophonic systems covered by H04S but not provided for in its groups
    • H04S2420/13 Application of wave-field synthesis in stereophonic audio systems
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04S STEREOPHONIC SYSTEMS
    • H04S7/00 Indicating arrangements; Control arrangements, e.g. balance control
    • H04S7/30 Control circuits for electronic adaptation of the sound field
    • H04S7/307 Frequency adjustment, e.g. tone control

Definitions

  • the present invention relates to generating one or more low frequency channels, and more particularly to generating one or more low frequency channels associated with a multi-channel audio system, such as a wave field synthesis system.
  • Wave Field Synthesis was researched at the TU Delft and first introduced in the late 1980s (Berkhout, A.J.; de Vries, D.; Vogel, P.: Acoustic Control by Wave-field Synthesis, JASA 93, 1993).
  • Applied to acoustics, any shape of an incoming wavefront can be simulated by a large number of loudspeakers arranged side by side (a so-called loudspeaker array).
  • the audio signal of each loudspeaker must be fed with a time delay and an amplitude scaling so that the radiated sound fields of the individual loudspeakers overlap correctly.
  • the contribution to each loudspeaker is calculated separately for each source, and the resulting signals are added together.
  • reflections can also be reproduced as additional sources via the loudspeaker array. The computational effort therefore depends heavily on the number of sound sources, the reflection characteristics of the recording room and the number of loudspeakers.
  • the advantage of this technique is in particular that a natural spatial sound impression over a large area of the playback room is possible.
  • the direction and distance of sound sources are reproduced very accurately.
  • virtual sound sources can even be positioned between the real speaker array and the listener.
  • while wave field synthesis works well for environments whose characteristics are known, irregularities occur when those characteristics change or when wave field synthesis is performed on the basis of environmental conditions that do not match the actual nature of the environment.
  • the technique of wave field synthesis can also be used advantageously to supplement a visual perception with a corresponding spatial audio perception.
  • production in virtual studios has focused on providing an authentic visual impression of the virtual scene.
  • the acoustic impression matching the image is usually impressed on the audio signal afterwards by manual operations in so-called post-production, or it is classified as too complex and time-consuming to realize and is therefore neglected. This usually leads to a contradiction between the individual sensory impressions, which causes the designed space, i.e. the designed scene, to be perceived as less authentic.
  • the screen or image surface determines the viewing direction and the perspective of the viewer. This means that the sound should follow the image so that it always coincides with the viewed image. This becomes even more important for virtual studios, since there is typically no correlation between, for example, the sound of the presentation and the environment in which the presenter is currently located.
  • an essential subjective characteristic of such a sound concept in this context is the location of a sound source as perceived by a viewer of, for example, a movie screen.
  • In the audio field, the technique of wave field synthesis (WFS) can be used to achieve a good spatial sound over a large listener area.
  • wave field synthesis is based on the principle of Huygens, according to which wavefronts can be formed and built up by superimposing elementary waves. According to the mathematically exact theoretical description, infinitely many sources at infinitesimally small spacing would have to be used to generate the elementary waves. In practice, however, finitely many loudspeakers at a finite distance from each other are used.
  • according to the WFS principle, each of these loudspeakers is driven with an audio signal from a virtual source that has a specific delay and a specific level. Levels and delays are usually different for all loudspeakers.
  • the wave field synthesis system operates on the basis of the Huygens principle and reconstructs a given waveform of, for example, a virtual source located at a certain distance from a demonstration area, or from a listener in the demonstration area, by a plurality of individual waves.
  • the wave field synthesis algorithm thus obtains information about the actual position of an individual loudspeaker of the loudspeaker array in order to compute a component signal for that loudspeaker which the loudspeaker ultimately has to radiate, so that the superposition of this loudspeaker signal with the loudspeaker signals of the other active loudspeakers performs a reconstruction at the listener, giving the listener the impression that he is not being "sonicated" by many individual loudspeakers but only by a single loudspeaker at the position of the virtual source.
  • for several virtual sources in a wave field synthesis system, the contribution of each virtual source for each loudspeaker, that is, the component signal of the first virtual source for the first loudspeaker, of the second virtual source for the first loudspeaker, etc., is calculated, and the component signals are then added up to finally obtain the actual loudspeaker signal.
  • in the case of, for example, three virtual sources, the superposition of the loudspeaker signals of all active loudspeakers at the listener would result in the listener not having the impression of being sonicated by a large array of loudspeakers, but rather that the sound he hears comes merely from three sound sources positioned at specific positions, which are identical to the virtual sources.
  • in practice, the calculation of the component signals is usually characterized in that the audio signal associated with a virtual source is, at a given time, applied with a delay and a scaling factor that depend on the position of the virtual source and the position of the loudspeaker, in order to obtain a delayed and/or scaled audio signal of the virtual source which directly represents the loudspeaker signal when only one virtual source exists, or which, after addition with further component signals for the considered loudspeaker from other virtual sources, then contributes to the loudspeaker signal for the considered loudspeaker.
  • Typical wave field synthesis algorithms work regardless of how many loudspeakers are present in the loudspeaker array.
  • the underlying theory of wave field synthesis is that any sound field can be accurately reconstructed by an infinite number of individual loudspeakers arranged infinitely close to each other. In practice, however, neither the infinitely high number nor the infinitely close arrangement can be realized. Instead, there is a limited number of loudspeakers, which are moreover arranged at certain predetermined distances from each other. Thus, in real systems, there is only an approximation to the actual wave field that would occur if the virtual source were actually present as a real source.
  • in many applications, for example in a movie theater, the loudspeaker array is arranged only on the side of the movie screen.
  • in this case, the wave field synthesis module would generate loudspeaker signals for these loudspeakers, these loudspeaker signals normally being the same as for the corresponding loudspeakers in a loudspeaker array that extends not only over the side of the cinema on which the screen is arranged, but also to the left of, to the right of and behind the audience area.
  • this "360 °" speaker array will provide a better approximation to an exact wave field than just a one-sided array, for example, in front of the viewers.
  • a wave-field synthesis module typically does not receive feedback on how many speakers are present or whether it is a one-sided or multi-sided or even a 360 ° array or not.
  • a wave field synthesizer calculates a loudspeaker signal for a loudspeaker based on the position of the loudspeaker and regardless of which other loudspeakers still exist or are absent.
  • a listener senses a level of the virtual source resulting from the superposition of the individual levels of the component signals of the virtual source in the individual loudspeaker signals.
  • Wave field synthesis devices are also capable of replicating several different types of sources.
  • a prominent source form is the point source, where the level decreases proportionally to 1/r, r being the distance between a listener and the position of the virtual source.
  • Another source form is a source that emits plane waves.
  • for such a source, the level remains constant regardless of the distance to the listener, since plane waves can be regarded as generated by point sources arranged at an infinite distance.
  • the so-called subwoofer principle is used in such existing five-channel systems or seven-channel systems.
  • the subwoofer principle is used in multi-channel playback systems to save expensive and large woofers.
  • a low-frequency channel is used, which contains only music signals with frequencies lower than a cut-off frequency of about 120 Hz. This low-frequency channel drives a woofer with a large diaphragm area, which achieves high sound pressure, especially at low frequencies.
  • the subwoofer principle makes use of the fact that the human ear can localize the direction of low-frequency sounds only with great difficulty.
  • an extra woofer channel is already mixed in the sound mixing for a special speaker arrangement (spatial arrangement).
  • Examples of such multi-channel playback systems are Dolby Digital, Sony SDDS and DTS.
  • the subwoofer channel can be mixed regardless of the room size to be sounded, since the spatial relationships change only to scale.
  • the speaker assembly remains the same to scale.
  • in wave field synthesis (WFS), by contrast, the number of loudspeaker channels is related to the size of the audience area.
  • the number of loudspeaker channels is determined by how densely the loudspeakers are distributed over the circumference of the area to be sounded. The quality of the WFS playback system depends on this density.
  • the volume is related to the number of loudspeaker channels and the density of the loudspeakers, since all loudspeaker channels add up to a wave field.
  • the volume of a WFS system is therefore not readily predetermined.
  • the volume of the subwoofer channel is predetermined with the known parameters of the electric amplifier and the loudspeaker.
  • the object of the present invention is to provide a concept for generating a woofer channel in a multi-channel reproducing system which enables reduction of level artifacts.
  • This object is achieved by a device for generating a woofer channel according to claim 1 or a method for generating a woofer channel according to claim 25 or by a computer program according to claim 26.
  • the present invention is based on the finding that the woofer channel for one woofer, or the several woofer channels for several woofers, in a multi-channel system is not generated already in a sound mixing process that takes place independently of an actual playback room; instead, reference is made to the actual playback room in that the predetermined position of the woofer on the one hand and properties of the audio objects, which are typically virtual sources, on the other hand are taken into account in order to produce the bass channel.
  • audio objects are assumed, wherein an audio object is assigned an object description on the one hand and an object signal on the other hand.
  • an audio object scaling value is calculated for each audio object, which is then used to scale the corresponding object signal; the scaled object signals are then summed to obtain a sum signal. The bass channel, which is supplied to the woofer, is then derived from the sum signal.
  • in the case in which there is only a single woofer, the bass channel is derived from this sum signal, which can be done by simple low-pass filtering.
  • the low-pass filtering can already be performed with the still unscaled audio object signals, so that only low-pass signals are processed further, so that the sum signal is already the low-frequency channel itself.
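For illustration only (not part of the original disclosure), the following Python sketch outlines the processing chain described above: an audio object scaling value is computed per object from its object description, each object signal is scaled and optionally delayed, and the scaled signals are summed. The simple 1/r attenuation law, the clamping near the reference point and all names are assumptions.

    import numpy as np

    SPEED_OF_SOUND = 343.0  # m/s

    def object_scaling_value(obj_position, reference_position, source_type="point"):
        # Plane-wave objects get a scaling value of 1; for point-like objects a
        # simple 1/r law towards the reference playback position is assumed.
        if source_type == "plane":
            return 1.0
        r = np.linalg.norm(np.asarray(obj_position, float) - np.asarray(reference_position, float))
        return 1.0 / max(r, 1.0)  # clamp to avoid blow-up very close to the reference point

    def build_sum_signal(audio_objects, reference_position, sample_rate=48000):
        # Scale each object signal, shift it by the propagation delay from the
        # virtual position to the reference position, and add everything up.
        length = max(len(o["signal"]) for o in audio_objects)
        out = np.zeros(length)
        for obj in audio_objects:
            a = object_scaling_value(obj["position"], reference_position, obj.get("type", "point"))
            r = np.linalg.norm(np.asarray(obj["position"], float) - np.asarray(reference_position, float))
            delay = int(round(r / SPEED_OF_SOUND * sample_rate))
            sig = a * np.asarray(obj["signal"], float)
            n = max(0, min(len(sig), length - delay))
            out[delay:delay + n] += sig[:n]
        return out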
  • a subwoofer channel is thus not mixed already in the sound mixing process from the virtual sources, i.e. the sound material for the wave field synthesis. Instead, the mix occurs automatically during playback in the wave field synthesis system, regardless of the size of the system and the number of loudspeakers.
  • the volume of the subwoofer signal thus depends on the number of loudspeakers and on the circumference of the area sounded by the wave field synthesis system. Prescribed loudspeaker arrangements no longer have to be complied with, since the loudspeaker positions and the number of loudspeakers are included in the generation of the bass channel.
  • the present invention is not limited to wave field synthesis systems, but can generally be applied to any multi-channel reproduction system in which the mixing and generation, that is, the rendering, of the reproduction channels, i.e. of the loudspeaker channels themselves, takes place only at the actual playback.
  • Systems of this type are, for example, 5.1 systems, 7.1 systems, etc.
  • the inventive low-frequency channel generation is combined with a level artifact reduction in order to perform level corrections in a wave field synthesis system not only for the low-frequency channels but for all loudspeaker channels, so as to be independent of the number and position of the loudspeakers actually used with respect to the wave field synthesis algorithm employed.
  • typically, the woofer will not be located at the reference playback position for which the optimum level correction is performed.
  • the sum signal is scaled by taking into account the position of the woofer using a speaker scale value to be calculated.
  • this scaling will preferably be an amplitude scaling only and no phase scaling, taking into account the fact that the ear does not localize well at the low frequencies present in the low-frequency channel, but only shows accurate amplitude/volume perception.
  • a phase scaling can, however, additionally be used if such is desired in an application scenario.
  • a separate woofer channel is created for each woofer.
  • the bass channels of the individual woofers preferably differ only in their amplitude, but not in the signal itself. All woofers thus emit the same sum signal, but with different amplitude scaling, the amplitude scaling for an individual subwoofer being performed depending on the distance of that woofer from the reference playback position.
  • the invention ensures that the overall volume of all superimposed low-frequency channels at the reference playback position is equal to the volume of the sum signal, or corresponds to it at least within a predetermined tolerance range. For this purpose, a separate loudspeaker scaling value is calculated for each individual low-frequency channel, with which the sum signal is then scaled accordingly in order to obtain the individual low-frequency channel.
  • a subwoofer channel is particularly advantageous in that it leads to a significant price reduction, since the individual loudspeakers, e.g.
  • the present invention is further advantageous in that the one or more low-frequency channels can be generated automatically for arbitrary loudspeaker arrangements and multi-channel formats, which requires only a small overhead, especially in the context of a wave field synthesis system, since the wave field synthesis system performs a level correction anyway.
  • for each virtual source, i.e. each sound object or audio object, the audio signal is scaled and delayed accordingly, and all virtual sources are then summed up. From this, the overall volume and delay of the subwoofer, depending on its distance from the reference point, are calculated if the subwoofer is not already located at the reference point.
  • if several subwoofers are used, it is preferred to first determine the individual volumes of all subwoofers depending on their distances from the reference point. In this case, it is preferable to adhere to the boundary condition that the sum of all subwoofer channels at the reference playback position, which preferably corresponds to the midpoint of the wave field synthesis system, is equal to the reference volume.
  • corresponding scaling factors are calculated per subwoofer; first, however, the individual volume and delay of each virtual source relative to the reference point are again determined. Each virtual source is then scaled accordingly and optionally delayed, whereupon all virtual sources are summed to the sum signal, which is then scaled with the individual scaling factors for each subwoofer channel to obtain the individual bass channels for the different woofers.
  • the wave field synthesis algorithm calculates both volume and delay for each loudspeaker channel and each virtual source.
  • the position of the individual loudspeaker must be known.
  • this scaling of the individual audio object signals for the individual loudspeakers of the wave field synthesis system, i.e. the individual loudspeakers of the array, is based on the finding that the shortcomings of a wave field synthesis system with a (practically realizable) finite number of loudspeakers can at least be mitigated if a level correction is carried out, in that either the audio signal prior to wave field synthesis or the component signals for the different loudspeakers originating from a virtual source are manipulated after wave field synthesis using a correction value, in order to reduce a deviation between a desired amplitude state in a demonstration area and an actual amplitude state in the demonstration area.
  • the desired amplitude state results, for example, from the fact that, depending on the position of the virtual source, a target level is determined as an example of a desired amplitude state, and further that an actual level, as an example of an actual amplitude state, is determined at the listener. While the target amplitude state is determined independently of the actual grouping and type of the individual loudspeakers, only on the basis of the virtual source or its position, the actual situation is calculated taking into account the positioning, type and control of the individual loudspeakers of the loudspeaker array.
  • the sound level at the ear of the listener at the optimum point within the demonstration area can be determined for the component signal of the virtual source radiated through a single loudspeaker.
  • this can be done for every individual loudspeaker, so that by combining these levels the actual level at the ear of the listener at the optimum point within the demonstration area is obtained.
  • the transfer function of each individual loudspeaker as well as the level of the signal on the loudspeaker and the distance of the listener in the considered point within the presentation area to the individual loudspeaker can be taken into account.
  • in a simple case, the transmission characteristic of the loudspeaker can be assumed to be that of an ideal point source.
  • the directional characteristic of the individual loudspeakers can also be taken into account.
  • a significant advantage of this concept is that, in an embodiment in which sound levels are considered, only multiplicative scalings occur, since for the quotient between the desired level and the actual level that gives the correction value, neither the absolute level at the listener nor the absolute level of the virtual source is required. Instead, the correction factor depends only on the position of the virtual source (and hence on the positions of the individual loudspeakers) and on the optimal point within the demonstration area. These quantities, however, are fixed by the position of the optimum point and by the positions and transmission characteristics of the individual loudspeakers, and do not depend on a piece being played.
  • the concept can be implemented in a computationally efficient manner as a look-up table, in that a look-up table is generated and used that includes position/correction-factor value pairs for all possible virtual positions or a substantial portion of them. In this case no on-line setpoint determination, actual-value determination and setpoint/actual-value comparison algorithm has to be carried out.
  • these algorithms, which can be computationally expensive, can be dispensed with if the look-up table is accessed on the basis of a position of a virtual source in order to determine from it the correction factor valid for this position of the virtual source.
  • a virtual source with a particular calibration level would be placed at a particular virtual location.
  • a wave field synthesis module would compute the loudspeaker signals for the individual loudspeakers in order to then measure, at the listener, the actual level due to the virtual source.
  • a correction factor would then be determined such that it at least reduces the deviation from the desired level to the actual level, or preferably brings it to zero.
  • This correction factor would then be stored in the look-up table in association with the position of the virtual source so as to gradually generate, for many positions of the virtual source, the entire look-up table for a particular wave-field synthesis system in a particular presentation room.
  • it is preferable to manipulate the audio signal of the virtual source, such as recorded in an audio track coming from a recording studio, with the correction factor, and then to feed the manipulated signal into the wave field synthesis module.
  • this automatically means that all component signals going back to this manipulated virtual source are also weighted accordingly, compared to the case where no correction has been made in accordance with the present invention.
  • in the case of a manipulation of the component signals, the correction factor does not necessarily have to be identical for all component signals. However, this is largely preferred so as not to affect too strongly the relative scaling of the component signals required to reconstruct the actual wave situation.
  • one advantage is that, with relatively simple measures that can be performed during operation, a level correction can be made so that the listener, at least with regard to the volume of a virtual source he perceives, does not notice that not the actually required infinitely many loudspeakers are present, but only a limited number of loudspeakers.
  • another advantage is that even if a virtual source moves at a constant distance (e.g. from left to right) with respect to the viewer, this source is always equally loud for the viewer sitting, for example, in the middle in front of the screen, and not at times louder and at times quieter, which would be the case without correction.
  • a further advantage is that it provides the option of offering lower-cost wave field synthesis systems with fewer loudspeakers which nevertheless exhibit no level artifacts, particularly for moving sources, and thus behave for a listener, as far as the level problem is concerned, just as well as more complex wave field synthesis systems with a high number of loudspeakers. Holes in the array may also be corrected with regard to level according to the invention.
  • Fig. 9 shows an apparatus for generating a woofer channel for a woofer located at a predetermined loudspeaker position.
  • the device shown in FIG. 9 initially comprises a device 900 for providing a plurality of audio objects, an audio object having an audio object signal 902 and an audio object description 904 associated therewith.
  • the audio object description will typically include an audio object position and possibly also the audio object type.
  • the audio object description may also directly include an indication of the audio object volume. If this is not the case, then the audio object volume is easily calculated from the audio object signal itself, for example by sample-by-sample squaring and summation over a certain period of time. If the transfer function, frequency response, etc.
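As a small illustration of the volume estimation just mentioned (the window length and the RMS-style normalisation are assumptions, not taken from the text), the audio object volume can be derived from the object signal by sample-by-sample squaring and summation over a certain period:

    import numpy as np

    def audio_object_volume(samples, sample_rate=48000, window_seconds=0.5):
        # Sum of squared samples over a window, normalised to an RMS-like value.
        n = min(len(samples), int(window_seconds * sample_rate))
        if n == 0:
            return 0.0
        return float(np.sqrt(np.sum(np.square(np.asarray(samples[:n], float))) / n))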
  • the object description of the audio signal is supplied to a means 906 for calculating an audio object scaling value for each audio object.
  • the individual audio object scaling values 908 are then applied to means 910 for scaling the object signals, as shown in FIG. 9.
  • the means 906 for calculating the audio object scaling value is configured to calculate an audio object scaling value for each audio object depending on the object description. If the object is a source that emits plane waves, the audio object scaling value or correction factor will be equal to 1, because for such plane-wave audio objects the distance between the position of the object and the optimal reference playback position is insignificant, since the virtual position in this case is assumed to be at infinity.
  • otherwise, the audio object scaling value is calculated depending on the volume of the object, which is either given in the object description or derived from the object signal, and on the distance between the virtual position of the audio object and the reference playback position.
  • it is preferable to calculate the audio object scaling value as a correction value based on a target amplitude state in the demonstration area, wherein the target amplitude state depends on a position of the virtual source or on a type of the virtual source, and wherein the correction value is further based on an actual amplitude state in the demonstration area that results from the component signals for the individual loudspeakers due to the considered virtual source.
  • the correction value is thus calculated so that a deviation between the desired amplitude state and the actual amplitude state is reduced by a manipulation of the audio signal assigned to the virtual source using the correction value.
  • in addition to the scaling, a delay can be applied, possibly due to different virtual positions, so that the individual audio object signals, which are present as sequences of samples, are shifted with respect to a time reference in order to account for propagation time differences of the sound signal from the virtual position to the reference playback position.
  • the scaled and correspondingly delayed object signals are then summed by the means 914 to obtain a sum signal having a sequence of sum signal samples, designated 916 in Fig. 9.
  • this sum signal 916 is supplied to a means 918 for providing the woofer channel for the one or more subwoofers, which outputs the subwoofer signal or woofer channel 920 on the output side.
  • the sound signal emitted by a woofer is not a full-bandwidth sound signal but a signal with an upwardly limited bandwidth.
  • it is preferred that the cut-off frequency of the sound signal emitted by a woofer be less than 250 Hz, and preferably even only 125 Hz.
  • the band limitation of this sound signal can be done at different locations. A simple measure is to provide the woofer with a full bandwidth excitation signal which is then band limited by the woofer itself, as it only translates low frequencies into sound signals but suppresses high frequencies.
  • the band limitation may also be performed in the means 918 for providing the bass channel, by low-pass filtering the signal there prior to digital-to-analog conversion; this low-pass filtering is preferred, since it can be performed on the digital side, so that clear conditions exist independently of the actual implementation of the subwoofer.
  • the low-pass filtering may already occur prior to means 910 for scaling the object signals, so that the operations performed by means 910, 914, 918 are now performed with low-pass signals rather than full-bandwidth signals.
  • it is preferred, however, to perform the low-pass filtering in the means 918, so that the calculation of the audio object scaling values, the scaling of the object signals and the summation are performed with full-bandwidth signals, in order to ensure the best possible matching between the woofer on the one hand and the mid-range and treble loudspeakers on the other hand.
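A minimal sketch of the band limitation in the means 918, assuming SciPy is available; the 4th-order Butterworth design and the 120 Hz cut-off (within the limits named above) are assumptions:

    import numpy as np
    from scipy.signal import butter, lfilter

    def bass_channel_from_sum_signal(sum_signal, sample_rate=48000, cutoff_hz=120.0):
        # Low-pass filter the full-bandwidth sum signal on the digital side
        # before it is handed to the subwoofer.
        b, a = butter(4, cutoff_hz, btype="low", fs=sample_rate)
        return lfilter(b, a, sum_signal)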
  • FIG. 10 shows a preferred embodiment of the means 918 for providing now multiple low frequency channels for multiple subwoofers.
  • FIG. 11 schematically shows a wave field synthesis system with a plurality of individual loudspeakers 808.
  • the single speakers 808 form an array 800 of single speakers enclosing the demonstration area.
  • Preferably within the demonstration area is the reference reproduction position or reference point 1100.
  • FIG. 11 also schematically shows an audio object 1102, which is referred to as a "virtual sound object".
  • the virtual sound object 1102 includes an object description representing a virtual position 1104.
  • the distance D of the virtual sound object 1102 from the reference playback position 1100 can be determined.
  • on this basis, a simple audio object scaling value calculation can already be carried out, namely according to the law which will be explained in more detail later with reference to Fig. 7a.
  • FIG. 11 further shows a first woofer 1106 at a first predetermined loudspeaker position 1108 and a second woofer 1110 at a second woofer position 1112. As shown in Fig. 11, the second subwoofer 1110 and any further subwoofers not shown are optional.
  • the first subwoofer 1106 has a distance d1 from the reference point 1100
  • the second subwoofer 1110 has a distance d2 from the reference point.
  • a subwoofer n (not shown in FIG. 11) has a distance dn from the reference point 1100.
  • the means 918 for providing the bass channel is adapted to receive, in addition to the sum signal 916, denoted s in Fig. 10, also the distance d1 of woofer 1, designated 930, the distance d2 of woofer 2, designated 932, and the distance dn of woofer n, designated 934.
  • on the output side, the means 918 provides a first woofer channel 940, a second woofer channel 942 and an nth woofer channel 944. As can be seen from Fig. 10, the woofer channels 940, 942, 944 are weighted versions of the sum signal 916, the respective weighting factors being designated a1, a2, ..., an.
  • the individual weighting factors a1, a2, ..., an depend on the one hand on the distances 930-934 and on the other hand on the general boundary condition that the volume of the bass channels at the reference point 1100 is equal to the reference volume, that is, to the target amplitude state for the bass channel at the reference playback position 1100 (Fig. 11).
  • the sum of the loudspeaker scaling values a1, a2, ..., an will be greater than 1, in order to account for the attenuation of the bass channels on the way from the corresponding subwoofer to the reference point. If only a single woofer (e.g. 1106) is provided, the scaling factor a1 will also be greater than 1, while no further scaling factors have to be calculated, as there is only a single woofer.
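By way of illustration, one possible way (an assumption, not necessarily the claimed rule) to choose the loudspeaker scaling values a1, ..., an is to split the reference volume equally over the n subwoofers and to compensate the 1/d attenuation on the way to the reference point, so that the superposed bass channels reach the reference volume at the reference playback position:

    import numpy as np

    def subwoofer_scaling_values(distances_m):
        # a_i = d_i / n, so that sum_i a_i * (1/d_i) = 1 at the reference point;
        # for typical distances of more than 1 m the sum of the a_i then exceeds 1,
        # consistent with the statement above.
        d = np.asarray(distances_m, dtype=float)
        return d / len(d)

    # usage sketch: bass channel i is the sum signal weighted with a_i
    # a1, a2 = subwoofer_scaling_values([2.5, 4.0])   # distances d1, d2 in metres
    # bass_channel_1, bass_channel_2 = a1 * sum_signal, a2 * sum_signal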
  • FIGS. 1-8 show a level artifact correction device for the loudspeaker array 800 of FIGS. 8 and 11, respectively, which can preferably be combined with the inventive low-frequency channel calculation as illustrated in FIGS. 9-11.
  • the wave field synthesis system has a speaker array 800 placed relative to a demonstration area 802.
  • the loudspeaker array shown in Fig. 8, which is a 360° array, includes four array sides 800a, 800b, 800c and 800d. If the demonstration area 802 is, for example, a cinema, it is assumed with respect to the conventions front/back or right/left that the cinema screen is on the side of the demonstration area 802 on which the sub-array 800c is arranged. In this case, the observer, who is sitting at the so-called optimal point P in the demonstration area 802, would look to the front, i.e. towards the screen.
  • Each loudspeaker array consists of a number of different individual loudspeakers 808, each of which is driven by its own loudspeaker signals provided by a wave-field synthesis module 810 via a data bus 812 shown only schematically in FIG.
  • the wave field synthesis module is configured to use information about, e.g.
  • the wave field synthesis module can also receive further inputs, such as information about the room acoustics of the demonstration area, etc.
  • the following embodiments of the present invention may in principle be performed for each point P in the demonstration area.
  • the optimum point can therefore be located anywhere in the demonstration area 802. There can also be several optimal points, e.g. on an optimal line. However, in order to obtain the best possible conditions for as many points as possible in the demonstration area 802, it is preferred to assume the optimum point or the optimal line in the center or center of gravity of the wave field synthesis system, which is defined by the loudspeaker sub-arrays 800a, 800b, 800c, 800d.
  • a more detailed representation of the wave field synthesis module 810 is given below with reference to FIGS. 2 and 3, on the basis of the wave field synthesis module 200 in FIG. 2 and of the arrangement shown in detail in FIG. 3.
  • Fig. 2 shows a wave field synthesis environment in which the present invention can be implemented.
  • it comprises a wave field synthesis module 200 with various inputs 202, 204, 206 and 208 as well as various outputs 210, 212, 214, 216.
  • the input 202 receives, for example, an audio signal 1 together with associated position information for a first virtual source, for instance a first actor.
  • the audio signal 1 would then be the actual speech of this actor, while the position information as a function of time represents the current position of the first actor in the recording setting.
  • the audio signal n would be, for example, the speech of another actor who moves in the same way as or differently from the first actor.
  • the current position of the other actor to whom the audio signal n is assigned is notified to the wave field synthesis module 200 by position information synchronized with the audio signal n.
  • various virtual sources exist depending on the recording setting, wherein the audio signal of each virtual source is supplied to the wave field synthesis module 200 as a separate audio track.
  • a wave field synthesis module feeds a plurality of loudspeakers LS1, LS2, LS3, LSm by outputting loudspeaker signals via the outputs 210 to 216 to the individual loudspeakers.
  • the wave field synthesis module 200 is informed via the input 206 of the positions of the individual speakers in a playback setting, such as a movie theater.
  • a playback setting such as a movie theater.
  • the wave field synthesis module 200 can also be given further inputs, such as information about the room acoustics etc., in order to be able to simulate in a cinema the room acoustics that actually prevailed during the recording setting.
  • the loudspeaker signal supplied to the loudspeaker LS1 via the output 210 will be a superposition of component signals of the virtual sources, in that the loudspeaker signal for the loudspeaker LS1 comprises a first component originating from the virtual source 1, a second component originating from the virtual source 2, and an nth component originating from the virtual source n.
  • the individual component signals are superimposed linearly, that is to say added after their calculation, in order to simulate the linear superposition at the ear of the listener, who in a real setting will hear a linear superposition of the sound sources he can perceive.
  • the wave field synthesis module 200 has a highly parallel structure in that, starting from the audio signal for each virtual source and from the position information for the corresponding virtual source, first delay information Vi and scaling factors SFi are calculated, which depend on the position information and on the position of the currently considered loudspeaker, e.g. the loudspeaker with the ordinal number j, i.e. LSj.
  • the calculation of a delay information Vi and a scaling factor SFi on the basis of the position information of a virtual source and the position of the considered loudspeaker j is carried out by known algorithms implemented in the devices 300, 302, 304, 306.
  • the individual component signals are then summed by a summer 320 in order to determine the discrete value for the current time tA of the loudspeaker signal for the loudspeaker j, which can then be supplied via the output (for example the output 214 if the loudspeaker j is the loudspeaker LS3) to that loudspeaker.
  • the calculation is such that, at a current time, a delay and a scaling with a scaling factor are applied for each virtual source individually, after which all component signals for a loudspeaker due to the different virtual sources are summed. If, for example, only one virtual source were present, the summer would be omitted and the signal at the output of the summer in Fig. 3 would correspond, for example, to the signal output by the device 310 when the virtual source 1 is the only virtual source.
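For illustration only, a Python sketch of this parallel structure: for one loudspeaker j, each virtual source i contributes a component signal that is delayed by Vi and weighted with SFi, and all component signals are summed (the role of summer 320). Passing Vi and SFi in as precomputed values is an assumption standing in for the actual wave field synthesis operators of the devices 300-316.

    import numpy as np

    def loudspeaker_signal(source_signals, delays_samples, scale_factors):
        # source_signals[i] is the audio signal of virtual source i;
        # delays_samples[i] is Vi and scale_factors[i] is SFi for this loudspeaker j.
        length = max(len(s) + d for s, d in zip(source_signals, delays_samples))
        out = np.zeros(length)
        for sig, Vi, SFi in zip(source_signals, delays_samples, scale_factors):
            component = SFi * np.asarray(sig, float)   # component signal of one source
            out[Vi:Vi + len(component)] += component   # summer 320: add all components
        return out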
  • Fig. 1 shows a block diagram of the level correction apparatus in a wave field synthesis system as set forth with reference to Fig. 8.
  • the wave field synthesis system includes the wave field synthesis module 810 and the loudspeaker array 800 for sounding the demonstration area 802, the wave field synthesis module 810 being configured to receive source signal information associated with a virtual sound source and source position information associated with the virtual sound source, and to calculate component signals for the loudspeakers on the basis of the virtual source.
  • the apparatus first comprises a means 100 for determining a correction value based on a target amplitude state in the demonstration area, wherein the target amplitude state depends on a position of the virtual source or on a type of the virtual source, and wherein the correction value is further based on an actual amplitude state in the demonstration area which depends on the component signals for the loudspeakers due to the virtual source.
  • the means 100 has an input 102 for obtaining a position of the virtual source when it has, e.g., a point source characteristic, or for obtaining information about a type of the source when the source is, e.g., a source for generating plane waves.
  • the means 100 is designed to output on the output side a correction value 104, which is supplied to a means 106 for manipulating an audio signal associated with the virtual source (obtained via an input 108) or for manipulating component signals for the loudspeakers due to a virtual source (obtained via an input 110).
  • in the first case, a manipulated audio signal results at an output 112, which is then fed into the wave field synthesis module 200 instead of the original audio signal provided at the input 108, in order to generate the individual loudspeaker signals 210, 212, ..., 216.
  • in the second case, manipulated component signals are obtained on the output side, which still need to be summed loudspeaker-wise (device 116), if appropriate together with manipulated component signals from other virtual sources provided via further inputs 118.
  • the device 116 again provides the loudspeaker signals 210, 212, ..., 216.
  • the alternatives of the upstream manipulation (output 112) and the embedded manipulation (output 114) shown in Fig. 1 can be used alternatively to each other. Depending on the embodiment, however, there may also be cases in which the weighting factor or correction value supplied via the input 104 to the means 106 is split, as it were, so that partly an upstream manipulation and partly an embedded manipulation is performed.
  • the upstream manipulation would consist in manipulating the audio signal of the virtual source fed to a device 310, 312, 314 or 316, respectively, prior to its injection.
  • the embedded manipulation would consist in manipulating the component signals output by the devices 310, 312, 314 and 316, respectively, prior to their summation, in order to obtain the actual loudspeaker signal.
  • Fig. 6a shows the embedded manipulation by the manipulation device 106, which is drawn in Fig. 6a as a multiplier.
  • a wave field synthesis device, which consists for example of the blocks 300, 310 and 302, 312 and 304, 314 and 306, 316 of Fig. 3, provides the component signals K11, K12, K13 for the loudspeaker LS1 and the component signals Kn1, Kn2 and Kn3 for the loudspeaker LSn.
  • the first index of Kij indicates the loudspeaker,
  • while the second index indicates the virtual source from which the component signal originates.
  • the virtual source 1, for example, is expressed in the component signals K11, ..., Kn1.
  • a multiplication of the component signals belonging to the source 1, i.e. the component signals whose second index points to the virtual source 1, with the correction factor F1 takes place.
  • the correction factors F1, F2 and F3 are, with all other geometric parameters being equal, dependent only on the location of the corresponding virtual source. If, for example, all three virtual sources were point sources (that is, of the same kind) at the same position, the correction factors for the sources would be identical. This will be explained in more detail with reference to Fig. 4, since it makes it possible to use a look-up table with position information and respective associated correction factors, which must be created at some point but which can be accessed quickly during operation, so that a setpoint/actual-value calculation and comparison operation does not have to be performed constantly during operation, which is, however, also possible in principle.
  • Fig. 6b shows the inventive alternative of manipulating the sources.
  • here, the manipulation device is connected upstream of the wave field synthesis device and is operative to correct the audio signals of the sources with the corresponding correction factors in order to obtain manipulated audio signals for the virtual sources, which are then supplied to the wave field synthesis device to obtain the component signals, which are then accumulated by respective component summation devices in order to obtain the loudspeaker signals for the respective loudspeakers, such as the loudspeaker LSi.
  • the means 100 for determining the correction value is formed as a look-up table 400 which stores position/correction-factor value pairs.
  • the means 100 is preferably also provided with an interpolator 402 in order, on the one hand, to keep the size of the look-up table 400 within a limited frame and, on the other hand, to produce an interpolated current correction factor at an output 408 also for current positions of a virtual source, which are fed into the interpolator via an input 404, at least using one or more adjacent position/correction-factor value pairs stored in the look-up table, which are supplied to the interpolator 402 via an input 406.
  • the interpolator 402 may also be omitted, so that the means 100 for determining performs a direct access to the look-up table using position information supplied at an input 410 and provides an appropriate correction factor at an output 412. If the current position information associated with the audio track of the virtual source does not correspond exactly to position information found in the look-up table, the look-up table may still be assigned a simple rounding-up/rounding-down function to the nearest support value stored in the table, which is then used instead of the current value.
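A minimal sketch of the look-up table / interpolator idea (means 100 with table 400 and interpolator 402); the one-dimensional position grid, the linear interpolation and all numbers are simplifying assumptions:

    import numpy as np

    class CorrectionLookupTable:
        def __init__(self, positions, correction_factors):
            # position/correction-factor value pairs, e.g. distances of the
            # virtual source from the reference point and associated factors
            self.positions = np.asarray(positions, dtype=float)
            self.factors = np.asarray(correction_factors, dtype=float)

        def correction_factor(self, position):
            # interpolate between adjacent stored value pairs (interpolator 402);
            # outside the table range the nearest stored value is returned
            return float(np.interp(position, self.positions, self.factors))

    # usage sketch with made-up numbers:
    # table = CorrectionLookupTable([1.0, 2.0, 5.0, 10.0], [0.9, 1.0, 1.3, 1.8])
    # f = table.correction_factor(3.5)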
  • the means for determining may be configured to actually perform a setpoint-actual value comparison.
  • the means 100 of Fig. 1 includes a desired amplitude state determination device 500 and an actual amplitude state determination device 502 in order to provide a desired amplitude state 504 and an actual amplitude state 506, which are supplied to a comparison device 508.
  • in the comparison device 508, a quotient of the desired amplitude state 504 and the actual amplitude state 506 is calculated in order to produce a correction factor 510, which is supplied to the means 106 for manipulating shown in Fig. 1 for further use.
  • the correction value can also be stored in a lookup table.
  • the desired amplitude state determination device 500 is designed to determine a target level at the optimum point for a virtual source arranged at a specific position or configured in a specific manner.
  • the target amplitude state determination device 500 does not need any component signals, of course, since the target amplitude state is independent of the component signals.
  • as shown in Fig. 5, however, the component signals are supplied to the actual amplitude state determination device 502, which, depending on the embodiment, can additionally obtain information about the loudspeaker positions, information about loudspeaker transfer functions and/or information about directional characteristics of the loudspeakers, in order to determine the actual situation as well as possible.
  • the actual amplitude state determination device 502 can also be embodied as an actual measurement system in order to determine an actual level situation at the optimum point for specific virtual sources at specific positions.
  • FIG. 7a shows a diagram for determining a desired amplitude state at a predetermined point, which is designated "optimal point" in Fig. 7a and which lies in the demonstration area 802 of Fig. 8.
  • a virtual source 700 is shown as a point source, which generates a sound field with concentric wavefronts. Furthermore, the level L_v of the virtual source 700 is known from the audio signal for the virtual source 700.
  • the target amplitude state or, when the amplitude state is a level state, the target level at the point P in the demonstration area is easily obtained in that the level L_P at the point P equals the quotient of L_v and the distance r which the point P has from the virtual source 700, i.e. L_P = L_v / r.
  • the desired amplitude state can thus easily be determined by calculating the level L_v of the virtual source and by calculating the distance r from the optimal point to the virtual source.
  • for this purpose, a coordinate transformation of the virtual coordinates into the coordinates of the presentation space, or a coordinate transformation of the presentation space coordinates of the point P into the virtual coordinates, must be performed, as is known to those skilled in the art of wave field synthesis.
  • if the virtual source is an infinitely distant virtual source which generates plane waves at the point P, the distance between the point P and the source is not needed to determine the desired amplitude state, since the source lies at infinity anyway. In this case, only information about the type of the source is needed.
  • the desired level at point P is then equal to the level associated with the planar wave field generated by the infinitely distant virtual source.
  • Fig. 7 is a diagram for explaining the actual amplitude state.
  • drawn are different loudspeakers 808, each of which is fed with its own loudspeaker signal, generated e.g. by the wave field synthesis module 810 of Fig. 8.
  • each speaker is modeled as a point source that outputs a concentric wave field.
  • the law of the concentric wave field is again that the level decreases according to 1 / r.
  • the signal generated by the speaker 808 directly on the speaker diaphragm or the level of this signal may be determined on the basis of the speaker characteristics and the component signal in the loudspeaker signal LSn, which goes back to the considered virtual source.
  • in addition, the distance between P and the loudspeaker diaphragm of the loudspeaker LSn can be calculated, so that a level at the point P can be obtained on the basis of the component signal which goes back to the considered virtual source and has been emitted by the loudspeaker LSn.
  • a corresponding procedure can also be performed for the other loudspeakers of the loudspeaker array, so that a number of "sub-level values" result for the point P, each representing a signal contribution of the considered virtual source which has passed from an individual loudspeaker to the listener at the point P. By summing these sub-level values, the entire actual amplitude state at the point P is obtained, which, as stated, can then be brought closer to the desired amplitude state by means of a correction value, which is preferably multiplicative but could in principle also be additive or subtractive.
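As an illustration (not the author's implementation), the desired level, the actual level and the resulting multiplicative correction value could be computed as follows; modelling each loudspeaker as a point source with a 1/r decay and adding the sub-level values directly, ignoring phase, are the simplifying assumptions named above:

    import numpy as np

    def desired_level(source_level, source_position, point_p):
        # Desired amplitude state at point P for a point source (Fig. 7a): L_P = L_v / r
        r = np.linalg.norm(np.asarray(source_position, float) - np.asarray(point_p, float))
        return source_level / r

    def actual_level(component_levels_at_speakers, speaker_positions, point_p):
        # Actual amplitude state at point P: each loudspeaker is modelled as a point
        # source, its component-signal level decays with 1/r towards P, and the
        # sub-level values are summed.
        total = 0.0
        for level, pos in zip(component_levels_at_speakers, speaker_positions):
            r = np.linalg.norm(np.asarray(pos, float) - np.asarray(point_p, float))
            total += level / r
        return total

    def correction_factor(source_level, source_position, component_levels, speaker_positions, point_p):
        # Multiplicative correction value: quotient of desired and actual level.
        return desired_level(source_level, source_position, point_p) / \
               actual_level(component_levels, speaker_positions, point_p)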
  • the desired level, that is, the desired amplitude state, for a point is thus calculated on the basis of certain source forms. It is preferred that the optimal point, or the point in the demonstration area being considered, is usefully located in the middle of the wave field synthesis system. It should be noted at this point that an improvement is already achieved even if the point which was used to calculate the desired amplitude state does not coincide exactly with the point which was used to determine the actual amplitude state.
  • since the aim is to achieve the best possible artifact reduction for the largest possible number of points in the demonstration area, it is in principle sufficient that a desired amplitude state is determined for some point in the demonstration area and that an actual amplitude state is also determined for some point in the demonstration area; however, it is preferred that the point to which the actual amplitude state relates lies in a zone around the point for which the desired amplitude state has been determined, this zone preferably being smaller than 2 meters for normal cinema applications. For best results, these points should essentially coincide.
  • the level practically generated by superposition is calculated at this point, which is called the optimum point in the demonstration area.
  • the levels of the individual speakers and / or sources are then corrected according to the invention with this factor.
  • attention is drawn in particular to Fig. 6b, in which the means 914 for summation is drawn, delivering the sum signal 916 on the output side, while on the input side it receives the scaled object signals 912 which, as can be seen from Fig. 6b, are obtained by scaling the source signals of the sources 1, 2, 3 with the corresponding audio object scaling values or correction values F1, F2, F3.
  • the version shown in Fig. 6b is preferred, in which a scaling or manipulation or correction is already carried out at the audio object signal level and not at the component level, as shown in Fig. 6a.
  • in contrast to the concept of component-level correction shown in Fig. 6a, according to the inventive concept of low-frequency channel generation at least the calculation of the audio object scaling values F1, F2, ..., Fn has to be performed only once.
  • the scaling of the subwoofer channel is thus similar to the scaling of the overall volume of all loudspeakers at the reference point of the wave field synthesis playback system.
  • the inventive method is thus suitable for any number of subwoofer loudspeakers, which are all scaled so that they reach a reference level in the center of the wave field synthesis system.
  • the reference volume depends only on the position of the virtual sound source. With the known dependencies between the distance of the sound object from the reference point and the associated attenuation of the volume, the individual volume of the respective sound object for each subwoofer channel is preferably calculated. The delay of each source is calculated, analogously to the volume scaling, from the distance of the virtual source to the reference point.
  • each subwoofer loudspeaker reproduces the sum of all scaled sound objects.
  • the method according to the invention for generating a bass channel can be implemented in hardware or in software.
  • the level correction method according to the invention can be implemented in hardware or in software.
  • the implementation may be on a digital storage medium, in particular a floppy disk or CD with electronically readable control signals, which may interact with a programmable computer system such that the method is executed.
  • the invention thus also consists in a computer program product with a program code stored on a machine-readable carrier for carrying out the method for level correction when the computer program product runs on a computer.
  • the invention can thus be realized as a computer program with a program code for carrying out the method when the computer program runs on a computer.

Landscapes

  • Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Acoustics & Sound (AREA)
  • Signal Processing (AREA)
  • Stereophonic System (AREA)
  • Obtaining Desirable Characteristics In Audible-Bandwidth Transducers (AREA)

Abstract

The invention relates to a method for producing a low-frequency channel for a low-frequency loudspeaker arranged at a predefined position, comprising: providing a plurality of audio objects, each assigned an object position and an object description (900); calculating an object scaling value for each audio object on the basis of the object description (906), such that an actual amplitude state at a reference playback position at least approximates a target amplitude state; scaling each object signal with an associated audio object scaling value (910) and then summing the scaled object signals (914); and deriving a low-frequency channel for the low-frequency loudspeaker from the resulting sum signal and supplying this channel to the low-frequency loudspeaker (918). Owing to the scaling of the individual object signals of the audio objects, this procedure is independent of the actual situation of a multi-channel reproduction system with regard to the number and density of the loudspeakers and the size of the actual presentation area.

Claims (26)

  1. Device for generating a low-frequency channel (940, 942, 944) for a low-frequency loudspeaker (1106, 1110), comprising:
    a means (900) for providing a plurality of audio objects, an audio object having associated therewith an object signal and an object description;
    a means (906) for calculating an audio object scaling value for each audio object as a function of the object description (904);
    a means (910) for scaling each object signal with the associated audio object scaling value (908), to obtain a scaled object signal (912) for each audio object;
    a means (914) for summing the scaled object signals, to obtain a sum signal (916); and
    a means (918) for providing the low-frequency channel (920, 940, 942, 944) for the low-frequency loudspeaker (1106, 1110) on the basis of the sum signal (916).
  2. Device according to claim 1, wherein the low-frequency loudspeaker is arranged at a predetermined loudspeaker position (1108, 1112), the predetermined loudspeaker position (1108) differing from a reference reproduction position (100), and
    wherein the means (918) for providing the low-frequency channel is implemented to calculate a loudspeaker scaling value for the low-frequency loudspeaker as a function of the predetermined loudspeaker position (1108), such that a low-frequency signal has, at the reference reproduction position (1100), a volume corresponding to a volume of the sum signal (916) within a predetermined tolerance range, and
    the means (918) for providing further being implemented to scale the sum signal (916) with the loudspeaker scaling value in order to generate the low-frequency channel (920, 940, 942, 944).
  3. Device according to claim 1 or 2, wherein each object signal is a low-frequency signal with an upper cut-off frequency lower than or equal to 250 Hz.
  4. Device according to claim 1 or 2, wherein the sum signal (916) has an upper cut-off frequency which is higher than 8 kHz, and wherein the means (918) for providing the low-frequency channel is implemented to perform low-pass filtering with a cut-off frequency lower than or equal to 250 Hz.
  5. Device according to one of the preceding claims,
    wherein an audio object among the plurality of audio objects comprises an object description comprising an audio object position, and
    wherein the means (906) for calculating an audio object scaling value for the audio object is implemented to calculate the audio object scaling value as a function of the position of the audio object and of a reference reproduction position (1100) and as a function of an object volume associated with the audio object.
  6. Device according to one of the preceding claims,
    wherein a plurality of low-frequency channels can be generated for a plurality of low-frequency loudspeakers at predetermined low-frequency loudspeaker positions, and
    wherein the means (918) for providing is implemented to calculate, as a function of the position of a low-frequency loudspeaker and as a function of a number of further low-frequency loudspeakers, a loudspeaker scaling value for each low-frequency loudspeaker,
    such that a low-frequency signal, which is a superposition of the output signals of all low-frequency loudspeakers at the reference position (1100), has a volume corresponding to a volume of the sum signal (916) within a predetermined tolerance range.
  7. Device according to one of the preceding claims,
    wherein the means (906) for calculating audio object scaling values is further implemented to calculate, for each audio object, an audio object delay value which depends on an object position and on a reference reproduction position, and
    wherein the means (914) for summing is implemented to delay each object signal or each scaled object signal, prior to summing, by the corresponding audio object delay value.
  8. Device according to one of the preceding claims,
    wherein the means (918) for providing is implemented to calculate, for a low-frequency loudspeaker, a low-frequency loudspeaker delay value which depends on a distance between the low-frequency loudspeaker and the reference reproduction position, and
    wherein the means (918) for providing is further implemented to take the low-frequency loudspeaker delay value into account when providing the low-frequency channel.
  9. Device according to claim 12, wherein several low-frequency loudspeakers are provided, and wherein the means (918) for providing is further implemented to calculate the loudspeaker scaling values such that a loudspeaker scaling value is obtained for each low-frequency loudspeaker according to the following equation:

    (a1 + a2 + … + an) · s = LSref,

    LSref being a reference volume at a reference reproduction position (1100), s being the sum signal (916), a1 being the loudspeaker scaling value of a first low-frequency loudspeaker, a2 being a loudspeaker scaling value of a second low-frequency loudspeaker, and an being a loudspeaker scaling value of an n-th low-frequency loudspeaker.
  10. Device according to claim 9, wherein the loudspeaker scaling value of a low-frequency loudspeaker depends on a distance between the low-frequency loudspeaker and the reference reproduction position (1100).
  11. Device according to one of the preceding claims, implemented to operate in a wave field synthesis system with a wave field synthesis module (810) and an array (800) of loudspeakers (808) for supplying sound to a presentation area (802), the wave field synthesis module being implemented to receive an audio signal associated with a virtual sound source as well as source position information associated with the virtual sound source and to calculate, taking loudspeaker position information into account, component signals for the loudspeakers on the basis of the virtual source, and
    wherein the means (906) for calculating the audio object scaling values (908) comprises a means (100) for determining a correction value as the audio object scaling value, the means (100) for determining being implemented to calculate the audio object scaling value such that it is based on a target amplitude state in the presentation area, the target amplitude state depending on a position of the virtual source or a type of the virtual source, and such that it is further based on an actual amplitude state in the presentation area which is based on the component signals for the loudspeakers due to the virtual source.
  12. Device according to claim 11, wherein the means (100) for determining the correction value (104) is implemented to calculate the target amplitude state for a predetermined point in the reproduction area (500), and to determine the actual amplitude state for a zone in the reproduction area (502) which is equal to the predetermined point or which extends within a tolerance range around the predetermined point.
  13. Device according to claim 12, wherein the predetermined tolerance range is a sphere with a radius smaller than 2 metres around the predetermined point.
  14. Device according to one of claims 11 to 13, wherein the virtual source is a uniform wave source, and wherein the means (100) for determining the correction value is implemented to determine a correction value in which an amplitude state of the audio signal associated with the virtual source is equal to the target amplitude state.
  15. Device according to one of claims 11 to 14, wherein the virtual source is a point source, and wherein the means (100) for determining the correction factor is implemented to operate on the basis of a target amplitude state which is equal to a quotient of an amplitude state of the audio signal associated with the virtual source and the distance between the reproduction area and the position of the virtual source.
  16. Device according to one of claims 11 to 15,
    wherein the means (100) for determining the correction value is implemented to operate on the basis of an actual amplitude state the determination of which takes a transfer function of the loudspeaker (808) into account.
  17. Device according to one of claims 11 to 16,
    wherein the means (100) for determining the correction factor is implemented to calculate, for each loudspeaker, an attenuation value which depends on the position of the loudspeaker and on a point to be considered in the presentation area, and wherein the means (100) for determining is further implemented to weight the component signal of a loudspeaker with the attenuation value for that loudspeaker, to obtain a weighted component signal, and to further sum the component signals or the correspondingly weighted component signals of further loudspeakers, to obtain the actual amplitude state at the considered point on which the correction value (104) is based.
  18. Device according to one of claims 11 to 17, wherein the means (106) for manipulating is implemented to use the correction value (104) as a correction factor which is equal to a quotient of the actual amplitude state and the target amplitude state.
  19. Device according to claim 18, wherein the means (106) for manipulating is implemented to scale the audio signal associated with the virtual source with the correction factor prior to the calculation of the component signals by the wave field synthesis module (810).
  20. Device according to one of claims 11 to 19,
    wherein the target amplitude state is a target sound level, and wherein the actual amplitude state is an actual sound level.
  21. Device according to claim 20, wherein the target sound level and the actual sound level are based on a target volume or an actual volume, the volume being a measure of an energy incident on a reference surface within a period of time.
  22. Device according to claim 20 or 21, wherein the means (100) for determining the correction value is implemented to calculate the target amplitude state by squaring, sample by sample, the samples of the audio signal associated with the virtual source and by summing a number of squared samples, the number being a measure of an observation time, and
    wherein the means (100) for determining the correction value is further implemented to calculate the actual amplitude state by squaring, sample by sample, each component signal and by summing a number of samples equal to the number of squared samples summed for the calculation of the target amplitude state, the summation results of the component signals further being added up to obtain a measure of the actual amplitude state.
  23. Device according to one of claims 11 to 22, wherein the means (100) for determining the correction value (104) comprises a look-up table (400) in which position/correction-factor value pairs are stored, a correction factor of a value pair depending on an arrangement of the loudspeakers in the loudspeaker array and on a position of a virtual source, and the correction factor being chosen such that a deviation between an actual amplitude state due to the virtual source at the associated position and a target amplitude state is at least reduced when the correction factor is used by the means (106) for manipulating.
  24. Device according to claim 23, wherein the means (100) for determining is further implemented to interpolate a current correction factor for a current position of the virtual source from one or more correction factors of position/correction-factor value pairs (402) whose position or positions are adjacent to the current position.
  25. Method for generating a low-frequency channel (940, 942, 944) for a low-frequency loudspeaker (1106, 1110), comprising the following steps:
    providing (900) a plurality of audio objects, an audio object having associated therewith an object signal and an object description;
    calculating (906) an audio object scaling value for each audio object as a function of the object description (904);
    scaling (910) each object signal with the associated audio object scaling value (908), to obtain a scaled object signal (912) for each audio object;
    summing (914) the scaled object signals, to obtain a sum signal (916); and
    providing (918) the low-frequency channel (920, 940, 942, 944) for the low-frequency loudspeaker (1106, 1110) on the basis of the sum signal (916).
  26. Computer program with a program code for performing the method according to claim 25 when the program runs on a computer.
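To illustrate the joint loudspeaker scaling required by claims 6 and 9 above, the following sketch computes scaling values a1, ..., an for several subwoofers so that the superposition of their outputs at the reference reproduction position reaches a target level. The 1/r attenuation model, the equal per-loudspeaker share and the helper name loudspeaker_scaling_values are assumptions made for illustration only; the claims fix only the constraint (a1 + a2 + … + an) · s = LSref within a tolerance range.

    import numpy as np

    def loudspeaker_scaling_values(subwoofer_positions, reference_point, target_gain=1.0):
        # Jointly scale n subwoofers so that the superposition of their outputs at the
        # reference reproduction position reaches the target level (cf. claims 6, 9, 10).
        # Assumptions for illustration: a 1/r attenuation model and an equal share per
        # loudspeaker; the claims fix only the resulting level at the reference position.
        ref = np.asarray(reference_point, dtype=float)
        distances = [np.linalg.norm(np.asarray(p, dtype=float) - ref) for p in subwoofer_positions]
        n = len(distances)
        # Each loudspeaker is to contribute target_gain / n at the reference point,
        # so its drive gain pre-compensates its own distance attenuation.
        return [(target_gain / n) * max(r, 1.0) for r in distances]

In a complete system, per-loudspeaker delay values as in claim 8 would additionally be applied so that all contributions arrive at the reference reproduction position simultaneously.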
EP04797996A 2003-11-26 2004-11-18 Procede et dispositif de production d'un canal a frequences basses Active EP1671516B1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
DE10355146A DE10355146A1 (de) 2003-11-26 2003-11-26 Vorrichtung und Verfahren zum Erzeugen eines Tieftonkanals
PCT/EP2004/013130 WO2005060307A1 (fr) 2003-11-26 2004-11-18 Procede et dispositif de production d'un canal a frequences basses

Publications (2)

Publication Number Publication Date
EP1671516A1 EP1671516A1 (fr) 2006-06-21
EP1671516B1 true EP1671516B1 (fr) 2007-02-14

Family

ID=34638189

Family Applications (1)

Application Number Title Priority Date Filing Date
EP04797996A Active EP1671516B1 (fr) 2003-11-26 2004-11-18 Procede et dispositif de production d'un canal a frequences basses

Country Status (6)

Country Link
US (1) US8699731B2 (fr)
EP (1) EP1671516B1 (fr)
JP (1) JP4255031B2 (fr)
CN (1) CN100588286C (fr)
DE (2) DE10355146A1 (fr)
WO (1) WO2005060307A1 (fr)

Families Citing this family (31)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE102005033238A1 (de) * 2005-07-15 2007-01-25 Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. Vorrichtung und Verfahren zum Ansteuern einer Mehrzahl von Lautsprechern mittels eines DSP
DE102005033239A1 (de) * 2005-07-15 2007-01-25 Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. Vorrichtung und Verfahren zum Steuern einer Mehrzahl von Lautsprechern mittels einer graphischen Benutzerschnittstelle
US8180067B2 (en) 2006-04-28 2012-05-15 Harman International Industries, Incorporated System for selectively extracting components of an audio input signal
US8036767B2 (en) 2006-09-20 2011-10-11 Harman International Industries, Incorporated System for extracting and changing the reverberant content of an audio input signal
DE102006053919A1 (de) 2006-10-11 2008-04-17 Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. Vorrichtung und Verfahren zum Erzeugen einer Anzahl von Lautsprechersignalen für ein Lautsprecher-Array, das einen Wiedergaberaum definiert
JP4962047B2 (ja) * 2007-03-01 2012-06-27 ヤマハ株式会社 音響再生装置
US9031267B2 (en) * 2007-08-29 2015-05-12 Microsoft Technology Licensing, Llc Loudspeaker array providing direct and indirect radiation from same set of drivers
JP5338053B2 (ja) * 2007-09-11 2013-11-13 ソニー株式会社 波面合成信号変換装置および波面合成信号変換方法
DE102007059597A1 (de) 2007-09-19 2009-04-02 Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. Eine Vorrichtung und ein Verfahren zur Ermittlung eines Komponentensignals in hoher Genauigkeit
KR100943215B1 (ko) * 2007-11-27 2010-02-18 한국전자통신연구원 음장 합성을 이용한 입체 음장 재생 장치 및 그 방법
KR101461685B1 (ko) * 2008-03-31 2014-11-19 한국전자통신연구원 다객체 오디오 신호의 부가정보 비트스트림 생성 방법 및 장치
US8620009B2 (en) * 2008-06-17 2013-12-31 Microsoft Corporation Virtual sound source positioning
EP2486737B1 (fr) 2009-10-05 2016-05-11 Harman International Industries, Incorporated Système pour l'extraction spatiale de signaux audio
US8553722B2 (en) * 2011-12-14 2013-10-08 Symbol Technologies, Inc. Method and apparatus for providing spatially selectable communications using deconstructed and delayed data streams
KR20140046980A (ko) * 2012-10-11 2014-04-21 한국전자통신연구원 오디오 데이터 생성 장치 및 방법, 오디오 데이터 재생 장치 및 방법
JP5590169B2 (ja) * 2013-02-18 2014-09-17 ソニー株式会社 波面合成信号変換装置および波面合成信号変換方法
CN105144751A (zh) * 2013-04-15 2015-12-09 英迪股份有限公司 用于产生虚拟对象的音频信号处理方法
WO2014204911A1 (fr) * 2013-06-18 2014-12-24 Dolby Laboratories Licensing Corporation Gestion des basses pour rendu audio
WO2015017037A1 (fr) 2013-07-30 2015-02-05 Dolby International Ab Réalisation de panoramique d'objets audio pour des agencements de haut-parleur arbitraires
DE102013218176A1 (de) 2013-09-11 2015-03-12 Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. Vorrichtung und verfahren zur dekorrelation von lautsprechersignalen
WO2015147434A1 (fr) * 2014-03-25 2015-10-01 인텔렉추얼디스커버리 주식회사 Dispositif et procédé de traitement de signal audio
JP5743003B2 (ja) * 2014-05-09 2015-07-01 ソニー株式会社 波面合成信号変換装置および波面合成信号変換方法
JP2016100613A (ja) * 2014-11-18 2016-05-30 ソニー株式会社 信号処理装置、信号処理方法、およびプログラム
US9875756B2 (en) * 2014-12-16 2018-01-23 Psyx Research, Inc. System and method for artifact masking
WO2017031016A1 (fr) 2015-08-14 2017-02-23 Dts, Inc. Gestion des basses pour un système audio à base d'objets
US9794689B2 (en) * 2015-10-30 2017-10-17 Guoguang Electric Company Limited Addition of virtual bass in the time domain
EP3611937A4 (fr) * 2017-04-12 2020-10-07 Yamaha Corporation Dispositif de traitement d'informations, procédé de traitement d'informations et programme
US11102601B2 (en) * 2017-09-29 2021-08-24 Apple Inc. Spatial audio upmixing
EP3868129B1 (fr) * 2018-10-16 2023-10-11 Dolby Laboratories Licensing Corporation Méthodes et dispositifs pour la gestion des basses
US11968518B2 (en) 2019-03-29 2024-04-23 Sony Group Corporation Apparatus and method for generating spatial audio
JP2021048500A (ja) * 2019-09-19 2021-03-25 ソニー株式会社 信号処理装置、信号処理方法および信号処理システム

Family Cites Families (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
NL8800745A (nl) * 1988-03-24 1989-10-16 Augustinus Johannes Berkhout Werkwijze en inrichting voor het creeren van een variabele akoestiek in een ruimte.
JPH02296498A (ja) 1989-05-11 1990-12-07 Matsushita Electric Ind Co Ltd 立体音響再生装置および立体音響再生装置内蔵テレビセット
JP3067140B2 (ja) 1989-11-17 2000-07-17 日本放送協会 立体音響再生方法
GB9204485D0 (en) * 1992-03-02 1992-04-15 Trifield Productions Ltd Surround sound apparatus
US5495576A (en) * 1993-01-11 1996-02-27 Ritchey; Kurtis J. Panoramic image based virtual reality/telepresence audio-visual system and method
US6240189B1 (en) 1994-06-08 2001-05-29 Bose Corporation Generating a common bass signal
GB2294854B (en) * 1994-11-03 1999-06-30 Solid State Logic Ltd Audio signal processing
JPH1063470A (ja) * 1996-06-12 1998-03-06 Nintendo Co Ltd 画像表示に連動する音響発生装置
DE19739425A1 (de) 1997-09-09 1999-03-11 Bosch Gmbh Robert Verfahren und Anordnung zur Wiedergabe eines sterophonen Audiosignals
US6349285B1 (en) * 1999-06-28 2002-02-19 Cirrus Logic, Inc. Audio bass management methods and circuits and systems using the same
JP2001224099A (ja) * 2000-02-14 2001-08-17 Pioneer Electronic Corp オーディオシステムにおける音場補正方法
GB0203895D0 (en) * 2002-02-19 2002-04-03 1 Ltd Compact surround-sound system

Also Published As

Publication number Publication date
DE10355146A1 (de) 2005-07-07
CN100588286C (zh) 2010-02-03
WO2005060307A1 (fr) 2005-06-30
US8699731B2 (en) 2014-04-15
JP2007512740A (ja) 2007-05-17
EP1671516A1 (fr) 2006-06-21
DE502004002926D1 (de) 2007-03-29
US20060280311A1 (en) 2006-12-14
JP4255031B2 (ja) 2009-04-15
CN1906971A (zh) 2007-01-31

Similar Documents

Publication Publication Date Title
EP1671516B1 (fr) Procede et dispositif de production d'un canal a frequences basses
EP1637012B1 (fr) Dispositif de synthese de champ electromagnetique et procede d'actionnement d'un reseau de haut-parleurs
EP1525776B1 (fr) Dispositif de correction de niveau dans un systeme de synthese de champ d'ondes
EP1872620B9 (fr) Dispositif et procede pour commander une pluralite de haut-parleurs au moyen d'une interface graphique d'utilisateur
EP1576847B1 (fr) Systeme de restitution audio et procede de restitution d'un signal audio
EP1800517B1 (fr) Dispositif et procede de commande d'une installation de sonorisation et installation de sonorisation correspondante
EP1782658B1 (fr) Dispositif et procede de commande d'une pluralite de haut-parleurs a l'aide d'un dsp
EP1972181B1 (fr) Dispositif et procédé de simulation de systèmes wfs et de compensation de propriétés wfs influençant le son
EP1606975B1 (fr) Dispositif et procede de calcul d'une valeur discrete dans un signal de haut-parleur
EP2754151B1 (fr) Dispositif, procédé et système électroacoustique de prolongement d'un temps de réverbération
DE10254470A1 (de) Vorrichtung und Verfahren zum Bestimmen einer Impulsantwort und Vorrichtung und Verfahren zum Vorführen eines Audiostücks

Legal Events

Date Code Title Description
PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

17P Request for examination filed

Effective date: 20060509

AK Designated contracting states

Kind code of ref document: A1

Designated state(s): DE FR GB NL

GRAP Despatch of communication of intention to grant a patent

Free format text: ORIGINAL CODE: EPIDOSNIGR1

DAX Request for extension of the european patent (deleted)
RBV Designated contracting states (corrected)

Designated state(s): DE FR GB NL

GRAS Grant fee paid

Free format text: ORIGINAL CODE: EPIDOSNIGR3

GRAA (expected) grant

Free format text: ORIGINAL CODE: 0009210

RAP1 Party data changed (applicant data changed or rights of an application transferred)

Owner name: FRAUNHOFER-GESELLSCHAFT ZUR FOERDERUNG DER ANGEWAN

AK Designated contracting states

Kind code of ref document: B1

Designated state(s): DE FR GB NL

REG Reference to a national code

Ref country code: GB

Ref legal event code: FG4D

Free format text: NOT ENGLISH

REF Corresponds to:

Ref document number: 502004002926

Country of ref document: DE

Date of ref document: 20070329

Kind code of ref document: P

GBT Gb: translation of ep patent filed (gb section 77(6)(a)/1977)

Effective date: 20070423

ET Fr: translation filed
PLBE No opposition filed within time limit

Free format text: ORIGINAL CODE: 0009261

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: NO OPPOSITION FILED WITHIN TIME LIMIT

26N No opposition filed

Effective date: 20071115

REG Reference to a national code

Ref country code: FR

Ref legal event code: PLFP

Year of fee payment: 12

REG Reference to a national code

Ref country code: FR

Ref legal event code: PLFP

Year of fee payment: 13

REG Reference to a national code

Ref country code: FR

Ref legal event code: PLFP

Year of fee payment: 14

P01 Opt-out of the competence of the unified patent court (upc) registered

Effective date: 20230524

PGFP Annual fee paid to national office [announced via postgrant information from national office to epo]

Ref country code: NL

Payment date: 20231122

Year of fee payment: 20

PGFP Annual fee paid to national office [announced via postgrant information from national office to epo]

Ref country code: GB

Payment date: 20231123

Year of fee payment: 20

PGFP Annual fee paid to national office [announced via postgrant information from national office to epo]

Ref country code: FR

Payment date: 20231122

Year of fee payment: 20

Ref country code: DE

Payment date: 20231120

Year of fee payment: 20