EP1606975B1 - Device and method for calculating a discrete value in a loudspeaker signal - Google Patents

Device and method for calculating a discrete value in a loudspeaker signal

Info

Publication number
EP1606975B1
EP1606975B1 (application EP04732100A)
Authority
EP
European Patent Office
Prior art keywords
time
delay
value
virtual source
weighting factor
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Lifetime
Application number
EP04732100A
Other languages
German (de)
English (en)
Other versions
EP1606975A2 (fr)
Inventor
Thomas Röder
Thomas Sporer
Sandra Brix
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Fraunhofer Gesellschaft zur Forderung der Angewandten Forschung eV
Original Assignee
Fraunhofer Gesellschaft zur Forderung der Angewandten Forschung eV
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Fraunhofer Gesellschaft zur Forderung der Angewandten Forschung eV filed Critical Fraunhofer Gesellschaft zur Forderung der Angewandten Forschung eV
Publication of EP1606975A2
Application granted
Publication of EP1606975B1
Anticipated expiration
Expired - Lifetime (current legal status)

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04S STEREOPHONIC SYSTEMS
    • H04S 1/00 Two-channel systems
    • H04S 7/00 Indicating arrangements; Control arrangements, e.g. balance control
    • H04S 7/30 Control circuits for electronic adaptation of the sound field
    • H04S 2420/00 Techniques used in stereophonic systems covered by H04S but not provided for in its groups
    • H04S 2420/13 Application of wave-field synthesis in stereophonic audio systems
    • H04S 3/00 Systems employing more than two channels, e.g. quadraphonic
    • H04S 3/002 Non-adaptive circuits, e.g. manually adjustable or static, for enhancing the sound image or the spatial distribution

Definitions

  • the present invention relates to wave-field synthesis systems, and more particularly to wave-field synthesis systems that permit moving virtual sources.
  • Applied to acoustics, any shape of an incoming wavefront can be simulated by a large number of loudspeakers arranged side by side (a so-called loudspeaker array).
  • the audio signal for each loudspeaker must be fed with a time delay and an amplitude scaling so that the radiated sound fields of the individual loudspeakers superimpose correctly.
  • the contribution to each loudspeaker is calculated separately for each source and the resulting signals are added together.
  • reflections can also be reproduced as additional sources via the loudspeaker array. The computational effort therefore depends heavily on the number of sound sources, the reflection properties of the recording room and the number of loudspeakers.
  • the particular advantage of this technique is that a natural spatial sound impression is possible over a large area of the playback room.
  • the direction and distance of sound sources are reproduced very accurately.
  • virtual sound sources can even be positioned between the real speaker array and the listener.
  • while wave field synthesis works well for environments whose characteristics are known, irregularities occur when those characteristics change or when wave field synthesis is performed on the basis of environmental conditions that do not match the actual nature of the environment.
  • the technique of wave field synthesis can also be used advantageously to supplement a visual perception with a corresponding spatial audio perception.
  • production in virtual studios has so far focused on providing an authentic visual impression of the virtual scene.
  • the acoustic impression matching the image is usually impressed on the audio signal afterwards by manual operations in so-called post-production, or it is classified as too complex and time-consuming to realize and is therefore neglected. This usually leads to a contradiction between the individual sensory impressions, with the result that the designed space, i.e. the designed scene, is perceived as less authentic.
  • a camera zoom, for example, should be included in the sound design, as should the position of two loudspeakers L and R.
  • tracking data of a virtual studio are written together with an associated time code from the system into a file.
  • image, sound and timecode are recorded on a VTR.
  • the camdump file is transferred to a computer which generates control data for an audio workstation from it and outputs this data via a MIDI interface in synchronism with the image coming from the VTR.
  • the actual audio editing such as positioning the sound source in the surround field and inserting early reflections and reverberation takes place within the audio workstation.
  • the signal is processed for a 5.1 surround loudspeaker system.
  • Camera tracking parameters as well as positions of sound sources in the recording setting can be recorded on real movie sets. Such data can also be generated in virtual studios.
  • in a virtual studio, an actor or presenter is standing alone in a recording room. In particular, he stands in front of a blue wall, which is also referred to as a blue box or blue panel. A pattern of blue and light blue stripes is applied to this blue wall. The special feature of this pattern is that the stripes have different widths and thus result in a variety of stripe combinations. Because of the unique stripe combinations on the blue wall, it is possible, when the blue wall is later replaced by a virtual background, to determine exactly in which direction the camera is pointing. Using this information, a computer can determine the background for the current camera viewing angle. Furthermore, sensors on the camera are evaluated which capture and output additional camera parameters.
  • Typical parameters of a camera which are detected by means of sensors are the three degrees of translation x, y, z, the three degrees of rotation, also referred to as roll, tilt and pan, and the focal length or zoom, which is equivalent to information about the aperture angle of the camera.
  • in addition, a tracking system can be used which consists of several infrared cameras that determine the position of an infrared sensor attached to the camera. In this way, the position of the camera is determined.
  • a real-time computer can now calculate the background for the current image. Then the blue hue of the blue background is removed from the image, so that the virtual background is displayed instead of the blue background.
  • the screen or image area determines the viewing direction and the perspective of the viewer. This means that the sound should follow the image in such a way that it always coincides with the viewed image. This becomes even more important for virtual studios, since there is typically no correlation between the sound of the moderation, for example, and the environment in which the presenter currently appears.
  • an essential subjective characteristic of such a sound concept in this context is the location of a sound source as perceived by a viewer of, for example, a movie screen.
  • in the audio field, the technique of wave field synthesis (WFS) can be used to achieve a good spatial sound for a large listener area.
  • wave field synthesis is based on Huygens' principle, according to which wavefronts can be formed and built up by superimposing elementary waves. According to a mathematically exact theoretical description, infinitely many sources at infinitesimally small distances would have to be used to generate the elementary waves. In practice, however, finitely many loudspeakers are used at a finite distance from each other. Each of these loudspeakers is driven, according to the WFS principle, with an audio signal from a virtual source that has a particular delay and a particular level. Levels and delays are usually different for all loudspeakers.
  • Fig. 7 shows a virtual source 700 moving over time from a first position, indicated by a circled "1" in Fig. 7, along a trajectory 702 to a second position, which is shown in Fig. 7 by a circled "2".
  • in addition, three loudspeakers 704 are shown schematically to symbolize a wave field synthesis loudspeaker array.
  • also shown is a listener 706 who, in the example shown in Fig. 7, is arranged such that the trajectory of the virtual source is a circular path around the listener, the listener forming the center of this orbit.
  • the loudspeakers 704, however, are not at the center of this circular path, so that at the time the virtual source 700 is at the first position it has a first distance r1 from a loudspeaker, while at its second position the source has a second distance r2 from that loudspeaker.
  • in general, r1 is not equal to r2.
  • the distance R1 of the virtual source from the listener 706 at the first time, by contrast, is equal to the distance of the listener 706 from the virtual source at the second time. This means that, for the listener 706, no change in the distance of the virtual source 700 takes place.
  • for the loudspeakers, however, r1 is not equal to r2.
  • the virtual source represents the primary transmitter, while the loudspeakers 704 represent the primary receivers. At the same time, the loudspeakers 704 represent the secondary transmitters, while the listener 706 finally represents the secondary receiver.
  • the transmission between the primary transmitter and the primary receiver is "virtual". This means that the wave field synthesis algorithms are responsible for the stretching and compression of the wavefronts of the waveforms.
  • when a loudspeaker 704 receives a signal from the wave field synthesis module, there is initially no audible signal; the signal becomes audible only after output via the loudspeaker. This can lead to Doppler effects at various points.
  • each loudspeaker will emit a signal with a different Doppler effect, depending on its particular position relative to the moving virtual source, since the loudspeakers are at different positions and the relative motion will be different for each loudspeaker.
  • in addition, the listener can move relative to the loudspeakers.
  • in practice, especially in a cinema setting, this case is of little concern, since the movement of the listener with respect to the loudspeakers will always be a relatively slow movement with a correspondingly small Doppler effect; the Doppler shift, as is known in the art, is proportional to the relative movement between transmitter and receiver.
  • the virtual Doppler effect, that is, the effect that occurs when the virtual source moves relative to the loudspeakers, may sound relatively natural, but it may also sound very unnatural, depending on the direction in which the movement takes place. If the source simply moves away from the center of the system, it will have a more natural effect. With reference to Fig. 7, this would mean that the virtual source 700 moves away from the listener, e.g. along the arrow R1.
  • the object of the present invention is to provide an improved concept for calculating a discrete value at a current time of a component in a loudspeaker signal in which artefacts due to Doppler effects are reduced.
  • the present invention is based on the recognition that Doppler effects should be taken into account, since they form part of the information required for identifying the position of a source. If such a Doppler effect were dispensed with completely, this could lead to a non-optimal sound experience, since the Doppler effect is natural; a virtual source moving towards a listener without any Doppler shift of the audio frequency would therefore give a less than optimal impression.
  • according to the invention, a discrete value for a current point in time in the cross-fade region is calculated using a sample, for the current time, of the audio signal of the virtual source as delayed for the first position, i.e. the first time, and using a sample, for the current time, of the audio signal of the virtual source as delayed for the second position, i.e. the second time.
  • in a preferred embodiment, cross-fading takes place in that at the first time, that is, while the first position and thus the first delay information are valid, the weighting factor for the audio signal delayed by the first delay is 100%, while the weighting factor for the audio signal delayed by the second delay is 0%; from the first time to the second time, the two weighting factors are then changed in opposite directions in order to fade smoothly from one position to the other (a code sketch of this cross-fade is given at the end of this section).
  • the inventive concept thus represents a compromise. On the one hand, a certain amount of position information is lost, since new position information of the source is no longer taken into account at every new current point in time; instead, the position of the virtual source is updated only in rather coarse steps, with a cross-fade between the one position of the source and the second position of the source, which occurs some time later. This is accomplished by first computing the delay for relatively coarse spatial increments, i.e. for position information relatively far apart in time (taking the speed of the source into account, of course).
  • on the other hand, the delay change that leads to the above-mentioned virtual Doppler effect between the primary transmitter and the primary receiver is smoothed, that is, the transition from one delay to the next is made continuous.
  • according to the invention, the fading or "panning" from one position to the next is carried out by means of a volume scaling in order to avoid spatial jumps and thus audible clicks.
  • in other words, the "hard" omission or addition of samples due to a delay change is replaced by a waveform whose hard edges are rounded off, so that the delay changes are still taken into account, but the hard impact on a loudspeaker signal, and thus the artifacts resulting from a change in position of the virtual source, is avoided.
  • FIG. 2 shows a classical wave field synthesis environment.
  • it comprises a wave field synthesis module 200 with various inputs 202, 204, 206 and 208 as well as various outputs 210, 212, 214 and 216.
  • via the inputs 202 to 204, various audio signals for virtual sources are supplied to the wave field synthesis module.
  • the input 202 receives, for example, the audio signal of the virtual source 1 together with associated position information of the virtual source as a function of time.
  • the audio signal 1 would then be, for example, the actual speech of a first actor, while the position information as a function of time represents the current position of the first actor in the recording setting.
  • the audio signal n would be the speech of, for example, another actor who moves in the same way as or differently from the first actor.
  • the current position of the other actor, to whom the audio signal n is assigned, is communicated to the wave field synthesis module 200 by position information synchronized with the audio signal n.
  • the wave field synthesis module 200 feeds a plurality of loudspeakers LS1, LS2, LS3, ..., LSm by outputting loudspeaker signals to the individual loudspeakers via the outputs 210 to 216.
  • the wave field synthesis module 200 is informed via the input 206 of the positions of the individual speakers in a playback setting, such as a movie theater.
  • further inputs can also be communicated to the wave field synthesis module 200, such as information about the room acoustics etc., in order to be able to simulate in a cinema the room acoustics that actually prevailed during the recording setting.
  • in general, the loudspeaker signal supplied to the loudspeaker LS1 via the output 210 will be a superposition of component signals of the virtual sources, in that the loudspeaker signal for the loudspeaker LS1 comprises a first component originating from the virtual source 1, a second component originating from the virtual source 2 and an n-th component going back to the virtual source n.
  • the individual component signals are superimposed linearly, that is to say added after their calculation, in order to simulate the linear superposition at the ear of the listener, who in a real setting will hear a linear superposition of the sound sources he can perceive.
  • the wave field synthesis module 200 has a highly parallel structure in that, starting from the audio signal for each virtual source and from the position information for the corresponding virtual source, delay information Vi and scaling factors SFi are first calculated, which depend on the position information and on the position of the currently considered loudspeaker, e.g. the loudspeaker with ordinal number j, i.e. LSj (an illustrative sketch of this per-source, per-loudspeaker computation is given at the end of this section).
  • the calculation of delay information Vi and of a scaling factor SFi on the basis of the position information of a virtual source and the position of the considered loudspeaker j is performed by known algorithms implemented in devices 300, 302, 304, 306.
  • the individual component signals are then summed by a summer 320 in order to determine the discrete value for the current time tA of the loudspeaker signal for the loudspeaker j, which can then be supplied to the loudspeaker via the output (for example the output 214 if the loudspeaker j is the loudspeaker LS3).
  • thus, each value is calculated individually for each virtual source, using a delay and a scaling with a scaling factor at a current time, after which all component signals for a loudspeaker due to the different virtual sources are summed. If, for example, only one virtual source were present, the summer would be omitted and the signal at the output of the summer in the figure would correspond, for example, to the signal output by the device 310 if the virtual source 1 were the only virtual source.
  • it is assumed without loss of generality that at time 0 a delay of 0 samples has been calculated by the wave field synthesis module.
  • the switching time is further indicated by an arrow 404 in Fig. 4a.
  • the component of the loudspeaker signal due to the virtual source illustrated in Figs. 4a and 4b thus consists of the values shown in Fig. 4a from time 0 to time 8 and, from time 9 until a later point in time at which a position change is signaled again, of the samples for the current times 9 to 12 shown in Fig. 4b.
  • this signal is shown in Fig. 8. It can be seen that at the switching time, that is, at the time of switching from one position to the other position, again indicated by 404 in Fig. 8, two samples have been omitted.
  • the inventive device shown in Fig. 1 is, in particular, an apparatus for calculating a discrete value for a current time of a component Kij in a loudspeaker signal for a loudspeaker j due to a virtual source i in a wave field synthesis system comprising a wave field synthesis module and a plurality of loudspeakers.
  • the wave field synthesis module is configured to determine, using an audio signal associated with the virtual source and position information indicating a position of the virtual source, delay information indicating by how many samples, relative to a time reference, the audio signal is to appear delayed in the component.
  • the apparatus shown in Figure 1 initially comprises means 10 for providing a first delay associated with a first position of the virtual source and providing a second delay associated with a second position of the virtual source.
  • the first position of the virtual source refers to a first time
  • the second position of the virtual source refers to a second time later than the first time.
  • the second position differs from the first position.
  • the second position is, for example, the position of the virtual source shown in FIG. 7 with the circled "2”
  • the first position is the position of the virtual source 700 shown in FIG. 7 with an encircled "1".
  • the provisioning device 10 thus provides on the output side a first delay 12a for the first time and a second delay 12b for the second time.
  • the device 10 is also designed to output not only the delays but also scaling factors for the two points in time, as will be explained later.
  • the two delays at the outputs 12a, 12b of the device 10 are signaled to a device 14 for determining a first value, for the current time, of the audio signal delayed by the first delay, the audio signal being supplied via an input 16 of the device 14 and the current time being signaled via an input 18, and for determining a second value, for the current time, of the audio signal delayed by the second delay.
  • the apparatus according to the invention further comprises means 22 for weighting the first value, taken from A1, with a first weighting factor to obtain a weighted first value 24a.
  • the device 22 is further operative to weight the second value 20b, taken from A4, with a second weighting factor n to obtain a second weighted value 24b.
  • the two weighted values 24a and 24b are supplied to a means 26 for summing the two values in order to obtain a "cross-faded" discrete value 28 for the current time of the component Kij in a loudspeaker signal for a loudspeaker j due to the virtual source i.
  • the functionality of the device shown in FIG. 1 will be described below by way of example with reference to FIGS. 4c, 4d, 5 and 6.
  • neither the value from A 1 at the first time 401 nor the value from A 4 at the second time 402 is modified. According to the invention, however, all values are modified between t 1 401 and t 2 402, ie values which are assigned to a current time t A , which lies between the first time 401 and the second time 402.
  • the graph in Fig. 6 illustrates the first weighting factor m as a function of the current times between the first time 401 and the second time 402.
  • the first weighting factor m is monotonically decreasing, while the second weighting factor n is monotonically increasing.
  • in a practical implementation, the two weighting factors will have a staircase-like course, since they can only be calculated once per sample, i.e. not continuously.
  • this step-shaped course corresponds to a stepped or dashed line in Fig. 6 which, depending on the number of cross-fade events or on the predetermined computing capacity available between the first time 401 and the second time 402, approximates the continuous line more or less closely.
  • in the example of Fig. 6, which is reflected in Figs. 4c and 4d, two cross-fade events between the first time 401 and the second time 402 have been used.
  • the signal weighted with the weighting factors m and n associated with the first cross-fade time, which are shown in a row 600 in Fig. 6, is represented by A2 in Fig. 4c.
  • the signal associated with the second cross-fade time 602 is shown as A 3 in FIG. 4d.
  • the actual time course of the component Kij that is finally calculated is illustrated by the figures discussed below.
  • in the example of Fig. 5, a new weighting factor is not calculated for every new sample, that is to say with a period TA, but only every three sample periods. Therefore, for the current times 0, 1 and 2, the samples corresponding to these times are taken from Fig. 4a. For the current times 3, 4 and 5, the samples belonging to Fig. 4c for the times 3, 4 and 5 are taken. Further, for the times 6, 7 and 8, the samples associated with Fig. 4d are taken, while finally for the times 9, 10 and 11, and for further times until a next position change or a next cross-fade, the samples of Fig. 4b are taken.
  • a "finer" smoothing could be achieved if the position update interval PAI shown in Fig. 5 were applied not only as shown in Fig. 5 but at every sample, so that the parameter N in Fig. 5 would become 1. In this case, the staircase curve symbolizing the first weighting factor m would lie correspondingly closer to the continuous curve.
  • the selection of whether a crossfade is performed for each sample, or whether only one crossfade, ie a position update, is performed every N samples may vary from case to case.
  • if the source moves slowly, it is preferred to set the parameter N relatively high, i.e. to perform a new position update only after a relatively large number of samples and thus to generate a new "step" in Fig. 6 only rarely, while in the opposite case, that is, if the source moves quickly, a more frequent position update is preferred.
  • in the example considered, the first position information for the virtual source under consideration was present at the first time 401, while the second position information for the virtual source was present at the second time 402, which lies nine samples after the first time.
  • until now, the movement of the source has been calculated in very small spatial and thus temporal steps for each intermediate position in order to prevent audible crackling in the audio signal due to switching from one delay to another delay; such switching can be tolerated only if the sample values before and after the switch do not differ too much.
  • the current time t A must be between the first time 401 and the second time 402.
  • the minimum "step size", i.e. the minimum distance between the first time 401 and the second time 402, will according to the invention be two sampling periods, so that the current time lying between the first time 401 and the second time 402 can be processed, for example, with respective weighting factors of 0.5.
  • however, a rather larger step size is preferred, firstly for reasons of computing time and secondly in order to actually produce a cross-fading effect, which would no longer occur if the next position were already reached at the next time instant; this would in turn lead to the unnatural Doppler effect of conventional wave field synthesis.
  • an upper limit for the step size, i.e. for the distance from the first time 401 to the second time 402, results from the fact that, with increasing distance, more and more position information that would actually be available is ignored because of the cross-fade, which in extreme cases would lead to a loss of the localizability of the virtual source for the listener. Medium step sizes are therefore preferred, which, depending on the embodiment, may additionally depend on the speed of the virtual source in order to realize an adaptive step-size control.
  • a linear course was selected as the "basis" for the staircase curve of the first and second weighting factors.
  • alternatively, a sinusoidal, quadratic, cubic, etc. course could be used.
  • the corresponding course of the other weighting factor would then have to be complementary, in that the sum of the first and the second weighting factor is always equal to 1, or lies within a predetermined tolerance range which extends, for example, by plus or minus 10% around 1.
  • one option would be, for example, to use a course according to the square of the sine function for the first weighting factor and a course according to the square of the cosine function for the second weighting factor, since the sum of the squares of sine and cosine is equal to 1 for every argument, i.e. for every current time tA (such complementary weighting curves are sketched in code at the end of this section).
  • typically, each sample of the audio signal associated with a virtual source will have a certain magnitude Bi.
  • the wave field synthesis module would then be operative to calculate a first scaling factor SF1 for the first time 401 and a second scaling factor SF2 for the second time 402 (a sketch combining scaling factors and weighting factors is given at the end of this section).
  • the method according to the invention can be implemented in hardware or in software.
  • the implementation may be on a digital storage medium, in particular a floppy disk or CD with electronically readable control signals, which may interact with a programmable computer system such that the method is executed.
  • the invention thus also consists in a computer program product with a program code, stored on a machine-readable carrier, for performing the method according to the invention when the computer program product runs on a computer.
  • the invention can thus be realized as a computer program with a program code for carrying out the method when the computer program runs on a computer.
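
The cross-fade described above can be summarized in a short sketch. The following Python function is a minimal illustration, assuming integer sample delays, the linear pair of complementary weighting factors, and the function and argument names chosen here; it is not the patent's prescribed implementation, only an instance of the "weight the two differently delayed values and sum them" scheme.

    def crossfaded_component(audio, delay1, delay2, t_first, t_second, t_current):
        """Discrete value of one component K_ij at the current time.

        audio     : samples of the virtual source's audio signal
        delay1/2  : delays (in samples) valid at the first/second time
        t_first   : first time, at which the first position/delay is valid
        t_second  : later second time, at which the second position/delay is valid
        t_current : current time, with t_first <= t_current <= t_second
        """
        # first value: the audio signal delayed by the first delay, read at the current time
        idx1 = t_current - delay1
        v1 = audio[idx1] if 0 <= idx1 < len(audio) else 0.0
        # second value: the audio signal delayed by the second delay, read at the current time
        idx2 = t_current - delay2
        v2 = audio[idx2] if 0 <= idx2 < len(audio) else 0.0
        # complementary weighting factors: m falls from 1 to 0, n rises from 0 to 1
        n = (t_current - t_first) / float(t_second - t_first)
        m = 1.0 - n
        return m * v1 + n * v2

At the first time the result equals the signal delayed by the first delay; at the second time it equals the signal delayed by the second delay; in between, the hard omission or duplication of samples that a direct delay switch would cause is replaced by a smooth blend.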
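
The delay information Vi and scaling factor SFi mentioned for the per-source, per-loudspeaker processing can be illustrated as follows. This sketch assumes a simple point-source model (straight-line distance, a speed of sound of 343 m/s, 1/r amplitude decay) and a 48 kHz sampling rate; real WFS driving functions are more involved, so the formulas, constants and names here are illustrative assumptions only.

    import math

    SPEED_OF_SOUND = 343.0   # m/s, assumed propagation speed
    SAMPLE_RATE = 48000      # Hz, assumed sampling rate

    def delay_and_scale(source_pos, speaker_pos):
        """Illustrative delay V_i (in samples) and scaling factor SF_i for one
        virtual source and one loudspeaker, from their 2-D positions."""
        dx = source_pos[0] - speaker_pos[0]
        dy = source_pos[1] - speaker_pos[1]
        r = math.hypot(dx, dy)                        # source-to-loudspeaker distance
        delay_samples = r / SPEED_OF_SOUND * SAMPLE_RATE
        scale = 1.0 / max(r, 1.0)                     # crude 1/r decay, clamped near the speaker
        return delay_samples, scale

The component for loudspeaker j due to virtual source i is then the source's audio signal delayed by Vi and multiplied by SFi; the components of all virtual sources are summed to form the loudspeaker signal, as described above for the summer 320.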
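
The complementary weighting curves and their staircase-like realization can likewise be sketched briefly. The helper below supports the linear course and the sin^2/cos^2 pair mentioned in the description; the step count, the function name and the assignment of sin^2 to the second factor are assumptions of this sketch.

    import math

    def weights_for_step(k, num_steps, shape="linear"):
        """Complementary weighting factors (m, n) for cross-fade step k of num_steps."""
        x = k / float(num_steps)                    # 0 .. 1 across the cross-fade region
        if shape == "sin2":
            n = math.sin(0.5 * math.pi * x) ** 2    # rises from 0 to 1
        else:
            n = x                                   # linear course
        m = 1.0 - n                                 # so that m + n == 1 for every step
        return m, n

    # Example matching the discussion of Fig. 5: the weights are held constant for
    # N = 3 samples per step, giving a staircase over a nine-sample update interval.
    N = 3
    staircase = [weights_for_step(k, 3) for k in range(4)]
    # staircase[0] == (1.0, 0.0) at the first time, staircase[3] == (0.0, 1.0) at the second time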
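
Finally, where scaling factors SF1 and SF2 are available for the two times, they can enter the same weighted sum; the particular combination below (scale each delayed value by its own scaling factor before weighting) is an assumption drawn from the description rather than a verbatim formula of the patent.

    def crossfaded_component_with_scaling(v1, v2, sf1, sf2, m, n):
        """v1, v2: audio signal delayed by the first/second delay at the current time;
        sf1, sf2: scaling factors valid at the first/second time;
        m, n: complementary weighting factors for the current time."""
        return m * sf1 * v1 + n * sf2 * v2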

Landscapes

  • Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Acoustics & Sound (AREA)
  • Signal Processing (AREA)
  • Stereophonic System (AREA)
  • Amplifiers (AREA)
  • Circuit For Audible Band Transducer (AREA)

Claims (19)

  1. Device for calculating a discrete value (28) for a current time of a component in a loudspeaker signal (322) for a loudspeaker on the basis of a virtual source in a wave field synthesis system with a wave field synthesis module and a plurality of loudspeakers, the wave field synthesis module being designed to determine, using an audio signal (16) which is associated with the virtual source and using position information which indicates a position of the virtual source, delay information indicating the number of samples by which the audio signal is to appear delayed with respect to a time reference in the component, having the following features:
    a device (10) for providing a first delay (12a), which is associated with a first position of the virtual source at a first time, and for providing a second delay (12b), which is associated with a second position of the virtual source at a later second time, the second position differing from the first position and the current time lying between the first time (400) and the second time (402);
    a device (14) for determining a value of the audio signal delayed by the first delay for the current time and for determining a second value of the audio signal delayed by the second delay for the current time;
    a device (22) for weighting the first value with a first weighting factor to obtain a first weighted value (24a), and the second value with a second weighting factor to obtain a second weighted value (24b); and
    a device for summing (26) the first weighted value (24a) and the second weighted value (24b) to obtain the discrete value (28) for the current time.
  2. Device according to claim 1, wherein the first weighting factor and the second weighting factor for values between the first and the second time (400, 402) are set such that a cross-fade from the audio signal delayed by the first delay to the audio signal delayed by the second delay takes place.
  3. Device according to claim 1 or 2, wherein the first weighting factor decreases between the first time (400) and the second time (402), and wherein the second weighting factor increases between the first time (400) and the second time (402).
  4. Device according to one of the preceding claims, wherein the first weighting factor is equal to 1 at the first time and equal to 0 at the second time, and wherein the second weighting factor is equal to 0 at the first time and equal to 1 at the second time.
  5. Device according to one of the preceding claims, wherein the first and second weighting factors depend on a difference between the current time and the first time (400) or the second time (402).
  6. Device according to one of the preceding claims, wherein the first weighting factor decreases monotonically from the first time to the second time, and the second weighting factor increases monotonically from the first time to the second time.
  7. Device according to one of the preceding claims, wherein the sum of the first weighting factor and the second weighting factor lies within a predetermined tolerance range which extends around a defined value.
  8. Device according to claim 7, wherein the predetermined tolerance range is plus or minus 10%.
  9. Device according to one of the preceding claims, wherein the audio signal is a succession of discrete values each spaced apart from one another by one sampling period,
    wherein the first time and the second time are spaced apart from one another by more than one sampling period.
  10. Device according to claim 9,
    wherein the first time and the second time are set in a fixed manner.
  11. Device according to claim 9, wherein the device (10) for providing the first and the second delay is designed to set a time interval between the first time and the second time as a function of the position information, such that the time interval is greater than a reference interval when the virtual source moves at a speed lower than a reference speed, and the time interval is smaller than a reference interval when the virtual source moves at a speed higher than a reference speed.
  12. Device according to one of the preceding claims, wherein a time interval between the first time and the second time amounts to N sampling periods, and
    wherein the device (22) for weighting is designed to use the same first weighting factor and the same second weighting factor for a number of M successive current discrete values, M being smaller than N and greater than or equal to 2.
  13. Device according to one of the preceding claims, wherein the device (22) for weighting is designed to calculate, for each current sample, a current first weighting factor and a current second weighting factor, such that the first and the second weighting factor differ from a first and a second weighting factor which were determined for a given preceding sample.
  14. Device according to one of the preceding claims, wherein
    the device (10) for providing is designed to estimate the second delay for the second time on the basis of one or more delays for preceding times.
  15. Device according to one of the preceding claims, wherein the position information of the virtual source is associated with the audio signal for the virtual source according to a time grid, the first and the second time being spaced apart from one another by a duration which is longer than a time interval between two points of the time grid.
  16. Device according to one of the preceding claims, wherein several audio signals are present for several virtual sources, wherein a component is calculated for each virtual source, and wherein all components for a loudspeaker are summed in order to obtain the signal for the loudspeaker.
  17. Device according to one of the preceding claims,
    wherein the wave field synthesis module is designed to calculate, in addition to the delay information, scaling information indicating the scaling factor by which the audio signal associated with the virtual source is to be scaled, and
    wherein the device (22) for weighting is designed to calculate the first weighted value (24a) as a product of the value of the component for the current time, a first scaling factor for the current time and the first weighting factor, and
    wherein the device (22) for weighting is further designed to calculate the second weighted value as a product of the value of the component for the current time, the second scaling factor for the current time and the second weighting factor.
  18. Method for calculating a discrete value (28) for a current time of a component in a loudspeaker signal (322) for a loudspeaker on the basis of a virtual source in a wave field synthesis system with a wave field synthesis module and a plurality of loudspeakers, the wave field synthesis module being designed to determine, using an audio signal (16) which is associated with the virtual source and using position information which indicates a position of the virtual source, delay information indicating the number of samples by which the audio signal is to appear delayed with respect to a time reference in the component, comprising the following steps:
    providing (10) a first delay (12a), which is associated with a first position of the virtual source at a first time, and providing a second delay (12b), which is associated with a second position of the virtual source at a later second time, the second position differing from the first position and the current time lying between the first time (400) and the second time (402);
    determining (14) a value of the audio signal delayed by the first delay for the current time and determining a second value of the audio signal delayed by the second delay for the current time;
    weighting (22) the first value with a first weighting factor to obtain a first weighted value (24a), and the second value with a second weighting factor to obtain a second weighted value (24b); and
    summing (26) the first weighted value (24a) and the second weighted value (24b) to obtain the discrete value (28) for the current time.
  19. Computer program with a program code for performing the method according to claim 18 when the program runs on a computer.
EP04732100A 2003-05-15 2004-05-11 Dispositif et procede de calcul d'une valeur discrete dans un signal de haut-parleur Expired - Lifetime EP1606975B1 (fr)

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
DE10321980 2003-05-15
DE10321980A DE10321980B4 (de) 2003-05-15 2003-05-15 Vorrichtung und Verfahren zum Berechnen eines diskreten Werts einer Komponente in einem Lautsprechersignal
PCT/EP2004/005047 WO2004103022A2 (fr) 2003-05-15 2004-05-11 Dispositif et procede de calcul d'une valeur discrete dans un signal de haut-parleur

Publications (2)

Publication Number Publication Date
EP1606975A2 EP1606975A2 (fr) 2005-12-21
EP1606975B1 true EP1606975B1 (fr) 2007-01-24

Family

ID=33440864

Family Applications (1)

Application Number Title Priority Date Filing Date
EP04732100A Expired - Lifetime EP1606975B1 (fr) 2003-05-15 2004-05-11 Dispositif et procede de calcul d'une valeur discrete dans un signal de haut-parleur

Country Status (8)

Country Link
US (1) US7734362B2 (fr)
EP (1) EP1606975B1 (fr)
JP (1) JP4698594B2 (fr)
KR (1) KR100674814B1 (fr)
CN (1) CN100553372C (fr)
AT (1) ATE352971T1 (fr)
DE (2) DE10321980B4 (fr)
WO (1) WO2004103022A2 (fr)

Families Citing this family (18)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE102005008366A1 (de) * 2005-02-23 2006-08-24 Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. Vorrichtung und Verfahren zum Ansteuern einer Wellenfeldsynthese-Renderer-Einrichtung mit Audioobjekten
DE102005008342A1 (de) 2005-02-23 2006-08-24 Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. Vorrichtung und Verfahren zum Speichern von Audiodateien
DE102005008369A1 (de) 2005-02-23 2006-09-07 Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. Vorrichtung und Verfahren zum Simulieren eines Wellenfeldsynthese-Systems
DE102005008343A1 (de) 2005-02-23 2006-09-07 Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. Vorrichtung und Verfahren zum Liefern von Daten in einem Multi-Renderer-System
DE102005008333A1 (de) * 2005-02-23 2006-08-31 Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. Vorrichtung und Verfahren zum Steuern einer Wellenfeldsynthese-Rendering-Einrichtung
DE102005027978A1 (de) 2005-06-16 2006-12-28 Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. Vorrichtung und Verfahren zum Erzeugen eines Lautsprechersignals aufgrund einer zufällig auftretenden Audioquelle
US8031891B2 (en) * 2005-06-30 2011-10-04 Microsoft Corporation Dynamic media rendering
DE102005033239A1 (de) * 2005-07-15 2007-01-25 Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. Vorrichtung und Verfahren zum Steuern einer Mehrzahl von Lautsprechern mittels einer graphischen Benutzerschnittstelle
DE102005033238A1 (de) * 2005-07-15 2007-01-25 Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. Vorrichtung und Verfahren zum Ansteuern einer Mehrzahl von Lautsprechern mittels eines DSP
DE102006010212A1 (de) * 2006-03-06 2007-09-20 Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. Vorrichtung und Verfahren zur Simulation von WFS-Systemen und Kompensation von klangbeeinflussenden WFS-Eigenschaften
DE102007059597A1 (de) 2007-09-19 2009-04-02 Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. Eine Vorrichtung und ein Verfahren zur Ermittlung eines Komponentensignals in hoher Genauigkeit
EP2478716B8 (fr) * 2009-11-04 2014-01-08 Fraunhofer Gesellschaft zur Förderung der angewandten Forschung e.V. Appareil et procédé permettant de calculer des coefficients de puissance pour des haut-parleurs d'un agencement de haut-parleur pour un signal audio associé à une source virtuelle
JP5361689B2 (ja) * 2009-12-09 2013-12-04 シャープ株式会社 オーディオデータ処理装置、オーディオ装置、オーディオデータ処理方法、プログラム及び記録媒体
JP2011124723A (ja) * 2009-12-09 2011-06-23 Sharp Corp オーディオデータ処理装置、オーディオ装置、オーディオデータ処理方法、プログラム及び当該プログラムを記録した記録媒体
CA3151342A1 (fr) * 2011-07-01 2013-01-10 Dolby Laboratories Licensing Corporation Systeme et outils pour la creation et le rendu de son multicanaux ameliore
US9357293B2 (en) * 2012-05-16 2016-05-31 Siemens Aktiengesellschaft Methods and systems for Doppler recognition aided method (DREAM) for source localization and separation
WO2013181272A2 (fr) * 2012-05-31 2013-12-05 Dts Llc Système audio orienté objet utilisant un panoramique d'amplitude sur une base de vecteurs
CN107393523B (zh) * 2017-07-28 2020-11-13 深圳市盛路物联通讯技术有限公司 一种噪音监控方法及系统

Family Cites Families (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5052685A (en) * 1989-12-07 1991-10-01 Qsound Ltd. Sound processor for video game
JPH04132499A (ja) 1990-09-25 1992-05-06 Matsushita Electric Ind Co Ltd 音像制御装置
US5598478A (en) * 1992-12-18 1997-01-28 Victor Company Of Japan, Ltd. Sound image localization control apparatus
JP2882449B2 (ja) 1992-12-18 1999-04-12 日本ビクター株式会社 テレビゲーム用の音像定位制御装置
JPH06245300A (ja) 1992-12-21 1994-09-02 Victor Co Of Japan Ltd 音像定位制御装置
US5495576A (en) * 1993-01-11 1996-02-27 Ritchey; Kurtis J. Panoramic image based virtual reality/telepresence audio-visual system and method
GB2294854B (en) * 1994-11-03 1999-06-30 Solid State Logic Ltd Audio signal processing
JPH1063470A (ja) * 1996-06-12 1998-03-06 Nintendo Co Ltd 画像表示に連動する音響発生装置
DE60036958T2 (de) * 1999-09-29 2008-08-14 1...Ltd. Verfahren und vorrichtung zur ausrichtung von schall mit einer gruppe von emissionswandlern

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
None *

Also Published As

Publication number Publication date
CN1792118A (zh) 2006-06-21
CN100553372C (zh) 2009-10-21
DE502004002769D1 (de) 2007-03-15
EP1606975A2 (fr) 2005-12-21
JP4698594B2 (ja) 2011-06-08
US20060092854A1 (en) 2006-05-04
KR20060014050A (ko) 2006-02-14
WO2004103022A3 (fr) 2005-02-17
DE10321980B4 (de) 2005-10-06
WO2004103022A2 (fr) 2004-11-25
JP2007502590A (ja) 2007-02-08
KR100674814B1 (ko) 2007-01-25
DE10321980A1 (de) 2004-12-09
ATE352971T1 (de) 2007-02-15
US7734362B2 (en) 2010-06-08

Similar Documents

Publication Publication Date Title
EP1637012B1 (fr) Dispositif de synthese de champ electromagnetique et procede d'actionnement d'un reseau de haut-parleurs
EP1606975B1 (fr) Dispositif et procede de calcul d'une valeur discrete dans un signal de haut-parleur
EP1671516B1 (fr) Procede et dispositif de production d'un canal a frequences basses
EP1525776B1 (fr) Dispositif de correction de niveau dans un systeme de synthese de champ d'ondes
EP1872620B9 (fr) Dispositif et procede pour commander une pluralite de haut-parleurs au moyen d'une interface graphique d'utilisateur
DE10254404B4 (de) Audiowiedergabesystem und Verfahren zum Wiedergeben eines Audiosignals
EP1972181B1 (fr) Dispositif et procédé de simulation de systèmes wfs et de compensation de propriétés wfs influençant le son
EP1844628B1 (fr) Procede et dispositif d'amorçage d'une installation de moteur de rendu de synthese de front d'onde avec objets audio
EP1851998B1 (fr) Dispositif et procédé pour fournir des données dans un système a dispositifs de rendu multiples
EP1723825B1 (fr) Dispositif et procede pour reguler un dispositif de rendu de synthese de champ electromagnetique
EP1518443B1 (fr) Dispositif et procede pour determiner une position de reproduction
EP2754151B1 (fr) Dispositif, procédé et système électroacoustique de prolongement d'un temps de réverbération
DE102005027978A1 (de) Vorrichtung und Verfahren zum Erzeugen eines Lautsprechersignals aufgrund einer zufällig auftretenden Audioquelle
DE10254470A1 (de) Vorrichtung und Verfahren zum Bestimmen einer Impulsantwort und Vorrichtung und Verfahren zum Vorführen eines Audiostücks

Legal Events

Date Code Title Description
PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

17P Request for examination filed

Effective date: 20051014

AK Designated contracting states

Kind code of ref document: A2

Designated state(s): AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HU IE IT LI LU MC NL PL PT RO SE SI SK TR

AX Request for extension of the european patent

Extension state: AL HR LT LV MK

RIC1 Information provided on ipc code assigned before grant

Ipc: H04S 3/00 20060101AFI20060113BHEP

GRAP Despatch of communication of intention to grant a patent

Free format text: ORIGINAL CODE: EPIDOSNIGR1

DAX Request for extension of the european patent (deleted)
RBV Designated contracting states (corrected)

Designated state(s): AT CH DE FR GB LI NL

GRAS Grant fee paid

Free format text: ORIGINAL CODE: EPIDOSNIGR3

GRAA (expected) grant

Free format text: ORIGINAL CODE: 0009210

RAP1 Party data changed (applicant data changed or rights of an application transferred)

Owner name: FRAUNHOFER-GESELLSCHAFT ZUR FOERDERUNG DER ANGEWAN

AK Designated contracting states

Kind code of ref document: B1

Designated state(s): AT CH DE FR GB LI NL

REG Reference to a national code

Ref country code: GB

Ref legal event code: FG4D

Free format text: NOT ENGLISH

REG Reference to a national code

Ref country code: CH

Ref legal event code: EP

REF Corresponds to:

Ref document number: 502004002769

Country of ref document: DE

Date of ref document: 20070315

Kind code of ref document: P

ET Fr: translation filed
PLBE No opposition filed within time limit

Free format text: ORIGINAL CODE: 0009261

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: NO OPPOSITION FILED WITHIN TIME LIMIT

26N No opposition filed

Effective date: 20071025

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: AT

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20070511

REG Reference to a national code

Ref country code: CH

Ref legal event code: PL

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: LI

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20080531

Ref country code: CH

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20080531

REG Reference to a national code

Ref country code: FR

Ref legal event code: PLFP

Year of fee payment: 13

REG Reference to a national code

Ref country code: FR

Ref legal event code: PLFP

Year of fee payment: 14

REG Reference to a national code

Ref country code: FR

Ref legal event code: PLFP

Year of fee payment: 15

P01 Opt-out of the competence of the unified patent court (upc) registered

Effective date: 20230524

PGFP Annual fee paid to national office [announced via postgrant information from national office to epo]

Ref country code: NL

Payment date: 20230519

Year of fee payment: 20

Ref country code: FR

Payment date: 20230517

Year of fee payment: 20

Ref country code: DE

Payment date: 20230519

Year of fee payment: 20

PGFP Annual fee paid to national office [announced via postgrant information from national office to epo]

Ref country code: GB

Payment date: 20230522

Year of fee payment: 20

REG Reference to a national code

Ref country code: DE

Ref legal event code: R071

Ref document number: 502004002769

Country of ref document: DE

REG Reference to a national code

Ref country code: NL

Ref legal event code: MK

Effective date: 20240510

REG Reference to a national code

Ref country code: GB

Ref legal event code: PE20

Expiry date: 20240510