WO2004103022A2 - Vorrichtung und Verfahren zum Berechnen eines diskreten Werts einer Komponente in einem Lautsprechersignal (Device and method for calculating a discrete value of a component in a loudspeaker signal) - Google Patents
Device and method for calculating a discrete value of a component in a loudspeaker signal
- Publication number
- WO2004103022A2 (PCT/EP2004/005047, EP2004005047W)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- time
- point
- delay
- value
- weighting factor
- Prior art date
Classifications
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04S—STEREOPHONIC SYSTEMS
- H04S1/00—Two-channel systems
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04S—STEREOPHONIC SYSTEMS
- H04S7/00—Indicating arrangements; Control arrangements, e.g. balance control
- H04S7/30—Control circuits for electronic adaptation of the sound field
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04S—STEREOPHONIC SYSTEMS
- H04S2420/00—Techniques used in stereophonic systems covered by H04S but not provided for in its groups
- H04S2420/13—Application of wave-field synthesis in stereophonic audio systems
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04S—STEREOPHONIC SYSTEMS
- H04S3/00—Systems employing more than two channels, e.g. quadraphonic
- H04S3/002—Non-adaptive circuits, e.g. manually adjustable or static, for enhancing the sound image or the spatial distribution
Definitions
- The present invention relates to wave field synthesis systems, and in particular to wave field synthesis systems that allow moving virtual sources.
- WFS = Wave Field Synthesis
- Every point reached by a wave is the starting point of an elementary wave that propagates in a spherical or circular manner.
- A large number of loudspeakers arranged next to each other can therefore be used to simulate any shape of an incoming wavefront.
- The audio signal fed to each loudspeaker must be delayed and amplitude-scaled in such a way that the emitted sound fields of the individual loudspeakers superimpose correctly. If there are several sound sources, the contribution to each loudspeaker is calculated separately for each source and the resulting signals are added. In a virtual room with reflecting walls, reflections can also be reproduced as additional sources via the loudspeaker array. The computational effort therefore depends heavily on the number of sound sources, the reflection properties of the recording room and the number of loudspeakers.
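The per-loudspeaker delay and amplitude scaling described above can be sketched as follows. This is an illustration only, not the patent's implementation: it assumes a simple point-source model in which the delay is the travel time r/c and the amplitude falls off as 1/r; the speed of sound, sampling rate, and the clamping of the amplitude near the array are illustrative choices.

```python
import math

C = 343.0   # assumed speed of sound in m/s
FS = 48000  # assumed sampling rate in Hz

def driving_parameters(source_pos, speaker_pos):
    """Delay (in samples) and amplitude scaling for one loudspeaker,
    under a point-source model: delay = r / c, amplitude ~ 1 / r."""
    r = math.dist(source_pos, speaker_pos)
    delay_samples = int(round(r / C * FS))
    scale = 1.0 / max(r, 1.0)  # clamp to avoid blow-up very close to the array
    return delay_samples, scale

def speaker_signal(sources, speaker_pos, n_samples):
    """Superimpose the delayed, scaled contributions of all virtual sources.

    `sources` is a list of (audio_samples, source_position) pairs."""
    out = [0.0] * n_samples
    for audio, pos in sources:
        delay, scale = driving_parameters(pos, speaker_pos)
        for t in range(n_samples):
            if 0 <= t - delay < len(audio):
                out[t] += scale * audio[t - delay]
    return out
```

A source 3.43 m from a loudspeaker would thus arrive 480 samples late at 48 kHz, scaled by 1/3.43.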
- the particular advantage of this technique is that a natural spatial sound impression is possible over a large area of the playback room.
- the direction and distance of sound sources are reproduced very precisely.
- virtual sound sources can even be positioned between the real speaker array and the listener.
- While wave field synthesis works well for environments whose properties are known, irregularities do occur when those properties change or when the wave field synthesis is carried out on the basis of an assumed environment condition that does not match the actual condition of the environment.
- the technique of wave field synthesis can also be used advantageously to complement a visual perception with a corresponding spatial audio perception.
- the focus in production in virtual studios has been to convey an authentic visual impression of the virtual scene.
- The acoustic impression that goes with the image is usually imprinted on the audio signal by manual work steps in what is known as post-production, or is classified as too complex and time-consuming to implement and is therefore neglected. This usually leads to a contradiction between the individual sensory impressions, so that the designed space, i.e. the designed scene, is perceived as less authentic.
- “Hearing with the ears of the camera” is to be made possible in order to make a scene appear more real.
- the aim here is to achieve the highest possible correlation between the sound event location in the image and the hearing event location in the surround field.
- Camera parameters, such as the zoom, should be included in the sound design, as should a position of two loudspeakers L and R.
- tracking data of a virtual studio are written into a file together with an associated time code by the system.
- Picture, sound and time code are recorded on a MAZ (a studio video tape recorder).
- the camdump file is transferred to a computer, which generates control data for an audio workstation and outputs it via a MIDI interface in sync with the image from the MAZ.
- the actual audio processing such as positioning the sound source in the surround field and inserting early reflections and reverberation takes place within the audio workstation.
- the signal is processed for a 5.1 surround speaker system.
- Camera tracking parameters as well as positions of sound sources in the recording setting can be recorded in real film sets. Such data can also be generated in virtual studios.
- an actor or presenter stands alone in a recording room.
- he stands in front of a blue wall, which is also known as a blue box or blue panel.
- a pattern of blue and light blue stripes is applied to this blue wall.
- The special feature of this pattern is that the stripes have different widths and thus yield a multitude of stripe combinations. The unique stripe combinations on the blue wall make it possible to determine exactly in which direction the camera is looking when the blue wall is replaced by a virtual background in post-processing. With the help of this information, the computer can determine the background for the current camera viewing angle. Sensors on the camera are also evaluated, which record and output additional camera parameters.
- Typical parameters of a camera which are recorded by means of sensors are the three degrees of translation x, y, z; the three degrees of rotation, which are also called roll, tilt and pan; and the focal length or the zoom, which is equivalent to the information about the opening angle of the camera.
- a tracking system can be used that consists of several infrared cameras that determine the position of an infrared sensor attached to the camera. This also determines the position of the camera.
- A real-time computer can now calculate the background for the current image. The blue hue of the blue background is then removed from the image, so that the virtual background is inserted instead of the blue background.
- In the audio area, the technique of wave field synthesis (WFS) can be used to achieve a good spatial sound over a large listening area.
- Wave field synthesis is based on Huygens' principle, according to which wave fronts can be shaped and built up by superimposing elementary waves. According to a mathematically exact theoretical description, an infinite number of sources at infinitesimally small distances would have to be used to generate the elementary waves. In practice, however, a finite number of loudspeakers at finite distances from one another is used. Each of these loudspeakers is driven, according to the WFS principle, with an audio signal from a virtual source that has a specific delay and a specific level. Levels and delays are usually different for all loudspeakers.
- a Doppler effect also exists in wave field synthesis or sound field synthesis. It is physically based on the same background as the natural Doppler effect described above. In contrast to the natural Doppler effect, there is no direct path between the transmitter and the receiver in sound field synthesis. Instead, a distinction is made in that there is a primary transmitter and a primary receiver. There is also a secondary transmitter and a secondary receiver. This scenario is illustrated below with the aid of FIG. 7.
- FIG. 7 shows a virtual source 700 which moves over time from a first position, denoted by a circled "1" in FIG. 7, along a movement path 702 to a second position, denoted by a circled "2" in FIG. 7.
- three loudspeakers 704 are shown schematically, which are intended to symbolize a wave field synthesis loudspeaker array.
- Also shown is a receiver 706 which, in the example shown in FIG. 7, is located at the center of the circular movement path of the virtual source.
- The movement path of the virtual source is a circular path around the receiver, which forms the center of this circular path.
- The loudspeakers 704 are not arranged in the center, so that at the point in time at which the virtual source 700 is in the first position, the source is at a first distance r1 from a given loudspeaker, and when the source is in its second position, it is at a second distance r2 from that loudspeaker.
- r1 is not equal to r2.
- R1, that is to say the distance of the virtual source from the listener 706 at time 1, is equal to the distance R2 from the listener 706 to the virtual source at time 2. This means that for the receiver 706 there is no change in the distance of the virtual source 700.
- However, the virtual source 700 changes its position relative to the loudspeakers 704, since r1 is not equal to r2.
- the virtual source represents the primary transmitter, while speakers 704 represent the primary receiver.
- the loudspeakers 704 represent the secondary transmitter, while the listener 706 finally represents the secondary receiver.
- the transmission between the primary transmitter and the primary receiver is "virtual." This means that the wave field synthesis algorithms are responsible for the stretching and compression of the wave front of the waveforms.
- When a loudspeaker 704 receives a signal from the wave field synthesis module, there is at first no audible signal; the signal only becomes audible after being output via the loudspeaker, which can result in Doppler effects at various points.
- Each loudspeaker reproduces a signal with a different Doppler effect, depending on its specific position with regard to the moving virtual source, since the loudspeakers are in different positions and the relative movements are different for each loudspeaker.
- the listener can also move relative to the speakers.
- This is a case which is insignificant in practice, in particular in a cinema setting, since the movement of the listener relative to the loudspeakers will always be a relatively slow movement with a correspondingly small Doppler effect, because the Doppler shift is, as is known in the art, proportional to the relative velocity between transmitter and receiver.
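As a numerical illustration of this proportionality (not part of the patent text), the textbook Doppler formula for a source moving radially at speed v relative to a stationary receiver is f' = f * c / (c - v), with v positive when approaching; a speed of sound of 343 m/s is assumed:

```python
def doppler_frequency(f_source, v_source, c=343.0):
    """Observed frequency for a source moving at radial velocity v_source
    towards (v > 0) or away from (v < 0) a stationary receiver."""
    return f_source * c / (c - v_source)
```

A fast-moving source (e.g. 34.3 m/s) shifts a 1 kHz tone by over 100 Hz, while a listener strolling at 1 m/s produces a shift of only a few Hz, which is why the listener's own movement is negligible here.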
- The first-mentioned Doppler effect, i.e. the one occurring when the virtual source moves relative to the loudspeakers, can sound relatively natural, but also very unnatural; this depends on the direction in which the movement takes place. If the source moves straight away from the center of the system, the effect is more natural. Referring to FIG. 7, this would mean that the virtual source 700 would, for example, move along the arrow Rx away from the listener.
- If the virtual source 700 "circles" the listener 706, as shown in FIG. 7, a very unnatural effect results, since the relative movements between the primary source and the primary receivers (the loudspeakers) are very strong and also very different among the different primary receivers. This is in stark contrast to nature, where there is no Doppler effect when the source circles the listener, since there is no change in distance between source and listener.
- the object of the present invention is to provide an improved concept for calculating a discrete value at a current point in time of a component in a loudspeaker signal, in which artifacts due to Doppler effects are reduced.
- The present invention is based on the finding that Doppler effects should be taken into account, since they are part of the information required for identifying the position of a source. If such Doppler effects were completely dispensed with, this could lead to a suboptimal sound experience: the Doppler effect is natural, and it would give a suboptimal impression if, for example, a virtual source moved towards a listener but there were no Doppler shift in the audio frequency.
- According to the invention, a "blending" from one position to another position is carried out to "blur" the Doppler effect: it is still present to some extent, but its effects lead to no or only reduced artifacts.
- In the cross-fade area, a discrete value for a current point in time is calculated using a sample value of the audio signal of the virtual source valid at the first position, that is to say at a first point in time, and using a sample value of the audio signal of the virtual source valid at the second position, that is to say at the second point in time.
- Crossfading preferably takes place in such a way that at the first point in time, at which the first position and thus the first delay information are valid, a weighting factor for the audio signal delayed with the first delay is 100%, while a weighting factor for the audio signal delayed with the second delay is 0%; from the first point in time to the second point in time, the two weighting factors are then changed in opposite directions in order to blend smoothly, so to speak, from one position to the other position.
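This opposite change of the two weighting factors can be sketched minimally, assuming a linear ramp between the first and the second point in time (the function names are illustrative, not taken from the patent):

```python
def crossfade_weights(t, t1, t2):
    """Linear complementary weights: m falls from 1 to 0 and n rises
    from 0 to 1 between the first time t1 and the second time t2."""
    n = (t - t1) / (t2 - t1)
    m = 1.0 - n
    return m, n

def crossfaded_value(t, t1, t2, value_delay1, value_delay2):
    """Discrete output value: weighted sum of the sample delayed with the
    first delay and the sample delayed with the second delay."""
    m, n = crossfade_weights(t, t1, t2)
    return m * value_delay1 + n * value_delay2
```

At t1 only the first delayed signal is audible, at t2 only the second, and in between a mixture of both.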
- The concept according to the invention represents a compromise: on the one hand, a certain amount of position information is lost, since new position information of the source is no longer taken into account at every new current point in time; instead, the position of the virtual source is updated only in rather coarse steps, with a crossfade between the one position of the source and the second position of the source, which applies some time later.
- This is achieved by initially calculating the delay only for relatively coarse spatial step sizes, i.e. for position information that is relatively far apart in time (taking into account, of course, the speed of the source).
- The delay change that leads to the above-mentioned virtual Doppler effect between the primary transmitter and the primary receiver is thus smoothed, that is, one delay change is transferred continuously into the next.
- the cross-fading or "panning” takes place according to the invention by means of a volume scale from one position to the next in order to avoid spatial jumps and thus audible "crackling".
- The "hard" omission or addition of samples due to a delay change is replaced by a waveform with rounded corners, adapted to the hard signal shape: the delay changes are still taken into account, but the hard, artifact-producing influence of a position change of the virtual source on a loudspeaker signal is avoided.
- FIG. 1 shows a block diagram of a device according to the invention
- FIG. 2 shows a basic circuit diagram of a wave field synthesis environment as can be used for the present invention
- FIG. 3 shows a more detailed illustration of the wave field synthesis module shown in FIG. 2;
- FIG. 4c shows a first cross-faded version based on the audio signals shown in FIGS. 4a and 4b in a period between the first point in time at which FIG. 4a is valid and a second point in time at which FIG. 4b is valid;
- FIG. 4d shows a further cross-fade representation at a later point in time with respect to FIG. 4c, at which the signal shown in FIG. 4b is valid;
- FIG. 5 shows a time profile of the component Kij in a loudspeaker signal based on a virtual source i, which is composed of the time profiles of FIGS. 4a to 4d;
- FIG. 6 shows a detailed illustration of the weighting factors m, n which have been used in the calculation of the audio signals shown in FIGS. 4a to 4d;
- FIG. 2 shows a classic wave field synthesis environment.
- the center of a wave field synthesis environment is a wave field synthesis module 200, which comprises various inputs 202, 204, 206 and 208 and various outputs 210, 212, 214, 216.
- Various audio signals for virtual sources are fed to the wave field synthesis module via inputs 202 to 204. Thus, the input 202 receives, for example, an audio signal of the virtual source 1 and associated position information of the virtual source.
- The audio signal 1 would be, for example, the speech of an actor who moves from a left side of the screen to a right side of the screen, and possibly additionally away from the viewer or towards the viewer.
- The audio signal 1 would then be the actual speech of this actor, while the position information as a function of time represents the current position of the first actor in the recording setting at a certain point in time.
- The audio signal n would be the speech of, for example, another actor, who moves in the same way as or differently from the first actor.
- the current position of the other actor to whom the audio signal n is assigned is communicated to the wave field synthesis module 200 by position information synchronized with the audio signal n.
- A wave field synthesis module feeds a plurality of loudspeakers LS1, LS2, LS3, ..., LSm by outputting loudspeaker signals via the outputs 210 to 216 to the individual loudspeakers.
- the positions of the individual loudspeakers in a playback setting, such as a cinema, are communicated to the wave field synthesis module 200 via the input 206.
- In the cinema hall, many individual loudspeakers are grouped around the audience, preferably arranged in arrays such that there are loudspeakers both in front of the viewer, for example behind the screen, and behind the viewer as well as to the right and left of the viewer.
- Other inputs, such as information about the room acoustics, etc., can be communicated to the wave field synthesis module 200 in order to be able to simulate, in a cinema hall, the actual room acoustics prevailing in the recording setting.
- The loudspeaker signal which is supplied to the loudspeaker LS1 via the output 210 will be a superposition of the component signals of the virtual sources: the loudspeaker signal for the loudspeaker LS1 contains a first component which originates from the virtual source 1, a second component which goes back to the virtual source 2, and an nth component which goes back to the virtual source n.
- The individual component signals are superimposed linearly, i.e. added after their calculation, in order to simulate the linear superposition at the ear of the listener, who in a real setting hears a linear superposition of the perceivable sound sources.
- The wave field synthesis module 200 has a strongly parallel structure: starting from the audio signal for each virtual source and from the position information for the corresponding virtual source, delay information Vi and scaling factors SFi are first calculated, which depend on the position information and on the position of the loudspeaker under consideration, e.g. the loudspeaker with the index j, i.e. LSj.
- A delay information Vi and a scaling factor SFi are calculated on the basis of the position information of a virtual source and the position of the loudspeaker j in question using known algorithms which are implemented in devices 300, 302, 304, 306.
- For a current time tA, a discrete value AWi(tA) of the component signal Kij in the ultimately obtained loudspeaker signal is calculated. This is done by means 310, 312, 314, 316, as shown schematically in FIG. 3. FIG. 3 thus shows, so to speak, a "snapshot" at time tA of the individual component signals.
- The individual component signals are then summed by a summer 320 in order to determine the discrete value, for the current time tA, of the loudspeaker signal for loudspeaker j, which can then be fed to the loudspeaker via the corresponding output (e.g. output 214 if loudspeaker j is loudspeaker LS3).
- Thus, for each virtual source, the value that is valid at a current point in time due to a delay and a scaling with a scaling factor is first calculated individually, after which all component signals for a loudspeaker due to the different virtual sources are summed. If, for example, there were only one virtual source, the summer would be omitted, and the signal present at the output of the summer in FIG. 3 would correspond, e.g., to the signal output by the device 310 when the virtual source 1 is the only virtual source.
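The structure of FIG. 3 (per-source delay Vi and scaling SFi, then the summer 320) can be sketched as follows; the list-based signal representation and the treatment of out-of-range indices as silence are simplifying assumptions:

```python
def component_value(audio, delay, scale, t_a):
    """Discrete value AWi(tA) of one component signal: the audio signal of
    virtual source i, delayed by Vi samples and scaled with SFi."""
    idx = t_a - delay
    return scale * audio[idx] if 0 <= idx < len(audio) else 0.0

def loudspeaker_sample(sources, t_a):
    """Summer 320: the loudspeaker sample at tA is the sum of all component
    values; with a single virtual source the sum reduces to that component.

    `sources` is a list of (audio_samples, delay_Vi, scale_SFi) triples."""
    return sum(component_value(a, v, sf, t_a) for a, v, sf in sources)
```

With one source the summer is a pass-through, exactly as described above for the single-source case.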
- It is assumed that at time t'0 a delay of 0 sample values has been calculated by the wave field synthesis module.
- the time of switching is also identified by an arrow 404 in FIG. 4a.
- the component for the loudspeaker signal on the basis of the virtual source shown in FIGS. 4a and 4b thus consists of the values shown in FIG. 4a from time 0 to time 8 and from time 9 to a later time, at which a change in position is signaled again, from the samples at the current times 9 to 12, which are shown in FIG. 4b.
- This signal is shown in FIG. 8. It can be seen that at the time of switching from one position to the other position, the switching again being designated by 404 in FIG. 8, two samples were omitted.
- The device according to the invention shown in FIG. 1 serves to avoid such artifacts caused by a delay change.
- FIG. 1 shows in particular a device for calculating a discrete value, for a current point in time, of a component Kj in a loudspeaker signal for a loudspeaker j on the basis of a virtual source i in a wave field synthesis system with a wave field synthesis module and a plurality of loudspeakers.
- The wave field synthesis module is designed to determine, using an audio signal associated with the virtual source and using position information that indicates a position of the virtual source, delay information that indicates by how many samples, relative to a time reference, the audio signal should be delayed in the component.
- The device according to the invention first comprises a device 10 for providing a first delay, which is associated with a first position of the virtual source, and for providing a second delay, which is associated with a second position of the virtual source.
- first position of the virtual source relates to a first point in time
- second position of the virtual source relates to a second point in time that is later than the first point in time.
- the second position differs from the first position.
- the second position is, for example, the position of the virtual source shown in FIG. 7 with the circled "2", while the first position is the position of the virtual source 700 shown in FIG. 7 with a circled "1".
- the device 10 for providing thus provides a first delay 12a for the first point in time and a second delay 12b for the second point in time.
- the device 10 is also designed to output scaling factors for the two times in addition to the delays, as will be explained later.
- The two delays at the outputs 12a, 12b of the device 10 are fed to a device 14, which determines, for the current time (which can be signaled via an input 18), a first value of the audio signal, supplied via an input 16, delayed by the first delay, and a second value of the audio signal delayed by the second delay.
- The device according to the invention further comprises means 22 for weighting the first value, from A1, with a first weighting factor in order to obtain a weighted first value 24a.
- The device 22 is further operative to weight the second value 20b, from A4, with a second weighting factor n in order to obtain a weighted second value 24b.
- the two weighted values 24a and 24b are fed to a device 26 for summing the two values in order to actually obtain a “faded” discrete value 28 for the current time of the component Kij in a loudspeaker signal for a loudspeaker j on the basis of the virtual source i.
- the functionality of the device shown in FIG. 1 is shown by way of example with reference to FIGS. 4c, 4d, 5 and 6.
- Neither the value from A1 at the first time 401 nor the value from A4 at the second time 402 is modified.
- However, all values between t1 401 and t2 402 are modified according to the invention, that is to say values which are assigned to a current time tA lying between the first time 401 and the second time 402.
- The graph in FIG. 6 represents the first weighting factor m as a function of the current time between the first time 401 and the second time 402.
- the first weighting factor m is monotonically falling, while the second weighting factor n is monotonically increasing.
- In practice, the two weighting factors will have a step-like course, since a calculation is only possible for each discrete sample value, i.e. not continuously.
- The step-shaped course will be one of the courses shown in dashed or dotted lines in FIG. 6, which, depending on the number of crossfading events or the predefined computing capacity resources, follow the continuous line between the first point in time 401 and the second point in time 402 more or less closely.
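A step-shaped first weighting factor of this kind, held constant over a position update interval of N samples, can be sketched as follows (illustrative only; the linear ramp underlying the staircase is the example course from FIG. 6):

```python
def staircase_weight(t, t1, t2, update_interval):
    """First weighting factor m as a staircase: the linear ramp from 1 to 0
    is re-evaluated only at every update_interval-th sample (position
    update interval) and held constant in between."""
    held_t = t1 + ((t - t1) // update_interval) * update_interval
    return 1.0 - (held_t - t1) / (t2 - t1)
```

With `update_interval = 1` the staircase coincides with the continuous ramp; larger intervals give coarser steps.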
- Merely by way of example, in the embodiment shown in FIG. 6, which is reflected in FIGS. 4c and 4d, two cross-fading events between the first time 401 and the second time 402 were used.
- The signal weighted with the factors m and n associated with the first crossfade instant, which are shown at line 600 in FIG. 6, is denoted A2 in FIG. 4c.
- The signal associated with the second crossfade instant 602 is denoted A3 in FIG. 4d.
- The actual time course of the component K13 which is ultimately calculated (FIGS. 4a to 4d serve only for illustration) is shown in FIG. 5.
- In FIGS. 5 and 6, a new weighting factor is not calculated for each new sample value, that is to say with a period T, but only every three sampling periods.
- the sampling values corresponding to these times are therefore taken from FIG. 4a for the current times 0, 1 and 2.
- the sample values for the points in time 3, 4 and 5 belonging to FIG. 4c are taken.
- The sampling values belonging to FIG. 4d are taken for the times 6, 7 and 8, while finally, for the times 9, 10 and 11 and further times until a next position change or a next crossfading action, the sampling values from FIG. 4b which correspond to the current times 9, 10 and 11, respectively, are taken.
- A "finer" smoothing could be achieved if the position update interval PAI shown in FIG. 5 were applied not only every three samples, as shown in FIG. 5, but for each sample; the staircase curve symbolizing the first weighting factor m would then approximate the continuous curve more closely.
- Alternatively, however, the position update interval could also be made even larger than 3 (the parameter N in FIG. 5 increases), for example such that only a single update takes place in the middle of the interval between the first time 401 and the second time 402.
- The current time tA must lie between the first time 401 and the second time 402.
- The minimum "step size", that is to say the minimum distance between the first time 401 and the second time 402, will according to the invention be two sampling periods, so that a current time between the first time 401 and the second time 402 is processed with, for example, weighting factors of 0.5 each.
- A rather large step size is preferred, on the one hand for reasons of computing time and on the other hand to produce a cross-fading effect, which would no longer occur if the following position had already been reached at the next point in time; that in turn would contribute to the unnatural Doppler effect.
- An upper limit for the step size, that is to say for the distance from the first point in time 401 to the second point in time 402, follows from the fact that, with increasing distance, more and more position information that would actually be available is ignored due to the cross-fading, which in extreme cases leads to a loss of the localizability of the virtual source for the listener.
- a linear course was chosen as the “basis” for the staircase curve for the first and second weighting factors.
- A sinusoidal, square, cubic etc. course could also be used; in that case the corresponding course would have to be used for the other weighting factor as well.
- The course of the other weighting factor must be complementary, in that the sum of the first and the second weighting factor is always equal to 1, or lies within a predetermined tolerance range which extends, for example, by plus or minus 10% around 1.
- One option would be to take a curve according to the square of the sine function for the first weighting factor and a curve according to the square of the cosine function for the second weighting factor, since the sum of the squares of sine and cosine is equal to 1 for every argument, i.e. for every current point in time tA.
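Such a complementary sin²/cos² pair can be sketched as follows; which of the two functions is assigned to the falling factor is merely a phase convention, and here the falling factor m is written as cos² so that it starts at 1 (an illustrative choice, not taken from the patent):

```python
import math

def sin2_cos2_weights(t, t1, t2):
    """Complementary crossfade weights whose sum is exactly 1 for every
    current time tA, since sin^2 + cos^2 = 1 for every argument."""
    phase = (t - t1) / (t2 - t1) * (math.pi / 2)
    m = math.cos(phase) ** 2  # first weighting factor, falls from 1 to 0
    n = math.sin(phase) ** 2  # second weighting factor, rises from 0 to 1
    return m, n
```

Unlike the linear ramp, this course starts and ends with zero slope, which further softens the transition at both interval ends.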
- In the examples described, the scaling factors at the first time 401 and at the second time 402 are both equal to 1. However, this does not necessarily have to be the case: in general, each sample of the audio signal associated with a virtual source will have a certain magnitude Bi.
- The wave field synthesis module would then be effective to calculate a first scaling factor SF1 for the first time 401 and a second scaling factor SF2 for the second time 402.
- The actual sample value at a current time tA between the first time 401 and the second time 402 would then be obtained as follows:
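The formula itself is not reproduced in this extraction. Based on the definitions above, a plausible reconstruction (an assumption, not the patent's verbatim equation) is AW(tA) = m(tA) * SF1 * s1(tA) + n(tA) * SF2 * s2(tA), i.e. each delayed sample is first scaled with its scaling factor and then weighted with the crossfade factor:

```python
def scaled_crossfade_sample(m, n, sf1, sf2, s1, s2):
    """Plausible reading of the omitted formula: the sample s1 delayed with
    the first delay is scaled with SF1 and weighted with m, the sample s2
    delayed with the second delay is scaled with SF2 and weighted with n."""
    return m * sf1 * s1 + n * sf2 * s2
```

With SF1 = SF2 = 1 this reduces to the unscaled weighted sum used in the examples above.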
- the method according to the invention can be implemented in hardware or in software.
- the implementation can take place on a digital storage medium, in particular a floppy disk or CD with electronically readable control signals, which can interact with a programmable computer system in such a way that the method is carried out.
- The invention thus also consists in a computer program product with a program code, stored on a machine-readable carrier, for carrying out the method according to the invention when the computer program product runs on a computer.
- the invention can thus be implemented as a computer program with a program code for carrying out the method if the computer program runs on a computer.
Landscapes
- Physics & Mathematics (AREA)
- Engineering & Computer Science (AREA)
- Acoustics & Sound (AREA)
- Signal Processing (AREA)
- Stereophonic System (AREA)
- Amplifiers (AREA)
- Circuit For Audible Band Transducer (AREA)
Abstract
Description
Claims
Priority Applications (4)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2006529784A JP4698594B2 (ja) | 2003-05-15 | 2004-05-11 | スピーカ信号内のコンポーネントの離散値を算出する装置および方法 |
DE502004002769T DE502004002769D1 (de) | 2003-05-15 | 2004-05-11 | Vorrichtung und verfahren zum berechnen eines diskreten werts einer komponente in einem lautsprechersignal |
EP04732100A EP1606975B1 (de) | 2003-05-15 | 2004-05-11 | Vorrichtung und verfahren zum berechnen eines diskreten werts einer komponente in einem lautsprechersignal |
US11/257,781 US7734362B2 (en) | 2003-05-15 | 2005-10-25 | Calculating a doppler compensation value for a loudspeaker signal in a wavefield synthesis system |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
DE10321980.3 | 2003-05-15 | ||
DE10321980A DE10321980B4 (de) | 2003-05-15 | 2003-05-15 | Vorrichtung und Verfahren zum Berechnen eines diskreten Werts einer Komponente in einem Lautsprechersignal |
Related Child Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US11/257,781 Continuation US7734362B2 (en) | 2003-05-15 | 2005-10-25 | Calculating a doppler compensation value for a loudspeaker signal in a wavefield synthesis system |
Publications (2)
Publication Number | Publication Date |
---|---|
WO2004103022A2 true WO2004103022A2 (de) | 2004-11-25 |
WO2004103022A3 WO2004103022A3 (de) | 2005-02-17 |
Family
ID=33440864
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/EP2004/005047 WO2004103022A2 (de) | 2003-05-15 | 2004-05-11 | Vorrichtung und verfahren zum berechnen eines diskreten werts einer komponente in einem lautsprechersignal |
Country Status (8)
Country | Link |
---|---|
US (1) | US7734362B2 (de) |
EP (1) | EP1606975B1 (de) |
JP (1) | JP4698594B2 (de) |
KR (1) | KR100674814B1 (de) |
CN (1) | CN100553372C (de) |
AT (1) | ATE352971T1 (de) |
DE (2) | DE10321980B4 (de) |
WO (1) | WO2004103022A2 (de) |
Cited By (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2006133812A1 (de) * | 2005-06-16 | 2006-12-21 | Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. | Apparatus and method for generating a loudspeaker signal on the basis of a randomly occurring audio source |
WO2007101498A1 (de) * | 2006-03-06 | 2007-09-13 | Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. | Apparatus and method for simulating WFS systems and compensating for sound-influencing WFS properties |
JP2008532374A (ja) * | 2005-02-23 | 2008-08-14 | Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. | Apparatus and method for controlling a wave field synthesis renderer means using audio objects |
JP2008532372A (ja) * | 2005-02-23 | 2008-08-14 | Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. | Apparatus and method for controlling a wave field synthesis rendering means |
US7809453B2 (en) | 2005-02-23 | 2010-10-05 | Fraunhofer-Gesellschaft Zur Foerderung Der Angewandten Forschung E.V. | Apparatus and method for simulating a wave field synthesis system |
US7813826B2 (en) | 2005-02-23 | 2010-10-12 | Fraunhofer-Gesellschaft Zur Foerderung Der Angewandten Forschung E.V. | Apparatus and method for storing audio files |
US7962231B2 (en) | 2005-02-23 | 2011-06-14 | Fraunhofer-Gesellschaft Zur Foerderung Der Angewandten Forschung E.V. | Apparatus and method for providing data in a multi-renderer system |
US8160280B2 (en) | 2005-07-15 | 2012-04-17 | Fraunhofer-Gesellschaft Zur Foerderung Der Angewandten Forschung E.V. | Apparatus and method for controlling a plurality of speakers by means of a DSP |
Families Citing this family (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US8031891B2 (en) * | 2005-06-30 | 2011-10-04 | Microsoft Corporation | Dynamic media rendering |
DE102005033239A1 (de) * | 2005-07-15 | 2007-01-25 | Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. | Apparatus and method for controlling a plurality of loudspeakers by means of a graphical user interface |
DE102007059597A1 (de) * | 2007-09-19 | 2009-04-02 | Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. | An apparatus and a method for determining a component signal with high accuracy |
KR101407200B1 (ko) * | 2009-11-04 | 2014-06-12 | Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. | Apparatus and method for calculating driving coefficients for the loudspeakers of a loudspeaker arrangement for an audio signal associated with a virtual source |
JP2011124723A (ja) * | 2009-12-09 | 2011-06-23 | Sharp Corp | Audio data processing apparatus, audio apparatus, audio data processing method, program, and recording medium on which the program is recorded |
JP5361689B2 (ja) * | 2009-12-09 | 2013-12-04 | Sharp Corp | Audio data processing apparatus, audio apparatus, audio data processing method, program, and recording medium |
ES2909532T3 (es) | 2011-07-01 | 2022-05-06 | Dolby Laboratories Licensing Corp | Apparatus and method for rendering audio objects |
US9357293B2 (en) * | 2012-05-16 | 2016-05-31 | Siemens Aktiengesellschaft | Methods and systems for Doppler recognition aided method (DREAM) for source localization and separation |
WO2013181272A2 (en) * | 2012-05-31 | 2013-12-05 | Dts Llc | Object-based audio system using vector base amplitude panning |
CN107393523B (zh) * | 2017-07-28 | 2020-11-13 | 深圳市盛路物联通讯技术有限公司 | A noise monitoring method and system |
Citations (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2001023104A2 (en) * | 1999-09-29 | 2001-04-05 | 1...Limited | Method and apparatus to direct sound using an array of output transducers |
Family Cites Families (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5052685A (en) * | 1989-12-07 | 1991-10-01 | Qsound Ltd. | Sound processor for video game |
JPH04132499A (ja) | 1990-09-25 | 1992-05-06 | Matsushita Electric Ind Co Ltd | Sound image control apparatus |
JP2882449B2 (ja) | 1992-12-18 | 1999-04-12 | Victor Company Of Japan, Ltd. | Sound image localization control apparatus for video games |
US5598478A (en) * | 1992-12-18 | 1997-01-28 | Victor Company Of Japan, Ltd. | Sound image localization control apparatus |
JPH06245300A (ja) | 1992-12-21 | 1994-09-02 | Victor Co Of Japan Ltd | Sound image localization control apparatus |
US5495576A (en) * | 1993-01-11 | 1996-02-27 | Ritchey; Kurtis J. | Panoramic image based virtual reality/telepresence audio-visual system and method |
GB2294854B (en) * | 1994-11-03 | 1999-06-30 | Solid State Logic Ltd | Audio signal processing |
JPH1063470A (ja) * | 1996-06-12 | 1998-03-06 | Nintendo Co Ltd | Sound generating apparatus linked to image display |
- 2003
  - 2003-05-15 DE DE10321980A patent/DE10321980B4/de not_active Expired - Fee Related
- 2004
  - 2004-05-11 EP EP04732100A patent/EP1606975B1/de not_active Expired - Lifetime
  - 2004-05-11 KR KR1020057021712A patent/KR100674814B1/ko active IP Right Grant
  - 2004-05-11 JP JP2006529784A patent/JP4698594B2/ja not_active Expired - Lifetime
  - 2004-05-11 WO PCT/EP2004/005047 patent/WO2004103022A2/de active IP Right Grant
  - 2004-05-11 AT AT04732100T patent/ATE352971T1/de not_active IP Right Cessation
  - 2004-05-11 DE DE502004002769T patent/DE502004002769D1/de not_active Expired - Lifetime
  - 2004-05-11 CN CNB2004800133099A patent/CN100553372C/zh not_active Expired - Lifetime
- 2005
  - 2005-10-25 US US11/257,781 patent/US7734362B2/en active Active
Patent Citations (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2001023104A2 (en) * | 1999-09-29 | 2001-04-05 | 1...Limited | Method and apparatus to direct sound using an array of output transducers |
Non-Patent Citations (3)
Title |
---|
BERKHOUT, A. J., et al.: "Acoustic Control by Wave Field Synthesis", Journal of the Acoustical Society of America, American Institute of Physics, New York, US, Vol. 93, No. 5, 1 May 1993, pages 2764-2778, XP000361413, ISSN: 0001-4966 * |
S. SPORS, H. TEUTSCH, R. RABENSTEIN: "High-Quality Acoustic Rendering with Wave Field Synthesis", Vision, Modeling, and Visualization 2002, 20-22 November 2002, pages 101-108, XP002306015, Erlangen, Germany. Retrieved from the Internet: URL:http://www.lnt.de/LMS/research/projects/WFS/index.php?lang=eng [retrieved on 2004-11-15] * |
SPORS, S., et al.: "Listening Room Compensation for Wave Field Synthesis", IEEE International Conference on Multimedia and Expo, Vol. 1, 9 July 2003, page I-725, XP008036698 * |
Cited By (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2008532374A (ja) * | 2005-02-23 | 2008-08-14 | Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. | Apparatus and method for controlling a wave field synthesis renderer means using audio objects |
JP2008532372A (ja) * | 2005-02-23 | 2008-08-14 | Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. | Apparatus and method for controlling a wave field synthesis rendering means |
US7809453B2 (en) | 2005-02-23 | 2010-10-05 | Fraunhofer-Gesellschaft Zur Foerderung Der Angewandten Forschung E.V. | Apparatus and method for simulating a wave field synthesis system |
US7813826B2 (en) | 2005-02-23 | 2010-10-12 | Fraunhofer-Gesellschaft Zur Foerderung Der Angewandten Forschung E.V. | Apparatus and method for storing audio files |
US7930048B2 (en) | 2005-02-23 | 2011-04-19 | Fraunhofer-Gesellschaft Zur Foerderung Der Angewandten Forschung E.V. | Apparatus and method for controlling a wave field synthesis renderer means with audio objects |
US7962231B2 (en) | 2005-02-23 | 2011-06-14 | Fraunhofer-Gesellschaft Zur Foerderung Der Angewandten Forschung E.V. | Apparatus and method for providing data in a multi-renderer system |
WO2006133812A1 (de) * | 2005-06-16 | 2006-12-21 | Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. | Apparatus and method for generating a loudspeaker signal on the basis of a randomly occurring audio source |
JP2008547255A (ja) * | 2005-06-16 | 2008-12-25 | Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. | Method and apparatus for generating a loudspeaker signal for a randomly occurring sound source |
US8160280B2 (en) | 2005-07-15 | 2012-04-17 | Fraunhofer-Gesellschaft Zur Foerderung Der Angewandten Forschung E.V. | Apparatus and method for controlling a plurality of speakers by means of a DSP |
WO2007101498A1 (de) * | 2006-03-06 | 2007-09-13 | Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. | Apparatus and method for simulating WFS systems and compensating for sound-influencing WFS properties |
CN101406075B (zh) * | 2006-03-06 | 2010-12-01 | Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. | Apparatus and method for aliasing correction in a wave field synthesis system |
Also Published As
Publication number | Publication date |
---|---|
EP1606975B1 (de) | 2007-01-24 |
EP1606975A2 (de) | 2005-12-21 |
US20060092854A1 (en) | 2006-05-04 |
DE502004002769D1 (de) | 2007-03-15 |
JP2007502590A (ja) | 2007-02-08 |
ATE352971T1 (de) | 2007-02-15 |
JP4698594B2 (ja) | 2011-06-08 |
WO2004103022A3 (de) | 2005-02-17 |
CN1792118A (zh) | 2006-06-21 |
CN100553372C (zh) | 2009-10-21 |
DE10321980A1 (de) | 2004-12-09 |
KR20060014050A (ko) | 2006-02-14 |
DE10321980B4 (de) | 2005-10-06 |
US7734362B2 (en) | 2010-06-08 |
KR100674814B1 (ko) | 2007-01-25 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
EP1637012B1 (de) | Wave field synthesis apparatus and method for driving an array of loudspeakers | |
EP1525776B1 (de) | Apparatus for level correction in a wave field synthesis system | |
DE10321980B4 (de) | Apparatus and method for calculating a discrete value of a component in a loudspeaker signal | |
EP1671516B1 (de) | Apparatus and method for generating a low-frequency channel | |
DE10254404B4 (de) | Audio reproduction system and method for reproducing an audio signal | |
EP1652405B1 (de) | Apparatus and method for generating, storing, or editing an audio representation of an audio scene | |
EP1872620B9 (de) | Apparatus and method for controlling a plurality of loudspeakers by means of a graphical user interface | |
EP1782658B1 (de) | Apparatus and method for driving a plurality of loudspeakers by means of a DSP | |
EP1972181B1 (de) | Apparatus and method for simulating WFS systems and compensating for sound-influencing WFS properties | |
EP1723825B1 (de) | Apparatus and method for controlling a wave field synthesis rendering device | |
EP1880577B1 (de) | Apparatus and method for generating a loudspeaker signal on the basis of a randomly occurring audio source | |
EP1518443B1 (de) | Apparatus and method for determining a reproduction position | |
EP2754151B1 (de) | Apparatus, method, and electroacoustic system for reverberation time extension | |
DE10254470A1 (de) | Apparatus and method for determining an impulse response and apparatus and method for presenting an audio piece
Legal Events
Date | Code | Title | Description
---|---|---|---
| AK | Designated states | Kind code of ref document: A2. Designated state(s): AE AG AL AM AT AU AZ BA BB BG BR BW BY BZ CA CH CN CO CR CU CZ DE DK DM DZ EC EE EG ES FI GB GD GE GH GM HR HU ID IL IN IS JP KE KG KP KR KZ LC LK LR LS LT LU LV MA MD MG MK MN MW MX MZ NA NI NO NZ OM PG PH PL PT RO RU SC SD SE SG SK SL SY TJ TM TN TR TT TZ UA UG US UZ VC VN YU ZA ZM ZW |
| AL | Designated countries for regional patents | Kind code of ref document: A2. Designated state(s): BW GH GM KE LS MW MZ NA SD SL SZ TZ UG ZM ZW AM AZ BY KG KZ MD RU TJ TM AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HU IE IT LU MC NL PL PT RO SE SI SK TR BF BJ CF CG CI CM GA GN GQ GW ML MR NE SN TD TG |
| 121 | Ep: the EPO has been informed by WIPO that EP was designated in this application | |
| DPEN | Request for preliminary examination filed prior to expiration of 19th month from priority date (PCT application filed from 20040101) | |
| WWE | WIPO information: entry into national phase | Ref document number: 2004732100; Country of ref document: EP |
| WWE | WIPO information: entry into national phase | Ref document number: 11257781; Country of ref document: US |
| WWE | WIPO information: entry into national phase | Ref document number: 1020057021712; Country of ref document: KR |
| WWE | WIPO information: entry into national phase | Ref document number: 20048133099; Country of ref document: CN. Ref document number: 2006529784; Country of ref document: JP |
| WWP | WIPO information: published in national office | Ref document number: 2004732100; Country of ref document: EP |
| WWP | WIPO information: published in national office | Ref document number: 1020057021712; Country of ref document: KR |
| WWP | WIPO information: published in national office | Ref document number: 11257781; Country of ref document: US |
| WWG | WIPO information: grant in national office | Ref document number: 1020057021712; Country of ref document: KR |
| WWG | WIPO information: grant in national office | Ref document number: 2004732100; Country of ref document: EP |