EP1606975A2 - Device and method for calculating a discrete value in a loudspeaker signal - Google Patents

Device and method for calculating a discrete value in a loudspeaker signal

Info

Publication number
EP1606975A2
Authority
EP
European Patent Office
Prior art keywords
time
point
delay
weighting factor
value
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
EP04732100A
Other languages
German (de)
English (en)
Other versions
EP1606975B1 (fr)
Inventor
Thomas Röder
Thomas Sporer
Sandra Brix
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Fraunhofer Gesellschaft zur Forderung der Angewandten Forschung eV
Original Assignee
Fraunhofer Gesellschaft zur Forderung der Angewandten Forschung eV
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Fraunhofer Gesellschaft zur Forderung der Angewandten Forschung eV
Publication of EP1606975A2
Application granted
Publication of EP1606975B1
Anticipated expiration
Expired - Lifetime (current legal status)

Links

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04S STEREOPHONIC SYSTEMS
    • H04S 1/00 Two-channel systems
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04S STEREOPHONIC SYSTEMS
    • H04S 7/00 Indicating arrangements; Control arrangements, e.g. balance control
    • H04S 7/30 Control circuits for electronic adaptation of the sound field
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04S STEREOPHONIC SYSTEMS
    • H04S 2420/00 Techniques used in stereophonic systems covered by H04S but not provided for in its groups
    • H04S 2420/13 Application of wave-field synthesis in stereophonic audio systems
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04S STEREOPHONIC SYSTEMS
    • H04S 3/00 Systems employing more than two channels, e.g. quadraphonic
    • H04S 3/002 Non-adaptive circuits, e.g. manually adjustable or static, for enhancing the sound image or the spatial distribution

Definitions

  • the present invention relates to wave field synthesis systems and in particular to wave field synthesis systems which allow moving virtual sources.
  • WFS (wave field synthesis)
  • Every point that is captured by a wave is the starting point of an elementary wave that propagates in a spherical or circular manner.
  • a large number of loudspeakers that are arranged next to each other can be used to simulate any shape of an incoming wavefront.
  • the audio signals for each loudspeaker must be fed with a time delay and amplitude scaling such that the emitted sound fields of the individual loudspeakers are superimposed correctly. If there are several sound sources, the contribution to each loudspeaker is calculated separately for each source and the resulting signals are added. In a virtual room with reflecting walls, reflections can also be reproduced as additional sources via the loudspeaker array. The computational effort therefore depends heavily on the number of sound sources, the reflection properties of the recording room and the number of loudspeakers.
  • the particular advantage of this technique is that a natural spatial sound impression is possible over a large area of the playback room.
  • the direction and distance of sound sources are reproduced very precisely.
  • virtual sound sources can even be positioned between the real speaker array and the listener.
  • while wave field synthesis works well for environments whose properties are known, irregularities do occur when those properties change or when the wave field synthesis is carried out on the basis of environment conditions that do not match the actual environment.
  • the technique of wave field synthesis can also be used advantageously to complement a visual perception with a corresponding spatial audio perception.
  • the focus in production in virtual studios has been to convey an authentic visual impression of the virtual scene.
  • the acoustic impression that goes with the image is usually imprinted on the audio signal by manual work steps in what is known as post-production, or is classified as too complex and time-consuming to implement and is therefore neglected. This usually leads to a contradiction between the individual sensory impressions, with the result that the designed space, i.e. the designed scene, is perceived as less authentic.
  • “Hearing with the ears of the camera” is to be made possible in order to make a scene appear more real.
  • the aim here is to achieve the highest possible correlation between the sound event location in the image and the hearing event location in the surround field.
  • camera parameters, such as zoom, should be included in the sound design, as should the position of two loudspeakers L and R.
  • tracking data of a virtual studio are written into a file together with an associated time code by the system.
  • picture, sound and time code are recorded on a MAZ (a studio video tape recorder).
  • the camdump file is transferred to a computer, which generates control data for an audio workstation and outputs it via a MIDI interface in sync with the image from the MAZ.
  • the actual audio processing such as positioning the sound source in the surround field and inserting early reflections and reverberation takes place within the audio workstation.
  • the signal is processed for a 5.1 surround speaker system.
  • Camera tracking parameters as well as positions of sound sources in the recording setting can be recorded in real film sets. Such data can also be generated in virtual studios.
  • an actor or presenter stands alone in a recording room.
  • he stands in front of a blue wall, which is also known as a blue box or blue panel.
  • a pattern of blue and light blue stripes is applied to this blue wall.
  • the special thing about this pattern is that the stripes are of different widths and thus result in a multitude of stripe combinations. Due to the unique stripe combinations on the blue wall, it is possible to determine exactly in which direction the camera is looking when the blue wall is later replaced by a virtual background in post-processing. With the help of this information, the computer can determine the background for the current camera viewing angle. Sensors on the camera are also evaluated, which record and output additional camera parameters.
  • Typical parameters of a camera which are recorded by means of sensors are the three degrees of translation x, y, z, the three degrees of rotation, which can also be referred to as roll, tilt and pan, and the focal length or zoom, which is equivalent to the information about the opening angle of the camera.
  • a tracking system can be used that consists of several infrared cameras that determine the position of an infrared sensor attached to the camera. This also determines the position of the camera.
  • a real-time computer can now calculate the background for the current image. The blue hue of the blue background is then removed from the image, so that the virtual background is inserted instead of the blue background.
  • in the audio area, the technology of wave field synthesis (WFS) can be used to achieve good spatial sound for a large range of listeners.
  • wave field synthesis is based on the principle of Huygens, according to which wave fronts can be shaped and built up by superimposing elementary waves. According to a mathematically exact theoretical description, an infinite number of sources at infinitesimally small distances would have to be used to generate the elementary waves. In practice, however, a finite number of loudspeakers at a finite distance from one another are used. Each of these loudspeakers is controlled according to the WFS principle with an audio signal from a virtual source, which has a specific delay and a specific level. Levels and delays are usually different for all loudspeakers.
  • a Doppler effect also exists in wave field synthesis or sound field synthesis. It is physically based on the same background as the natural Doppler effect described above. In contrast to the natural Doppler effect, there is no direct path between the transmitter and the receiver in sound field synthesis. Instead, a distinction is made in that there is a primary transmitter and a primary receiver. There is also a secondary transmitter and a secondary receiver. This scenario is illustrated below with the aid of FIG. 7.
  • FIG. 7 shows a virtual source 700 which moves over time from a first position, which is denoted by a circled "1" in FIG. 7, along a movement path 702 to a second position, which is shown in FIG. 7 by a circled "2".
  • three loudspeakers 704 are shown schematically, which are intended to symbolize a wave field synthesis loudspeaker array.
  • also shown is a receiver 706 which, in the example shown in FIG. 7, is located at the center of the circular movement path.
  • the path of movement of the virtual source is a circular path that extends around the receiver that forms the center of this circular path
  • the loudspeakers 704, however, are not arranged at this center, so that, at the point in time at which the virtual source 700 is at the first position, the source is at a first distance r1 from a given loudspeaker, and, when the source is then at its second position, it is at a second distance r2 from that loudspeaker.
  • r1 is not equal to r2.
  • R1, that is to say the distance of the virtual source from the listener 706 at time 1, is equal to the distance from the listener 706 to the virtual source at time 2. This means that there is no change in the distance of the virtual source 700 for the receiver 706.
  • the virtual source 700 nevertheless changes its position relative to the loudspeakers 704, since r1 is not equal to r2.
  • the virtual source represents the primary transmitter, while speakers 704 represent the primary receiver.
  • the loudspeakers 704 represent the secondary transmitter, while the listener 706 finally represents the secondary receiver.
  • the transmission between the primary transmitter and the primary receiver is "virtual." This means that the wave field synthesis algorithms are responsible for the stretching and compression of the wave front of the waveforms.
  • when a loudspeaker 704 receives a signal from the wave field synthesis module, there is at first no audible signal; the signal only becomes audible after being output via the loudspeaker, which means that Doppler effects can arise at various points.
  • each loudspeaker reproduces a signal with a different Doppler effect, depending on its specific position with regard to the moving virtual source, since the loudspeakers are at different positions and the relative movements are different for each loudspeaker.
  • the listener can also move relative to the speakers.
  • this is a case which is insignificant in practice, in particular in a cinema setting, since the movement of the listener with respect to the loudspeakers will always be a relatively slow movement with a correspondingly small Doppler effect, since the Doppler shift, as is known in the art, is proportional to the relative movement between sender and receiver.
  • the first-mentioned Doppler effect, i.e. when the virtual source moves relative to the loudspeakers, can sound relatively natural, but also very unnatural. This depends on the direction in which the movement takes place. If the source moves straight away from the center of the system, the effect is more natural. Referring to FIG. 7, this would mean that the virtual source 700 would, for example, move away from the listener along the arrow Rx.
  • if the virtual source 700 "circles" the listener 706, as shown in FIG. 7, a very unnatural effect results, since the relative movements between the primary source and the primary receivers (the loudspeakers) are very strong and also differ greatly between the individual primary receivers, which is in stark contrast to nature, where there is no Doppler effect when the source circles the listener, since there is no change in distance between source and listener.
  • the object of the present invention is to provide an improved concept for calculating a discrete value at a current point in time of a component in a loudspeaker signal, in which artifacts due to Doppler effects are reduced.
  • the present invention is based on the finding that Doppler effects should be taken into account, since they are part of the information required for identifying the position of a source. If such Doppler effects had to be dispensed with completely, this could lead to a suboptimal sound experience, since the Doppler effect is natural and its absence would therefore give a suboptimal impression if, for example, a virtual source moved towards a listener but no Doppler shift occurred in the audio frequency.
  • a "cross-fade" from one position to another position is carried out in order to "blur" the Doppler effect, so that it is still present to some extent, but its effects lead to no artifacts or only to reduced artifacts.
  • in the cross-fade region, a discrete value for a current point in time is calculated using a sample value of the audio signal of the virtual source that is valid for the current point in time at the first position, i.e. at the first point in time, and using a sample value of the audio signal of the virtual source belonging to the current point in time at the second position, i.e. at the second point in time.
  • crossfading preferably takes place in such a way that, at the first point in time, that is to say while the first position and thus the first delay information are valid, the weighting factor for the audio signal delayed by the first delay is 100%, while the weighting factor for the audio signal delayed by the second delay is 0%; then, from the first point in time to the second point in time, the two weighting factors are changed in opposite directions in order to "blend" smoothly, so to speak, from one position to the other (an illustrative sketch of this cross-fade is given at the end of this section).
  • the concept according to the invention represents a compromise which accepts, on the one hand, a certain loss of position information, since new position information of the source is no longer taken into account at each new current point in time; instead, a position update of the virtual source is carried out only in rather coarse steps, with a cross-fade between the one position of the source and the second position of the source, which is taken up some time later.
  • This is done in that the delay is initially calculated for relatively coarse spatial step sizes, i.e. for position information that is relatively far apart in time (of course taking into account the speed of the source).
  • the delay change that leads to the above-mentioned virtual Doppler effect between the primary transmitter and the primary receiver is thus smoothed out, that is, continuously transferred from one delay change to another.
  • the cross-fading or "panning" takes place according to the invention by means of volume scaling from one position to the next in order to avoid spatial jumps and thus audible "crackling".
  • the "hard" omission or addition of samples due to a delay change is replaced by a waveform with rounded corners adapted to the hard signal shape, so that the delay changes are taken into account, but the hard, artifact-producing influence on a loudspeaker signal caused by a change in position of the virtual source is avoided.
  • FIG. 1 shows a block diagram of a device according to the invention
  • FIG. 2 shows a basic circuit diagram of a wave field synthesis environment as can be used for the present invention
  • FIG. 3 shows a more detailed illustration of the wave field synthesis module shown in FIG. 2;
  • FIG. 4c shows a first cross-faded version based on the audio signals shown in FIGS. 4a and 4b in a period between the first point in time at which FIG. 4a is valid and a second point in time at which FIG. 4b is valid;
  • FIG. 4d shows a further cross-fade representation at a later point in time with respect to FIG. 4c, at which the signal shown in FIG. 4b is valid;
  • FIG. 5 shows a time profile of the component Kij in a loudspeaker signal based on a virtual source i, which is composed of the time profiles of FIGS. 4a to 4d;
  • FIG. 6 shows a detailed illustration of the weighting factors m, n which have been used in the calculation of the audio signals shown in FIGS. 4a to 4d;
  • FIG. 2 shows a classic wave field synthesis environment.
  • the center of a wave field synthesis environment is a wave field synthesis module 200, which comprises various inputs 202, 204, 206 and 208 and various outputs 210, 212, 214, 216.
  • Various audio signals for virtual sources are fed to the wave field synthesis module via inputs 202 to 204. For example, the input 202 receives an audio signal of the virtual source 1 and associated position information of the virtual source.
  • the audio signal 1 would be, for example, the speech of an actor who moves from a left side of the screen to a right side of the screen and possibly additionally away from the viewer or towards the viewer.
  • the audio signal 1 would then be the actual speech of this actor, while the position information as a function of time represents the current position of the first actor in the recording setting at a given point in time.
  • the audio signal n would be the speech of, for example, another actor who moves in the same way as, or differently from, the first actor.
  • the current position of the other actor to whom the audio signal n is assigned is communicated to the wave field synthesis module 200 by position information synchronized with the audio signal n.
  • a wave field synthesis module feeds a plurality of loudspeakers LS1, LS2, LS3, ..., LSm by outputting loudspeaker signals via the outputs 210 to 216 to the individual loudspeakers.
  • the positions of the individual loudspeakers in a playback setting, such as a cinema, are communicated to the wave field synthesis module 200 via the input 206.
  • in the cinema hall there are many individual loudspeakers grouped around the cinema audience, which are preferably arranged in arrays such that there are loudspeakers both in front of the viewer, for example behind the screen, and behind the viewer as well as to the right and to the left of the viewer.
  • other inputs can be communicated to the wave field synthesis module 200, such as information about the room acoustics, etc., in order to be able to simulate the actual room acoustics prevailing during the recording set-up in a cinema hall.
  • the loudspeaker signal which is supplied to the loudspeaker LS1 via the output 210 will be a superposition of component signals of the virtual sources, in that the loudspeaker signal for the loudspeaker LS1 comprises a first component, which originates from the virtual source 1, a second component, which goes back to the virtual source 2, as well as an n-th component, which goes back to the virtual source n.
  • the individual component signals are linearly superimposed, i.e. added after their calculation, in order to simulate the linear superposition at the ear of the listener, who will hear a linear superposition of the sound sources perceivable in a real setting.
  • the wave field synthesis module 200 has a strongly parallel structure in that, starting from the audio signal for each virtual source and from the position information for the corresponding virtual source, delay information Vi and scaling factors SFi are first calculated, which depend on the position information and on the position of the loudspeaker under consideration, e.g. the loudspeaker with index j, i.e. LSj.
  • delay information Vi and a scaling factor SFi are calculated on the basis of the position information of a virtual source and the position of the loudspeaker j in question using known algorithms which are implemented in devices 300, 302, 304, 306.
  • for a current time tA, a discrete value AWi(tA) of the component signal Kij in the loudspeaker signal ultimately obtained is calculated. This is done by means 310, 312, 314, 316, as shown schematically in FIG. 3. FIG. 3 thus also shows, so to speak, a "snapshot" at time tA of the individual component signals.
  • the individual component signals are then summed by a summer 320 to determine the discrete value, for the current time tA, of the loudspeaker signal for loudspeaker j, which can then be fed via the output (e.g. output 214 if loudspeaker j is loudspeaker LS3) to the loudspeaker.
  • a value that is valid at a current point in time due to a delay and a scaling with a scaling factor is thus first calculated individually for each virtual source, after which all component signals for a loudspeaker owing to the different virtual sources are summed. If, for example, there were only one virtual source, the summer would be omitted and the signal present at the output of the summer in FIG. 3 would correspond, for example, to the signal output by the device 310 when the virtual source 1 is the only virtual source (an illustrative sketch of this per-source delay, scaling and summation is given at the end of this section).
  • it is assumed that, at time t0, a delay of 0 sample values has been calculated by the wave field synthesis module.
  • the time of switching is also identified by an arrow 404 in FIG. 4a.
  • the component for the loudspeaker signal on the basis of the virtual source shown in FIGS. 4a and 4b thus consists of the values shown in FIG. 4a from time 0 to time 8 and, from time 9 until a later time at which a change in position is signaled again, of the samples at the current times 9 to 12 which are shown in FIG. 4b.
  • This signal is shown in Fig. 8. It can be seen that at the time of switching, that is to say at the time of switching from one position to the other position, the switching again being designated by 404 in FIG. 8, two samples were omitted.
  • the device according to the invention shown in FIG. 1 serves to reduce such artifacts caused by a delay change.
  • FIG. 1 shows in particular a device for calculating a discrete value for a current point in time of a component Kj in a loudspeaker signal for a loudspeaker j on the basis of a virtual source i in a wave field synthesis system with a wave field synthesis module and a plurality of loudspeakers.
  • the wave field synthesis module is designed to determine, using an audio signal associated with the virtual source and using position information that indicates a position of the virtual source, delay information that indicates how many samples the audio signal is delayed with respect to a time reference should occur in the component.
  • the device shown in FIG. 1 first comprises a device 10 for providing a first delay, which is associated with a first position of the virtual source, and for providing a second delay, which is associated with a second position of the virtual source.
  • first position of the virtual source relates to a first point in time
  • second position of the virtual source relates to a second point in time that is later than the first point in time.
  • the second position differs from the first position.
  • the second position is, for example, the position of the virtual source shown in FIG. 7 with the circled "2", while the first position is the position of the virtual source 700 shown in FIG. 7 with a circled "1".
  • the device 10 for providing thus provides a first delay 12a for the first point in time and a second delay 12b for the second point in time.
  • the device 10 is also designed to output scaling factors for the two times in addition to the delays, as will be explained later.
  • the two delays at the outputs 12a, 12b of the device 10 are fed to a device 14 for determining a first value, for the current time (which can be signaled via an input 18), of the audio signal delayed by the first delay, the audio signal being supplied to the device 14 via an input 16, and for determining a second value, for the current point in time, of the audio signal delayed by the second delay.
  • the device according to the invention further comprises means 22 for weighting the first value (from A1) with a first weighting factor m in order to obtain a weighted first value 24a.
  • the device 22 is further operative to weight the second value 20b (from A4) with a second weighting factor n in order to obtain a weighted second value 24b.
  • the two weighted values 24a and 24b are fed to a device 26 for summing the two values in order to actually obtain a “faded” discrete value 28 for the current time of the component Kij in a loudspeaker signal for a loudspeaker j on the basis of the virtual source i.
  • the functionality of the device shown in FIG. 1 is shown by way of example with reference to FIGS. 4c, 4d, 5 and 6.
  • neither the value of A1 at the first time 401 nor the value of A4 at the second time 402 is modified.
  • all values between t1 (401) and t2 (402) are modified according to the invention, that is to say values which are assigned to a current time tA which lies between the first time 401 and the second time 402.
  • the graph in FIG. 6 represents the first weighting factor m as a function of the current times between the first time 401 and the second time 402.
  • the first weighting factor m is monotonically falling, while the second weighting factor n is monotonically increasing.
  • the two weighting factors will in practice have a step-like course, since a value can only be calculated for each individual sample, i.e. not continuously.
  • the step-shaped course will be a course shown in dashed or dotted lines in FIG. 6 which, depending on the number of crossfading events or on the predefined computing-capacity resources, approximates the continuous line between the first point in time 401 and the second point in time 402 more or less closely.
  • merely by way of example, in the embodiment shown in FIG. 6, which is reflected in FIGS. 4c and 4d, two cross-fading events between the first time 401 and the second time 402 were used.
  • the signal weighted with the weighting factors m and n associated with the first crossfade instant, which are shown on a line 600 in FIG. 6, is denoted by A2 in FIG. 4c.
  • the signal associated with the second crossfade instant 602 is shown as A3 in FIG. 4d.
  • the actual time course of the component Kij that is ultimately calculated (FIGS. 4a to 4d serve only for illustration) is shown in FIG. 5.
  • in FIGS. 5 and 6, a new weighting factor is not calculated for each new sample value, that is to say with a period T, but only every three sampling periods.
  • the sampling values corresponding to these times are therefore taken from FIG. 4a for the current times 0, 1 and 2.
  • the sample values for the points in time 3, 4 and 5 belonging to FIG. 4c are taken.
  • the sample values belonging to FIG. 4d are taken for the times 6, 7 and 8, while finally, for the times 9, 10 and 11 and for further times until a next position change or a next crossfading action, the sample values from FIG. 4b which correspond to the current times 9, 10 and 11, etc. are taken.
  • a "finer" smoothing could be achieved if the position update interval PAI shown in FIG. 5 were applied not only every three samples, as shown in FIG. 5, but for each sample, so that the parameter N in FIG. 5 increases.
  • the stair curve symbolizing the first weighting factor m would then approximate the continuous curve more closely; however, the position update interval could alternatively be made even larger than 3, for example such that only a single update is carried out in the middle of the interval between the first time 401 and the second time 402.
  • the current time tA must lie between the first time 401 and the second time 402.
  • the minimum “step size”, that is to say the minimum distance between the first time 401 and the second time 402, will be two sampling periods according to the invention, so that the current time between the first time 401 and the second time 402 is processed with, for example, respective weighting factors of 0.5
  • a rather large step size is preferred, on the one hand for reasons of computing time and on the other hand to produce a cross-fading effect which would no longer occur if the following position has already been reached at the next point in time, which in turn contributes to the unnatural Doppler effect
  • An upper limit for the step size, that is to say for the distance from the first point in time 401 to the second point in time 402, results from the fact that, with increasing distance, more and more position information that would actually be available is ignored due to the cross-fading, which in extreme cases will lead to a loss of the localizability of the virtual source for the listener.
  • a linear course was chosen as the “basis” for the staircase curve for the first and second weighting factors.
  • a sinusoidal, quadratic, cubic, etc. course could also be used.
  • the corresponding course would have to be used
  • the course of the other weighting factor must be complementary, in that the sum of the first and the second weighting factor is always equal to 1, or lies within a predetermined tolerance range which extends, for example, by plus or minus 10% around 1.
  • one option would be to take a curve according to the square of the sine function for the first weighting factor and a curve according to the square of the cosine function for the second weighting factor, since the sum of the squares of sine and cosine is equal to 1 for every argument, i.e. for every current point in time tA.
  • in the example considered, the scaling factors at the first time 401 and at the second time 402 are both equal to 1. However, this does not necessarily have to be the case; in general, each sample of the audio signal associated with a virtual source will have a certain magnitude Bi.
  • the wave field synthesis module would then be effective to calculate a first scaling factor SF1 for the first time 401 and a second scaling factor SF2 for the second time 402.
  • the actual sample value at a current time tA between the first time 401 and the second time 402 would then be obtained as follows (one possible reading of this relationship is sketched at the end of this section):
  • the method according to the invention can be implemented in hardware or in software.
  • the implementation can take place on a digital storage medium, in particular a floppy disk or CD with electronically readable control signals, which can interact with a programmable computer system in such a way that the method is carried out.
  • the invention thus also consists in a computer program product with a program code, stored on a machine-readable carrier, for carrying out the method according to the invention when the computer program product runs on a computer.
  • the invention can thus be implemented as a computer program with a program code for carrying out the method if the computer program runs on a computer.
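
The per-loudspeaker computation described above (a delay Vi and a scaling factor SFi per virtual source, followed by summation of the component signals) can be illustrated with a small sketch. This is only an illustrative Python sketch under assumptions, not the patent's implementation: the 1/r attenuation, the constant SPEED_OF_SOUND and the function names (delay_and_scale, render_loudspeaker_sample) are placeholders chosen for this example.

```python
# Illustrative sketch only (not the patent's implementation): per-source delay
# and scaling for one loudspeaker, followed by linear superposition.
import math

SPEED_OF_SOUND = 343.0  # m/s, assumed value


def delay_and_scale(source_pos, speaker_pos, sample_rate):
    """Assumed mapping from a source/loudspeaker geometry to a delay (in samples)
    and an amplitude scaling factor (simple 1/r attenuation as a placeholder)."""
    distance = math.dist(source_pos, speaker_pos)
    delay = int(round(distance / SPEED_OF_SOUND * sample_rate))
    scale = 1.0 / max(distance, 1.0)
    return delay, scale


def render_loudspeaker_sample(t, sources, speaker_pos, sample_rate):
    """Discrete value of one loudspeaker signal at sample index t: the sum over
    all virtual sources of the delayed and scaled source sample (one component per source)."""
    total = 0.0
    for audio, source_pos in sources:          # audio: list of samples, source_pos: (x, y)
        delay, scale = delay_and_scale(source_pos, speaker_pos, sample_rate)
        k = t - delay
        if 0 <= k < len(audio):
            total += scale * audio[k]          # linear superposition of the components
    return total
```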
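
The cross-fade between the first and the second position, as described with reference to FIGS. 4a to 6, can likewise be sketched. Again this is an assumption-laden illustration rather than the patent's code: the linear, step-wise weighting schedule with a position update interval of three samples mirrors the example of FIG. 5, while the function names and the boundary handling are choices made for this sketch.

```python
# Illustrative sketch only: complementary, step-wise weighting factors m and n
# between the first time t1 (delay d1, scaling sf1) and the second time t2
# (delay d2, scaling sf2), updated every 'update_interval' samples.

def stepped_weight(t, t1, t2, update_interval):
    """First weighting factor m at sample index t: 1.0 at t1, 0.0 from t2 onwards,
    following a linear course that is only updated every 'update_interval' samples."""
    if t <= t1:
        return 1.0
    if t >= t2:
        return 0.0
    steps_done = (t - t1) // update_interval          # quantize to the last update instant
    t_quant = t1 + steps_done * update_interval
    return 1.0 - (t_quant - t1) / (t2 - t1)


def crossfaded_component(audio, t, t1, t2, d1, d2, sf1, sf2, update_interval=3):
    """Discrete value of the component at the current time t in the cross-fade region:
    weighted sum of the audio signal delayed by d1 and the audio signal delayed by d2."""
    m = stepped_weight(t, t1, t2, update_interval)    # weight for the first (old) position
    n = 1.0 - m                                       # complementary weight, m + n == 1
    v1 = audio[t - d1] if 0 <= t - d1 < len(audio) else 0.0
    v2 = audio[t - d2] if 0 <= t - d2 < len(audio) else 0.0
    return m * sf1 * v1 + n * sf2 * v2
```

For example, with t1 = 0, t2 = 9 and update_interval = 3, the weight m stays at 1.0 for samples 0 to 2, drops to 2/3 for samples 3 to 5 and to 1/3 for samples 6 to 8, which corresponds to the three-sample position update interval of FIG. 5.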
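
Concerning the squared sine/cosine weighting curves and the scaling factors SF1 and SF2 mentioned above, the cut-off relationship can plausibly be read as AW(tA) = m(tA) * SF1 * s(tA - d1) + n(tA) * SF2 * s(tA - d2), where s is the audio signal of the virtual source and d1, d2 are the two delays; this reconstruction is an assumption, not the patent's exact formula. The following sketch shows such complementary curves, whose sum is exactly 1 for every current time tA.

```python
# Illustrative sketch only: complementary weighting curves m = cos^2, n = sin^2
# on the interval [t1, t2]; their sum is exactly 1 for every current time.
import math


def sin2_cos2_weights(t, t1, t2):
    """Return (m, n): m falls from 1 to 0 as cos^2, n rises from 0 to 1 as sin^2."""
    x = min(max((t - t1) / (t2 - t1), 0.0), 1.0)   # normalized position in the interval
    n = math.sin(0.5 * math.pi * x) ** 2           # weight for the second (new) position
    m = math.cos(0.5 * math.pi * x) ** 2           # weight for the first (old) position
    return m, n
```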

Landscapes

  • Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Acoustics & Sound (AREA)
  • Signal Processing (AREA)
  • Stereophonic System (AREA)
  • Circuit For Audible Band Transducer (AREA)
  • Amplifiers (AREA)
EP04732100A 2003-05-15 2004-05-11 Dispositif et procede de calcul d'une valeur discrete dans un signal de haut-parleur Expired - Lifetime EP1606975B1 (fr)

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
DE10321980A DE10321980B4 (de) 2003-05-15 2003-05-15 Vorrichtung und Verfahren zum Berechnen eines diskreten Werts einer Komponente in einem Lautsprechersignal
DE10321980 2003-05-15
PCT/EP2004/005047 WO2004103022A2 (fr) 2003-05-15 2004-05-11 Dispositif et procede de calcul d'une valeur discrete dans un signal de haut-parleur

Publications (2)

Publication Number Publication Date
EP1606975A2 true EP1606975A2 (fr) 2005-12-21
EP1606975B1 EP1606975B1 (fr) 2007-01-24

Family

ID=33440864

Family Applications (1)

Application Number Title Priority Date Filing Date
EP04732100A Expired - Lifetime EP1606975B1 (fr) 2003-05-15 2004-05-11 Dispositif et procede de calcul d'une valeur discrete dans un signal de haut-parleur

Country Status (8)

Country Link
US (1) US7734362B2 (fr)
EP (1) EP1606975B1 (fr)
JP (1) JP4698594B2 (fr)
KR (1) KR100674814B1 (fr)
CN (1) CN100553372C (fr)
AT (1) ATE352971T1 (fr)
DE (2) DE10321980B4 (fr)
WO (1) WO2004103022A2 (fr)

Families Citing this family (18)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE102005008366A1 (de) 2005-02-23 2006-08-24 Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. Vorrichtung und Verfahren zum Ansteuern einer Wellenfeldsynthese-Renderer-Einrichtung mit Audioobjekten
DE102005008342A1 (de) 2005-02-23 2006-08-24 Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. Vorrichtung und Verfahren zum Speichern von Audiodateien
DE102005008333A1 (de) * 2005-02-23 2006-08-31 Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. Vorrichtung und Verfahren zum Steuern einer Wellenfeldsynthese-Rendering-Einrichtung
DE102005008343A1 (de) 2005-02-23 2006-09-07 Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. Vorrichtung und Verfahren zum Liefern von Daten in einem Multi-Renderer-System
DE102005008369A1 (de) 2005-02-23 2006-09-07 Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. Vorrichtung und Verfahren zum Simulieren eines Wellenfeldsynthese-Systems
DE102005027978A1 (de) * 2005-06-16 2006-12-28 Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. Vorrichtung und Verfahren zum Erzeugen eines Lautsprechersignals aufgrund einer zufällig auftretenden Audioquelle
US8031891B2 (en) * 2005-06-30 2011-10-04 Microsoft Corporation Dynamic media rendering
DE102005033239A1 (de) * 2005-07-15 2007-01-25 Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. Vorrichtung und Verfahren zum Steuern einer Mehrzahl von Lautsprechern mittels einer graphischen Benutzerschnittstelle
DE102005033238A1 (de) 2005-07-15 2007-01-25 Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. Vorrichtung und Verfahren zum Ansteuern einer Mehrzahl von Lautsprechern mittels eines DSP
DE102006010212A1 (de) * 2006-03-06 2007-09-20 Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. Vorrichtung und Verfahren zur Simulation von WFS-Systemen und Kompensation von klangbeeinflussenden WFS-Eigenschaften
DE102007059597A1 (de) * 2007-09-19 2009-04-02 Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. Eine Vorrichtung und ein Verfahren zur Ermittlung eines Komponentensignals in hoher Genauigkeit
EP2663099B1 (fr) * 2009-11-04 2017-09-27 Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. Appareil et procédé pour fournir des signaux d'entraînement pour lesdits haut-parleurs sur la base d'un signal audio associé à une source virtuelle
JP5361689B2 (ja) * 2009-12-09 2013-12-04 シャープ株式会社 オーディオデータ処理装置、オーディオ装置、オーディオデータ処理方法、プログラム及び記録媒体
JP2011124723A (ja) * 2009-12-09 2011-06-23 Sharp Corp オーディオデータ処理装置、オーディオ装置、オーディオデータ処理方法、プログラム及び当該プログラムを記録した記録媒体
KR101843834B1 (ko) 2011-07-01 2018-03-30 돌비 레버러토리즈 라이쎈싱 코오포레이션 향상된 3d 오디오 오서링과 렌더링을 위한 시스템 및 툴들
US9357293B2 (en) * 2012-05-16 2016-05-31 Siemens Aktiengesellschaft Methods and systems for Doppler recognition aided method (DREAM) for source localization and separation
WO2013181272A2 (fr) * 2012-05-31 2013-12-05 Dts Llc Système audio orienté objet utilisant un panoramique d'amplitude sur une base de vecteurs
CN107393523B (zh) * 2017-07-28 2020-11-13 深圳市盛路物联通讯技术有限公司 一种噪音监控方法及系统

Family Cites Families (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5052685A (en) * 1989-12-07 1991-10-01 Qsound Ltd. Sound processor for video game
JPH04132499A (ja) 1990-09-25 1992-05-06 Matsushita Electric Ind Co Ltd 音像制御装置
US5598478A (en) * 1992-12-18 1997-01-28 Victor Company Of Japan, Ltd. Sound image localization control apparatus
JP2882449B2 (ja) 1992-12-18 1999-04-12 日本ビクター株式会社 テレビゲーム用の音像定位制御装置
JPH06245300A (ja) 1992-12-21 1994-09-02 Victor Co Of Japan Ltd 音像定位制御装置
US5495576A (en) * 1993-01-11 1996-02-27 Ritchey; Kurtis J. Panoramic image based virtual reality/telepresence audio-visual system and method
GB2294854B (en) * 1994-11-03 1999-06-30 Solid State Logic Ltd Audio signal processing
JPH1063470A (ja) * 1996-06-12 1998-03-06 Nintendo Co Ltd 画像表示に連動する音響発生装置
EP1224037B1 (fr) * 1999-09-29 2007-10-31 1... Limited Procede et dispositif permettant de diriger le son

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See references of WO2004103022A2 *

Also Published As

Publication number Publication date
ATE352971T1 (de) 2007-02-15
KR20060014050A (ko) 2006-02-14
WO2004103022A2 (fr) 2004-11-25
WO2004103022A3 (fr) 2005-02-17
JP2007502590A (ja) 2007-02-08
DE10321980A1 (de) 2004-12-09
US7734362B2 (en) 2010-06-08
US20060092854A1 (en) 2006-05-04
CN1792118A (zh) 2006-06-21
KR100674814B1 (ko) 2007-01-25
JP4698594B2 (ja) 2011-06-08
DE502004002769D1 (de) 2007-03-15
EP1606975B1 (fr) 2007-01-24
CN100553372C (zh) 2009-10-21
DE10321980B4 (de) 2005-10-06

Similar Documents

Publication Publication Date Title
EP1637012B1 (fr) Dispositif de synthese de champ electromagnetique et procede d'actionnement d'un reseau de haut-parleurs
EP1525776B1 (fr) Dispositif de correction de niveau dans un systeme de synthese de champ d'ondes
DE10321980B4 (de) Vorrichtung und Verfahren zum Berechnen eines diskreten Werts einer Komponente in einem Lautsprechersignal
EP1671516B1 (fr) Procede et dispositif de production d'un canal a frequences basses
DE10254404B4 (de) Audiowiedergabesystem und Verfahren zum Wiedergeben eines Audiosignals
EP1652405B1 (fr) Dispositif et procede de production, de mise en memoire ou de traitement d'une representation audio d'une scene audio
EP1872620B9 (fr) Dispositif et procede pour commander une pluralite de haut-parleurs au moyen d'une interface graphique d'utilisateur
EP1782658B1 (fr) Dispositif et procede de commande d'une pluralite de haut-parleurs a l'aide d'un dsp
EP1972181B1 (fr) Dispositif et procédé de simulation de systèmes wfs et de compensation de propriétés wfs influençant le son
EP1723825B1 (fr) Dispositif et procede pour reguler un dispositif de rendu de synthese de champ electromagnetique
EP1880577B1 (fr) Dispositif et procede permettant de generer un signal de haut-parleur sur la base d'une source audio d'apparition aleatoire
EP1518443B1 (fr) Dispositif et procede pour determiner une position de reproduction
EP2754151B1 (fr) Dispositif, procédé et système électroacoustique de prolongement d'un temps de réverbération
DE10254470A1 (de) Vorrichtung und Verfahren zum Bestimmen einer Impulsantwort und Vorrichtung und Verfahren zum Vorführen eines Audiostücks

Legal Events

Date Code Title Description
PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

17P Request for examination filed

Effective date: 20051014

AK Designated contracting states

Kind code of ref document: A2

Designated state(s): AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HU IE IT LI LU MC NL PL PT RO SE SI SK TR

AX Request for extension of the european patent

Extension state: AL HR LT LV MK

RIC1 Information provided on ipc code assigned before grant

Ipc: H04S 3/00 20060101AFI20060113BHEP

GRAP Despatch of communication of intention to grant a patent

Free format text: ORIGINAL CODE: EPIDOSNIGR1

DAX Request for extension of the european patent (deleted)
RBV Designated contracting states (corrected)

Designated state(s): AT CH DE FR GB LI NL

GRAS Grant fee paid

Free format text: ORIGINAL CODE: EPIDOSNIGR3

GRAA (expected) grant

Free format text: ORIGINAL CODE: 0009210

RAP1 Party data changed (applicant data changed or rights of an application transferred)

Owner name: FRAUNHOFER-GESELLSCHAFT ZUR FOERDERUNG DER ANGEWAN

AK Designated contracting states

Kind code of ref document: B1

Designated state(s): AT CH DE FR GB LI NL

REG Reference to a national code

Ref country code: GB

Ref legal event code: FG4D

Free format text: NOT ENGLISH

REG Reference to a national code

Ref country code: CH

Ref legal event code: EP

REF Corresponds to:

Ref document number: 502004002769

Country of ref document: DE

Date of ref document: 20070315

Kind code of ref document: P

ET Fr: translation filed
PLBE No opposition filed within time limit

Free format text: ORIGINAL CODE: 0009261

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: NO OPPOSITION FILED WITHIN TIME LIMIT

26N No opposition filed

Effective date: 20071025

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: AT

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20070511

REG Reference to a national code

Ref country code: CH

Ref legal event code: PL

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: LI

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20080531

Ref country code: CH

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20080531

REG Reference to a national code

Ref country code: FR

Ref legal event code: PLFP

Year of fee payment: 13

REG Reference to a national code

Ref country code: FR

Ref legal event code: PLFP

Year of fee payment: 14

REG Reference to a national code

Ref country code: FR

Ref legal event code: PLFP

Year of fee payment: 15

P01 Opt-out of the competence of the unified patent court (upc) registered

Effective date: 20230524

PGFP Annual fee paid to national office [announced via postgrant information from national office to epo]

Ref country code: NL

Payment date: 20230519

Year of fee payment: 20

Ref country code: FR

Payment date: 20230517

Year of fee payment: 20

Ref country code: DE

Payment date: 20230519

Year of fee payment: 20

PGFP Annual fee paid to national office [announced via postgrant information from national office to epo]

Ref country code: GB

Payment date: 20230522

Year of fee payment: 20

REG Reference to a national code

Ref country code: DE

Ref legal event code: R071

Ref document number: 502004002769

Country of ref document: DE

REG Reference to a national code

Ref country code: NL

Ref legal event code: MK

Effective date: 20240510

REG Reference to a national code

Ref country code: GB

Ref legal event code: PE20

Expiry date: 20240510

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: GB

Free format text: LAPSE BECAUSE OF EXPIRATION OF PROTECTION

Effective date: 20240510

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: GB

Free format text: LAPSE BECAUSE OF EXPIRATION OF PROTECTION

Effective date: 20240510