EP1652405A2 - Device and method for the generation, storage or processing of an audio representation of an audio scene - Google Patents
Device and method for the generation, storage or processing of an audio representation of an audio scene
- Publication number
- EP1652405A2 (application EP04763715A)
- Authority
- EP
- European Patent Office
- Prior art keywords
- audio
- user interface
- channel
- assigned
- time
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04S—STEREOPHONIC SYSTEMS
- H04S3/00—Systems employing more than two channels, e.g. quadraphonic
- H04S3/008—Systems employing more than two channels, e.g. quadraphonic in which the audio signals are in digital form, i.e. employing more than two discrete digital channels
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L19/00—Speech or audio signals analysis-synthesis techniques for redundancy reduction, e.g. in vocoders; Coding or decoding of speech or audio signals, using source filter models or psychoacoustic analysis
- G10L19/008—Multichannel audio signal coding or decoding using interchannel correlation to reduce redundancy, e.g. joint-stereo, intensity-coding or matrixing
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L19/00—Speech or audio signals analysis-synthesis techniques for redundancy reduction, e.g. in vocoders; Coding or decoding of speech or audio signals, using source filter models or psychoacoustic analysis
- G10L19/04—Speech or audio signals analysis-synthesis techniques for redundancy reduction, e.g. in vocoders; Coding or decoding of speech or audio signals, using source filter models or psychoacoustic analysis using predictive techniques
- G10L19/16—Vocoder architecture
- G10L19/18—Vocoders using multiple modes
- G10L19/20—Vocoders using multiple modes using sound class specific coding, hybrid encoders or object based coding
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04R—LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
- H04R3/00—Circuits for transducers, loudspeakers or microphones
- H04R3/12—Circuits for transducers, loudspeakers or microphones for distributing signals to two or more loudspeakers
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04R—LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
- H04R2430/00—Signal processing covered by H04R, not provided for in its groups
- H04R2430/20—Processing of the output signals of the acoustic transducers of an array for obtaining a desired directivity characteristic
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04S—STEREOPHONIC SYSTEMS
- H04S2420/00—Techniques used stereophonic systems covered by H04S but not provided for in its groups
- H04S2420/13—Application of wave-field synthesis in stereophonic audio systems
Definitions
- the present invention is in the field of wave field synthesis and relates in particular to devices and methods for generating, storing or editing an audio representation of an audio scene.
- WFS (Wave Field Synthesis)
- Owing to the enormous demands of this method on computer performance and transmission rates, wave field synthesis has so far rarely been used in practice. Only the advances in microprocessor technology and audio coding allow this technology to be used today in concrete applications. The first products in the professional sector are expected next year; the first wave field synthesis applications for the consumer sector are expected to be launched in a few years.
- Every point that is captured by a wave is the starting point for an elementary wave that propagates in a spherical or circular manner.
- a large number of loudspeakers that are arranged next to each other can be used to simulate any shape of an incoming wavefront.
- the audio signals of each loudspeaker have to be fed with a time delay and amplitude scaling such that the emitted sound fields of the individual loudspeakers superimpose correctly. If there are several sound sources, the contribution to each loudspeaker is calculated separately for each source and the resulting signals are added. If the sources to be reproduced are located in a room with reflecting walls, reflections must also be reproduced as additional sources via the loudspeaker array. The computational effort therefore depends heavily on the number of sound sources, the reflection properties of the recording room and the number of loudspeakers.
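To make the driving-signal computation described above concrete, the following sketch delays and scales each virtual source's signal for every loudspeaker and sums the contributions per loudspeaker. It is a minimal illustration under simplifying assumptions (2-D positions, a plain 1/r amplitude law, no reflections); all names are ours and nothing here is the patent's prescribed renderer.

```python
import numpy as np

C = 343.0  # speed of sound in m/s

def wfs_driving_signals(sources, speakers, fs, n_samples):
    """Delay-and-scale each virtual source's signal for every loudspeaker
    and sum the contributions per loudspeaker.

    sources  : list of (signal, (x, y)) pairs; signal is a 1-D numpy array
    speakers : list of (x, y) loudspeaker positions
    fs       : sampling rate in Hz
    """
    out = np.zeros((len(speakers), n_samples))
    for signal, (sx, sy) in sources:
        for k, (lx, ly) in enumerate(speakers):
            dist = np.hypot(lx - sx, ly - sy)
            delay = int(round(dist / C * fs))   # propagation delay in samples
            gain = 1.0 / max(dist, 1e-3)        # assumed 1/r amplitude scaling
            n = min(len(signal), n_samples - delay)
            if n > 0:
                out[k, delay:delay + n] += gain * signal[:n]
    return out
```

With several sources, the per-source contributions simply add up in `out`, mirroring the summation described above; reflections would enter as further entries in `sources`.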
- the particular advantage of this technique is that a natural spatial sound impression is possible over a large area of the playback room.
- the direction and distance of sound sources are reproduced very precisely.
- virtual sound sources can even be positioned between the real speaker array and the listener.
- while wave field synthesis works well for environments whose properties are known, irregularities occur when the nature of the environment changes or when wave field synthesis is carried out on the basis of an assumed environment condition that does not correspond to the actual condition of the environment.
- the technique of wave field synthesis can also be used advantageously to complement a visual perception with a corresponding spatial audio perception.
- the focus in production in virtual studios has been to provide an authentic visual impression of the virtual scene.
- the acoustic impression that goes with the image is usually imprinted on the audio signal by manual work steps in what is known as post-production, or is classified as too complex and time-consuming to implement and is therefore neglected. This usually leads to a contradiction between the individual sensory impressions, with the result that the designed space, i.e. the designed scene, is perceived as less authentic.
- the audio material for a film for example, consists of a large number of audio objects.
- An audio object is a sound source in the film setting. If, for example, one thinks of a film scene in which two people face each other in dialogue while at the same time a rider and a train approach, then a total of four sound sources exist in this scene over a certain period of time: the two people, the approaching rider and the approaching train. If it is assumed that the two people in dialogue do not speak at the same time, then at least two audio objects are active at any one time, namely the rider and the train, whenever both people happen to be silent.
- an audio object describes a sound source in a film setting that is active or "alive" at a certain point in time. This means that an audio object is further characterized by a start time and an end time.
- the rider and the train are active throughout the setting; when both approach, the listener notices this in that the rider and the train become louder and, in an optimal wave field synthesis setting, in that the positions of these sound sources change accordingly.
- the two speakers in dialogue, by contrast, constantly generate new audio objects: whenever a speaker stops speaking, the current audio object ends, and when the other speaker starts speaking, a new audio object begins, which in turn ends when that speaker stops speaking; when the first speaker starts speaking again, yet another audio object is started.
- wave field synthesis rendering devices exist which are able to generate a certain number of loudspeaker signals from a certain number of input channels, given knowledge of the individual positions of the loudspeakers in a wave field synthesis loudspeaker array.
- the wave field synthesis renderer is, so to speak, the "heart" of a wave field synthesis system: it calculates the loudspeaker signals for the many loudspeakers of the loudspeaker array correctly in amplitude and phase, so that the user has not only an optimal optical impression but also an optimal acoustic impression.
- Playback systems usually have fixed speaker positions, such as, in the case of 5.1, the left channel ("left"), the center channel ("center"), the right channel ("right"), the surround left channel ("surround left") and the surround right channel ("surround right").
- the ideal sound image the sound engineer aims for is limited to a small number of seats, the so-called sweet spot; although the use of phantom sources between the 5.1 positions described above brings improvements in certain cases, the results are not always satisfactory.
- the sound of a film usually consists of dialogues, effects, atmospheres and music. Each of these elements is mixed taking into account the limitations of 5.1 and 7.1 systems. Typically, the dialogue is mixed onto the center channel (in 7.1 systems also onto a half-left and a half-right position). This implies that when an actor moves across the screen, the sound does not follow. Effects of moving sound objects can be realized only if the objects move quickly, so that the listener is unable to notice the sound passing from one speaker to another.
- Lateral sources also cannot be positioned, owing to the large audible gap between the front and surround speakers, so objects cannot move slowly from back to front and vice versa.
- Surround loudspeakers are also placed in a diffuse array of loudspeakers and thus produce a sound image that represents a kind of envelope for the listener. Therefore, precisely positioned sound sources behind the listeners are avoided in order to avoid the unpleasant sound interference field that is associated with such precisely positioned sources.
- Wave field synthesis as a completely new way of building up the sound field that is heard by the listener overcomes these essential shortcomings. The consequence for cinema applications is that an accurate sound image can be achieved without restrictions with regard to a two-dimensional positioning of objects. This opens up a wide variety of possibilities in the design and mixing of sound for cinema purposes. Due to the complete sound image reproduction, which is achieved by the technique of wave field synthesis, sound sources can now be positioned freely. Furthermore, sound sources can be placed as focused sources inside the listener room as well as outside the listener room.
- stable sound source directions and stable sound source positions can be generated using point-shaped radiating sources or plane waves.
- sound sources can be moved freely inside, outside or through the listening room.
- the sound design, i.e. the activity of the sound engineer, and
- the coding format and the number of speakers, i.e. 5.1 or 7.1 systems, determine the reproduction setup.
- a special sound system requires a special encoding format.
- the channels are of no concern to a viewer / listener. He does not care by which sound system a sound is generated, whether the original sound description was object-oriented or channel-oriented, etc. The listener also does not care whether and how an audio setting was mixed. All that counts for the listener is the sound impression, i.e. whether or not he likes a sound setting, with or without a film.
- the sound engineers are responsible for the sound mixing. Owing to the channel-oriented paradigm, sound engineers are "calibrated" to work in a channel-oriented manner: their goal is, for example, to take audio signals recorded in a virtual studio and mix them into the six final loudspeaker signals for a cinema with a 5.1 sound system, or into final 7.1 loudspeaker signals. The orientation is thus toward channels, not objects, so an audio object in this sense typically has neither a start time nor an end time; instead, a signal for a loudspeaker tends to be active from the first second of the film to the last, owing to the fact that each of the (few) loudspeakers of a typical cinema sound system almost always produces some sound, since there is nearly always some sound source broadcast over that particular speaker, even if it is only background music.
- existing wave field synthesis rendering units likewise work in a channel-oriented manner: they have a certain number of input channels from which, when the audio signals and associated information are fed into the input channels, the loudspeaker signals for the individual loudspeakers or loudspeaker groups of a wave field synthesis loudspeaker array are generated.
- the technique of wave field synthesis means that an audio scene is much more "transparent", in that, in principle, an unlimited number of audio objects can be present over a film, i.e. over an audio scene.
- Channel-oriented wave field synthesis rendering devices become problematic when the number of audio objects in an audio scene exceeds the typically fixed maximum number of input channels of the audio processing device.
- the object of the present invention is to create a concept for generating, storing or editing an audio representation of an audio scene, which has a high level of acceptance on the part of the users for whom corresponding tools are intended.
- This object is achieved by a device for generating, storing or editing an audio representation of an audio scene according to claim 1, a method for generating, storing or editing an audio representation of an audio scene according to claim 15, or a computer program according to claim 16.
- the present invention is based on the finding that, for audio objects as they occur in a typical film setting, only an object-oriented description permits clear and efficient processing.
- the object-oriented description of the audio scene, with objects that have an audio signal and to which a defined start time and a defined end time are assigned, corresponds to the typical conditions in the real world, where it is rare for a sound to be present at all times. Instead it is common, for example in a dialogue, that a dialogue partner begins to speak and stops speaking, and that noises typically have a beginning and an end.
- the object-oriented audio scene description, which assigns each real-life sound source its own object, is adapted to the natural conditions and is therefore optimal in terms of transparency, clarity, efficiency and intelligibility.
- an imaging device is used to map the object-oriented description of the audio scene onto a plurality of input channels of an audio processing device, such as, for example, a wave field synthesis rendering unit.
- the imaging device is designed to assign a first audio object to an input channel, to assign a second audio object, whose start time lies after an end time of the first audio object, to the same input channel, and to assign a third audio object, whose start time lies after the start time of the first audio object and before the end time of the first audio object, to another one of the plurality of input channels.
- This time allocation, which assigns audio objects that occur simultaneously to different input channels of the wave field synthesis rendering unit and assigns audio objects that occur sequentially to the same input channel, has been found to be extremely channel-efficient.
- the user, e.g. the sound engineer, can get a quick overview of the complexity of an audio scene at a certain point in time without having to search laboriously through a variety of input channels to find out which object is currently active and which is not.
- the user can easily manipulate the audio objects, as in the object-oriented representation, using his or her usual channel controls.
- FIG. 1 shows a block diagram of the device according to the invention for generating an audio representation
- Fig. 2 is a schematic representation of a user interface for the concept shown in Fig. 1;
- FIG. 3a shows a schematic illustration of the user interface parts from FIG. 2 according to an exemplary embodiment of the present invention
- FIG. 3b shows a schematic illustration of the user interface from FIG. 2 according to another exemplary embodiment of the present invention
- FIG. 4 shows a block diagram of a device according to the invention in accordance with a preferred exemplary embodiment
- FIG. 5 shows a temporal representation of the audio scene with different audio objects
- FIG. 6 shows a comparison of a 1:1 conversion between object and channel and an object-channel assignment according to the present invention for the audio scene shown in FIG. 5.
- the device according to the invention comprises a device 10 for providing an object-oriented description of the audio scene, the object-oriented description of the audio scene comprising a plurality of audio objects, with at least one audio signal, a start time and an end time being assigned to an audio object.
- the device according to the invention further comprises an audio processing device 12 for generating a plurality of loudspeaker signals LSi 14, which is channel-oriented and which generates the plurality of loudspeaker signals 14 from a plurality of input channels EKi.
- an imaging device 18 for mapping the object-oriented description of the audio scene onto the plurality of input channels 16 of the channel-oriented audio signal processing device 12 is provided, wherein the imaging device 18 is designed to assign a first audio object to an input channel, such as EK1, to assign a second audio object whose start time is after an end time of the first audio object to the same input channel, such as the input channel EK1, and to assign a third audio object whose start time is after the start time of the first audio object and before the end time of the first audio object to another input channel of the plurality of input channels, such as the input channel EK2.
- the imaging device 18 is thus designed to assign audio objects that do not overlap in time to the same input channel, and to assign temporally overlapping audio objects to different parallel input channels.
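Read as an algorithm, this assignment rule is a greedy interval partitioning. The following sketch is one possible reading of it; the function name and the (name, start, end) representation are our own assumptions:

```python
def assign_channels(objects):
    """Map audio objects to input channels so that temporally overlapping
    objects land on different channels, while an object whose start time
    lies after the end time of a previous object reuses its channel.

    objects : list of (name, start_time, end_time) tuples
    returns : dict mapping object name -> 0-based input channel index
    """
    channel_free_at = []   # per channel: end time of the last object on it
    assignment = {}
    for name, start, end in sorted(objects, key=lambda o: o[1]):
        for ch, free_at in enumerate(channel_free_at):
            if free_at <= start:          # lowest-numbered free channel
                channel_free_at[ch] = end
                assignment[name] = ch
                break
        else:
            channel_free_at.append(end)   # all channels busy: open a new one
            assignment[name] = len(channel_free_at) - 1
    return assignment
```

Picking the lowest-numbered free channel also matches the preference, described further below, for occupying channels with the lowest possible ordinal number and leaving no holes.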
- the audio objects are further specified in such a way that they are assigned a virtual position.
- This virtual position of an object can change during the lifetime of the object, which would correspond to the case in which, for example, a rider approaches a scene center, in such a way that the rider's gallop becomes louder and, in particular, comes closer and closer to the auditorium.
- an audio object includes not only the audio signal assigned to it together with a start time and an end time, but also a position of the virtual source that can change over time, and possibly further properties of the audio object, such as whether it is to have point source properties or is to emit a plane wave, which would correspond to a virtual position at an infinite distance from the viewer.
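As a minimal sketch of such an object record, the following dataclass bundles the properties just listed: an audio signal, start and end time, a possibly time-varying virtual position, and a source type (point source or plane wave). All field names are illustrative assumptions, not the patent's data format.

```python
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class AudioObject:
    name: str
    start: float                  # start time in seconds
    end: float                    # end time in seconds
    signal: list                  # audio samples of the object
    # virtual source position over time as (time, x, y) samples:
    trajectory: List[Tuple[float, float, float]] = field(default_factory=list)
    source_type: str = "point"    # "point" source or "plane" wave

    def overlaps(self, other: "AudioObject") -> bool:
        # two objects overlap if neither ends before the other starts
        return self.start < other.end and other.start < self.end
```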
- Further properties for sound sources, ie for audio objects, are known in the art and can be taken into account depending on the equipment of the channel-oriented audio signal processing device 12 from FIG. 1.
- the structure of the device is hierarchical, in the sense that the channel-oriented audio signal processing device for receiving audio objects is not combined directly with the means for providing, but is combined with it via the imaging device.
- the device shown in FIG. 1 is further provided with a user interface, as shown at 20 in FIG. 2.
- the user interface 20 is designed to have one user interface channel per input channel and preferably one manipulator for each user interface channel.
- the user interface 20 is coupled via its user interface input 22 to the imaging device 18 in order to receive the assignment information from the imaging device, since the occupancy of the input channels EK1 to EKm is to be displayed by the user interface 20.
- on the output side, if the user interface 20 has the manipulator feature for each user interface channel, the user interface 20 is coupled to the device 10 for providing.
- the user interface 20 is designed to supply audio objects that have been manipulated with respect to the original version to the provision device 10 via its user interface output 24; the provision device thus receives a changed audio scene, which is then fed again to the imaging device 18 and, distributed over the input channels accordingly, provided to the channel-oriented audio signal processing device 12.
- the user interface 20 is designed as shown in FIG. 3a, that is to say as a user interface which shows only the currently active objects.
- the user interface 20 is configured to be structured as in FIG. 3b, that is to say in such a way that all objects are always represented in an input channel.
- a time line 30 is shown which comprises objects A, B, C in chronological order, where object A has a start time 31a and an end time 31b.
- the end time 31b of the first object A coincides with a start time of the second object B, which in turn has an end time 32b, which in turn coincides with a start time of the third object C, which in turn has an end time 33b.
- the start times 32a and 33a correspond to the end times 31b and 32b, respectively, and are not shown in FIGS. 3a, 3b for reasons of clarity.
- a mixer channel symbol 34 is shown on the right in FIG. 3a, which comprises a slider 35 and stylized buttons 36, via which the properties of the audio signal of object B, virtual positions, etc. can be changed.
- if the time stamp in FIG. 3a, which is represented by 37, were advanced to the duration of object C, the stylized channel representation 34 would display not object B but object C.
- the user interface in FIG. 3a would, if e.g. an object D took place simultaneously with object B, display a further channel, such as the input channel i+1.
- FIG. 3a thus provides the sound engineer with a simple overview of the number of parallel audio objects at a given time, that is to say the number of active channels, which are the only channels displayed. Inactive input channels are not displayed at all in the embodiment of the user interface 20 of FIG. 2 shown in FIG. 3a.
- in FIG. 3b, the input channel i, to which the objects assigned in chronological order belong, is represented in triplicate: once as object channel A, once as object channel B and once as object channel C.
- the channel on which the current object is fed, such as input channel i for object B (reference symbol 38 in FIG. 3b), can be highlighted, e.g. in color or brightness, to give the sound engineer on the one hand a clear overview of which object is currently being fed on the channel i in question, and on the other hand which objects precede and follow it on the same channel.
- the user interface 20 of FIG. 2, and in particular the versions thereof in FIGS. 3a and 3b, are thus designed to provide, as desired, a visual representation of the "assignment" of the input channels of the channel-oriented audio signal processing device that is generated by the imaging device 18.
- FIG. 5 shows an audio scene with different audio objects A, B, C, D, E, F and G. It can be seen that objects A, B, C and D overlap in time; in other words, these objects A, B, C and D are all active at a certain point in time 50. In contrast, object E does not overlap with objects A and B; object E overlaps only with objects D and C, as can be seen at a point in time 52. Objects F and D likewise overlap, as can be seen e.g. at a point in time 54. The same applies to objects F and G, which overlap e.g. at a time 56, while object G does not overlap with objects A, B, C, D or E.
- a simple and in many respects disadvantageous channel assignment would be to assign each audio object in the example shown in FIG. 5 to its own input channel, so that the 1:1 conversion shown on the left in the table in FIG. 6 would be obtained.
- a disadvantage of this concept is that many input channels are required, and that, if there are many audio objects (which is very quickly the case in a film), the number of input channels of the wave field synthesis rendering unit limits the number of virtual sources that can be processed in a real film setting. This is of course not desirable, since technology limits should not impair the creative potential.
- this 1:1 conversion is also very confusing: although at some point each input channel typically receives an audio object, at any particular moment of an audio scene typically relatively few input channels are active, yet the user cannot easily determine this, because he must always keep an overview of all audio channels.
- this concept of the 1:1 assignment of audio objects to input channels of the audio processing device means that, in order to restrict the number of audio objects as little as possible or not at all, audio processing devices with a very high number of input channels must be provided. This leads directly to an increase in the computing complexity, the required computing power and the required storage capacity of the audio processing device for calculating the individual loudspeaker signals, and thus directly to a higher price of such a system.
- the parallel audio objects A, B, C and D are assigned to the input channels EK1, EK2, EK3 and EK4, respectively.
- the object E no longer has to be assigned to the input channel EK5, as in the left half of FIG. 6, but can be assigned to a channel that has become free, such as the input channel EK1 or, as indicated by the brackets, the input channel EK2.
- object F can in principle be assigned to all channels except the input channel EK4.
- object G can likewise be assigned to all channels except the channel to which object F was previously assigned (in the example the input channel EK1).
- the imaging device 18 is designed always to occupy channels with the lowest possible ordinal number, and always to occupy adjacent input channels EKi and EKi+1, so that no holes arise.
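Applied to the scene of FIG. 5, the greedy mapping sketched earlier behaves exactly as described here. The interval values below are invented solely so that the overlap relations match FIG. 5 (A to D mutually overlapping, E overlapping only C and D, F overlapping only D and G, G overlapping only F):

```python
scene = [("A", 0.0, 4.0), ("B", 1.0, 5.0), ("C", 2.0, 6.0), ("D", 3.0, 9.0),
         ("E", 5.2, 8.0), ("F", 8.5, 11.0), ("G", 10.0, 12.0)]
print(assign_channels(scene))
# {'A': 0, 'B': 1, 'C': 2, 'D': 3, 'E': 0, 'F': 0, 'G': 1}
# Channels 0..3 correspond to EK1..EK4: four input channels suffice for
# seven objects, adjacent channels are used, and no "holes" arise.
```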
- this "neighborhood feature" is not essential since a user of the audio authoring system according to the present invention is indifferent to whether he is currently using the first or the seventh or any other input channel of the audio processing device as long as he is through the the user interface according to the invention is enabled to manipulate precisely this channel, for example by means of a controller 35 or by buttons 36 of a mixer channel representation 34 of the current channel.
- the user interface channel i does not necessarily have to correspond to the input channel i; a channel assignment can also take place such that the user interface channel i corresponds, for example, to the input channel EKm, while the user interface channel i+1 corresponds to the input channel k, etc.
- the user interface channel remapping thus avoids channel holes, so that the sound engineer always immediately and clearly sees the current user interface channels displayed side by side.
- the concept of the user interface according to the invention can of course also be transferred to an existing hardware mixing console which includes actual hardware sliders and hardware buttons that a sound engineer operates manually in order to achieve an optimal audio mix.
- An advantage of the present invention is that such a mixing console, with which the sound engineer is typically very familiar, can still be used: by means of indicators typically present on the mixing console, such as LEDs, the current channels are always clearly marked for the sound engineer.
- the present invention is also flexible in that it can deal with cases where the wave field synthesis speaker setup used for production deviates from the reproduction setup, e.g. in a cinema. Therefore, according to the invention, the audio content is encoded in a format that can be processed by different systems.
- This format is the audio scene, i.e. the object-oriented audio representation, and not the loudspeaker signal representation.
- the preparation process is understood as an adaptation of the content to the reproduction system.
- not only a few master channels but an entire object-oriented scene description is processed in the wave field synthesis reproduction process.
- the scenes are prepared for each reproduction. This is typically carried out in real time in order to adapt to the current situation.
- this adaptation takes into account the number of loudspeakers and their positions, the characteristics of the reproduction system, such as the frequency response, the sound pressure level etc., the room acoustic conditions or other image reproduction conditions.
- a major difference of the wave field synthesis mix compared to the channel-based approach of current systems consists in the free positioning of the sound objects.
- the position of the sound sources is encoded relative to the image. This is important for mixing concepts that belong to visual content, such as cinema films, since the positioning of the sound sources with respect to the image can then be approximated by a correct system setup.
- the wave field synthesis system requires absolute positions for the sound objects; such a position is given for each audio object in addition to its audio signal, its start time and its end time.
- the aim of the re-engineering of the post-production process is to minimize user training and to integrate the new system according to the invention into the existing knowledge of the user.
- all tracks or objects that are to be prepared at different positions exist within the master file / distribution format, in contrast to conventional production facilities, which are optimized to reduce the number of tracks during the production process.
- the wave field synthesis authoring tool according to the present invention is implemented as a workstation which has the possibility of recording the audio signals of the final mix and converting them to the distribution format in another step.
- the first aspect is that all audio objects or tracks still exist in the final master.
- the second aspect is that positioning is not done in the mixing console. This means that so-called authoring is one of the last steps in the production chain.
- the wave field synthesis authoring system, that is to say the device according to the invention for generating an audio representation, is implemented as an independent workstation which can be integrated into different production environments by feeding audio outputs from the mixer into the system.
- the mixer represents the user interface, which is coupled to the device for generating the audio representation of an audio scene.
- The system according to a preferred embodiment of the present invention is shown in FIG. 4.
- the same reference numerals as in Fig. 1 or 2 indicate the same elements.
- the basic system design is based on the goal of modularity and the possibility of integrating existing mixing consoles into the inventive wave field synthesis authoring system as user interfaces.
- a central controller 120 which communicates with other modules, is formed in the audio processing device 12. This enables the use of alternatives for certain modules as long as they all use the same communication protocol.
- if the system shown in FIG. 4 is considered a black box, one generally sees a number of inputs (from the provision device 10), a number of outputs (loudspeaker signals 14) and the user interface 20.
- integrated in this black box, next to the user interface, is the actual WFS renderer 122, which performs the actual wave field synthesis calculation of the loudspeaker signals using various input information.
- a room simulation module 124 is provided, which is designed to carry out certain room simulations that are used to generate room properties of a recording room or to manipulate room properties of a recording room.
- an audio recording device 126 and a recording playback device are provided.
- the device 126 is preferably provided with an external input.
- the entire audio signal is provided and fed in either already in an object-oriented manner or still in a channel-oriented manner. In the latter case the audio signals do not come from the scene protocol, which then performs only control tasks.
- the fed-in audio data is then, if necessary, converted into an object-oriented representation by the device 126 and then fed internally to the imaging device 18, which carries out the object/channel mapping. All audio connections between the modules can be switched by a matrix module 128 in order to connect the channels as required by the central controller 120.
- the user has the option of feeding signals for virtual sources into 64 input channels of the audio processing device 12; in this exemplary embodiment there are thus 64 input channels EK1 to EK64.
- Existing consoles can thus be used as user interfaces for premixing the virtual source signals.
- the spatial mixing is then carried out by the wave field synthesis authoring system and in particular by the heart, the WFS renderer 122.
- the complete scene description is stored in the provision device 10, which is also referred to as a scene protocol.
- the main communication or the required data traffic is carried out by the central controller 120.
- Changes in the scene description, such as can be effected by the user interface 20, in particular by a hardware mixing console 200 or a software GUI, that is to say a graphical software user interface 202, are fed via a user interface controller 204 to the provision device 10 as a changed scene record.
- the imaging device 18 assigns each sound object to a processing channel (input channel) in which the object exists for a specific time.
- a number of objects exist in chronological order on a specific channel, as has been illustrated with reference to FIGS. 3a, 3b and 6.
- the wave field synthesis renderer does not have to know the objects themselves. It simply receives signals in the audio channels and a description of the way in which these channels have to be processed.
- the provision device with the scene protocol, that is to say with knowledge of the objects and the assigned channels, can transform the object-related metadata (for example the source position) into channel-related metadata and transmit it to the WFS renderer 122.
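One possible reading of this transformation step, with assumed data shapes: the scene protocol knows the objects and their channel assignment and flattens the object-related metadata into a per-channel, chronologically ordered list that the renderer can consume without knowing the objects themselves.

```python
def channel_metadata(objects, assignment):
    """Convert object-related metadata into channel-related metadata.

    objects    : dict name -> {"start": s, "end": e, "position": (x, y)}
    assignment : dict name -> input channel index (from the mapping step)
    returns    : dict channel -> chronologically sorted list of
                 (start, end, position) entries for that channel
    """
    channels = {}
    for name, meta in objects.items():
        ch = assignment[name]
        channels.setdefault(ch, []).append(
            (meta["start"], meta["end"], meta["position"]))
    for entries in channels.values():
        entries.sort(key=lambda e: e[0])   # chronological order per channel
    return channels
```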
- the communication between the other modules is carried out by special protocols such that each of the other modules receives only the information it requires, as shown schematically by the function protocols block 129 in FIG. 4.
- the control module also supports hard disk storage of the scene description. It preferably differentiates between two file formats.
- one file format is the author format, in which the audio data is stored as uncompressed PCM data.
- session-related information, such as a grouping of audio objects (that is to say of sources), layer information, etc., is also stored, in a special file format based on XML.
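The patent states only that the author format is XML-based; no schema is given. As a purely hypothetical illustration, a session file with sources, groups and layers could be written with Python's standard library as follows (every element and attribute name here is invented):

```python
import xml.etree.ElementTree as ET

def write_author_session(path, sources, groups, layers):
    """Store session-related information (sources, grouping, layers) as XML.

    sources : list of (name, start, end) tuples
    groups  : dict group name -> list of member source names
    layers  : dict layer name -> list of member source names
    """
    root = ET.Element("session")
    for name, start, end in sources:
        ET.SubElement(root, "source", name=name,
                      start=str(start), end=str(end))
    for gname, members in groups.items():
        grp = ET.SubElement(root, "group", name=gname)
        for m in members:
            ET.SubElement(grp, "member", source=m)
    for lname, members in layers.items():
        lay = ET.SubElement(root, "layer", name=lname)
        for m in members:
            ET.SubElement(lay, "member", source=m)
    ET.ElementTree(root).write(path, encoding="utf-8", xml_declaration=True)
```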
- the other type is the distribution file format.
- audio data can be stored in a compressed manner, and there is no need to additionally store the session-related data.
- the audio objects still exist in this format, and the MPEG-4 standard can be used for distribution.
- the one or more wave field synthesis renderer modules 122 are usually supplied with virtual source signals and a channel-oriented scene description.
- a wave field synthesis renderer calculates the driver signal for each speaker, i.e. a speaker signal of the speaker signals 14 of Fig. 4.
- the wave field synthesis renderer will also calculate signals for subwoofer speakers, which are required to support the wave field synthesis system at low frequencies.
- Room simulation signals from the room simulation module 124 are rendered using a number (usually 8 to 12) of static plane waves. Based on this concept, it is possible to integrate different solutions for room simulation. Even without the room simulation module 124, the wave field synthesis system already generates acceptable sound images with stable perception of the source direction for the listening area.
- a room simulation model is used which reproduces wall reflections; these are modeled, for example, in such a way that a mirror source model is used to generate the early reflections.
- These mirror sources can in turn be treated as audio objects of the scene protocol or can actually only be added by the audio processing device itself.
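A minimal sketch of the mirror (image) source idea for first-order early reflections, assuming a rectangular 2-D room with walls on the coordinate axes; each returned position could then be treated as an additional audio object of the scene protocol, as described above.

```python
def first_order_mirror_sources(src, room):
    """First-order image sources of a point source in a rectangular room.

    src  : (x, y) position of the source inside the room
    room : (width, depth) with walls at x=0, x=width, y=0, y=depth
    """
    x, y = src
    w, d = room
    return [
        (-x, y),          # reflection at the wall x = 0
        (2 * w - x, y),   # reflection at the wall x = w
        (x, -y),          # reflection at the wall y = 0
        (x, 2 * d - y),   # reflection at the wall y = d
    ]

# source at (2, 3) in a 6 m x 4 m room:
print(first_order_mirror_sources((2.0, 3.0), (6.0, 4.0)))
# [(-2.0, 3.0), (10.0, 3.0), (2.0, -3.0), (2.0, 5.0)]
```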
- the recording / playback tools 126 are a useful addition. Sound objects that have already been premixed in a conventional manner, so that only the spatial mixing still needs to be performed, can be transferred from the conventional mixer to an audio object playback device.
- an audio recording module records the output channels of the mixer in a time-code-controlled manner and stores the audio data for the playback module.
- the playback module receives a start time code for playing a particular audio object in connection with a respective output channel; this information is supplied to the playback device 126 by the imaging device 18.
- the recording / playback device can start and stop the playback of individual audio objects independently of one another, according to the start time and the stop time assigned to each audio object.
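The independent, time-code-controlled starting and stopping described here can be pictured as flattening the scene into a sorted event list; the sketch below assumes each object carries only a start and a stop time.

```python
def playback_events(objects):
    """Flatten a scene into a time-code-ordered list of (time, action, name)
    events; the playback module executes each start/stop independently."""
    events = []
    for name, start, end in objects:
        events.append((start, "start", name))
        events.append((end, "stop", name))
    events.sort(key=lambda e: e[0])
    return events

print(playback_events([("A", 0.0, 4.0), ("B", 1.0, 5.0)]))
# [(0.0, 'start', 'A'), (1.0, 'start', 'B'), (4.0, 'stop', 'A'), (5.0, 'stop', 'B')]
```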
- the audio content can be taken from the playback device module and exported to the distribution file format.
- the distribution file format thus contains a finished scene report of a completely mixed scene.
- the aim of the user interface concept according to the invention is to implement a hierarchical structure which is adapted to the tasks of the cinema mixing process.
- an audio object is understood as a source that exists, as the representation of an individual audio object, for a given time.
- a start time and a stop / end time are typical for a source, i.e. for an audio object.
- the source or audio object requires system resources during the time the object or source "lives".
- Each sound source preferably includes metadata in addition to the start time and the stop time.
- This metadata comprises "type" (plane wave or point source at a given time), "direction", "volume", "mute" and "flags" for directional loudness and directional delay. All of this metadata can be automated.
- the authoring system according to the invention also serves the conventional channel concept in that, for example, objects that are "alive" over the entire film, or generally over the entire scene, also get their own channel. Such objects are then in principle simple channels in a 1:1 conversion, as set out in FIG. 6.
- At least two objects can be grouped. For each group it is possible to choose which parameters are to be grouped and how they are to be calculated using the master of the group. Groups of sound sources exist for a given time, which is defined by the start and end times of their members.
- one use of groups is for standard virtual surround setups. These could be used for virtual fading out of a scene or for virtual zooming into a scene. Alternatively, grouping can also be used to integrate surround reverberation effects and record them in a WFS mix.
- Pre-dubs can be simulated in the audio workstation using layers. Layers can also be used to change display attributes during the authoring process, for example to show or hide different parts of the current mix.
- a scene consists of all the components previously discussed for a given period of time. This period could be a film reel, the entire film, or else only a film section of a certain duration, such as five minutes.
- the scene consists of a number of layers, groups and sources that belong to the scene.
- the complete user interface 20 should include both a graphics software part and a hardware part to allow haptic control.
- the user interface could also be completely implemented as a software module for cost reasons.
- a design concept for the graphics system is used which is based on so-called "spaces". There are a small number of different spaces in the user interface. Each space is a special editing environment that shows the project from a different perspective, with all the tools needed for that environment contained in the space, so that there is no need to search through further windows.
- the adaptive mixing space already described with reference to FIGS. 3a and 3b is used. It can be compared to a conventional mixer that shows only the active channels.
- audio object information is also presented instead of the pure channel information.
- these objects are assigned to input channels of the WFS rendering unit by the imaging device 18 of FIG. 1.
- another space is the timeline space, which provides an overview of all input channels. Each channel is represented with its corresponding objects. The user has the option of specifying the object-to-channel mapping to be used, although automatic channel assignment is preferred for reasons of simplicity.
- Another space is the positioning and editing space, which shows the scene in a three-dimensional view. This space should enable the user to record or edit movements of the source objects. Movements can be generated using, for example, a joystick or using other input / display devices, as are known for graphic user interfaces.
- each room is described by a specific parameter set that is stored in a room preset library.
- different types of parameter sets as well as different graphical user interfaces can be used.
- the method according to the invention for generating an audio representation can be implemented in hardware or in software.
- the implementation can take place on a digital storage medium, in particular a floppy disk or CD with electronically readable control signals, which can cooperate with a programmed computer system in such a way that the method according to the invention is carried out.
- the invention thus also consists in a computer program product with a program code stored on a machine-readable carrier for carrying out the method according to the invention when the computer program product runs on a computer.
- the invention is thus also a computer program with a program code for executing the method when the computer program runs on a computer.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
EP04763715A EP1652405B1 (en) | 2003-08-04 | 2004-08-02 | Device and method for the generation, storage or processing of an audio representation of an audio scene |
Applications Claiming Priority (4)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
EP03017785 | 2003-08-04 | ||
DE10344638A DE10344638A1 (en) | 2003-08-04 | 2003-09-25 | Generation, storage or processing device and method for representation of audio scene involves use of audio signal processing circuit and display device and may use film soundtrack |
EP04763715A EP1652405B1 (en) | 2003-08-04 | 2004-08-02 | Device and method for the generation, storage or processing of an audio representation of an audio scene |
PCT/EP2004/008646 WO2005017877A2 (en) | 2003-08-04 | 2004-08-02 | Device and method for the generation, storage or processing of an audio representation of an audio scene |
Publications (2)
Publication Number | Publication Date |
---|---|
EP1652405A2 true EP1652405A2 (en) | 2006-05-03 |
EP1652405B1 EP1652405B1 (en) | 2008-03-26 |
Family
ID=34178382
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
EP04763715A Expired - Lifetime EP1652405B1 (en) | 2003-08-04 | 2004-08-02 | Device and method for the generation, storage or processing of an audio representation of an audio scene |
Country Status (7)
Country | Link |
---|---|
US (1) | US7680288B2 (en) |
EP (1) | EP1652405B1 (en) |
JP (1) | JP4263217B2 (en) |
CN (1) | CN100508650C (en) |
AT (1) | ATE390824T1 (en) |
DE (1) | DE10344638A1 (en) |
WO (1) | WO2005017877A2 (en) |
Families Citing this family (41)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20050058307A1 (en) * | 2003-07-12 | 2005-03-17 | Samsung Electronics Co., Ltd. | Method and apparatus for constructing audio stream for mixing, and information storage medium |
DE102005008333A1 (en) * | 2005-02-23 | 2006-08-31 | Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. | Control device for wave field synthesis rendering device, has audio object manipulation device to vary start/end point of audio object within time period, depending on extent of utilization situation of wave field synthesis system |
DE102005008342A1 (en) * | 2005-02-23 | 2006-08-24 | Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. | Audio-data files storage device especially for driving a wave-field synthesis rendering device, uses control device for controlling audio data files written on storage device |
DE102005008343A1 (en) * | 2005-02-23 | 2006-09-07 | Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. | Apparatus and method for providing data in a multi-renderer system |
DE102005027978A1 (en) * | 2005-06-16 | 2006-12-28 | Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. | Apparatus and method for generating a loudspeaker signal due to a randomly occurring audio source |
CN101263739B (en) | 2005-09-13 | 2012-06-20 | Srs实验室有限公司 | Systems and methods for audio processing |
US7720240B2 (en) * | 2006-04-03 | 2010-05-18 | Srs Labs, Inc. | Audio signal processing |
CN101517637B (en) * | 2006-09-18 | 2012-08-15 | 皇家飞利浦电子股份有限公司 | Encoder and decoder of audio frequency, encoding and decoding method, hub, transreciver, transmitting and receiving method, communication system and playing device |
WO2008039038A1 (en) | 2006-09-29 | 2008-04-03 | Electronics And Telecommunications Research Institute | Apparatus and method for coding and decoding multi-object audio signal with various channel |
US10296561B2 (en) | 2006-11-16 | 2019-05-21 | James Andrews | Apparatus, method and graphical user interface for providing a sound link for combining, publishing and accessing websites and audio files on the internet |
US9361295B1 (en) | 2006-11-16 | 2016-06-07 | Christopher C. Andrews | Apparatus, method and graphical user interface for providing a sound link for combining, publishing and accessing websites and audio files on the internet |
KR102149019B1 (en) * | 2008-04-23 | 2020-08-28 | 한국전자통신연구원 | Method for generating and playing object-based audio contents and computer readable recordoing medium for recoding data having file format structure for object-based audio service |
KR101724326B1 (en) * | 2008-04-23 | 2017-04-07 | 한국전자통신연구원 | Method for generating and playing object-based audio contents and computer readable recordoing medium for recoding data having file format structure for object-based audio service |
ES2963744T3 (en) * | 2008-10-29 | 2024-04-01 | Dolby Int Ab | Signal clipping protection using pre-existing audio gain metadata |
TWI383383B (en) * | 2008-11-07 | 2013-01-21 | Hon Hai Prec Ind Co Ltd | Audio processing system |
EP2205007B1 (en) * | 2008-12-30 | 2019-01-09 | Dolby International AB | Method and apparatus for three-dimensional acoustic field encoding and optimal reconstruction |
ES2793958T3 (en) * | 2009-08-14 | 2020-11-17 | Dts Llc | System to adaptively transmit audio objects |
US9305550B2 (en) * | 2009-12-07 | 2016-04-05 | J. Carl Cooper | Dialogue detector and correction |
DE102010030534A1 (en) * | 2010-06-25 | 2011-12-29 | Iosono Gmbh | Device for changing an audio scene and device for generating a directional function |
WO2012122397A1 (en) | 2011-03-09 | 2012-09-13 | Srs Labs, Inc. | System for dynamically creating and rendering audio objects |
US8971917B2 (en) | 2011-04-04 | 2015-03-03 | Soundlink, Inc. | Location-based network radio production and distribution system |
JP5798247B2 (en) | 2011-07-01 | 2015-10-21 | ドルビー ラボラトリーズ ライセンシング コーポレイション | Systems and tools for improved 3D audio creation and presentation |
US9078091B2 (en) * | 2012-05-02 | 2015-07-07 | Nokia Technologies Oy | Method and apparatus for generating media based on media elements from multiple locations |
JP5973058B2 (en) * | 2012-05-07 | 2016-08-23 | ドルビー・インターナショナル・アーベー | Method and apparatus for 3D audio playback independent of layout and format |
US9264840B2 (en) * | 2012-05-24 | 2016-02-16 | International Business Machines Corporation | Multi-dimensional audio transformations and crossfading |
WO2014165806A1 (en) | 2013-04-05 | 2014-10-09 | Dts Llc | Layered audio coding and transmission |
EP3005353B1 (en) * | 2013-05-24 | 2017-08-16 | Dolby International AB | Efficient coding of audio scenes comprising audio objects |
RU2630754C2 (en) | 2013-05-24 | 2017-09-12 | Долби Интернешнл Аб | Effective coding of sound scenes containing sound objects |
EP2973551B1 (en) | 2013-05-24 | 2017-05-03 | Dolby International AB | Reconstruction of audio scenes from a downmix |
CA3211308A1 (en) | 2013-05-24 | 2014-11-27 | Dolby International Ab | Coding of audio scenes |
JP6022685B2 (en) | 2013-06-10 | 2016-11-09 | 株式会社ソシオネクスト | Audio playback apparatus and method |
CN104240711B (en) | 2013-06-18 | 2019-10-11 | 杜比实验室特许公司 | For generating the mthods, systems and devices of adaptive audio content |
CN105493182B (en) * | 2013-08-28 | 2020-01-21 | 杜比实验室特许公司 | Hybrid waveform coding and parametric coding speech enhancement |
WO2015150384A1 (en) * | 2014-04-01 | 2015-10-08 | Dolby International Ab | Efficient coding of audio scenes comprising audio objects |
EP3151240B1 (en) * | 2014-05-30 | 2022-12-21 | Sony Group Corporation | Information processing device and information processing method |
US10321256B2 (en) | 2015-02-03 | 2019-06-11 | Dolby Laboratories Licensing Corporation | Adaptive audio construction |
US11096004B2 (en) * | 2017-01-23 | 2021-08-17 | Nokia Technologies Oy | Spatial audio rendering point extension |
GB201719854D0 (en) * | 2017-11-29 | 2018-01-10 | Univ London Queen Mary | Sound effect synthesis |
GB201800920D0 (en) | 2018-01-19 | 2018-03-07 | Nokia Technologies Oy | Associated spatial audio playback |
EP3683794B1 (en) * | 2019-01-15 | 2021-07-28 | Nokia Technologies Oy | Audio processing |
KR20210151831A (en) | 2019-04-15 | 2021-12-14 | 돌비 인터네셔널 에이비 | Dialogue enhancements in audio codecs |
Family Cites Families (12)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPH01279700A (en) * | 1988-04-30 | 1989-11-09 | Teremateiiku Kokusai Kenkyusho:Kk | Acoustic signal processor |
JPH04225700A (en) * | 1990-12-27 | 1992-08-14 | Matsushita Electric Ind Co Ltd | Audio reproducing device |
JPH06246064A (en) * | 1993-02-23 | 1994-09-06 | Victor Co Of Japan Ltd | Additional equipment for tv game machine |
JP3492404B2 (en) * | 1993-12-24 | 2004-02-03 | ローランド株式会社 | Sound effect device |
US7085387B1 (en) * | 1996-11-20 | 2006-08-01 | Metcalf Randall B | Sound system and method for capturing and reproducing sounds originating from a plurality of sound sources |
DE69831396T2 (en) * | 1997-11-29 | 2006-06-14 | Koninkl Philips Electronics Nv | METHOD AND DEVICE FOR ADAPTING VARIABLE RATE DIGITAL AUDIO INFORMATION TO A SERIES OF EQUIVALENT BLOCKS AND A UNITARY MEDIUM PRODUCED THEREFORE THROUGH A COMMUNICATION INTERFACE |
US6054989A (en) * | 1998-09-14 | 2000-04-25 | Microsoft Corporation | Methods, apparatus and data structures for providing a user interface, which exploits spatial memory in three-dimensions, to objects and which provides spatialized audio |
GB2349762B (en) * | 1999-03-05 | 2003-06-11 | Canon Kk | Image processing apparatus |
US7149313B1 (en) * | 1999-05-17 | 2006-12-12 | Bose Corporation | Audio signal processing |
EP1209949A1 (en) * | 2000-11-22 | 2002-05-29 | Technische Universiteit Delft | Wave Field Synthesys Sound reproduction system using a Distributed Mode Panel |
GB0127778D0 (en) * | 2001-11-20 | 2002-01-09 | Hewlett Packard Co | Audio user interface with dynamic audio labels |
US20030035553A1 (en) * | 2001-08-10 | 2003-02-20 | Frank Baumgarte | Backwards-compatible perceptual coding of spatial cues |
- 2003
- 2003-09-25 DE DE10344638A patent/DE10344638A1/en not_active Ceased
- 2004
- 2004-08-02 AT AT04763715T patent/ATE390824T1/en not_active IP Right Cessation
- 2004-08-02 EP EP04763715A patent/EP1652405B1/en not_active Expired - Lifetime
- 2004-08-02 CN CNB2004800264019A patent/CN100508650C/en not_active Expired - Lifetime
- 2004-08-02 JP JP2006522307A patent/JP4263217B2/en not_active Expired - Lifetime
- 2004-08-02 WO PCT/EP2004/008646 patent/WO2005017877A2/en active IP Right Grant
- 2004-08-04 US US10/912,276 patent/US7680288B2/en active Active
Non-Patent Citations (1)
Title |
---|
See references of WO2005017877A2 * |
Also Published As
Publication number | Publication date |
---|---|
CN1849845A (en) | 2006-10-18 |
US7680288B2 (en) | 2010-03-16 |
EP1652405B1 (en) | 2008-03-26 |
US20050105442A1 (en) | 2005-05-19 |
ATE390824T1 (en) | 2008-04-15 |
JP2007501553A (en) | 2007-01-25 |
JP4263217B2 (en) | 2009-05-13 |
DE10344638A1 (en) | 2005-03-10 |
WO2005017877A2 (en) | 2005-02-24 |
CN100508650C (en) | 2009-07-01 |
WO2005017877A3 (en) | 2005-04-07 |
Similar Documents
Publication | Title | Publication Date |
---|---|---|
EP1652405B1 (en) | Device and method for the generation, storage or processing of an audio representation of an audio scene | |
DE10328335B4 (en) | Wave field synthesis device and method for driving an array of loudspeakers | |
EP1844628B1 (en) | Device and method for activating a wave field synthesis renderer device with audio objects | |
DE69130528T2 (en) | Sound mixer | |
EP1844627B1 (en) | Device and method for simulating a wave field synthesis system | |
EP1671516B1 (en) | Device and method for producing a low-frequency channel | |
DE10254404B4 (en) | Audio reproduction system and method for reproducing an audio signal | |
EP1723825B1 (en) | Apparatus and method for controlling a wave field synthesis rendering device | |
DE19950319A1 (en) | Process for synthesizing a three-dimensional sound field | |
DE102005008343A1 (en) | Apparatus and method for providing data in a multi-renderer system | |
DE102006017791A1 (en) | Audio-visual signal reproducer, e.g. a CD player, with a processing device that produces a gradient in the sound pressure distribution so that the pressure level increases in inverse proportion to the angle between the sound arrival directions and a straight line | |
DE10321980B4 (en) | Apparatus and method for calculating a discrete value of a component in a loudspeaker signal | |
DE102006010212A1 (en) | Apparatus and method for the simulation of WFS systems and compensation of sound-influencing WFS properties | |
EP3756363A1 (en) | Apparatus and method for object-based spatial audio mastering | |
DE2850490A1 (en) | Device for multi-dimensional signal distribution | |
DE10254470A1 (en) | Apparatus and method for determining an impulse response and apparatus and method for presenting an audio piece | |
EP1789970B1 (en) | Device and method for storing audio files | |
DE102010009170B4 (en) | Method for processing and/or mixing sound tracks | |
DE2503778C3 (en) | Sound transmission system with at least four channels and with a sound recording device | |
DE2503778B2 (en) | Sound transmission system with at least four channels and with a sound recording device | |
CH704501B1 (en) | Method for reproducing audio data stored on a data carrier, and corresponding device | |
Legal Events
Code | Title | Description |
---|---|---|
PUAI | Public reference made under Article 153(3) EPC to a published international application that has entered the European phase | Original code: 0009012 |
17P | Request for examination filed | Effective date: 20050527 |
AK | Designated contracting states | Kind code of ref document: A2; designated state(s): AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HU IE IT LI LU MC NL PL PT RO SE SI SK TR |
DAX | Request for extension of the European patent (deleted) | |
RAP1 | Party data changed (applicant data changed or rights of an application transferred) | Owner name: FRAUNHOFER-GESELLSCHAFT ZUR FOERDERUNG DER ANGEWAN |
RIN1 | Information on inventor provided before grant (corrected) | Inventor names: MUENNICH, KATHRIN; ROEDER, THOMAS; LANGHAMMER, JAN; MELCHIOR, FRANK; BRIX, SANDRA |
RIN1 | Information on inventor provided before grant (corrected) | Inventor names: REICHELT, KATHRIN; ROEDER, THOMAS; LANGHAMMER, JAN; MELCHIOR, FRANK; BRIX, SANDRA |
GRAP | Despatch of communication of intention to grant a patent | Original code: EPIDOSNIGR1 |
GRAS | Grant fee paid | Original code: EPIDOSNIGR3 |
GRAA | (Expected) grant | Original code: 0009210 |
AK | Designated contracting states | Kind code of ref document: B1; designated state(s): AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HU IE IT LI LU MC NL PL PT RO SE SI SK TR |
REG | Reference to a national code | GB: FG4D; free format text: NOT ENGLISH |
REG | Reference to a national code | IE: FG4D, free format text: LANGUAGE OF EP DOCUMENT: GERMAN; CH: EP |
REF | Corresponds to | Ref document number: 502004006676; country of ref document: DE; date of ref document: 20080508; kind code of ref document: P |
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to EPO] | FI: failure to submit a translation of the description or to pay the fee within the prescribed time limit; effective date: 20080326 |
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to EPO] | SI, PL: failure to submit a translation of the description or to pay the fee within the prescribed time limit; effective date: 20080326 |
ET | Fr: translation filed | |
REG | Reference to a national code | IE: FD4D |
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to EPO] | SK (20080326), SE (20080626), PT (20080901), ES (20080707), CZ (20080326): failure to submit a translation of the description or to pay the fee within the prescribed time limit |
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to EPO] | RO: failure to submit a translation of the description or to pay the fee within the prescribed time limit; effective date: 20080326 |
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to EPO] | IE, DK: failure to submit a translation of the description or to pay the fee within the prescribed time limit; effective date: 20080326 |
PLBE | No opposition filed within time limit | Original code: 0009261 |
STAA | Information on the status of an EP patent application or granted EP patent | Status: no opposition filed within time limit |
26N | No opposition filed | Effective date: 20081230 |
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to EPO] | MC: non-payment of due fees; effective date: 20080831 |
REG | Reference to a national code | CH: PL |
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to EPO] | EE (20080326), BG (20080626): failure to submit a translation of the description or to pay the fee within the prescribed time limit |
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to EPO] | LI, CH: non-payment of due fees; effective date: 20080831 |
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to EPO] | BE: non-payment of due fees; effective date: 20080831 |
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to EPO] | IT: failure to submit a translation of the description or to pay the fee within the prescribed time limit; effective date: 20080326 |
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to EPO] | CY: failure to submit a translation of the description or to pay the fee within the prescribed time limit; effective date: 20080326 |
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to EPO] | AT: non-payment of due fees; effective date: 20080802 |
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to EPO] | LU: non-payment of due fees (effective date: 20080802); HU: failure to submit a translation of the description or to pay the fee within the prescribed time limit (effective date: 20080927) |
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to EPO] | TR: failure to submit a translation of the description or to pay the fee within the prescribed time limit; effective date: 20080326 |
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to EPO] | GR: failure to submit a translation of the description or to pay the fee within the prescribed time limit; effective date: 20080627 |
REG | Reference to a national code | FR: PLFP; year of fee payment: 13 |
REG | Reference to a national code | FR: PLFP; year of fee payment: 14 |
REG | Reference to a national code | FR: PLFP; year of fee payment: 15 |
P01 | Opt-out of the competence of the Unified Patent Court (UPC) registered | Effective date: 20230524 |
PGFP | Annual fee paid to national office [announced via postgrant information from national office to EPO] | NL: payment date 20230823; year of fee payment: 20 |
PGFP | Annual fee paid to national office [announced via postgrant information from national office to EPO] | GB: payment date 20230824; year of fee payment: 20 |
PGFP | Annual fee paid to national office [announced via postgrant information from national office to EPO] | FR: payment date 20230821; DE: payment date 20230822; year of fee payment: 20 |
REG | Reference to a national code | DE: R071; ref document number: 502004006676 |
REG | Reference to a national code | NL: MK; effective date: 20240801 |
REG | Reference to a national code | GB: PE20; expiry date: 20240801 |
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to EPO] | GB: expiration of protection; effective date: 20240801 |