EP2692154B1 - Method for capturing and rendering an audio scene - Google Patents
- Publication number: EP2692154B1
- Application number: EP12718101.4A
- Authority
- EP
- European Patent Office
- Prior art keywords
- sound
- directivity
- signal
- acquisition
- signals
- Legal status: Active (an assumption, not a legal conclusion)
Classifications
- H04R—LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
- H04R1/02—Casings; Cabinets; Supports therefor; Mountings therein
- H04R1/025—Arrangements for fixing loudspeaker transducers, e.g. in a box, furniture
- H04R1/026—Supports for loudspeaker casings
- H04R1/24—Structural combinations of separate transducers or of two parts of the same transducer and responsive respectively to two or more frequency ranges
- H04R19/016—Electrostatic transducers characterised by the use of electrets for microphones
- H04R5/02—Spatial or constructional arrangements of loudspeakers
- H04S—STEREOPHONIC SYSTEMS
- H04S7/30—Control circuits for electronic adaptation of the sound field
- H04S2400/15—Aspects of sound capture and related signal processing for recording or reproduction
Definitions
- the present invention relates to electroacoustics and, particularly, to concepts for acquiring and rendering sound.
- audio scenes are captured using a set of microphones. Each microphone outputs a microphone signal.
- a sound engineer performs a mixing of the microphone output signals into, for example, a standardized format such as a stereo format or a 5.1, 7.1, 7.2, etc. format.
- in the case of a stereo format, the sound engineer or an automatic mixing process generates two stereo channels.
- in the case of a 5.1 format, the mixing results in five channels and a subwoofer channel.
- in the case of a 7.2 format, the mixing results in seven channels and two subwoofer channels.
- the mixing result is applied to electro-dynamic loudspeakers.
- two loudspeakers exist and the first loudspeaker receives the first stereo channel and the second loudspeaker receives the second stereo channel.
- seven loudspeakers and two subwoofers exist at predetermined locations. The seven channels are applied to the corresponding loudspeakers and the two subwoofer channels are applied to the corresponding subwoofers.
- the usage of a single microphone arrangement on the capturing side and a single loudspeaker arrangement on the reproduction side typically neglects the true nature of the sound sources.
- acoustic music instruments and the human voice can be distinguished both by the way in which their sound is generated and by their emission characteristics.
- Trumpets, trombones, horns or bugles, for example, have a powerful, strongly directed sound emission. Stated differently, these instruments emit in a preferred direction and therefore have a high directivity.
- String or bow instruments, xylophones, cymbals and triangles, for example, generate sound energy in a frequency range up to 100 kHz and, additionally, have a low emission directivity or a low emission quality factor. Specifically, the sounds of a xylophone and a triangle are clearly identifiable despite their low sound energy and their low quality factor, even within a loud orchestra.
- when sound energy is generated, air molecules, for example diatomic and triatomic gas molecules, are stimulated. Three different mechanisms are responsible for the stimulation; reference is made to German Patent DE 198 19 452 C1. These mechanisms are summarized in Fig. 7.
- the first way is the translation.
- the translation describes the linear movement of the air molecules or atoms with reference to the molecule's center of gravity.
- the second way of stimulation is the rotation, where the air molecules or atoms rotate around the molecule's center of gravity.
- the center of gravity is indicated in Fig. 7 at 70.
- the third mechanism is the vibration mechanism, where the atoms of a molecule move back and forth toward and away from the center of gravity of the molecule.
- the sound energy generated by acoustical music instruments and by the human voice is composed of an individual mixing ratio of translation, rotation and vibration.
- musical instruments and voices generate a sound field, and this field reaches the listener in two ways.
- the first way is the direct sound, where the direct sound portion of the sound field allows a precise location of the sound source.
- the second component is the room-like emission. Sound energy emitted in all directions of the room shapes the specific sound of an instrument or a group of instruments, since this room emission interacts with the room through reflections, attenuations, etc.
- a characteristic of all acoustical musical instruments and the human voice is a certain relation between the direct sound portion and the room-like emitted sound portion.
- the present invention is based on the finding that, in order to obtain a very good loudspeaker sound in a reproduction environment, comparable to and in most instances not even discernible from the original sound scene in which the sound is emitted not by loudspeakers but by musical instruments or human voices, the different ways in which the sound intensity is generated, i.e., translation, rotation and vibration, have to be considered, or the different ways in which the sound is emitted, i.e., as direct sound or as room-like emission, have to be accounted for when capturing and rendering an audio scene.
- an audio scene is not described by a single set of microphones but is described by two different sets of microphone signals. These different sets of microphone signals are never mixed with each other. Instead, a mixing can be performed with the individual signals within the first acquisition signal to obtain a first mixed signal and, additionally, the individual signals contained in the second acquisition signal can also be mixed among themselves to obtain a second mixed signal. However, individual signals from the first acquisition signal are not combined with individual signals of the second acquisition signal in order to maintain the sound signals with the different directivities. These acquisition signals or mixed signals can be separately stored. Furthermore, when mixing is not performed, the acquisition signals are separately stored. Alternatively or additionally, the two acquisition signals or the two mixed signals are transmitted into a reproduction environment and rendered by individual loudspeaker arrangements.
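- the separate-mixing constraint described above can be sketched as a block-diagonal mixing matrix; a minimal illustration, where the channel counts and uniform gains are assumptions for the sketch and do not come from the patent:

```python
import numpy as np

# Sketch of the separate-mixing rule: each acquisition signal is mixed
# only among its own individual signals. Modelled as one overall matrix,
# that matrix is block-diagonal, so no output channel ever combines
# signals from both sets. Counts and gains are illustrative assumptions.

rng = np.random.default_rng(0)
first_acq = rng.standard_normal((4, 256))    # directed-capture microphone set
second_acq = rng.standard_normal((4, 256))   # room-like-capture microphone set

w_first = np.full((2, 4), 0.25)              # mixes the first set only
w_second = np.full((2, 4), 0.25)             # mixes the second set only

# overall mixing matrix with zero off-diagonal blocks
w_total = np.block([
    [w_first, np.zeros((2, 4))],
    [np.zeros((2, 4)), w_second],
])

mixed = w_total @ np.vstack([first_acq, second_acq])
first_mixed, second_mixed = mixed[:2], mixed[2:]

# identical to mixing each set separately, i.e. the two sets never interact
assert np.allclose(first_mixed, w_first @ first_acq)
assert np.allclose(second_mixed, w_second @ second_acq)
```

The block-diagonal form makes the "never mixed with each other" rule a structural property of the mixer rather than a convention the engineer must remember.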
- the first acquisition signal or the first mixed signal is rendered by a first loudspeaker arrangement having loudspeakers emitting with a higher directivity and the second acquisition signal or the second mixed signal is rendered by a second separate loudspeaker arrangement having a more omnidirectional emission characteristic, i.e., having a less directed emission characteristic.
- a sound scene is represented not only by one acquisition signal or one mixed signal, but is represented by two acquisition signals or two mixed signals which are simultaneously acquired on the one hand or are simultaneously rendered on the other hand.
- the present invention ensures that different emission characteristics are additionally recorded from the audio scene and are rendered in the reproduction set-up.
- Loudspeakers for reproducing the omnidirectional characteristic comprise, in an example, a longitudinal enclosure with at least one subwoofer speaker for emitting lower sound frequencies. Furthermore, a carrier portion is provided on top of the enclosure, and a speaker arrangement comprises individual speakers for emitting higher sound frequencies that are arranged in different directions with respect to the enclosure. The speaker arrangement is fixed to the carrier portion and is not surrounded by the enclosure. In an example, the enclosure additionally comprises one or more individual speakers emitting with a high directivity. This can be done by placing these individual speakers within the enclosure in a line array, where the loudspeaker is arranged with respect to the listener so that the directly emitting speakers face the listeners.
- the carrier portion is a cone- or frustum-like element having a small cross-sectional area on top, where the speaker arrangement is placed. This ensures improved perceived sound characteristics, since the coupling between the longitudinal enclosure housing the subwoofer and the speaker arrangement generating the omnidirectional sound is restricted to a comparatively small area.
- the speaker arrangement is made up of a ball-like element with equally distributed loudspeakers, where the individual loudspeakers are not enclosed in a casing but are freely vibratable membranes supported by a supporting structure. This ensures that the omnidirectional emission characteristic is additionally supported by a good rotational sound portion, since such uncased individual speakers generate a significant amount of rotational energy.
- the capturing of the sound scene can be enhanced by using specific microphones comprising a first electret microphone portion and a second electret microphone portion which are arranged in a back-to-back arrangement.
- Both electret microphone portions comprise a free space so that a sound acquisition membrane or foil is movable.
- a vent channel is provided for venting the first free space or the second free space to the ambient pressure so that both microphones, although arranged in the back-to-back arrangement, have superior sound acquisition characteristics.
- first contacts for deriving an electrical signal are arranged at the first microphone portion and second contacts for deriving an electrical signal are arranged at the second microphone portion.
- each microphone portion comprises a metallized foil as a first electrode, which is movable in response to sound energy impinging on the microphone, a spacer and a counter electrode carrying an electret foil on its top.
- Each counter electrode additionally comprises venting channel portions which are vertically arranged with respect to the microphone.
- the venting channel comprises a horizontal venting channel portion communicating with the vertical venting channel portions. The vertical and horizontal venting channel portions are applied to the first and second microphone portions in such a way that both free spaces of the microphone portions defined by the corresponding spacers are vented to the ambient pressure and are, therefore, at ambient pressure. Additionally, this ensures that the sound acquisition electrode can move freely with respect to the corresponding counter electrode, since the venting prevents the free space from building up a counter-pressure in addition to the ambient pressure.
- the acquisition signals generated by steps 200, 202 or the mixed signals generated by step 206 can be transmitted to a loudspeaker setup as indicated in block 208.
- the first mixed signal or the first acquisition signal is rendered by a loudspeaker arrangement having a first directivity where the first directivity is a high directivity.
- the second acquisition signal or second mixed signal is rendered by a second loudspeaker arrangement having a second directivity, where the second directivity is lower than the first directivity and where the steps 210, 212 are performed simultaneously.
- the step 202 of Fig. 2 comprises placing a second set of microphones 102 lateral to or above the places for sound sources, as schematically illustrated in Fig. 1a, where the microphones 102 are placed above the sound scene while the microphones 100 are placed in front of the sound scene.
- the individual microphone signals generated by the set of microphones 102 together form the second acquisition signal.
- the setup illustrated in Fig. 1a additionally comprises a first mixer 104, a second mixer 106, a storage 108 and a transmission channel 110.
- the left portion of Fig. 1a until the transmission channel 110 represents the sound acquisition portion.
- a first processor 112 receiving the first acquisition signal or the first mixed signal is provided.
- a second processor 114 receiving the second acquisition signal or the second mixed signal is provided.
- the first processor 112 feeds the first speaker arrangement 118 for a directed sound emission and the second processor 114 feeds the second speaker arrangement 120 for an omnidirectional sound emission.
- Both loudspeaker arrangements are positioned in a replay environment 122 while the microphones 102, 100 are placed close to a sound scene 124 or can also be placed within the sound scene 124.
- Fig. 1b illustrates an exemplary standardized loudspeaker set-up in a replay environment (122 in Fig. 1a ).
- a five-channel environment similar to Dolby surround or MPEG surround is indicated where there is a left loudspeaker 151, a center loudspeaker 152, a right loudspeaker 153, a left surround loudspeaker 154 and a right surround loudspeaker 155.
- the individual loudspeakers are arranged at standardized places as, for example, known from ISO/IEC standardization of different loudspeaker setups such as stereo setups, 5.1 setups, 7.1 setups, 7.2 setups, etc.
- each of the individual loudspeakers 151 to 155 preferably comprises an omnidirectional arrangement, a directional arrangement and a subwoofer, although a single subwoofer would also be useful.
- alternatively, each of the loudspeakers 151 to 155 would only have an omnidirectional arrangement and a directional arrangement, and an additional subwoofer would be placed somewhere in the room, preferably close to the center speaker.
- a listener position is indicated in Fig. 1b at 156.
- the sound acquisition concept illustrated in Figs. 1a, 1b and 2 can also be described as the "dual Q" concept. This is an electroacoustic transmission concept in which the sound energy portions of individual sound sources or of a complete sound scene are separately acquired: the sound energy emitted in the direction of the listener on the one hand, and the sound energy emitted more or less omnidirectionally into the room of the sound scene on the other hand. Furthermore, these different signals generated by the different microphone arrays are then separately processed and separately rendered.
- the sound energy which is emitted directly in the front direction to the listener is composed mainly of instruments having a high directivity such as trumpets or trombones and, additionally, comes from the singers or vocalists.
- This "high Q" sound portion is detected by microphones 100 of Fig. 1a which are placed between the sound sources and the listeners and which are directed in the direction of the sound sources if these microphones are microphones having a certain acquisition directivity.
- microphones 100 can be omnidirectional or directed microphones. Directed microphones are preferred where the maximum acquisition sensitivity is directed to the sound scene or individual instruments within the sound scene. However, already due to the placement of the first set of microphones 100 between the sound scene and the listener, a directed sound energy is acquired even though omnidirectional microphones are used.
- Instruments having a high directivity but which do not directly emit sound in the front direction, such as a tuba, different horns or wings and several woodwind instruments, and, additionally, instruments having a low directivity, such as string instruments, percussion, gong or triangle, generate a room-like or less directed sound emission.
- This "low Q" sound portion is detected with a microphone set placed lateral and/or above the instruments or with respect to the sound scene. If microphones having a certain directivity are used, it is preferred that these microphones are directed into the direction of the individual sound sources such as tuba, horns, wood wind instruments, strings, percussion, gong, triangle.
- these individual "high Q" and "low Q" microphone signals, i.e., the first and second acquisition signals, are recorded independently of each other and further processed, e.g., mixed, stored, transmitted or otherwise manipulated.
- separate high and low Q mixtures can be mixed to obtain the first and second mixed signals and these mixed signals can be stored within the storage 108 or can be rendered via separate high and low Q speakers.
- Dual Q loudspeaker systems illustrated in Fig. 1b have separate speaker arrangements for the high Q rendering and the low Q rendering.
- the purpose of the high Q speakers is a direct sound emission aimed at the ears of the listeners, while the low Q speaker arrangement should provide a sound emission into the room that is as omnidirectional as possible. Therefore, directed sphere emitters or cylinder wave emitters are used for the high Q rendering.
- For the low Q rendering omnidirectionally emitting speakers are used, where the omnidirectional characteristic actually provided by the individual speaker arrangements will typically not be an ideal omnidirectional characteristic but at least an approximation to this. Stated differently, the speakers for the low Q rendering should have a reproduction characteristic which is less directed than the reproduction or emission characteristic of the high Q speaker arrangement.
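- the contrast between the two emission characteristics can be sketched with simple gain patterns; a hypothetical illustration in which the cosine-power and ripple patterns are assumptions, not measured speaker data:

```python
import math

# Hypothetical emission patterns: the high Q arrangement concentrates
# output toward angle 0 (the listener direction), while the low Q
# arrangement radiates almost evenly, within a small ripple.

def high_q_gain(angle_rad: float) -> float:
    """Strongly directed pattern, maximum on-axis at angle 0."""
    return max(0.0, math.cos(angle_rad)) ** 4

def low_q_gain(angle_rad: float) -> float:
    """Approximately omnidirectional pattern within a 10 % ripple."""
    return 0.9 + 0.1 * math.cos(angle_rad)

# on-axis versus 90 degrees off-axis
print(round(high_q_gain(0.0), 3), round(high_q_gain(math.pi / 2), 3))  # 1.0 0.0
print(round(low_q_gain(0.0), 3), round(low_q_gain(math.pi / 2), 3))   # 1.0 0.9
```

The high Q pattern collapses to almost nothing off-axis, while the low Q pattern stays near its on-axis level, matching the "less directed" requirement stated above.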
- Each microphone set 100, 102 preferably comprises a number of microphones that is, for example, higher than 10 or even higher than 20 individual microphones.
- the first acquisition signal and the second acquisition signal each comprise 10, 20 or more individual microphone signals.
- These microphone signals are then typically downmixed in the mixers 104 and 106, respectively, to obtain a mixed signal having a correspondingly lower number of individual signals.
- for example, each mixer performs a downmix from 20 signals to 5.
- the mixers 104, 106 can also perform an upmix. Alternatively, when the number of microphones in a microphone set is equal to the number of loudspeakers, either no mixing at all is performed, or a mixing among the microphone signals of one set is performed that does not change the number of individual signals.
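- the downmix, upmix and pass-through cases can be sketched as matrix shapes; an illustrative sketch in which the 20-microphone count and uniform gains are assumptions:

```python
import numpy as np

# The mixer for one microphone set is modelled as a matrix; its shape
# alone decides whether it downmixes, upmixes, or keeps the count.
# Uniform gains and the 20-microphone count are illustrative assumptions.

mics = np.zeros((20, 512))                  # 20 signals from one set

downmix = np.full((5, 20), 1 / 20) @ mics   # 20 -> 5, e.g. a 5-channel setup
upmix = np.full((24, 20), 1 / 20) @ mics    # 20 -> 24 when more speakers exist
unchanged = np.eye(20) @ mics               # equal counts: mixing need not
                                            # alter the number of signals

print(downmix.shape[0], upmix.shape[0], unchanged.shape[0])  # 5 24 20
```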
- instead of or in addition to placing the microphones 102 above or lateral to the sound scene and the microphones 100 in front of the sound scene, microphones can also be placed selectively in a corresponding proximity to the corresponding instruments.
- the step of acquiring comprises placing the first set of microphones closer to the instruments of the first set of instruments than to the instruments of the second set of instruments to obtain the first acquisition signal and placing the second set of microphones closer to the instruments of the second set of instruments, i.e., the low directivity emitting instruments, than to the first set of instruments to obtain the second acquisition signal.
- the directivity, as defined by a directivity factor related to a sound source, is the ratio of the radiated sound intensity at a remote point on the principal axis of a sound source to the average intensity of the sound transmitted through a sphere passing through the remote point and concentric with the sound source.
- the frequency is stated so that the directivity factor is obtained for individual subbands.
- the directivity factor is the ratio of the square of the voltage produced by sound waves arriving parallel to the principal axis of a microphone or other receiving transducer to the mean square of the voltage that would be produced if sound waves having the same frequency and mean square pressure were arriving simultaneously from all directions with random phase.
- the frequency is stated in order to have a directivity factor for each individual subband.
- the directivity factor is the ratio of the radiated sound intensity at a remote point on the principal axis of a loudspeaker or other transducer to the average intensity of the sound transmitted through a sphere passing through the remote point and concentric with the transducer.
- the frequency is given as well in this case.
- the directivity factor is a number indicating the factor by which the radiated power would have to be increased if the directed emitter were replaced by an isotropic radiator, assuming the same field intensity for the actual sound source and the isotropic radiator.
- the directivity factor is a number indicating the factor by which the input power of the receiver/microphone for the direction of maximum reception exceeds the mean power obtained by averaging the power received from all directions of reception if the field intensity at the microphone location is equal for any direction of wave incidence.
- the directivity factor is a quantitative characterization of the capacity of a sound source to concentrate the radiated energy in a given direction or the capacity of a microphone to select signals incident from a given direction.
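- the ratio definitions above can be illustrated numerically; a minimal sketch using the classical convention in which an isotropic radiator has a directivity factor of 1, with an assumed axially symmetric cardioid pattern that does not come from the patent:

```python
import numpy as np

# Sketch of the directivity-factor definition: on-axis intensity divided
# by the mean intensity over a surrounding sphere. In this convention an
# isotropic radiator has Q = 1 and a cardioid power pattern has Q = 3.

def directivity_factor(intensity, theta):
    """Q for an axially symmetric pattern I(theta), on-axis at theta = 0."""
    # spherical mean: (1/2) * integral of I(theta) * sin(theta) dtheta
    mean_intensity = np.mean(intensity * np.sin(theta)) * np.pi / 2.0
    return intensity[0] / mean_intensity

theta = np.linspace(0.0, np.pi, 10_000)

cardioid = ((1.0 + np.cos(theta)) / 2.0) ** 2   # front-directed source
isotropic = np.ones_like(theta)                 # equal in all directions

print(round(directivity_factor(cardioid, theta), 2))   # 3.0
print(round(directivity_factor(isotropic, theta), 2))  # 1.0
```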
- the directivity factor related to the first acquisition signal is preferably greater than 0.6 and the directivity factor related to the second acquisition signal is preferably lower than 0.4.
- the microphones are manufactured and placed in such a way that the directionally emitted sound dominates over the omnidirectionally emitted sound in the first acquisition signal and that the omnidirectionally emitted sound dominates over the directionally emitted sound in the second acquisition signal.
- the method of rendering comprises a step (210) of generating a first sound signal from the first acquisition signal or the first mixed signal and a step (212) of generating a second sound signal from the second acquisition signal or the second mixed signal.
- for generating the first sound signal, a directional speaker arrangement 118 is used, and for generating the second sound signal, an omnidirectional speaker arrangement 120 is used.
- the directivity of the directional speaker arrangement 118 is higher than the directivity of the omnidirectional speaker arrangement 120. An ideal omnidirectional emission characteristic can hardly be generated by existing loudspeaker systems, but the loudspeaker of Figs. 3 to 6 provides an excellent approximation of an ideal omnidirectional loudspeaker emission characteristic.
- the emission characteristic of the omnidirectional speakers is close to the ideal omnidirectional characteristic within a tolerance of 30 %.
- reference is made to Figs. 3 to 7 for illustrating a preferred sound rendering and a preferred loudspeaker.
- brass instruments are instruments with a mainly translatory sound generation.
- the human voice generates a translational and a rotational movement of the air molecules.
- for capturing and reproducing the translation, existing microphones and speakers with piston-like operating membranes and a back enclosure are available.
- the rotation is generated mainly by playing bow instruments, guitar, a gong or a piano due to the acoustic short-circuit of the corresponding instrument.
- the acoustic short-circuit is, for example, performed via the F-holes of a violin, the sound hole of a guitar, between the upper and lower surfaces of the sounding board of a grand or upright piano, or between the front and back face of a gong.
- the rotation is excited between mouth and nose.
- the rotation movement is typically limited to the medium sound frequencies and can be preferably acquired by microphones having a figure of eight characteristic, since these microphones additionally have an acoustic short-circuit.
- the reproduction is realized by mid-frequency speakers with freely vibratable membranes without having a backside enclosure.
- the vibration is generated by violins or is strongly generated by xylophones, cymbals and triangles.
- the vibration of the atoms within a molecule generates sound up to the ultrasound region above 60 kHz and even up to 100 kHz.
- although this frequency range is typically not perceivable by the human hearing mechanism, level- and frequency-dependent demodulation effects and other effects take place, which are then made perceivable, since they occur within the hearing range extending between 20 Hz and 20 kHz.
- the authentic transmission of vibration therefore requires extending the frequency range above the hearing limit of about 20 kHz up to more than 60 or even 100 kHz.
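- the bandwidth requirement stated above translates directly into a sampling-rate requirement via the Nyquist criterion; a minimal sketch, where the helper name is an assumption and the 100 kHz figure is taken from the text:

```python
# Nyquist criterion: to represent components up to f_max, the sampling
# rate must be at least 2 * f_max. Conventional audio rates therefore
# cannot carry the vibration portion described in the text.

def min_sample_rate_hz(max_frequency_hz: float) -> float:
    """Smallest sampling rate that can represent max_frequency_hz."""
    return 2.0 * max_frequency_hz

print(min_sample_rate_hz(20_000))   # 40000.0  -- ordinary hearing limit
print(min_sample_rate_hz(100_000))  # 200000.0 -- vibration range in the text
```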
- the detection of the directional sound portion for a correct localization of sound sources requires directional microphoning and speakers with a high emission quality factor or directivity, in order to deliver sound as directly as possible to the ears of the listeners.
- a separate mixing is generated and reproduced via separate speakers.
- the detection of the room-like energy is realized by a microphone setup placed above or lateral with respect to the sound sources.
- a separate mixing is generated and reproduced by speakers having a low emission quality factor (sphere emitters) in a separate manner.
- the loudspeaker comprises a longitudinal enclosure 300 comprising at least one subwoofer speaker 310 for emitting lower sound frequencies. Furthermore, a carrier portion 312 is provided on a top end 310a of the longitudinal enclosure. The longitudinal enclosure also has a bottom end 310b and is preferably closed throughout its shape, particularly by a bottom plate at 310b and the upper plate at 310a, in which the carrier portion 312 is provided.
- furthermore, an omnidirectionally emitting speaker arrangement 314 is provided, which comprises individual speakers for emitting higher sound frequencies that are arranged in different directions with respect to this longitudinal enclosure 300, wherein the speaker arrangement is fixed to the carrier portion 312 and is not surrounded by the longitudinal enclosure 300, as illustrated.
- the longitudinal enclosure is a cylindrical enclosure with a circular cross-section throughout the length of the cylindrical enclosure 300.
- the longitudinal enclosure has a length greater than 50 cm or 100 cm and a lateral dimension greater than 20 cm. As illustrated in Fig.
- a preferred length of the longitudinal enclosure is 175 cm, the diameter is 30 cm and the dimension of the carrier in the direction of the longitudinal enclosure is 15 cm, and the speaker arrangement 314 is ball-shaped and has a diameter of 30 cm, which is the same as the diameter of the longitudinal enclosure.
- the carrier portion 312 preferably comprises a base portion having matching dimensions with the longitudinal enclosure 300. Therefore, when the longitudinal enclosure is a round cylinder, then the base portion of the carrier is a circle matching with the diameter of the longitudinal enclosure. However, when the longitudinal enclosure is square-shaped, then the lower portion of the carrier 312 is square-shaped as well and matches in dimensions with the longitudinal enclosure 300.
- the carrier 312 comprises a tip portion having a cross-sectional area which is less than 20 % of a cross-sectional area of the base portion, where the speaker arrangement 314 is fixed to the tip portion.
- the carrier 312 is cone-shaped so that the entire loudspeaker illustrated in Fig. 4 looks like a pencil having a ball on top. This is preferable due to the fact that the connection between the omnidirectional speaker arrangement 314 and the subwoofer-provided enclosure is as small as possible, since only the tip portion 312b of the carrier is in contact with the speaker arrangement 314. Hence, there is a good sound decoupling between the speaker arrangement and the longitudinal enclosure.
- the speaker arrangement 314 has a sphere-like carrier structure 316, which is also illustrated in Fig. 5 for a further embodiment.
- Individual loudspeakers are mounted so that each individual loudspeaker emits in a different direction.
- Fig. 4 illustrates several planes, where each plane is directed into a different direction and each plane represents a single speaker with a membrane such as a straightforward piston-like speaker, but without any back casing for this speaker.
- the carrier structure can be implemented specifically as illustrated in Fig. 5 where, again, the speaker rooms or planes 318 are illustrated. Furthermore, it is preferred that the structure as illustrated in Fig.
- the carrier structure 360 additionally comprises many holes 320 so that the carrier structure 360 only fulfills its functionality as a carrier structure, but does not influence the sound emission and particularly does not prevent the membranes of the individual speakers in the speaker arrangement 314 from being freely suspended. Then, due to the fact that freely suspended membranes generate a good rotation component, a useful and high-quality rendering of rotational sound can be produced. Therefore, the carrier structure is preferably as little bulky as possible so that it only fulfills its functionality of structurally supporting the individual piston-like speakers without restricting the possible excursions of the individual membranes.
- the speaker arrangement comprises at least six individual speakers and particularly even twelve individual speakers arranged in twelve different directions, where, in this embodiment, the speaker arrangement 314 comprises a pentagonal dodecahedron (e.g.
- the loudspeaker arrangement 314 does not comprise a loudspeaker enclosure and the individual speakers are held by the supporting structure 316 so that the membranes of the individual speakers are freely suspended.
- the longitudinal enclosure 300 not only comprises the subwoofer, but additionally comprises electronic parts necessary for feeding the subwoofer speaker and the speakers of the speaker arrangement 314. Additionally, in order to provide the speaker system as, for example, illustrated in Fig. 1b , the longitudinal enclosure 300 not only comprises a single subwoofer. Instead, one or more subwoofer speakers can be provided in the front of the enclosure, where the enclosure has openings indicated at 310 in Fig. 6 , which can be covered by any kind of covering materials such as a foam-like foil or the like. The whole volume of the closed enclosure serves as a resonance body for the subwoofer speakers. The enclosure additionally comprises one or more directional speakers for medium and/or high frequencies indicated at 602 in Fig.
- directional speakers are arranged in the longitudinal enclosure 300 and if there is more than one such speaker, then these speakers are preferably arranged in a line as illustrated in Fig. 6 and the entire loudspeaker is arranged with respect to the listener so that the speakers 602 are facing the listeners. Then, the individual speakers in the speaker arrangement 314 are provided with the second acquisition signal or second mixed signal discussed in the context of Fig. 1 and Fig. 2 , and the directional speakers are provided with the corresponding first acquisition signal or first mixed signal. Hence, when there are five speakers illustrated in Fig. 6 positioned at the five places indicated in Fig.
- each individual speaker has an omnidirectional arrangement (316), a directional arrangement (602) and a subwoofer 310.
- the first mixed signal comprises five channels
- the second mixed signal comprises five channels as well and there is additionally provided one subwoofer channel
- each subwoofer 310 of the five speakers in Fig. 1b receives the same signal
- each of the directional speakers 602 in one loudspeaker receives the corresponding individual signal of the first mixed signal
- each of the individual speakers in speaker arrangement 314 receives the corresponding same individual signal of the second mixed signal.
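The per-position channel routing described above can be sketched as follows; this is a minimal illustration assuming five loudspeaker positions, with function and key names invented for this sketch rather than taken from the description:

```python
# Sketch of the routing: every loudspeaker position receives its own
# channel of the first (directional) mixed signal for the speakers 602,
# its own channel of the second (omnidirectional) mixed signal for the
# speaker arrangement 314, and the single shared subwoofer channel
# for the subwoofer 310.
def route_to_loudspeakers(first_mixed, second_mixed, subwoofer_channel):
    assert len(first_mixed) == len(second_mixed) == 5
    return [
        {
            "directional": first_mixed[pos],       # speakers 602
            "omnidirectional": second_mixed[pos],  # arrangement 314
            "subwoofer": subwoofer_channel,        # same signal everywhere
        }
        for pos in range(5)
    ]

feeds = route_to_loudspeakers(list("ABCDE"), list("abcde"), "sub")
```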
- the three speakers 602 are arranged in a d'Appolito arrangement, i.e., the upper and the lower speakers are mid-frequency speakers and the speaker in the middle is a high-frequency speaker.
- the loudspeaker in Fig. 6 without the directional speaker 602 can be used in order to implement the omnidirectional arrangement in Fig. 1b for each loudspeaker place and an additional directional speaker can be placed, for example, close to the center position only or close to each loudspeaker position in order to reproduce the high directivity sound separately from the low directivity sound.
- the enclosure furthermore comprises a further speaker 604 which is suspended at an upper portion of the enclosure and which has a freely suspended membrane.
- This speaker is a low/mid speaker for a low/mid frequency range between 80 and 300 Hz and preferably between 100 and 300 Hz.
- This additional speaker is advantageous, since - due to the freely suspended membrane - the speaker generates rotation stimulation/energy in the low/mid frequency range. This rotation enhances the rotation generated by the speakers 314 at low/mid frequencies.
- This speaker 604 receives the low/mid frequency portion of the signal provided to the speakers at 314, e.g., the second acquisition signal or the second mixed signal.
- the subwoofer is a twelve inch subwoofer in the closed longitudinal enclosure 300 and the speaker arrangement 314 is a pentagonal dodecahedron medium/high speaker arrangement with freely vibratable medium-frequency membranes.
- a method of manufacturing a loudspeaker comprises the production and/or provision of the enclosure, the carrier portion and the speaker arrangement, where the carrier portion is placed on top of the longitudinal enclosure and the speaker arrangement with the individual speakers is placed on top of the carrier portion or alternatively the speaker arrangement without the individual speakers is placed on top of the carrier portion and then the individual speakers are mounted.
- Reference is made to Figs. 9 to 12 in order to illustrate a microphone which can be preferably used within the first or second microphone set illustrated in Fig. 1a at 110 or 100, or which can be used for any other microphone purpose.
- the microphone comprises a first electret microphone portion 801 having a first free space and a second electret portion 802 having a second free space.
- the first and the second microphone portions 801, 802 are arranged in a back-to-back arrangement.
- a vent channel 804 is provided for venting the first free space and/or the second free space.
- first contacts 806a, 806b for deriving an electrical signal 806c and second contacts 808a and 806b for deriving a second electrical signal 808b are arranged at the first microphone portion 801, and the second microphone portion 802, respectively.
- Fig. 8 illustrates a vented back-to-back electret microphone arrangement.
- the vent channel 804 comprises two individual vertical vent channel portions 804b, 804c, which communicate with a horizontal vent channel portion 804a.
- This arrangement allows the vent channel to be produced within the corresponding counter electrodes or microphone backsides before the individually produced first and second microphone portions 801, 802 are stacked on each other.
- Fig. 10 illustrates a cross-section through a microphone implemented in accordance with the principles illustrated in Fig. 8 .
- the first electret microphone portion 801 comprises, from top to bottom in Fig. 10 , a first metallization 810 on a foil 811 which is placed on top of a spacer 812.
- the spacer defines the first vented free space 813 of the first microphone portion 801.
- the spacer 812 is placed on top of an electret foil 814 which is placed on a counter electrode or "back plate" indicated at 816.
- Elements 810, 811, 812, 813, 814 and 816 define the first electret microphone portion 801.
- the second electret microphone portion 802 is preferably constructed in the same manner and comprises, from bottom to top, a metallization 820, a foil 821, and a spacer 822 defining a second vented free space 823. On the spacer 822 an electret foil 824 is placed and above the electret foil 824 a counter electrode 826 is placed which forms the back plate of the second microphone portion. Hence, elements 820 to 826 represent the second electret microphone portion 802 of Fig. 8 in an embodiment.
- the first and the second microphone portions have a plurality of vertical vent portions 804b, 804c, as illustrated in Fig. 10 .
- the number and arrangement of the vertical vent portions over the area of the microphone portions can be selected depending on the needs. However, it is preferred to use an even distribution of the vertical vent portions over the area as illustrated in Fig. 10 in a cross-section.
- the horizontal vent portion 804a is indicated in Fig.
- the horizontal vent portion is arranged so that it communicates with the vertical vent portions, connects the vertical vent portions and therefore connects the vented free spaces 813, 823 to the ambient pressure. Hence, the movement of the electrode formed by the metallization 810 and the foil 811 of the upper microphone, or the movement of the movable electrode formed by the metallization 820 and the foil 821 of the lower microphone, is not damped by a closed free space. Instead, when a membrane moves, a pressure equalization is always obtained via the vertical and horizontal vent portions 804a to 804c.
- the microphone is a back-electret double-microphone with a symmetrical construction.
- the metalized foils 811, 821 are moved or excited by the kinetic energy of the air molecules (sound) and therefore the capacitance of the capacitor consisting of the back electrodes 816, 826 and the metallizations 810, 820 is changed.
- Due to the persistent charge on the electret foils 814, 824, a voltage U1, U2 is generated according to the equation Q = C × U, which means that U is equal to Q/C.
- the voltage U 1 is proportional to the movement of the electrode 810, 811, and the voltage U 2 is proportional to the movement of the electrode 820, 821.
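As an illustration of the transduction relation above, the following parallel-plate sketch shows how a fixed charge and a position-dependent capacitance yield a position-dependent voltage; all numeric values and names are invented for this sketch and not taken from the description:

```python
# Parallel-plate sketch of the electret relation U = Q / C: a persistent
# charge Q on the electret foil plus a capacitance C that varies with the
# membrane position yields a position-dependent output voltage U.
EPS0 = 8.854e-12  # vacuum permittivity in F/m

def plate_capacitance(area_m2, gap_m):
    return EPS0 * area_m2 / gap_m

def electret_voltage(charge_coulomb, area_m2, gap_m):
    return charge_coulomb / plate_capacitance(area_m2, gap_m)

# A membrane excursion that widens the gap lowers C and therefore raises U:
u_rest = electret_voltage(1e-10, 1e-4, 20e-6)
u_excursion = electret_voltage(1e-10, 1e-4, 21e-6)
```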
- Fig. 9 illustrates a controllable signal combiner 900, which receives the first microphone signal from the first microphone portion and the second microphone signal from the second microphone portion.
- the microphone signals can be voltages.
- controllable combiner 900 comprises the first weighting stage 902 and/or a second weighting stage 904. Each weighting stage is configured for applying a certain weighting factor W 1 , W 2 to the corresponding microphone signal.
- the outputs of the weighting stages 902, 904 are provided to an adder 906, which adds the outputs of the weighting stages 902, 904 to produce the combined output signal.
- controllable combiner 900 preferably comprises a control input 908 which is connected to the weighting stages 902, 904 in order to set the weighting factors depending on a command applied via the control signal.
- Fig. 9 additionally illustrates a table, where individual weighting factors are applied to the microphone signals and where it is outlined which characteristic is obtained in the combined output signal.
- the in-phase, out-of-phase or weighted addition functionality of the combiner can be correspondingly hardwired so that each microphone has a certain output signal characteristic for the combined output signal, but this microphone cannot be configured.
- if the controllable combiner has the switching functionality illustrated in Fig. 9 , a configurable microphone is obtained. A basic configurability can, for example, be obtained by having only one of the two weighters 902, 904, where this weighter, when correspondingly controlled, performs an inversion to obtain the out-of-phase addition, while an in-phase addition is obtained when the two input signals are simply added by the adder 906.
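A minimal sketch of the weighted combination described above. The patent's own weighting table is only shown in Fig. 9 and is not reproduced in the text, so the factors below follow standard back-to-back microphone theory rather than the patent's table: in-phase addition yields a pressure (omnidirectional) characteristic and out-of-phase addition a gradient (figure-8) characteristic.

```python
# Weighted, sample-wise combination of the two microphone signals,
# corresponding to weighting stages 902/904 and adder 906.
def combine(sig1, sig2, w1=1.0, w2=1.0):
    return [w1 * a + w2 * b for a, b in zip(sig1, sig2)]

# For a pressure wave arriving broadside, both portions see the same signal:
front = [0.5, -0.2, 0.1]
back = [0.5, -0.2, 0.1]

omni = combine(front, back, 1.0, 1.0)      # in-phase addition
figure8 = combine(front, back, 1.0, -1.0)  # out-of-phase addition
```

With identical inputs, the out-of-phase combination cancels, which is exactly the null of a figure-8 pattern for sound arriving from the side.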
- the electret microphone is miniaturized and only has dimensions as are set forth in Fig. 11 .
- the length dimension is less than 20 mm and may even be equal to 10 mm.
- the width dimension is preferably less than 20 mm and may even be equal to 10 mm, and the height dimension is less than 10 mm and may even be equal to 5 mm.
- Miniaturized double microphones which use the electret technology can preferably be placed at critical places such as F-holes of a violin and so forth as illustrated in Fig. 12.
- Fig. 12 particularly illustrates a violin with two F-holes 1200, where in one F-hole 1200 a microphone as illustrated in Fig. 8 is placed.
- the first and the second microphone signals can be output by the microphone or if the microphone has the combiner, the combined output signal is output.
- the output can take place via a wireless or wired connection.
- the transmitter for the wireless connection does not necessarily have to be placed within the F-hole as well, but can be placed at any other suitable place of the violin. Hence, as indicated in Fig. 12 a close-up microphoning of acoustical instruments can be realized.
- the icon microphone should have an audio bandwidth of up to 60 kHz and preferably up to 100 kHz.
- the foils 811, 821 have to be attached to the spacer in a correspondingly stiff manner.
- the microphone illustrated in Fig. 8 is useful for transmitting the translation energy portion, the rotation energy portion and the vibration energy portion in accordance with the icon criteria.
- the electret microphone is considerably smaller and therefore considerably more useful when it comes to flexibility regarding placement and so on.
- the sound acquisition, sound transmission and sound generation described herein result in a substantially more nature-like rendering of, in particular, acoustical instruments and the human voice.
- embodiments of the invention can be implemented in hardware or in software.
- the implementation can be performed using a digital storage medium, for example a floppy disk, a DVD, a CD, a ROM, a PROM, an EPROM, an EEPROM or a FLASH memory, having electronically readable control signals stored thereon, which cooperate (or are capable of cooperating) with a programmable computer system such that the respective method is performed.
- Some embodiments according to the invention comprise a non-transitory data carrier having electronically readable control signals, which are capable of cooperating with a programmable computer system, such that one of the methods described herein is performed or having stored thereon the first or second acquisition signals or first or second mixed signals.
- embodiments of the present invention can be implemented as a computer program product with a program code, the program code being operative for performing one of the methods when the computer program product runs on a computer.
- the program code may for example be stored on a machine readable carrier.
- other embodiments comprise the computer program for performing one of the methods described herein, stored on a machine readable carrier.
- an embodiment of the inventive method is, therefore, a computer program having a program code for performing one of the methods described herein, when the computer program runs on a computer.
- a further embodiment of the inventive methods is, therefore, a data carrier (or a digital storage medium, or a computer-readable medium) comprising, recorded thereon, the computer program for performing one of the methods described herein.
- a further embodiment of the inventive method is, therefore, a data stream or a sequence of signals representing the computer program for performing one of the methods described herein.
- the data stream or the sequence of signals may for example be configured to be transferred via a data communication connection, for example via the Internet.
- a further embodiment comprises a processing means, for example a computer, or a programmable logic device, configured to or adapted to perform one of the methods described herein.
- a further embodiment comprises a computer having installed thereon the computer program for performing one of the methods described herein.
- a programmable logic device (for example a field programmable gate array) may be used to perform some or all of the functionalities of the methods described herein.
- a field programmable gate array may cooperate with a microprocessor in order to perform one of the methods described herein.
- the methods are preferably performed by any hardware apparatus.
Description
- The present invention is related to electroacoustics and, particularly, to concepts of acquiring and rendering sound.
Typically, audio scenes are captured using a set of microphones. Each microphone outputs a microphone signal. For an orchestra audio scene, for example, 25 microphones are used. Then, a sound engineer performs a mixing of the 25 microphone output signals into, for example, a standardized format such as a stereo format or a 5.1, 7.1, 7.2 etc., format. In a stereo format, the sound engineer or an automatic mixing process generates two stereo channels. For a 5.1 format, the mixing results in five channels and a subwoofer channel. Analogously, for example for a 7.2 format, the mixing results in seven channels and two subwoofer channels. When the audio scene is to be rendered in a reproduction environment, the mixing result is applied to electro-dynamic loudspeakers. In a stereo reproduction set-up, two loudspeakers exist and the first loudspeaker receives the first stereo channel and the second loudspeaker receives the second stereo channel. In a 7.2 reproduction set-up, seven loudspeakers exist at predetermined locations and two subwoofers. The seven channels are applied to the corresponding loudspeakers and the two subwoofer channels are applied to the corresponding subwoofers.
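The conventional mixing stage described above amounts to applying an M × N gain matrix to the N microphone signals to obtain M format channels. A minimal sketch with invented gains follows (a three-microphone downmix to stereo rather than the 25-microphone example):

```python
# A mixing stage as an M x N gain matrix: each of the M output channels is
# a weighted sum of the N microphone signals (one time sample shown).
def mix(mic_samples, gains):
    return [sum(g * s for g, s in zip(row, mic_samples)) for row in gains]

# Three microphones to stereo: mic 0 panned left, mic 2 panned right,
# the center mic 1 fed equally to both channels (illustrative gains).
stereo_gains = [
    [1.0, 0.5, 0.0],  # left channel
    [0.0, 0.5, 1.0],  # right channel
]
left, right = mix([0.2, 0.4, -0.6], stereo_gains)
```

A 5.1 mix would use six rows instead of two, one per channel plus the subwoofer channel.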
The usage of a single microphone arrangement on the capturing side and a single loudspeaker arrangement on the reproduction side typically neglects the true nature of the sound sources.
For example, acoustic music instruments and the human voice can be distinguished with respect to the way in which the sound is generated and they can also be distinguished with respect to their emission characteristics.
Trumpets, trombones, horns or bugles, for example, have a powerful, strongly directed sound emission. Stated differently, these instruments emit in a preferred direction and, therefore, have a high directivity. - Violins, cellos, contrabasses, guitars, grand pianos, small pianos, gongs and similar acoustic musical instruments, for example, have a comparatively small directivity or a correspondingly small emission quality factor Q. These instruments use so-called acoustic short-circuits when generating sounds. The acoustic short-circuit is generated by a communication of the front side and the backside of the corresponding vibrating area or surface.
- Regarding the human voice, a medium emission quality factor exists. The air connection between mouth and nose causes an acoustic short-circuit.
- Stringed or bowed instruments, xylophones, cymbals and triangles, for example, generate sound energy in a frequency range up to 100 kHz and, additionally, have a low emission directivity or a low emission quality factor. Specifically, the sounds of a xylophone and a triangle are clearly identifiable despite their low sound energy and their low quality factor, even within a loud orchestra.
- Hence, it becomes clear that the sound generation by the acoustical instruments or other instruments and the human voice is very different from instrument to instrument.
- When generating sound energy, air molecules, for example two- and three-atomic gas molecules are stimulated. There are three different mechanisms responsible for the stimulation. Reference is made to German Patent
DE 198 19 452 C1 . These are summarized in Fig. 7 . The first way is the translation. The translation describes the linear movement of the air molecules or atoms with reference to the molecule's center of gravity. The second way of stimulation is the rotation, where the air molecules or atoms rotate around the molecule's center of gravity. The center of gravity is indicated in Fig. 7 at 70. The third mechanism is the vibration mechanism, where the atoms of a molecule move back and forth in the direction to and from the center of gravity of the molecule. - Hence, the sound energy generated by acoustical music instruments and generated by the human voice is composed of an individual mixing ratio of translation, rotation and vibration.
- In straightforward electroacoustic science, the definition of the vector sound intensity only reflects the translation. Unfortunately, however, the complete description of the sound energy, in which rotation and vibration are additionally acknowledged, is missing in straightforward electroacoustics.
- However, the complete sound intensity is defined as a sum of the intensities stemming from translation, from rotation and vibration.
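Written out, the complete sound intensity described above is the sum of the three portions (the notation is introduced here for illustration; the patent states the relation only in words):

```latex
I_{\mathrm{total}} = I_{\mathrm{translation}} + I_{\mathrm{rotation}} + I_{\mathrm{vibration}}
```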
- Furthermore, different sound sources have different sound emission characteristics. The sound emission generated by musical instruments and voices generates a sound field and the field reaches the listener in two ways. The first way is the direct sound, where the direct sound portion of the sound field allows a precise location of the sound source. The further component is the room-like emission. Sound energy emitted in all room directions generates a specific sound of instruments or a group of instruments since this room emission cooperates with the room by reflections, attenuations, etc. A characteristic of all acoustical musical instruments and the human voice is a certain relation between the direct sound portion and the room-like emitted sound portion.
- The publication "The Art Of Recording The Big Band", November 11, 2010, XP055198515 discloses a big band recording of the swing era, i.e., Benny Goodman at Carnegie Hall, 1938. For recording, two microphones were used, one hanging over the stage for overall pickup, the second on stage and also used for the P.A. system. These two sources were probably mixed on the site, and the signal then sent to CBS studios (using the usual broadcast-remote transmission lines) where the transcription discs were cut. The setup differed from a radio remote in that there was no announcer, and therefore no separate announce microphone.
It is the object of the present invention to provide an improved concept for capturing and rendering an audio scene. This object is achieved by a method of capturing an audio scene in accordance with claim 1 or a computer program in accordance with claim 6. - The present invention is based on the finding that, for obtaining a very good sound by loudspeakers in a reproduction environment, which is comparable to and in most instances even not discernable from the original sound scene, where the sound is not emitted by loudspeakers but by musical instruments or human voices, the different ways in which the sound intensity is generated, i.e., translation, rotation and vibration, have to be considered, or the different ways in which the sound is emitted, i.e., whether the sound is emitted as direct sound or as a room-like emission, are to be accounted for when capturing an audio scene and rendering an audio scene. When capturing the audio scene, sound having a first or high directivity is acquired to obtain a first acquisition signal and, simultaneously, sound having a second directivity is acquired to obtain a second acquisition signal, where the second directivity, i.e., the directivity of the sound actually captured by the second acquisition signal, is lower than the first directivity.
- Thus, an audio scene is not described by a single set of microphones but is described by two different sets of microphone signals. These different sets of microphone signals are never mixed with each other. Instead, a mixing can be performed with the individual signals within the first acquisition signal to obtain a first mixed signal and, additionally, the individual signals contained in the second acquisition signal can also be mixed among themselves to obtain a second mixed signal. However, individual signals from the first acquisition signal are not combined with individual signals of the second acquisition signal in order to maintain the sound signals with the different directivities. These acquisition signals or mixed signals can be separately stored. Furthermore, when mixing is not performed, the acquisition signals are separately stored. Alternatively or additionally, the two acquisition signals or the two mixed signals are transmitted into a reproduction environment and rendered by individual loudspeaker arrangements. Hence, the first acquisition signal or the first mixed signal is rendered by a first loudspeaker arrangement having loudspeakers emitting with a higher directivity and the second acquisition signal or the second mixed signal is rendered by a second separate loudspeaker arrangement having a more omnidirectional emission characteristic, i.e., having a less directed emission characteristic.
Hence, a sound scene is represented not only by one acquisition signal or one mixed signal, but is represented by two acquisition signals or two mixed signals which are simultaneously acquired on the one hand or are simultaneously rendered on the other hand. The present invention ensures that different emission characteristics are additionally recorded from the audio scene and are rendered in the reproduction set-up.
Loudspeakers for reproducing the omnidirectional characteristic comprise, in an example, a longitudinal enclosure comprising at least one subwoofer speaker for emitting lower sound frequencies. Furthermore, a carrier portion is provided on top of the cylindrical enclosure and a speaker arrangement comprises individual speakers for emitting higher sound frequencies that are arranged in different directions with respect to the cylindrical enclosure. The speaker arrangement is fixed to the carrier portion and is not surrounded by the longitudinal enclosure. In an example, the cylindrical enclosure additionally comprises one or more individual speakers emitting with a high directivity. This can be done by placing these individual speakers within the cylindrical enclosure in a line-array, where the loudspeaker is arranged with respect to the listener so that the directly emitting loudspeakers are facing the listeners. Furthermore, it is preferred that the carrier portion is a cone or frustum-like element having a small cross-section area on top where the speaker arrangement is placed. This makes sure that the loudspeaker has improved characteristics with respect to the perceived sound due to the fact that the coupling between the longitudinal enclosure in which the subwoofer is arranged and the speaker arrangement for generating the omnidirectional sound is restricted to a comparatively small area. Furthermore, it is preferred that the speaker arrangement is made up by a ball-like element which has equally distributed loudspeakers in it where the individual loudspeakers, however, are not included in the casing but are freely-vibratable membranes supported by a supporting structure. This makes sure that the omnidirectional emission characteristic is additionally supported by a good rotational portion of sound since such individual speakers, which are not cased in a casing, additionally generate a significant amount of rotational energy. 
- Additionally, the capturing of the sound scene can be enhanced by using specific microphones comprising a first electret microphone portion and a second electret microphone portion which are arranged in a back-to-back arrangement. Both electret microphone portions comprise a free space so that a sound acquisition membrane or foil is movable. A vent channel is provided for venting the first free space or the second free space to the ambient pressure so that both microphones, although arranged in the back-to-back arrangement, have superior sound acquisition characteristics. Furthermore, first contacts for deriving an electrical signal are arranged at the first microphone portion and second contacts for deriving an electrical signal are arranged at the second microphone portion. Due to the back-to-back arrangement, it is preferred that the ground contact, i.e., the counter-electrode contact of both microphones, is connected or implemented as a single contact so that the microphone comprises three output contacts for deriving two different voltages as electrical signals. Preferably, each microphone portion is comprised of a metalized foil as a first electrode which is movable in response to sound energy impinging on the microphone, a spacer and a counter electrode which has, on its top, an electret foil. Each counter electrode additionally comprises venting channel portions which are vertically arranged with respect to the microphone. Furthermore, the venting channel comprises a horizontal venting channel portion communicating with the vertical venting channel portions, and the vertical and horizontal venting channel portions are applied to the first and second microphone portions in such a way that both free spaces of the microphone portions defined by the corresponding spacers are vented to the ambient pressure and are, therefore, at ambient pressure.
Additionally, this makes sure that the sound acquisition electrode can freely move with respect to the corresponding counter electrode since the venting makes sure that the free space does not build up an additional counter-pressure in addition to the ambient pressure.
- Preferred embodiments of the present invention are subsequently explained with respect to the accompanying drawings in which:
- Fig. 1a
- illustrates a schematic representation of the sound acquisition scenario and a sound rendering scenario;
- Fig. 1b
- illustrates a loudspeaker placement in an exemplary standardized reproduction set-up with omnidirectional, directional and subwoofer speaker arrangements;
- Fig. 2
- illustrates a flow chart for illustrating the method of capturing an audio scene or rendering an audio scene;
- Fig. 3
- illustrates a schematic representation of a loudspeaker;
- Fig. 4
- illustrates a preferred embodiment of a loudspeaker;
- Fig. 5
- illustrates an implementation of the omnidirectional emitting speaker arrangement;
- Fig. 6
- illustrates a further schematic representation of the loudspeaker additionally having directionally emitting speakers;
- Fig. 7
- illustrates the different sound intensities;
- Fig. 8
- illustrates the schematic representation of a microphone;
- Fig. 9
- illustrates a schematic representation of a controllable combiner useful in combination with the back-to-back electret microphone of
Fig. 8 ; - Fig. 10
- illustrates a detailed implementation of a preferred microphone;
- Fig. 11
- illustrates the outer form of the microphone of
Fig. 10 ; and - Fig. 12
- illustrates a violin having a microphone attached to the F-hole.
-
Fig. 2 illustrates a flow chart of a method of capturing an audio scene. In step 200, a sound having a first directivity is acquired to obtain a first acquisition signal. In step 202, a sound having a second directivity is acquired to obtain a second acquisition signal. Particularly, the first directivity is higher than the second directivity. Furthermore, the steps 200 and 202 are performed so that the first and the second acquisition signals are obtained separately from each other. In step 204, the first and second acquisition signals are separately stored for later use, either for mixing or reproduction or transmission. Alternatively or additionally, step 206 is performed, wherein individual channels in the first acquisition signal are mixed to obtain a first mixed signal and individual channels in the second acquisition signal are mixed to obtain a second mixed signal. Both mixed signals can then be separately stored at the end of step 206. - Alternatively or additionally, the acquisition signals generated by
steps 200 and 202 or the mixed signals generated by step 206 can be transmitted to a loudspeaker setup as indicated in block 208. In step 210, the first mixed signal or the first acquisition signal is rendered by a loudspeaker arrangement having a first directivity, where the first directivity is a high directivity. In step 212, the second acquisition signal or second mixed signal is rendered by a second loudspeaker arrangement having a second directivity, where the second directivity is lower than the first directivity and where the steps 210 and 212 are performed separately from each other. - In an embodiment, the step of acquiring the sound having a first directivity comprises placing
microphones 100 illustrated in Fig. 1a between places for sound sources and places for listeners, and the microphones indicated at 100 in Fig. 1a form a first set of microphones. The individual microphone signals output by the individual microphones 100 form the first acquisition signal. - Furthermore, the
step 202 of Fig. 2 comprises placing a second set of microphones 102 lateral to or above places for sound sources as schematically illustrated in Fig. 1a , where the microphones 102 are placed above the sound scene while microphones 100 are placed in front of the sound scene. The individual microphone signals generated by the set of microphones 102 together form the second acquisition signal. The setup illustrated in Fig. 1a additionally comprises a first mixer 104, a second mixer 106, a storage 108 and a transmission channel 110. The left portion of Fig. 1a up to the transmission channel 110 represents the sound acquisition portion. In the sound rendering portion illustrated at the right hand portion of Fig. 1a , a first processor 112 receiving the first acquisition signal or the first mixed signal is provided. Additionally, a second processor 114 receiving the second acquisition signal or the second mixed signal is provided. The first processor 112 feeds the first speaker arrangement 118 for a directed sound emission and the second processor 114 feeds the second speaker arrangement 120 for an omnidirectional sound emission. Both loudspeaker arrangements are positioned in a replay environment 122, while the microphones 100, 102 are placed at the sound scene 124 or can also be placed within the sound scene 124. -
Fig. 1b illustrates an exemplary standardized loudspeaker set-up in a replay environment (122 in Fig. 1a ). A five-channel environment similar to Dolby Surround or MPEG Surround is indicated, where there is a left loudspeaker 151, a center loudspeaker 152, a right loudspeaker 153, a left surround loudspeaker 154 and a right surround loudspeaker 155. The individual loudspeakers are arranged at standardized places as, for example, known from the ISO/IEC standardization of different loudspeaker setups such as stereo setups, 5.1 setups, 7.1 setups, 7.2 setups, etc. - As indicated in
Fig. 1b , each of the individual loudspeakers 151 to 155 preferably comprises an omnidirectional arrangement, a directional arrangement and a subwoofer, although a single subwoofer would also be useful. In that case, each of the loudspeakers 151 to 155 would only have an omnidirectional arrangement and a directional arrangement, and there would be an additional subwoofer placed somewhere in the room, preferably close to the center speaker. A listener position is indicated in Fig. 1b at 156. - The sound acquisition concept illustrated in
Figs. 1a , 1b and 2 can also be described as the "dual Q" concept, which is an electro-acoustic transmission concept in which the sound energy portions of individual sound sources or of a complete sound scene are separately acquired with respect to the sound energy emitted in the direction of the listener on the one hand and the sound energy emitted more or less omnidirectionally into the room of the sound scene on the other hand. Furthermore, these different signals generated by the different microphone arrays are then separately processed and separately rendered. - When an orchestra is considered, it has been found that the sound energy which is emitted directly in the front direction to the listener is composed mainly of instruments having a high directivity, such as trumpets or trombones, and, additionally, comes from the singers or vocalists. This "high Q" sound portion is detected by
microphones 100 of Fig. 1a , which are placed between the sound sources and the listeners and which are directed in the direction of the sound sources if these microphones have a certain acquisition directivity. It is to be noted here that microphones 100 can be omnidirectional or directed microphones. Directed microphones are preferred, where the maximum acquisition sensitivity is directed to the sound scene or to individual instruments within the sound scene. However, due to the placement of the first set of microphones 100 between the sound scene and the listener alone, a directed sound energy is acquired even when omnidirectional microphones are used. - Instruments having a high directivity but which do not directly emit sound in the front direction, such as a tuba, different horns or flugelhorns and several woodwind instruments and, additionally, instruments having a low directivity, such as string instruments, percussion, gong or triangle, generate a room-like or less directed sound emission. This "low Q" sound portion is detected with a microphone set placed lateral to and/or above the instruments or the sound scene. If microphones having a certain directivity are used, it is preferred that these microphones are directed into the direction of the individual sound sources such as tuba, horns, woodwind instruments, strings, percussion, gong or triangle.
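The separate treatment of the two sound portions (acquisition in steps 200 and 202, mixing in step 206 of Fig. 2) can be sketched as follows. The function names, channel contents and the equal-weight downmix are illustrative assumptions and are not taken from the patent.

```python
# Illustrative sketch of the capture flow (steps 200-206): the channel data
# and the equal-weight downmix below are hypothetical.

def mix(channels, weights=None):
    """Downmix a list of equal-length channels into one channel (step 206)."""
    if weights is None:
        weights = [1.0 / len(channels)] * len(channels)
    length = len(channels[0])
    return [sum(w * ch[i] for w, ch in zip(weights, channels))
            for i in range(length)]

def capture_scene(high_q_channels, low_q_channels):
    """Acquire (steps 200, 202) and mix (step 206) the two directivity
    portions separately; the results are kept apart for storage,
    transmission or rendering."""
    first_mixed = mix(high_q_channels)   # high-directivity ("high Q") mix
    second_mixed = mix(low_q_channels)   # low-directivity ("low Q") mix
    return {"high_q": first_mixed, "low_q": second_mixed}

# Two hypothetical microphone channels per set:
scene = capture_scene([[1.0, 0.0], [0.0, 1.0]], [[2.0, 2.0], [0.0, 0.0]])
```

The key design point, matching the description, is that the high Q and low Q signals never share a mix bus.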
- These individual "high Q" and "low Q" microphone signals, i.e., the first and second acquisition signals, are recorded independently from each other and further processed, e.g., mixed, stored, transmitted or manipulated in other ways. Hence, separate high and low Q mixes can be generated to obtain the first and second mixed signals, and these mixed signals can be stored within the
storage 108 or can be rendered via separate high and low Q speakers. - Dual Q loudspeaker systems illustrated in
Fig. 1b have separate speaker arrangements for the high Q rendering and the low Q rendering. The purpose of the high Q speakers is a direct sound emission directed to the ears of the listeners, while the low Q speaker arrangement should provide an omnidirectional sound emission within the room as far as possible. Therefore, directed sphere emitters or cylinder wave emitters are used for the high Q rendering. For the low Q rendering, omnidirectionally emitting speakers are used, where the omnidirectional characteristic actually provided by the individual speaker arrangements will typically not be an ideal omnidirectional characteristic but at least an approximation thereof. Stated differently, the speakers for the low Q rendering should have a reproduction characteristic which is less directed than the reproduction or emission characteristic of the high Q speaker arrangement. - Furthermore, as indicated at 115 in
Fig. 1a , it is preferred in an embodiment to introduce room effect information into the processor 114 for the reproduction of the low Q sound. For the generation of virtual room effects within the replay environment or replay room, each individual speaker within the omnidirectional arrangement receives a separate signal representing the room effect information, and a convolution of the corresponding low Q signal with the corresponding effect signal is performed. On the other hand, the processor 112 does not receive any room effect information, so that a room effect processing is not performed on the first acquisition signal or first mixed signal but is only preferred for the second acquisition signal or the second mixed signal. - Preferably, the dual Q technology is combined with the icon technology which is described in the context of
Figs. 3 to 7 . The icon technology describes an electro-acoustic concept in which the sound energy generated by sound sources, specifically acoustical musical instruments and the human voice, is reproduced not only in the form of translation but also in the form of rotation and vibration of air or gas molecules or atoms. Preferably, translation, rotation and vibration are detected, transmitted and reproduced. - Subsequently,
Fig. 1a is discussed in more detail. Each microphone set 100, 102 preferably comprises a number of microphones which is, for example, higher than 10 or even higher than 20 individual microphones. Hence, the first acquisition signal and the second acquisition signal each comprise 10 or 20 or more individual microphone signals. These microphone signals are then typically downmixed within the mixers 104, 106. - Furthermore, according to an example which, however, is not part of the scope of the invention, instead of or in addition to placing the
microphones 102 above or lateral to the sound scene and placing the microphones 100 in front of the sound scene, microphones can also be placed selectively in corresponding proximity to the corresponding instruments. - When the audio scene, for example, comprises an orchestra having a first set of instruments emitting sound with a higher directivity and a second set of instruments emitting sound with a lower directivity, then the step of acquiring comprises placing the first set of microphones closer to the instruments of the first set of instruments than to the instruments of the second set of instruments to obtain the first acquisition signal, and placing the second set of microphones closer to the instruments of the second set of instruments, i.e., the low directivity emitting instruments, than to the first set of instruments to obtain the second acquisition signal.
Depending on the implementation, the directivity, as defined by a directivity factor related to a sound source, is the ratio of the radiated sound intensity at a remote point on the principal axis of a sound source to the average intensity of the sound transmitted through a sphere passing through the remote point and concentric with the sound source. Preferably, the frequency is stated so that the directivity factor is obtained for individual subbands. - Regarding a sound acquisition by microphones, the directivity factor is the ratio of the square of the voltage produced by sound waves arriving parallel to the principal axis of a microphone or other receiving transducer to the mean square of the voltage that would be produced if sound waves having the same frequency and mean square pressure were arriving simultaneously from all directions with random phase. Preferably, the frequency is stated in order to have a directivity factor for each individual subband.
- Regarding sound emitters such as speakers, the directivity factor is the ratio of the radiated sound intensity at a remote point on the principal axis of a loudspeaker or other transducer to the average intensity of the sound transmitted through a sphere passing through the remote point and concentric with the transducer. Preferably, the frequency is given in this case as well.
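The emitter definition above can be illustrated numerically: the directivity factor is the on-axis intensity divided by the mean intensity over the surrounding sphere. The intensity sample values below are invented for illustration only.

```python
# Sketch of the emitter directivity factor: on-axis intensity divided by
# the mean intensity over a sphere through the same remote point.
# The intensity samples are hypothetical.

def directivity_factor(on_axis_intensity, sphere_intensities):
    mean_intensity = sum(sphere_intensities) / len(sphere_intensities)
    return on_axis_intensity / mean_intensity

# A perfectly omnidirectional emitter has a factor of 1:
omni = directivity_factor(1.0, [1.0, 1.0, 1.0, 1.0])

# A directed emitter concentrates energy on its principal axis:
directed = directivity_factor(4.0, [4.0, 1.0, 1.0, 2.0])  # mean intensity 2.0
```

In practice the sphere average would be taken over many measurement points and per frequency subband, as the description notes.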
- However, other definitions of the directivity factor exist as well, which all have the same characteristic but result in different quantitative results. For example, for a sound emitter, the directivity factor is a number indicating the factor by which the radiated power would have to be increased if the directed emitter were replaced by an isotropic radiator, assuming the same field intensity for the actual sound source and the isotropic radiator.
- For the receiving case, i.e., for a microphone, the directivity factor is a number indicating the factor by which the input power of the receiver/microphone for the direction of maximum reception exceeds the mean power obtained by averaging the power received from all directions of reception if the field intensity at the microphone location is equal for any direction of wave incidence.
- The directivity factor is a quantitative characterization of the capacity of a sound source to concentrate the radiated energy in a given direction or the capacity of a microphone to select signals incident from a given direction.
- When the measure of the directivity factor ranges from 0 to 1, then the directivity factor related to the first acquisition signal is preferably greater than 0.6 and the directivity factor related to the second acquisition signal is preferably lower than 0.4. Stated differently, it is preferred to place the two different sets of microphones so that values of more than 0.6 for the first acquisition signal and less than 0.4 for the second acquisition signal are obtained. Naturally, it will practically not be possible to obtain a first acquisition signal having only directed sound and no omnidirectional sound. On the other hand, it will not be possible to obtain a second acquisition signal having only omnidirectionally emitted sound and no directionally emitted sound. However, the microphones are manufactured and placed in such a way that the directionally emitted sound dominates the omnidirectionally emitted sound in the first acquisition signal and that the omnidirectionally emitted sound dominates over the directionally emitted sound in the second acquisition signal.
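The description uses a 0-to-1 directivity measure without fixing its formula. One plausible reading, assumed purely for illustration here, is the fraction of total signal energy stemming from directed sound; the preferred thresholds of 0.6 and 0.4 then become a simple placement check:

```python
# Assumption: the 0-to-1 directivity measure is read here as the fraction
# of total signal energy that stems from directed sound. This reading and
# all names below are illustrative, not fixed by the patent.

def directed_fraction(directed_energy, omni_energy):
    return directed_energy / (directed_energy + omni_energy)

def placement_ok(first_fraction, second_fraction):
    """Preferred thresholds: > 0.6 for the first acquisition signal,
    < 0.4 for the second acquisition signal."""
    return first_fraction > 0.6 and second_fraction < 0.4

first = directed_fraction(7.0, 3.0)   # directed sound dominates: 0.7
second = directed_fraction(3.0, 7.0)  # omnidirectional sound dominates: 0.3
```

With these invented energy values, the dominance condition of the paragraph above is satisfied.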
- A method of rendering an audio scene comprises a step of providing a first acquisition signal related to sound having a first directivity or providing a first mixed signal related to sound having the first directivity. The method of rendering additionally comprises providing a second acquisition signal related to sound having a second directivity or providing a second mixed signal related to sound having the second directivity, where the first directivity is higher than the second directivity. The steps of providing can actually be implemented by receiving, in the sound rendering portion of
Fig. 1a , a transmitted acquisition signal or a transmitted mixed signal, or by reading, from a storage, the first acquisition signal or the first mixed signal on the one hand, and the second acquisition signal or the second mixed signal on the other hand. - Furthermore, the method of rendering comprises a step of generating (210, 212) a first sound signal from the first acquisition signal or the first mixed signal and a step of generating a second sound signal from the second acquisition signal or the second mixed signal. For generating the first sound signal, a
directional speaker arrangement 118 is used, and for generating the second sound signal an omnidirectional speaker arrangement 120 is used. Preferably, the directivity of the directional speaker arrangement 118 is higher than the directivity of the omnidirectional speaker arrangement 120, although it is clear that an ideal omnidirectional emission characteristic can hardly be generated by existing loudspeaker systems; the loudspeaker of Figs. 3 to 6 , however, provides an excellent approximation of an ideal omnidirectional loudspeaker emission characteristic.
- Subsequently, reference is made to
Figs. 3 to 7 for illustrating a preferred sound rendering and a preferred loudspeaker. - For example, brass instruments are instruments with a mainly translatory sound generation. The human voice generates a translatorial and a rotational portion of the air molecules. For the transmission of the translation, existing microphones and speakers with piston-like operating membranes and a back enclosure are available.
- The rotation is generated mainly by playing bow instruments, guitar, a gong or a piano due to the acoustic short-circuit of the corresponding instrument. The acoustic short-circuit is, for example, performed via the F-holes of a violin, the sound hole for the guitar or between the upper and lower surface of the sounding board at a grand or normal piano or by the front and back phase of a gong. When generating a human voice, the rotation is excited between mouth and nose. The rotation movement is typically limited to the medium sound frequencies and can be preferably acquired by microphones having a figure of eight characteristic, since these microphones additionally have an acoustic short-circuit. The reproduction is realized by mid-frequency speakers with freely vibratable membranes without having a backside enclosure.
- The vibration is generated by violins or is strongly generated by xylophones, cymbals and triangles. The vibrations of the atoms within a molecule is generation up to the ultrasound region above 60 kHz and even up to 100 kHz.
- Although this frequency range is typically not perceivable by the human hearing mechanism, nevertheless level and frequency-dependent demodulations effects and other effects take place, which are then made perceivable, since they actually occur within the hearing range extending between 20 Hz and 20 kHz. The authentic transmission of vibration is available by extending the frequency range above the hearing limit at about 20 kHz up to more than 60 or even 100 kHz.
- The detection of the directional sound portion for a correct location of sound sources requires a directional microphoning and speakers with a high emission quality factor or directivity in order to only put sound to the ears of the listeners as far as possible. For the directional sound, a separate mixing is generated and reproduced via separate speakers.
- The detection of the room-like energy is realized by a microphone setup placed above or lateral with respect to the sound sources. For the transmission of the room-like portion, a separate mixing is generated and reproduced by speakers having a low emission quality factor (sphere emitters) in a separate manner.
- Subsequently, a preferred loudspeaker is described with respect to
Fig. 3 . The loudspeaker comprises a longitudinal enclosure 300 comprising at least one subwoofer speaker 310 for emitting lower sound frequencies. Furthermore, a carrier portion 312 is provided at a top end 310a of the longitudinal enclosure. Furthermore, the longitudinal enclosure has a bottom end 310b, and the longitudinal enclosure is preferably closed throughout its shape and is particularly closed by a bottom plate 310b and the upper plate 310a, on which the carrier portion 312 is provided. Furthermore, an omnidirectionally emitting speaker arrangement 314 is provided which comprises individual speakers for emitting higher sound frequencies which are arranged in different directions with respect to this longitudinal enclosure 300, wherein the speaker arrangement is fixed to the carrier portion 312 and is not surrounded by the longitudinal enclosure 300 as illustrated. Preferably, the longitudinal enclosure is a cylindrical enclosure with a circular cross-section throughout the length of the cylindrical enclosure 300. Preferably, the longitudinal enclosure has a length greater than 50 cm or 100 cm and a lateral dimension greater than 20 cm. As illustrated in Fig. 4 , a preferred length of the longitudinal enclosure is 175 cm, its diameter is 30 cm and the dimension of the carrier in the direction of the longitudinal enclosure is 15 cm, and the speaker arrangement 314 is ball-shaped and has a diameter of 30 cm, which is the same as the diameter of the longitudinal enclosure. The carrier portion 312 preferably comprises a base portion having matching dimensions with the longitudinal enclosure 300. Therefore, when the longitudinal enclosure is a round cylinder, then the base portion of the carrier is a circle matching the diameter of the longitudinal enclosure. However, when the longitudinal enclosure is square-shaped, then the lower portion of the carrier 312 is square-shaped as well and matches in dimensions with the longitudinal enclosure 300. - Furthermore, the
carrier 312 comprises a tip portion having a cross-sectional area which is less than 20 % of the cross-sectional area of the base portion, where the speaker arrangement 314 is fixed to the tip portion. Preferably, as illustrated in Fig. 4 , the carrier 312 is cone-shaped so that the entire loudspeaker illustrated in Fig. 4 looks like a pencil having a ball on top. This is preferable due to the fact that the connection between the omnidirectional speaker arrangement 314 and the subwoofer-provided enclosure is as small as possible, since only the tip portion 312b of the carrier is in contact with the speaker arrangement 314. Hence, there is a good sound decoupling between the speaker arrangement and the longitudinal enclosure. Furthermore, it is preferred to place the longitudinal enclosure below the speaker arrangement, since the omnidirectional emission is even better when it takes place from above rather than from below the longitudinal enclosure. - The
speaker arrangement 314 has a sphere-like carrier structure 316, which is also illustrated in Fig. 5 for a further embodiment. Individual loudspeakers are mounted so that each individual loudspeaker emits in a different direction. In order to illustrate the carrier structure 316, Fig. 4 illustrates several planes, where each plane is directed into a different direction and each plane represents a single speaker with a membrane, such as a straightforward piston-like speaker, but without any back casing for this speaker. The carrier structure can be implemented specifically as illustrated in Fig. 5 where, again, the speaker rooms or planes 318 are illustrated. Furthermore, it is preferred that the structure as illustrated in Fig. 5 additionally comprises many holes 320 so that the carrier structure 316 only fulfills its functionality as a carrier structure, but does not influence the sound emission and particularly does not prevent the membranes of the individual speakers in the speaker arrangement 314 from being freely suspended. Then, due to the fact that freely suspended membranes generate a good rotation component, a useful and high quality rendering of rotational sound can be produced. Therefore, the carrier structure is preferably as little bulky as possible so that it only fulfills its functionality of structurally supporting the individual piston-like speakers without limiting the possible excursions of the individual membranes.
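One way to obtain a set of evenly distributed emission directions for the individual speakers on the sphere-like carrier structure described above is to use the twelve face normals of a pentagonal dodecahedron, which coincide with the vertices of an icosahedron. This construction is a geometric illustration and is not taken from the patent.

```python
import itertools
import math

# The 12 face normals of a pentagonal dodecahedron coincide with the 12
# icosahedron vertices: the cyclic permutations of (0, +-1, +-phi),
# normalized to unit length. Purely illustrative geometry.

PHI = (1 + math.sqrt(5)) / 2  # golden ratio

def dodecahedron_directions():
    dirs = []
    for s1, s2 in itertools.product((1, -1), repeat=2):
        for vec in ((0, s1, s2 * PHI), (s1, s2 * PHI, 0), (s2 * PHI, 0, s1)):
            norm = math.sqrt(sum(c * c for c in vec))
            dirs.append(tuple(c / norm for c in vec))
    return dirs

directions = dodecahedron_directions()
```

By symmetry the twelve unit vectors sum to zero, which is one way of seeing that the arrangement has no preferred direction.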
Preferably, the speaker arrangement comprises at least six individual speakers and particularly even twelve individual speakers arranged in twelve different directions, where, in this embodiment, the speaker arrangement 314 comprises a pentagonal dodecahedron (i.e., a body with 12 equally distributed surfaces) having twelve individual areas, wherein each individual area is provided with an individual speaker membrane. Importantly, the loudspeaker arrangement 314 does not comprise a loudspeaker enclosure, and the individual speakers are held by the supporting structure 316 so that the membranes of the individual speakers are freely suspended. - Furthermore, as illustrated in
Fig. 6 in a further example, the longitudinal enclosure 300 not only comprises the subwoofer, but additionally comprises electronic parts necessary for feeding the subwoofer speaker and the speakers of the speaker arrangement 314. Additionally, in order to provide the speaker system as, for example, illustrated in Fig. 1b , the longitudinal enclosure 300 does not only comprise a single subwoofer. Instead, one or more subwoofer speakers can be provided in the front of the enclosure, where the enclosure has openings indicated at 310 in Fig. 6 , which can be covered by any kind of covering material such as a foam-like foil. The whole volume of the closed enclosure serves as a resonance body for the subwoofer speakers. The enclosure additionally comprises one or more directional speakers for medium and/or high frequencies indicated at 602 in Fig. 6 , which are preferably aligned with the one or more subwoofers indicated at 310 in Fig. 6 . These directional speakers are arranged in the longitudinal enclosure 300 and, if there is more than one such speaker, then these speakers are preferably arranged in a line as illustrated in Fig. 6 and the entire loudspeaker is arranged with respect to the listener so that the speakers 602 are facing the listeners. Then, the individual speakers in the speaker arrangement 314 are provided with the second acquisition signal or second mixed signal discussed in the context of Fig. 1 and Fig. 2 , and the directional speakers are provided with the corresponding first acquisition signal or first mixed signal. Hence, when there are five speakers as illustrated in Fig. 6 positioned at the five places indicated in Fig. 1b , then the situation in Fig. 1b exists where each individual loudspeaker has an omnidirectional arrangement (316), a directional arrangement (602) and a subwoofer 310. 
If, for example, the first mixed signal comprises five channels, the second mixed signal comprises five channels as well and there is additionally provided one subwoofer channel, then each subwoofer 310 of the five speakers in Fig. 1b receives the same signal, each of the directional speakers 602 in one loudspeaker receives the corresponding individual signal of the first mixed signal, and each of the individual speakers in speaker arrangement 314 receives the corresponding individual signal of the second mixed signal. Preferably, the three speakers 602 are arranged in a d'Appolito arrangement, i.e., the upper and the lower speakers are mid-frequency speakers and the speaker in the middle is a high-frequency speaker.
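The channel routing described above can be sketched as follows; the channel labels are placeholders, not signal names from the patent.

```python
# Sketch of the routing for five loudspeakers (Fig. 1b): every subwoofer
# gets the single subwoofer channel, the directional speakers of
# loudspeaker i get channel i of the first (high-Q) mix, and the
# omnidirectional arrangement of loudspeaker i gets channel i of the
# second (low-Q) mix. Channel labels below are hypothetical.

def route(first_mix, second_mix, subwoofer_channel):
    assert len(first_mix) == len(second_mix)
    return [{"directional": first_mix[i],
             "omnidirectional": second_mix[i],
             "subwoofer": subwoofer_channel}
            for i in range(len(first_mix))]

feeds = route(first_mix=["L1", "C1", "R1", "Ls1", "Rs1"],
              second_mix=["L2", "C2", "R2", "Ls2", "Rs2"],
              subwoofer_channel="SUB")
```

Each entry of `feeds` then corresponds to one combined loudspeaker at a standardized position of Fig. 1b.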
Alternatively, however, the loudspeaker of Fig. 6 without the directional speaker 602 can be used in order to implement the omnidirectional arrangement in Fig. 1b for each loudspeaker place, and an additional directional speaker can be placed, for example, close to the center position only or close to each loudspeaker position in order to reproduce the high directivity sound separately from the low directivity sound. - The enclosure furthermore comprises a
further speaker 604 which is suspended at an upper portion of the enclosure and which has a freely suspended membrane. This speaker is a low/mid speaker for a low/mid frequency range between 80 and 300 Hz and preferably between 100 and 300 Hz. This additional speaker is advantageous since, due to the freely suspended membrane, the speaker generates rotation stimulation/energy in the low/mid frequency range. This rotation enhances the rotation generated by the speakers of the arrangement 314 at low/mid frequencies. This speaker 604 receives the low/mid frequency portion of the signal provided to the speakers at 314, e.g., the second acquisition signal or the second mixed signal. - In a preferred example with a single subwoofer, the subwoofer is a twelve-inch subwoofer in the closed
longitudinal enclosure 300 and the speaker arrangement 314 is a pentagonal dodecahedron medium/high speaker arrangement with freely vibratable medium-frequency membranes.
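Extracting the low/mid portion (roughly 80 to 300 Hz) of the second mixed signal for the freely suspended speaker 604 described above can be sketched with a simple FFT brick-wall filter. A real crossover would use gentler filter slopes; the signal and rates below are invented.

```python
import numpy as np

# Sketch of a low/mid band split (roughly 80-300 Hz) for speaker 604,
# using an FFT brick-wall filter purely for illustration.

def lowmid_portion(signal, sample_rate, lo=80.0, hi=300.0):
    spectrum = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sample_rate)
    spectrum[(freqs < lo) | (freqs > hi)] = 0.0  # zero out-of-band bins
    return np.fft.irfft(spectrum, n=len(signal))

rate = 8000                       # hypothetical sample rate, Hz
t = np.arange(rate) / rate        # one second of samples
mixed = np.sin(2 * np.pi * 200 * t) + np.sin(2 * np.pi * 2000 * t)
band = lowmid_portion(mixed, rate)  # keeps 200 Hz, removes 2000 Hz
```

The 200 Hz component falls inside the pass band and survives; the 2000 Hz component is removed.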
- Subsequently, reference is made to
Figs. 8 to 12 in order to illustrate a microphone which can preferably be used within the first or second microphone set illustrated in Fig. 1a at 100 or 102, or which can be used for any other microphone purpose. - The microphone comprises a first
electret microphone portion 801 having a first free space and a second electret microphone portion 802 having a second free space. The first and the second microphone portions 801, 802 are arranged back-to-back, and a vent channel 804 is provided for venting the first free space and/or the second free space. Furthermore, first contacts for deriving an electrical signal 806c are arranged at the first microphone portion 801, and second contacts for deriving an electrical signal 808b are arranged at the second microphone portion 802. Hence, Fig. 8 illustrates a vented back-to-back electret microphone arrangement. Preferably, the vent channel 804 comprises two individual vertical vent channel portions 804b, 804c and a horizontal vent channel portion 804a. This arrangement allows the vent channel to be produced within the corresponding counter electrodes or microphone backsides before the individually produced first and second microphone portions 801, 802 are joined together. -
Fig. 10 illustrates a cross-section through a microphone implemented in accordance with the principles illustrated in Fig. 8 . Preferably, the first electret microphone portion 801 comprises, from top to bottom in Fig. 10 , a first metallization 810 on a foil 811 which is placed on top of a spacer 812. The spacer defines the first vented free space 813 of the first microphone portion 801. The spacer 812 is placed on top of an electret foil 814 which is placed on a counter electrode or "back plate" indicated at 816. Elements 810 to 816 thus represent the first electret microphone portion 801. - The second
electret microphone portion 802 is preferably constructed in the same manner and comprises, from bottom to top, a metallization 820, a foil 821 and a spacer 822 defining a second vented free space 823. On the spacer 822 an electret foil 824 is placed, and above the electret foil 824 a counter electrode 826 is placed which forms the back plate of the second microphone portion. Hence, elements 820 to 826 represent the second electret microphone portion 802 of Fig. 8 in an embodiment. - Preferably, the first and the second microphone portions have a plurality of
vertical vent portions 804b, 804c, as illustrated in Fig. 10 . The number and arrangement of the vertical vent portions over the area of the microphone portions can be selected depending on the needs. However, it is preferred to use an even distribution of the vertical vent portions over the area, as illustrated in Fig. 10 in a cross-section. Furthermore, the horizontal vent portion 804a is indicated in Fig. 10 as well, and the horizontal vent portion is arranged so that it communicates with the vertical vent portions, connects the vertical vent portions and therefore connects the vented free spaces 813, 823 to the ambient pressure. This ensures that the movement of the movable electrode formed by the metallization 810 and the foil 811 of the upper microphone, or the movement of the movable electrode formed by the metallization 820 and the foil 821 of the lower microphone, is not damped by a closed free space. Instead, when the membrane moves, a pressure equalization is always obtained via the vertical and horizontal vent portions 804a to 804c.
metallization 810, 820 is changed. Due to the persistent charge Q on the electret foils 814, 824, voltages U1, U2 are generated according to the equation Q = C x U, which means that U is equal to Q/C. The voltage U1 is proportional to the movement of the corresponding movable electrode. In order to avoid a damping of the membrane movement, the vertical vent portions connect the vented free spaces, and horizontal vent portions 804a are provided which communicate with the vertical vent portions and the free spaces.
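The relation Q = C x U from the paragraph above can be illustrated numerically. The charge and capacitance values below are purely hypothetical and chosen only to show how a fixed electret charge turns a membrane-induced capacitance change into a voltage change; they are not taken from the patent.

```python
# Hypothetical illustration of U = Q / C for a back-electret capsule:
# the electret charge Q is approximately constant, so a change in
# capacitance C caused by membrane movement changes the voltage U.

Q = 50e-12          # persistent electret charge in coulombs (hypothetical)
C_rest = 10e-12     # capsule capacitance at rest in farads (hypothetical)
C_moved = 9e-12     # capacitance when the membrane moves away from the back plate

U_rest = Q / C_rest     # about 5.0 V
U_moved = Q / C_moved   # about 5.56 V

delta_U = U_moved - U_rest  # the signal component picked up by the combiner
print(round(U_rest, 3), round(U_moved, 3), round(delta_U, 3))
```

The point of the sketch is only the proportionality: a smaller gap capacitance yields a higher voltage, so the membrane excursion is converted into a voltage swing without any external polarization supply.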
Fig. 9 illustrates a controllable signal combiner 900, which receives the first microphone signal from the first microphone portion and the second microphone signal from the second microphone portion. The microphone signals can be voltages. Furthermore, the controllable combiner 900 comprises a first weighting stage 902 and a second weighting stage 904. Each weighting stage is configured for applying a certain weighting factor W1, W2 to the corresponding microphone signal. The outputs of the weighting stages 902, 904 are provided to an adder 906, which adds them to produce the combined output signal. Furthermore, the controllable combiner 900 preferably comprises a control input 908 which is connected to the weighting stages 902, 904 in order to set the weighting factors depending on a command applied to the control input. Fig. 9 additionally illustrates a table in which individual weighting factors are applied to the microphone signals and which outlines the characteristic obtained in the combined output signal. It becomes clear from the table in Fig. 9 that when an in-phase addition of both microphone channels or microphone signals is performed, i.e. when the weighters 902, 904 apply the same weighting factor 1 or -1, an omnidirectional characteristic of the back-to-back electret microphone arrangement is obtained. However, when an out-of-phase addition is performed, as indicated by weighting factors having different signs, a figure-of-eight characteristic is obtained. Arbitrarily designed cardioid-like characteristics can be obtained by different level settings combined with out-of-phase additions, i.e. by different weighting factors, also different from one, instructed by a corresponding control signal at control input 908.
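The behavior of the weighting table in Fig. 9 can be sketched in a few lines. The pressure-plus-gradient capsule model below is an illustrative assumption, not the patented circuit: each back-to-back capsule is modeled as picking up the sound pressure plus or minus a gradient component proportional to cos(theta).

```python
import math

def capsule_signals(theta, pressure=1.0, gradient=1.0):
    """Simplified model (assumption): the two back-to-back capsules see the
    pressure plus/minus a pressure-gradient component along the main axis."""
    s1 = pressure + gradient * math.cos(theta)
    s2 = pressure - gradient * math.cos(theta)
    return s1, s2

def combine(theta, w1, w2):
    """Weighted sum of the two capsule signals, as in the combiner of Fig. 9."""
    s1, s2 = capsule_signals(theta)
    return w1 * s1 + w2 * s2

# In-phase addition (w1 = w2 = 1): the gradient terms cancel, the output is
# independent of the angle -> omnidirectional characteristic.
omni_front = combine(0.0, 1, 1)
omni_side = combine(math.pi / 2, 1, 1)

# Out-of-phase addition (w1 = 1, w2 = -1): the pressure terms cancel, the
# output follows cos(theta) with a null at 90 degrees -> figure-of-eight.
fig8_front = combine(0.0, 1, -1)
fig8_side = combine(math.pi / 2, 1, -1)

# Unequal weights: cardioid-like pattern with the null moved to the rear.
card_front = combine(0.0, 1.0, 0.0)
card_rear = combine(math.pi, 1.0, 0.0)

print(omni_front, omni_side, fig8_front, fig8_side, card_front, card_rear)
```

Under this model, setting the two weighting factors via the control input is all that is needed to switch the arrangement between omnidirectional, figure-of-eight and cardioid-like characteristics.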
Naturally, an actually provided signal combiner does not necessarily have to have the controllability feature. Instead, the in-phase, out-of-phase or weighted addition functionality of the combiner can be correspondingly hardwired, so that each microphone has a certain output signal characteristic in the combined output signal, but this microphone cannot be reconfigured. However, when the controllable combiner has the switching functionality illustrated in Fig. 9, a configurable microphone is obtained, where a basic configurability can for example be obtained by providing only one of the two weighters 902, 904, while the other microphone signal is fed directly to the adder 906 so that an in-phase addition is obtained. - Preferably, the electret microphone is miniaturized and only has dimensions as set forth in
Fig. 11. Preferably, the length dimension is lower than 20 mm and can even be equal to 10 mm. Furthermore, the width dimension is preferably lower than 20 mm and can even be equal to 10 mm, and the height dimension is lower than 10 mm and can even be equal to 5 mm. Miniaturized double microphones using electret technology can advantageously be placed at critical places such as the F-holes of a violin, as illustrated in Fig. 12. Fig. 12 particularly illustrates a violin with two F-holes 1200, where in one F-hole 1200 a microphone as illustrated in Fig. 8 is placed. If the microphone does not have the signal combiner, the first and the second microphone signals can be output by the microphone; if the microphone has the combiner, the combined output signal is output. The output can take place via a wireless or wired connection. The transmitter for the wireless connection does not necessarily have to be placed within the F-hole as well, but can be placed at any other suitable place of the violin. Hence, as indicated in Fig. 12, a close-up microphoning of acoustical instruments can be realized. - Furthermore, in order to fully detect the vibration energy, the icon microphone should have an audio bandwidth of 60 kHz and preferably up to 100 kHz. To this end, the
foils 811, 821 have to be attached to the spacer in a correspondingly stiff manner. The microphone illustrated in Fig. 8 is useful for transmitting the translation energy portion, the rotation energy portion and the vibration energy portion in accordance with the icon criteria. In contrast to prior art technologies, where only condenser microphones exist for this purpose, the electret microphone is considerably smaller and therefore considerably more useful when it comes to flexibility regarding placement and so on. The sound acquisition, sound transmission and sound generation result in a substantially more natural rendering of, in particular, acoustical instruments and the human voice. The often heard complaints about a "speaker sound" are no longer pertinent, since the inventive concept results in a sound rendering without the typical "speaker sound". Furthermore, the usage of sound transducers with enhanced frequency ranges at the acquisition stage and at the sound reproduction stage results in an enhanced reproduction of the original sound source. Specifically, the liveliness of the original sound source and the entire sensational intensity of the reproduction are considerably enhanced. Listening tests have shown that the inventive concept results in a much more comfortable sound experience. Furthermore, listening tests have shown that the sound level when reproducing translation, rotation and vibration can be reduced by up to 10 dB compared to the sound level of prior art systems rendering only translational sound energy, without a subjective loss of loudness perception. The reduction of the sound level additionally results in a reduced power consumption, which is particularly useful for portable devices, and additionally the danger of damage to the human hearing system is considerably reduced. - Depending on certain implementation requirements, embodiments of the invention can be implemented in hardware or in software.
The implementation can be performed using a digital storage medium, for example a floppy disk, a DVD, a CD, a ROM, a PROM, an EPROM, an EEPROM or a FLASH memory, having electronically readable control signals stored thereon, which cooperate (or are capable of cooperating) with a programmable computer system such that the respective method is performed.
Some embodiments according to the invention comprise a non-transitory data carrier having electronically readable control signals, which are capable of cooperating with a programmable computer system such that one of the methods described herein is performed, or having stored thereon the first or second acquisition signals or the first or second mixed signals. - Generally, embodiments of the present invention can be implemented as a computer program product with a program code, the program code being operative for performing one of the methods when the computer program product runs on a computer. The program code may for example be stored on a machine-readable carrier.
- Other embodiments comprise the computer program for performing one of the methods described herein, stored on a machine readable carrier.
- In other words, an embodiment of the inventive method is, therefore, a computer program having a program code for performing one of the methods described herein, when the computer program runs on a computer.
- A further embodiment of the inventive methods is, therefore, a data carrier (or a digital storage medium, or a computer-readable medium) comprising, recorded thereon, the computer program for performing one of the methods described herein.
- A further embodiment of the inventive method is, therefore, a data stream or a sequence of signals representing the computer program for performing one of the methods described herein. The data stream or the sequence of signals may for example be configured to be transferred via a data communication connection, for example via the Internet.
- A further embodiment comprises a processing means, for example a computer, or a programmable logic device, configured to or adapted to perform one of the methods described herein.
- A further embodiment comprises a computer having installed thereon the computer program for performing one of the methods described herein.
- In some embodiments, a programmable logic device (for example a field programmable gate array) may be used to perform some or all of the functionalities of the methods described herein. In some embodiments, a field programmable gate array may cooperate with a microprocessor in order to perform one of the methods described herein. Generally, the methods are preferably performed by any hardware apparatus.
- The above described embodiments are merely illustrative of the principles of the present invention. It is understood that modifications and variations of the arrangements and the details described herein will be apparent to others skilled in the art. It is the intent, therefore, to be limited only by the scope of the appended patent claims and not by the specific details presented by way of description and explanation of the embodiments herein.
Claims (6)
- Method of capturing an audio scene, comprising:
acquiring (200) sound having a first directivity to obtain a first acquisition signal;
acquiring (202) sound having a second directivity to obtain a second acquisition signal, wherein the first directivity is higher than the second directivity, wherein the steps of acquiring (200, 202) are performed simultaneously, and wherein both acquisition signals together represent the audio scene; and:
separately storing (204) the first and the second acquisition signals; or
mixing (206) individual channels in the first acquisition signal to obtain a first mixed signal, mixing individual channels in the second acquisition signal to obtain a second mixed signal and separately storing the first and the second mixed signals, or transmitting (208) the first and the second mixed signals to a loudspeaker setup, or rendering (210) the first mixed signal using a loudspeaker arrangement having a first directivity and simultaneously rendering (212) the second mixed signal using a loudspeaker arrangement (120) having a second directivity, wherein the second loudspeaker directivity is lower than the first loudspeaker directivity; or
transmitting (208) the first and the second acquisition signals to a loudspeaker setup; or
rendering (210) the first acquisition signal using a loudspeaker arrangement (118) having a first directivity and simultaneously rendering (212) the second acquisition signal using a loudspeaker arrangement (120) having a second directivity, wherein the second loudspeaker directivity is lower than the first loudspeaker directivity,
wherein the step of acquiring (200) the sound having the first directivity comprises placing a first set of microphones (100) between places for sound sources and places for listeners, or directing a first set of directed microphones so that a maximum sensitivity is directed to the audio scene, and acquiring microphone signals from the first set as the first acquisition signal; and
wherein the step of acquiring (202) the sound having the second directivity comprises placing a second set of microphones (102) laterally to or above the places for the sound sources, where microphone signals from the second set are the second acquisition signal.
- Method in accordance with claim 1,
wherein the first and the second acquisition signals each comprise a plurality of individual acquisition signals, wherein the first and the second mixed signals each comprise a plurality of individual mixed signals, and
wherein the step of mixing (206) comprises a downmixing operation so that a number of individual mixed signals is lower than a number of individual acquisition signals of the corresponding acquisition signal. - Method of claim 2, wherein the step of mixing (206) comprises mixing each acquisition signal into a 7.X format, a 5.X format or a stereo format, so that the audio scene is represented by a corresponding format for the sound having the first directivity and for the sound having the second directivity, wherein X is an integer greater than or equal to zero.
- Method in accordance with one of the preceding claims,
wherein the directivity is defined by a directivity factor as a ratio of the radiated sound intensity at a remote point on a principal axis of a sound source to the average intensity of the sound transmitted through a sphere passing through the remote point and concentric with the sound source,
wherein the first acquisition signal has a higher directivity factor than the second acquisition signal. - Method of claim 4,
wherein the directivity factor related to the first acquisition signal is greater than 0.6, and wherein the directivity factor related to the second acquisition signal is lower than 0.4. - Computer program for performing, when running on a computer, the method of capturing an audio scene of claim 1.
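The directivity factor defined in claim 4 can be approximated numerically by integrating an assumed intensity pattern over the sphere. The sketch below is an illustration under assumptions: it uses the classical, unnormalized on-axis-to-average definition (which yields values of 1 and above, whereas the thresholds 0.6 and 0.4 in claim 5 suggest a normalized scale), and the cardioid intensity pattern is a hypothetical example, not taken from the patent.

```python
import math

def directivity_factor(intensity, n=2000):
    """Ratio of the on-axis intensity to the intensity averaged over a sphere,
    approximated by midpoint integration over the polar angle theta, assuming
    rotational symmetry about the principal axis (claim 4 definition)."""
    on_axis = intensity(0.0)
    total, solid_angle = 0.0, 0.0
    for i in range(n):
        theta = (i + 0.5) * math.pi / n
        d_omega = 2.0 * math.pi * math.sin(theta) * (math.pi / n)  # ring solid angle
        total += intensity(theta) * d_omega
        solid_angle += d_omega
    return on_axis / (total / solid_angle)

# An omnidirectional source radiates equally in all directions.
omni = directivity_factor(lambda theta: 1.0)  # -> 1.0

# Hypothetical cardioid intensity pattern |(1 + cos(theta)) / 2|^2.
cardioid = directivity_factor(lambda theta: ((1 + math.cos(theta)) / 2) ** 2)  # -> 3.0

print(round(omni, 3), round(cardioid, 3))
```

The example reproduces the textbook results: an omnidirectional pattern has a directivity factor of 1, while a cardioid concentrates three times the average intensity on its principal axis.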
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
DK17191635.6T DK3288295T3 (en) | 2011-03-30 | 2012-03-29 | Procedure for reproducing a sound scene |
EP17191635.6A EP3288295B1 (en) | 2011-03-30 | 2012-03-29 | Method for rendering an audio scene |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US201161469436P | 2011-03-30 | 2011-03-30 | |
PCT/EP2012/055697 WO2012130985A1 (en) | 2011-03-30 | 2012-03-29 | Method and apparatus for capturing and rendering an audio scene |
Related Child Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
EP17191635.6A Division EP3288295B1 (en) | 2011-03-30 | 2012-03-29 | Method for rendering an audio scene |
Publications (2)
Publication Number | Publication Date |
---|---|
EP2692154A1 EP2692154A1 (en) | 2014-02-05 |
EP2692154B1 true EP2692154B1 (en) | 2017-09-20 |
Family
ID=45954639
Family Applications (5)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
EP12714272.7A Active EP2692144B1 (en) | 2011-03-30 | 2012-03-29 | Loudspeaker |
EP16192275.2A Active EP3151580B1 (en) | 2011-03-30 | 2012-03-29 | Loudspeaker |
EP12718101.4A Active EP2692154B1 (en) | 2011-03-30 | 2012-03-29 | Method for capturing and rendering an audio scene |
EP17191635.6A Active EP3288295B1 (en) | 2011-03-30 | 2012-03-29 | Method for rendering an audio scene |
EP12714273.5A Active EP2692151B1 (en) | 2011-03-30 | 2012-03-29 | Electret microphone |
Family Applications Before (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
EP12714272.7A Active EP2692144B1 (en) | 2011-03-30 | 2012-03-29 | Loudspeaker |
EP16192275.2A Active EP3151580B1 (en) | 2011-03-30 | 2012-03-29 | Loudspeaker |
Family Applications After (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
EP17191635.6A Active EP3288295B1 (en) | 2011-03-30 | 2012-03-29 | Method for rendering an audio scene |
EP12714273.5A Active EP2692151B1 (en) | 2011-03-30 | 2012-03-29 | Electret microphone |
Country Status (5)
Country | Link |
---|---|
US (4) | US10469924B2 (en) |
EP (5) | EP2692144B1 (en) |
DK (1) | DK3288295T3 (en) |
ES (4) | ES2653344T3 (en) |
WO (3) | WO2012130989A1 (en) |
Cited By (12)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
DE102021200552A1 (en) | 2021-01-21 | 2022-07-21 | Kaetel Systems Gmbh | Head wearable sound generator, signal processor and method of operating a sound generator or a signal processor |
DE102021200555A1 (en) | 2021-01-21 | 2022-07-21 | Kaetel Systems Gmbh | MICROPHONE, METHOD OF RECORDING AN AUDIBLE SIGNAL, REPRODUCTION DEVICE OF AN AUDIO SIGNAL, OR METHOD OF REPRODUCTION OF AN AUDIO SIGNAL |
DE102021200554A1 (en) | 2021-01-21 | 2022-07-21 | Kaetel Systems Gmbh | speaker system |
DE102021200553A1 (en) | 2021-01-21 | 2022-07-21 | Kaetel Systems Gmbh | Device and method for controlling a sound generator with synthetic generation of the differential signal |
WO2022157255A1 (en) | 2021-01-25 | 2022-07-28 | Kaetel Systems Gmbh | Loudspeaker |
DE102021203639A1 (en) | 2021-04-13 | 2022-10-13 | Kaetel Systems Gmbh | Loudspeaker system, method of manufacturing the loudspeaker system, public address system for a performance area and performance area |
DE102021203632A1 (en) | 2021-04-13 | 2022-10-13 | Kaetel Systems Gmbh | Loudspeaker, signal processor, method for manufacturing the loudspeaker or method for operating the signal processor using dual-mode signal generation with two sound generators |
DE102021203640A1 (en) | 2021-04-13 | 2022-10-13 | Kaetel Systems Gmbh | Device and method for generating a first control signal and a second control signal using linearization and/or bandwidth expansion |
DE102021205545A1 (en) | 2021-05-31 | 2022-12-01 | Kaetel Systems Gmbh | Device and method for generating a control signal for a sound generator or for generating an extended multi-channel audio signal using a similarity analysis |
WO2023001673A2 (en) | 2021-07-19 | 2023-01-26 | Kaetel Systems Gmbh | Apparatus and method for providing audio coverage in a room |
WO2023052555A2 (en) | 2021-09-30 | 2023-04-06 | Kaetel Systems Gmbh | Loudspeaker system, control circuit for a loudspeaker system having one tweeter and two mid-range drivers or woofers, and corresponding method |
WO2023166109A1 (en) | 2022-03-03 | 2023-09-07 | Kaetel Systems Gmbh | Device and method for rerecording an existing audio sample |
Families Citing this family (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US9467777B2 (en) * | 2013-03-15 | 2016-10-11 | Cirrus Logic, Inc. | Interface for a digital microphone array |
DE102013221752A1 * | 2013-10-25 | 2015-04-30 | Kaetel Systems Gmbh | EARPHONE AND METHOD FOR PRODUCING AN EARPHONE |
DE102013221754A1 (en) * | 2013-10-25 | 2015-04-30 | Kaetel Systems Gmbh | HEADPHONES AND METHOD FOR MANUFACTURING A HEADPHONES |
US10708686B2 (en) * | 2016-05-30 | 2020-07-07 | Sony Corporation | Local sound field forming apparatus and local sound field forming method |
US9621983B1 (en) | 2016-09-22 | 2017-04-11 | Nima Saati | 100 to 150 output wattage, 360 degree surround sound, low frequency speaker, portable wireless bluetooth compatible system |
CN206260057U (en) * | 2016-12-01 | 2017-06-16 | 辜成允 | Speaker unit |
US11671749B2 (en) | 2019-03-29 | 2023-06-06 | Endow Audio, LLC | Audio loudspeaker array and related methods |
US11985475B2 (en) | 2020-10-19 | 2024-05-14 | Endow Audio, LLC | Audio loudspeaker array and related methods |
Family Cites Families (39)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
GB832276A (en) * | 1958-12-02 | 1960-04-06 | Standard Telephones Cables Ltd | Improvements in or relating to electro-acoustic transducers |
US3931867A (en) * | 1975-02-12 | 1976-01-13 | Electrostatic Research Corporation | Wide range speaker system |
JPS55120300A (en) * | 1979-03-08 | 1980-09-16 | Sony Corp | Two-way electrostatic microphone |
DE3034522C2 (en) * | 1979-09-14 | 1983-11-03 | Pioneer Electronic Corp., Tokyo | Loudspeaker unit for automobiles |
US4357490A (en) * | 1980-07-18 | 1982-11-02 | Dickey Baron C | High fidelity loudspeaker system for aurally simulating wide frequency range point source of sound |
JPS57148500A (en) * | 1981-03-10 | 1982-09-13 | Matsushita Electric Ind Co Ltd | Electrostatic acoustic converter |
US4513049A (en) * | 1983-04-26 | 1985-04-23 | Mitsui Petrochemical Industries, Ltd. | Electret article |
US4580654A (en) * | 1985-03-04 | 1986-04-08 | Hale James W | Portable sound speaker system |
JPH01127781A (en) | 1987-11-13 | 1989-05-19 | Yunifuroo:Kk | Hinge device |
JP2597425B2 (en) * | 1990-12-14 | 1997-04-09 | 株式会社ケンウッド | Omnidirectional speaker system |
US7085387B1 (en) * | 1996-11-20 | 2006-08-01 | Metcalf Randall B | Sound system and method for capturing and reproducing sounds originating from a plurality of sound sources |
JPH1127781A (en) * | 1997-07-07 | 1999-01-29 | Rion Co Ltd | Sound pressure microphone |
JP3344647B2 (en) * | 1998-02-18 | 2002-11-11 | 富士通株式会社 | Microphone array device |
DE19819452C1 (en) | 1998-04-30 | 2000-01-20 | Boerder Klaus | Method and device for the electroacoustic transmission of sound energy |
JP2000050393A (en) * | 1998-05-25 | 2000-02-18 | Hosiden Corp | Electret condenser microphone |
JP4073093B2 (en) * | 1998-09-29 | 2008-04-09 | 株式会社オーディオテクニカ | Condenser microphone |
US7136496B2 (en) * | 2001-04-18 | 2006-11-14 | Sonion Nederland B.V. | Electret assembly for a microphone having a backplate with improved charge stability |
AUPR647501A0 (en) | 2001-07-19 | 2001-08-09 | Vast Audio Pty Ltd | Recording a three dimensional auditory scene and reproducing it for the individual listener |
CA2354858A1 (en) * | 2001-08-08 | 2003-02-08 | Dspfactory Ltd. | Subband directional audio signal processing using an oversampled filterbank |
WO2004032351A1 (en) * | 2002-09-30 | 2004-04-15 | Electro Products Inc | System and method for integral transference of acoustical events |
JP4033830B2 (en) * | 2002-12-03 | 2008-01-16 | ホシデン株式会社 | Microphone |
US7024002B2 (en) * | 2004-01-26 | 2006-04-04 | Dickey Baron C | Method and apparatus for spatially enhancing the stereo image in sound reproduction and reinforcement systems |
KR100547357B1 (en) * | 2004-03-30 | 2006-01-26 | 삼성전기주식회사 | Speaker for mobile terminal and manufacturing method thereof |
JP4476059B2 (en) * | 2004-07-20 | 2010-06-09 | シチズン電子株式会社 | Electret condenser microphone |
CA2598575A1 (en) | 2005-02-22 | 2006-08-31 | Verax Technologies Inc. | System and method for formatting multimode sound content and metadata |
JP4513765B2 (en) * | 2005-04-15 | 2010-07-28 | 日本ビクター株式会社 | Electroacoustic transducer |
US7721208B2 (en) | 2005-10-07 | 2010-05-18 | Apple Inc. | Multi-media center for computing systems |
JP2007129543A (en) * | 2005-11-04 | 2007-05-24 | Hosiden Corp | Electret condenser microphone |
JP4821589B2 (en) * | 2006-01-30 | 2011-11-24 | ソニー株式会社 | Speaker device |
US20080115651A1 (en) * | 2006-11-21 | 2008-05-22 | Eric Schmidt | Internally-mounted soundhole interfacing device |
WO2009125773A1 (en) * | 2008-04-07 | 2009-10-15 | 国立大学法人埼玉大学 | Electromechanical transducer, electromechanical transducer device, and fabrication method for same |
US8107652B2 (en) * | 2008-08-04 | 2012-01-31 | MWM Mobile Products, LLC | Controlled leakage omnidirectional electret condenser microphone element |
JP5237046B2 (en) * | 2008-10-21 | 2013-07-17 | 株式会社オーディオテクニカ | Variable directional microphone unit and variable directional microphone |
US20100223552A1 (en) * | 2009-03-02 | 2010-09-02 | Metcalf Randall B | Playback Device For Generating Sound Events |
US8917881B2 (en) * | 2010-01-26 | 2014-12-23 | Cheng Yih Jenq | Enclosure-less loudspeaker system |
EP2432249A1 (en) * | 2010-07-02 | 2012-03-21 | Knowles Electronics Asia PTE. Ltd. | Microphone |
JP5682244B2 (en) * | 2010-11-09 | 2015-03-11 | ソニー株式会社 | Speaker system |
JP6270625B2 (en) * | 2014-05-23 | 2018-01-31 | 株式会社オーディオテクニカ | Variable directivity electret condenser microphone |
JP6270626B2 (en) * | 2014-05-23 | 2018-01-31 | 株式会社オーディオテクニカ | Variable directivity electret condenser microphone |
-
2012
- 2012-03-29 EP EP12714272.7A patent/EP2692144B1/en active Active
- 2012-03-29 ES ES12718101.4T patent/ES2653344T3/en active Active
- 2012-03-29 WO PCT/EP2012/055701 patent/WO2012130989A1/en active Application Filing
- 2012-03-29 ES ES16192275T patent/ES2712724T3/en active Active
- 2012-03-29 WO PCT/EP2012/055697 patent/WO2012130985A1/en active Application Filing
- 2012-03-29 WO PCT/EP2012/055698 patent/WO2012130986A1/en active Application Filing
- 2012-03-29 ES ES17191635T patent/ES2886366T3/en active Active
- 2012-03-29 EP EP16192275.2A patent/EP3151580B1/en active Active
- 2012-03-29 EP EP12718101.4A patent/EP2692154B1/en active Active
- 2012-03-29 ES ES12714273.5T patent/ES2661837T3/en active Active
- 2012-03-29 EP EP17191635.6A patent/EP3288295B1/en active Active
- 2012-03-29 EP EP12714273.5A patent/EP2692151B1/en active Active
- 2012-03-29 DK DK17191635.6T patent/DK3288295T3/en active
-
2013
- 2013-09-27 US US14/040,549 patent/US10469924B2/en active Active
- 2013-09-27 US US14/040,561 patent/US9668038B2/en active Active
-
2019
- 2019-10-28 US US16/665,853 patent/US10848842B2/en active Active
-
2020
- 2020-08-12 US US16/991,459 patent/US11259101B2/en active Active
Non-Patent Citations (1)
Title |
---|
ROBERT AULD: "The Art of Recording the Big Band, Pt.1", 11 November 2010 (2010-11-11), XP055198515, Retrieved from the Internet <URL:https://web.archive.org/web/20101111131107/http://www.auldworks.com/bbrec1.htm> [retrieved on 20150626] * |
Cited By (28)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
DE102021200553B4 (en) | 2021-01-21 | 2022-11-17 | Kaetel Systems Gmbh | Device and method for controlling a sound generator with synthetic generation of the differential signal |
DE102021200555A1 (en) | 2021-01-21 | 2022-07-21 | Kaetel Systems Gmbh | MICROPHONE, METHOD OF RECORDING AN AUDIBLE SIGNAL, REPRODUCTION DEVICE OF AN AUDIO SIGNAL, OR METHOD OF REPRODUCTION OF AN AUDIO SIGNAL |
DE102021200554A1 (en) | 2021-01-21 | 2022-07-21 | Kaetel Systems Gmbh | speaker system |
DE102021200553A1 (en) | 2021-01-21 | 2022-07-21 | Kaetel Systems Gmbh | Device and method for controlling a sound generator with synthetic generation of the differential signal |
WO2022157254A1 (en) | 2021-01-21 | 2022-07-28 | Kaetel Systems Gmbh | Loudspeaker system |
WO2022157252A1 (en) | 2021-01-21 | 2022-07-28 | Kaetel Systems Gmbh | Microphone, method for recording an acoustic signal, playback device for an acoustic signal, or method for playing back an acoustic signal |
DE102021200552B4 (en) | 2021-01-21 | 2023-04-20 | Kaetel Systems Gmbh | Head wearable sound generator and method of operating a sound generator |
DE102021200555B4 (en) | 2021-01-21 | 2023-04-20 | Kaetel Systems Gmbh | Microphone and method for recording an acoustic signal |
WO2022157253A1 (en) | 2021-01-21 | 2022-07-28 | Kaetel Systems Gmbh | Device and method for controlling a sound generator with synthetic generation of the differential signal |
WO2022157251A2 (en) | 2021-01-21 | 2022-07-28 | Kaetel Systems Gmbh | Sound generator which can be worn on the head, signal processor, and method for operating a sound generator or a signal processor |
DE102021200554B4 (en) | 2021-01-21 | 2023-03-16 | Kaetel Systems Gmbh | speaker system |
DE102021200552A1 (en) | 2021-01-21 | 2022-07-21 | Kaetel Systems Gmbh | Head wearable sound generator, signal processor and method of operating a sound generator or a signal processor |
DE102021200633B4 (en) | 2021-01-25 | 2023-02-23 | Kaetel Systems Gmbh | speaker |
DE102021200633A1 (en) | 2021-01-25 | 2022-07-28 | Kaetel Systems Gmbh | speaker |
WO2022157255A1 (en) | 2021-01-25 | 2022-07-28 | Kaetel Systems Gmbh | Loudspeaker |
DE102021203639A1 (en) | 2021-04-13 | 2022-10-13 | Kaetel Systems Gmbh | Loudspeaker system, method of manufacturing the loudspeaker system, public address system for a performance area and performance area |
DE102021203640A1 (en) | 2021-04-13 | 2022-10-13 | Kaetel Systems Gmbh | Device and method for generating a first control signal and a second control signal using linearization and/or bandwidth expansion |
DE102021203640B4 (en) | 2021-04-13 | 2023-02-16 | Kaetel Systems Gmbh | Loudspeaker system with a device and method for generating a first control signal and a second control signal using linearization and/or bandwidth expansion |
DE102021203632A1 (en) | 2021-04-13 | 2022-10-13 | Kaetel Systems Gmbh | Loudspeaker, signal processor, method for manufacturing the loudspeaker or method for operating the signal processor using dual-mode signal generation with two sound generators |
WO2022218824A2 (en) | 2021-04-13 | 2022-10-20 | Kaetel Systems Gmbh | Loudspeaker, signal processor, method for manufacturing the loudspeaker or method for operating the signal processor using dual-mode signal generation with two sound generators |
WO2022218823A1 (en) | 2021-04-13 | 2022-10-20 | Kaetel Systems Gmbh | Loudspeaker system, method for manufacturing the loudspeaker system, public address system for a performance area and performance area |
WO2022218822A1 (en) | 2021-04-13 | 2022-10-20 | Kaetel Systems Gmbh | Device and method for generating a first control signal and a second control signal using linearisation and/or bandwidth expansion |
DE102021205545A1 (en) | 2021-05-31 | 2022-12-01 | Kaetel Systems Gmbh | Device and method for generating a control signal for a sound generator or for generating an extended multi-channel audio signal using a similarity analysis |
WO2022253768A1 (en) | 2021-05-31 | 2022-12-08 | Kaetel Systems Gmbh | Apparatus and method for generating a control signal for a sound generator or for generating an extended multi-channel audio signal using a similarity analysis |
WO2023001673A2 (en) | 2021-07-19 | 2023-01-26 | Kaetel Systems Gmbh | Apparatus and method for providing audio coverage in a room |
WO2023052555A2 (en) | 2021-09-30 | 2023-04-06 | Kaetel Systems Gmbh | Loudspeaker system, control circuit for a loudspeaker system having one tweeter and two mid-range drivers or woofers, and corresponding method |
WO2023052557A1 (en) | 2021-09-30 | 2023-04-06 | Kaetel Systems Gmbh | Device and method for generating control signals for a loudspeaker system having spectral interleaving in the low frequency range |
WO2023166109A1 (en) | 2022-03-03 | 2023-09-07 | Kaetel Systems Gmbh | Device and method for rerecording an existing audio sample |
Also Published As
Publication number | Publication date |
---|---|
US20200374610A1 (en) | 2020-11-26 |
US10469924B2 (en) | 2019-11-05 |
ES2886366T3 (en) | 2021-12-17 |
EP2692154A1 (en) | 2014-02-05 |
ES2661837T3 (en) | 2018-04-04 |
WO2012130986A1 (en) | 2012-10-04 |
EP2692151A1 (en) | 2014-02-05 |
EP3288295B1 (en) | 2021-07-21 |
US20140098980A1 (en) | 2014-04-10 |
US11259101B2 (en) | 2022-02-22 |
EP3288295A1 (en) | 2018-02-28 |
US10848842B2 (en) | 2020-11-24 |
DK3288295T3 (en) | 2021-10-25 |
US20200084526A1 (en) | 2020-03-12 |
ES2712724T3 (en) | 2019-05-14 |
WO2012130985A1 (en) | 2012-10-04 |
EP3151580B1 (en) | 2018-11-21 |
EP2692144A1 (en) | 2014-02-05 |
WO2012130989A1 (en) | 2012-10-04 |
US9668038B2 (en) | 2017-05-30 |
EP2692151B1 (en) | 2018-01-10 |
US20140105444A1 (en) | 2014-04-17 |
EP2692144B1 (en) | 2017-02-01 |
EP3151580A1 (en) | 2017-04-05 |
ES2653344T3 (en) | 2018-02-06 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US11259101B2 (en) | Method and apparatus for capturing and rendering an audio scene | |
US10231054B2 (en) | Headphones and method for producing headphones | |
USRE44611E1 (en) | System and method for integral transference of acoustical events | |
US10524055B2 (en) | Earphone and method for producing an earphone | |
JPH0970092A (en) | Point sound source, non-oriented speaker system | |
Zotter et al. | A beamformer to play with wall reflections: The icosahedral loudspeaker | |
Warusfel et al. | Directivity synthesis with a 3D array of loudspeakers: application for stage performance | |
JP2006513656A (en) | Apparatus and method for generating sound | |
WO1989012373A1 (en) | Multidimensional stereophonic sound reproduction system | |
TW200818964A (en) | A loudspeaker system having at least two loudspeaker devices and a unit for processing an audio content signal | |
WO2014021178A1 (en) | Sound field support device and sound field support system | |
JP2009194924A (en) | Apparatus and method for generating sound | |
Becker | Franz Zotter, Markus Zaunschirm, Matthias Frank, and Matthias Kronlachner | |
JP2020120218A (en) | Sound playback apparatus and electronic musical instrument including the same | |
JP2009189027A (en) | Apparatus and method for generating sound |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PUAI | Public reference made under article 153(3) epc to a published international application that has entered the european phase |
Free format text: ORIGINAL CODE: 0009012 |
|
17P | Request for examination filed |
Effective date: 20131023 |
|
AK | Designated contracting states |
Kind code of ref document: A1
Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR |
|
DAX | Request for extension of the european patent (deleted) | ||
17Q | First examination report despatched |
Effective date: 20150918 |
|
STAA | Information on the status of an ep patent application or granted ep patent |
Free format text: STATUS: EXAMINATION IS IN PROGRESS |
|
RIC1 | Information provided on ipc code assigned before grant |
Ipc: H04S 7/00 20060101AFI20170223BHEP |
|
GRAP | Despatch of communication of intention to grant a patent |
Free format text: ORIGINAL CODE: EPIDOSNIGR1 |
|
STAA | Information on the status of an ep patent application or granted ep patent |
Free format text: STATUS: GRANT OF PATENT IS INTENDED |
|
INTG | Intention to grant announced |
Effective date: 20170331 |
|
GRAS | Grant fee paid |
Free format text: ORIGINAL CODE: EPIDOSNIGR3 |
|
GRAA | (expected) grant |
Free format text: ORIGINAL CODE: 0009210 |
|
STAA | Information on the status of an ep patent application or granted ep patent |
Free format text: STATUS: THE PATENT HAS BEEN GRANTED |
|
AK | Designated contracting states |
Kind code of ref document: B1
Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR |
|
REG | Reference to a national code |
Ref country code: GB Ref legal event code: FG4D |
|
REG | Reference to a national code |
Ref country code: CH Ref legal event code: EP |
|
REG | Reference to a national code |
Ref country code: AT Ref legal event code: REF Ref document number: 930997 Country of ref document: AT Kind code of ref document: T Effective date: 20171015 |
|
REG | Reference to a national code |
Ref country code: IE Ref legal event code: FG4D |
|
REG | Reference to a national code |
Ref country code: DE Ref legal event code: R096 Ref document number: 602012037539 Country of ref document: DE |
|
REG | Reference to a national code |
Ref country code: CH Ref legal event code: NV Representative's name: BOVARD AG PATENT- UND MARKENANWAELTE, CH |
|
REG | Reference to a national code |
Ref country code: NL Ref legal event code: FP |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: NO Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20171220
Ref country code: HR Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20170920
Ref country code: SE Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20170920
Ref country code: LT Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20170920
Ref country code: FI Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20170920 |
|
REG | Reference to a national code |
Ref country code: ES Ref legal event code: FG2A Ref document number: 2653344 Country of ref document: ES Kind code of ref document: T3 Effective date: 20180206 |
|
REG | Reference to a national code |
Ref country code: LT Ref legal event code: MG4D |
|
REG | Reference to a national code |
Ref country code: AT Ref legal event code: MK05 Ref document number: 930997 Country of ref document: AT Kind code of ref document: T Effective date: 20170920 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: GR Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20171221
Ref country code: RS Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20170920
Ref country code: LV Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20170920
Ref country code: BG Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20171220 |
|
REG | Reference to a national code |
Ref country code: FR Ref legal event code: PLFP Year of fee payment: 7 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: RO Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20170920
Ref country code: CZ Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20170920
Ref country code: PL Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20170920 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: AT Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20170920
Ref country code: EE Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20170920
Ref country code: SM Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20170920
Ref country code: IS Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20180120
Ref country code: SK Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20170920 |
|
REG | Reference to a national code |
Ref country code: DE Ref legal event code: R097 Ref document number: 602012037539 Country of ref document: DE |
|
PLBE | No opposition filed within time limit |
Free format text: ORIGINAL CODE: 0009261 |
|
STAA | Information on the status of an ep patent application or granted ep patent |
Free format text: STATUS: NO OPPOSITION FILED WITHIN TIME LIMIT |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: DK Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20170920 |
|
26N | No opposition filed |
Effective date: 20180621 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: MC Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20170920
Ref country code: SI Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20170920 |
|
REG | Reference to a national code |
Ref country code: BE Ref legal event code: MM Effective date: 20180331 |
|
REG | Reference to a national code |
Ref country code: IE Ref legal event code: MM4A |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: LU Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES Effective date: 20180329 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: IE Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES Effective date: 20180329 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: BE Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES Effective date: 20180331 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: MT Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES Effective date: 20180329 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: TR Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20170920 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: HU Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT; INVALID AB INITIO Effective date: 20120329
Ref country code: PT Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20170920 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: MK Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES Effective date: 20170920
Ref country code: CY Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20170920 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: AL Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20170920 |
|
P01 | Opt-out of the competence of the unified patent court (upc) registered |
Effective date: 20230519 |
|
PGFP | Annual fee paid to national office [announced via postgrant information from national office to epo] |
Ref country code: NL Payment date: 20240320 Year of fee payment: 13 |
|
PGFP | Annual fee paid to national office [announced via postgrant information from national office to epo] |
Ref country code: DE Payment date: 20240227 Year of fee payment: 13
Ref country code: GB Payment date: 20240320 Year of fee payment: 13 |
|
PGFP | Annual fee paid to national office [announced via postgrant information from national office to epo] |
Ref country code: IT Payment date: 20240327 Year of fee payment: 13
Ref country code: FR Payment date: 20240321 Year of fee payment: 13 |
|
PGFP | Annual fee paid to national office [announced via postgrant information from national office to epo] |
Ref country code: CH Payment date: 20240401 Year of fee payment: 13 |
|
PGFP | Annual fee paid to national office [announced via postgrant information from national office to epo] |
Ref country code: ES Payment date: 20240401 Year of fee payment: 13 |