EP2206113A1 - Vorrichtung und Verfahren zum Erzeugen eines Multikanalsignals mit einer Sprachsignalverarbeitung (Apparatus and method for generating a multi-channel signal with speech signal processing) - Google Patents
Vorrichtung und Verfahren zum Erzeugen eines Multikanalsignals mit einer Sprachsignalverarbeitung (Apparatus and method for generating a multi-channel signal with speech signal processing)
- Publication number
- EP2206113A1 (application EP08802737A)
- Authority
- EP
- European Patent Office
- Prior art keywords
- signal
- channel
- speech
- input signal
- direct
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Links
- 238000000034 method Methods 0.000 title claims description 53
- 238000012545 processing Methods 0.000 title description 10
- 239000003607 modifier Substances 0.000 claims abstract description 31
- 230000002238 attenuated effect Effects 0.000 claims abstract description 15
- 230000007613 environmental effect Effects 0.000 claims description 22
- 230000003595 spectral effect Effects 0.000 claims description 16
- 238000004458 analytical method Methods 0.000 claims description 14
- 238000004364 calculation method Methods 0.000 claims description 7
- 239000011159 matrix material Substances 0.000 claims description 7
- 230000002123 temporal effect Effects 0.000 claims description 7
- 238000004590 computer program Methods 0.000 claims description 6
- 238000013016 damping Methods 0.000 claims description 2
- 238000007619 statistical method Methods 0.000 claims 1
- 238000001514 detection method Methods 0.000 abstract description 13
- 230000005236 sound signal Effects 0.000 description 27
- 230000001629 suppression Effects 0.000 description 23
- 230000008569 process Effects 0.000 description 13
- 239000000203 mixture Substances 0.000 description 10
- 238000000605 extraction Methods 0.000 description 9
- 230000000694 effects Effects 0.000 description 8
- 238000012986 modification Methods 0.000 description 7
- 230000004048 modification Effects 0.000 description 7
- 230000009467 reduction Effects 0.000 description 7
- 238000001228 spectrum Methods 0.000 description 5
- 238000003786 synthesis reaction Methods 0.000 description 5
- 238000012549 training Methods 0.000 description 5
- 230000008901 benefit Effects 0.000 description 4
- 238000004422 calculation algorithm Methods 0.000 description 4
- 238000000354 decomposition reaction Methods 0.000 description 4
- 238000001914 filtration Methods 0.000 description 4
- 238000000926 separation method Methods 0.000 description 4
- 230000009466 transformation Effects 0.000 description 4
- 239000013598 vector Substances 0.000 description 4
- 230000003321 amplification Effects 0.000 description 3
- 238000013459 approach Methods 0.000 description 3
- 230000006870 function Effects 0.000 description 3
- 239000000463 material Substances 0.000 description 3
- 238000005259 measurement Methods 0.000 description 3
- 238000002156 mixing Methods 0.000 description 3
- 238000003909 pattern recognition Methods 0.000 description 3
- 230000008447 perception Effects 0.000 description 3
- 238000000513 principal component analysis Methods 0.000 description 3
- 230000001755 vocal effect Effects 0.000 description 3
- 230000003044 adaptive effect Effects 0.000 description 2
- 238000010586 diagram Methods 0.000 description 2
- 238000009826 distribution Methods 0.000 description 2
- 230000005284 excitation Effects 0.000 description 2
- 230000004807 localization Effects 0.000 description 2
- 238000013507 mapping Methods 0.000 description 2
- 238000007781 pre-processing Methods 0.000 description 2
- 230000001052 transient effect Effects 0.000 description 2
- 238000012952 Resampling Methods 0.000 description 1
- 238000010521 absorption reaction Methods 0.000 description 1
- 230000005540 biological transmission Effects 0.000 description 1
- 239000006227 byproduct Substances 0.000 description 1
- 239000000969 carrier Substances 0.000 description 1
- 238000006243 chemical reaction Methods 0.000 description 1
- 238000004891 communication Methods 0.000 description 1
- 230000000295 complement effect Effects 0.000 description 1
- 239000012141 concentrate Substances 0.000 description 1
- 230000007423 decrease Effects 0.000 description 1
- 230000001934 delay Effects 0.000 description 1
- 230000001419 dependent effect Effects 0.000 description 1
- 238000009795 derivation Methods 0.000 description 1
- 238000013461 design Methods 0.000 description 1
- 238000009472 formulation Methods 0.000 description 1
- 238000007654 immersion Methods 0.000 description 1
- 238000012804 iterative process Methods 0.000 description 1
- 230000007774 longterm Effects 0.000 description 1
- 230000007246 mechanism Effects 0.000 description 1
- 238000005457 optimization Methods 0.000 description 1
- 238000004091 panning Methods 0.000 description 1
- 238000009527 percussion Methods 0.000 description 1
- 230000000737 periodic effect Effects 0.000 description 1
- 238000012805 post-processing Methods 0.000 description 1
- 238000002360 preparation method Methods 0.000 description 1
- 238000003672 processing method Methods 0.000 description 1
- 239000000047 product Substances 0.000 description 1
- 238000012552 review Methods 0.000 description 1
- 238000003860 storage Methods 0.000 description 1
- 238000006467 substitution reaction Methods 0.000 description 1
- 238000000844 transformation Methods 0.000 description 1
- 230000000007 visual effect Effects 0.000 description 1
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04S—STEREOPHONIC SYSTEMS
- H04S5/00—Pseudo-stereo systems, e.g. in which additional channel signals are derived from monophonic signals by means of phase shifting, time delay or reverberation
- H04S5/005—Pseudo-stereo systems, e.g. in which additional channel signals are derived from monophonic signals by means of phase shifting, time delay or reverberation of the pseudo five- or more-channel type, e.g. virtual surround
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L21/00—Speech or voice signal processing techniques to produce another audible or non-audible signal, e.g. visual or tactile, in order to modify its quality or its intelligibility
- G10L21/02—Speech enhancement, e.g. noise reduction or echo cancellation
- G10L21/0316—Speech enhancement, e.g. noise reduction or echo cancellation by changing the amplitude
- G10L21/0364—Speech enhancement, e.g. noise reduction or echo cancellation by changing the amplitude for improving intelligibility
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L19/00—Speech or audio signals analysis-synthesis techniques for redundancy reduction, e.g. in vocoders; Coding or decoding of speech or audio signals, using source filter models or psychoacoustic analysis
- G10L19/008—Multichannel audio signal coding or decoding using interchannel correlation to reduce redundancy, e.g. joint-stereo, intensity-coding or matrixing
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L25/00—Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00
- G10L25/78—Detection of presence or absence of voice signals
Definitions
- The present invention relates to the field of audio signal processing and, in particular, to the generation of several output channels from fewer input channels, e.g. one (mono) channel or two (stereo) input channels.
- Multi-channel audio is becoming more and more popular.
- Such playback systems generally comprise three speakers L (left), C (center) and R (right), which are typically located in front of the user, two speakers Ls and Rs located behind the user, and typically an LFE channel, also called the low-frequency effects channel or subwoofer.
- Such a channel scenario is indicated in Fig. 5b and in Fig. 5c. While the loudspeakers L, C, R, Ls, Rs should be positioned with respect to the user as shown there, the positioning of the LFE channel is less critical, because the ear cannot localize sound at such low frequencies; the LFE channel can therefore be placed anywhere it does not get in the way due to its considerable size.
- Such a multi-channel system provides several advantages over a typical two-channel stereo reproduction such as the one shown in Fig. 5a.
- Even outside the optimal central listening position, the center channel improves the stability of the front sound impression, which is also referred to as the "front image". This results in a larger "sweet spot", where "sweet spot" stands for the optimal listening position.
- Thanks to the two rear speakers Ls and Rs, the listener gets a better feeling of being "immersed" in the audio scene.
- If only two-channel (stereo) material is available, there are several options for reproducing it on a multi-channel system. The first option is to play the left and right channels through the left and right speakers of the multi-channel playback system.
- A disadvantage of this solution is that the variety of existing speakers is not exploited, i.e. that the presence of the center speaker and of the two rear speakers is not used to advantage.
- Another option is to convert the two channels into a multi-channel signal. This can be done during playback or by special preprocessing which, for example, advantageously uses all six loudspeakers of an existing 5.1 reproduction system and thus leads to an improved listening impression when the upmixing of two channels to five or six channels is carried out without errors.
- The second option, i.e. the use of all loudspeakers of the multi-channel system, can be advantageous over the first solution if no upmix errors occur. Such upmix errors can be especially troublesome if the signals for the rear speakers, which are also known as ambience signals or ambient signals, are not generated without error.
- One known upmix concept is the so-called direct/ambient concept, in which the direct sound sources are reproduced by the three front channels so that they are perceived by the user at the same position as in the original two-channel version.
- The original two-channel version is shown schematically in Fig. 5a, using the example of various drum instruments.
- Fig. 5b shows an upmixed version of the concept, in which all original sound sources, i.e. the drum instruments, are again reproduced by the three front loudspeakers L, C and R, while special ambient signals are additionally output by the two rear loudspeakers.
- the term "direct sound source” is thus used to describe a sound coming only and directly from a discrete sound source, such as a drum instrument or other instrument, or generally a particular audio object, as shown schematically, eg, in FIG. 5a is shown using a drum instrument. Any additional sounds, such as due to wall reflections, etc. are not present in such a direct sound source.
- Another alternative concept, referred to as the "in-band" concept, is shown schematically in Fig. 5c.
- In this concept, every type of sound, i.e. direct sound sources as well as ambient sounds, is positioned around the listener.
- The position of a sound is independent of its characteristic (direct sound source or ambient sound) and depends only on the specific design of the algorithm, as shown, for example, in Fig. 5c.
- In Fig. 5c, the upmix algorithm has determined that the two instruments 1100 and 1102 are positioned laterally with respect to the listener, while the two instruments 1104 and 1106 are positioned in front of the user.
- As a result, the two rear speakers Ls, Rs now also contain portions of the two instruments 1100 and 1102, and no longer merely ambient sounds as in Fig. 5b, where the same instruments were all positioned in front of the user.
- An ambience extraction technique using non-negative matrix factorization also exists, particularly in the context of a 1-to-N upmix, where N is greater than two.
- First, a time-frequency distribution (TFD) of the input signal is calculated, for example by means of a short-time Fourier transform.
- An estimate of the TFD of the direct signal components is then derived by a numerical optimization technique called non-negative matrix factorization.
- An estimate of the TFD of the ambient signal is obtained by calculating the difference between the TFD of the input signal and the estimate of the TFD of the direct signal.
- The re-synthesis of the time signal of the ambient signal is performed using the phase spectrogram of the input signal. Additional post-processing is optionally performed to enhance the listening experience of the generated multi-channel signal. This method is described in detail by C. Uhle, A. Walther, O. Hellmuth and J. Herre in "Ambience Separation from Mono Recordings using Non-negative Matrix Factorization", Proceedings of the AES 30th Conference, 2007.
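- As an illustration of the NMF-based estimation just described, the following is a minimal sketch, assuming a mono input, a magnitude short-time Fourier transform as the TFD, and an ad-hoc NMF rank; it follows the published outline only, not the exact procedure of Uhle et al.

```python
# Sketch of NMF-based ambience estimation: approximate the TFD of the direct
# components by a low-rank NMF model and take the residual as ambience.
import numpy as np
from scipy.signal import stft, istft
from sklearn.decomposition import NMF

def estimate_ambience(x, fs, n_fft=1024, rank=30):
    _, _, X = stft(x, fs, nperseg=n_fft)           # complex TFD of the input
    mag, phase = np.abs(X), np.angle(X)
    model = NMF(n_components=rank, init='random', max_iter=200, random_state=0)
    W = model.fit_transform(mag)                   # estimate of the direct TFD
    direct_mag = W @ model.components_
    amb_mag = np.maximum(mag - direct_mag, 0.0)    # difference = ambience TFD
    # Re-synthesis re-uses the phase spectrogram of the input signal
    _, a = istft(amb_mag * np.exp(1j * phase), fs, nperseg=n_fft)
    return a
```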
- Matrix decoders are known under the names Dolby Pro Logic II, DTS Neo:6 or Harman Kardon/Lexicon Logic 7 and are contained in almost every audio/video receiver sold today. As a by-product of their intended functionality, these processes are also able to perform a blind upmix. These decoders use inter-channel differences and signal-adaptive control mechanisms to produce multi-channel output signals.
- Frequency-domain techniques described by Avendano and Jot are also used to identify and extract the ambience information in stereo audio signals. This method is based on the calculation of an inter-channel coherence index and a non-linear mapping function, making it possible to determine the time-frequency regions which consist mainly of ambient signal components.
- The ambient signals are subsequently synthesized and used to feed the surround channels of the multi-channel playback system.
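- A hedged sketch of this kind of coherence-based ambience identification follows; the recursive smoothing constant and the non-linear mapping below are assumptions for illustration, not the mapping published by Avendano and Jot.

```python
# Estimate a time/frequency ambience mask from the inter-channel coherence of
# a stereo signal: low coherence between L and R suggests ambient components.
import numpy as np
from scipy.signal import stft

def ambience_mask(xl, xr, fs, n_fft=1024, alpha=0.8):
    _, _, L = stft(xl, fs, nperseg=n_fft)
    _, _, R = stft(xr, fs, nperseg=n_fft)

    def smooth(P):                      # first-order recursive smoothing in time
        out = np.empty_like(P)
        acc = P[:, 0]
        for n in range(P.shape[1]):
            acc = alpha * acc + (1 - alpha) * P[:, n]
            out[:, n] = acc
        return out

    p_lr = smooth(L * np.conj(R))
    p_ll = smooth(np.abs(L) ** 2)
    p_rr = smooth(np.abs(R) ** 2)
    coherence = np.abs(p_lr) / np.sqrt(p_ll * p_rr + 1e-12)
    return (1.0 - coherence) ** 2       # illustrative non-linear mapping
```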
- One component of the direct/ambient upmix process is the extraction of an ambient signal which is fed into the two rear channels Ls, Rs.
- There are certain requirements a signal must meet in order to be used as an ambience-like signal in the context of a direct/ambient upmix process. One prerequisite is that no relevant parts of the direct sound sources should be audible, so that the direct sound sources can be safely located in front of the listener. This is particularly important if the audio signal contains speech or one or more distinguishable speakers. Speech signals generated by a crowd, on the other hand, do not necessarily disturb the listener if they are not located in front of the listener.
- A prerequisite for the sound signal of a movie (a soundtrack) is that the listening experience should conform to the impression created by the images. Audible localization cues should therefore not contradict visible localization cues. Consequently, if a speaker is seen on the screen, the corresponding speech should also be placed in front of the user.
- The same requirement applies to audio signals in general, i.e. it is not necessarily limited to situations where both audio and video signals are presented simultaneously. Such other audio signals are, for example, broadcast signals or audiobooks.
- A listener is accustomed to speech coming from the front channels and would likely turn around to restore this familiar impression if speech suddenly came from the rear channels.
- In one known approach, a speech extractor is used.
- Attack and release times are used to smooth modifications of the output signal. In this way, a multi-channel soundtrack without speech can be extracted from a movie. If a particular stereo reverberation characteristic is present in the original stereo downmix signal, an upmix tool may distribute that reverberation to every channel except the center channel, so that the reverberation remains audible.
- To counter this, dynamic level control is performed on L, R, Ls and Rs in order to attenuate the reverberation of a voice.
- The object of the present invention is to provide a concept for generating a multi-channel signal with a number of output channels which provides flexibility on the one hand and a high-quality product on the other.
- This object is achieved by a device for generating a multi-channel signal according to claim 1, a method for generating a multi-channel signal according to claim 23 or a computer program according to claim 24.
- The present invention is based on the finding that speech components are suppressed in the rear channels, i.e. in the ambient channels, so that the rear channels are free of speech components.
- To this end, an input signal having one or more channels is upmixed in order to provide a direct signal channel and an ambient signal channel or, depending on the implementation, a modified ambient signal channel.
- A speech detector is provided for searching for speech components in the input signal, in the direct channel or in the ambient channel, such speech components being, for example, temporal sections, frequency sections, or components of an orthogonal decomposition.
- Furthermore, a signal modifier is provided for modifying the ambient signal produced by the upmixer, or a copy of the input signal, in order to suppress the speech signal components there, while the direct signal components in the corresponding sections comprising speech signal components are attenuated less or not at all. Such a modified ambient channel signal is then used to generate loudspeaker signals for the corresponding loudspeakers.
- In the latter case, the ambient signal generated by the upmixer can be used directly, because the speech components in it are already suppressed, since the underlying audio signal already had its speech components suppressed.
- Where the upmix process also generates a direct channel, the direct channel is calculated not on the basis of the modified input signal but on the basis of the unmodified input signal, so that the speech components are suppressed selectively: only in the ambient channel, but not in the direct channel, in which the speech components are explicitly desired.
- According to the invention, a signal-dependent processing is thus carried out in order to remove or suppress the speech components in the rear channels or in the ambient signal.
- In doing so, two essential steps are taken, namely the detection of the occurrence of speech and the suppression of speech. The detection can be performed in the input signal, in the direct channel or in the ambient channel, and the suppression can be performed either in the ambient channel directly or indirectly in the input signal, which is then used to generate the ambient channel; this modified input signal, however, is not used to generate the direct channel.
- The resulting signals for the rear channels thus comprise a minimal amount of speech, in order to preserve the original sound image in front of the user (front image).
- Otherwise, the position of the speaker would be perceived outside the front area, somewhere between the listener and the front speakers or, in extreme cases, even behind the listener. This would result in a very disturbing sound perception, especially if the audio signals are presented simultaneously with visual signals, as is the case in films, for instance. Therefore, many multi-channel movie soundtracks contain hardly any speech components in the rear channels.
- Fig. 1 is a block diagram of an embodiment of the present invention
- Fig. 2 shows an assignment of time/frequency sections of an analysis signal and of an ambient channel or input signal to explain the "corresponding sections";
- Fig. 3 shows an ambient signal modification according to a preferred embodiment of the present invention;
- Fig. 4 shows a cooperation between a speech detector and an ambient signal modifier according to a further embodiment of the present invention;
- Fig. 5a shows a stereo reproduction scenario with direct sources (percussion instruments) and diffuse components;
- Fig. 5b shows a multi-channel reproduction scenario in which all direct sound sources are reproduced by the front channels and diffuse components are reproduced by all channels, this scenario also being referred to as the direct/ambient concept;
- Fig. 5c shows a multi-channel reproduction scenario in which discrete sound sources can also be reproduced, at least partially, by the rear speakers, instead of the rear speakers outputting only ambient signals as in Fig. 5b;
- Fig. 6a shows a further embodiment with speech detection in the ambient channel and a modification of the ambient channel;
- Fig. 6b shows an embodiment with speech detection in the input signal and modification of the ambient channel;
- Fig. 6c shows an embodiment with speech detection in the input signal and a modification of the input signal;
- Fig. 6d shows a further embodiment with speech detection in the input signal and a modification of the ambient signal, wherein the modification is specially tuned to the speech;
- Fig. 7 shows a spectral weighting arrangement in which the audio signal is decomposed into subbands whose time-varying gains are computed from low-level features; and
- Fig. 8 is a more detailed illustration of a gain calculation block of Fig. 7.
- Fig. 1 shows a block diagram of an apparatus for generating a multi-channel signal 10, which is shown in Fig. 1 as having a left channel L, a right channel R, a center channel C, an LFE channel, a left rear channel Ls and a right rear channel Rs. It should be noted, however, that the present invention is also suitable for representations other than this selected 5.1 representation, for example for a 7.1 representation or for a 3.0 representation, in which case only a left channel, a right channel and a center channel are generated.
- The multi-channel signal 10 having the, e.g., six channels shown in Fig. 1 is generated from an input signal 12 or x having a number of input channels, the number of input channels being 1 or greater than 1 and equal to 2, for example, when a stereo downmix is input. In general, however, the number of output channels is greater than the number of input channels.
- The apparatus shown in Fig. 1 includes an upmixer 14 for upmixing the input signal 12 in order to produce at least one direct signal channel 15 and one ambient signal channel 16 or, optionally, a modified ambient signal channel 16'.
- Furthermore, a speech detector 18 is provided, which is adapted to use as its analysis signal the input signal 12, as provided at 18a, or the direct signal channel 15, as provided at 18b, or another signal which is similar to the input signal 12 in terms of its temporal/frequency appearance or its characteristics as far as speech components are concerned. The speech detector detects a section of the input signal, of the direct channel or, e.g., of the ambient channel, as shown at 18c, in which a speech component occurs.
- This speech component may be a significant speech component, e.g. a speech component whose speech property has been derived on the basis of a certain qualitative or quantitative measure, the measure exceeding a threshold which is also referred to as the speech detection threshold. For example, a speech property is quantified with a numeric value, and this numeric value is compared to a threshold.
- Alternatively or additionally, a decision is made per section, which can be based on one or more decision criteria. Such decision criteria may, for example, be various quantitative features that are compared with or weighted against each other, or otherwise processed, in order to arrive at a yes/no decision.
- The apparatus shown in Fig. 1 further includes a signal modifier 20 configured to modify the original input signal, as shown at 20a, or to modify the ambient channel 16.
- When the ambient channel 16 is modified, the signal modifier 20 outputs a modified ambient channel 21; when the input signal 20a is modified, a modified input signal 20b is output to the upmixer 14, which then generates the modified ambient channel 16', e.g. by the same upmix process that has been used for the direct channel 15. Should this upmix process also produce a direct channel from the modified input signal 20b, that direct channel would be discarded, since according to the invention the direct channel used is the one derived from the unmodified input signal 12 (without speech suppression), not from the modified input signal 20b.
- The signal modifier is configured to modify sections of the at least one ambient channel or of the input signal, which sections may be, for example, temporal sections, frequency sections, or components of an orthogonal decomposition.
- The sections corresponding to those detected by the speech detector are modified such that the signal modifier, as illustrated, generates the modified ambient channel 21 or the modified input signal 20b, in which a speech portion is attenuated or eliminated, while the speech portion in the corresponding section of the direct channel is attenuated less or, at best, not at all.
- Furthermore, the apparatus shown in Fig. 1 comprises a loudspeaker signal output means 22 for outputting loudspeaker signals in a reproduction scenario, such as the 5.1 scenario shown by way of example in Fig. 1; a 7.1 scenario, a 3.0 scenario or another, even higher scenario is also possible.
- For this purpose, the at least one direct channel and the at least one modified ambient channel are used, where the modified ambient channel may originate either from the signal modifier 20, as shown at 21, or from the upmixer 14, as shown at 16'.
- For example, the two modified ambient channels 21 could be fed directly into the two loudspeaker signals Ls, Rs, while the direct channels are fed only into the three front loudspeakers L, R, C, so that a complete separation between ambient signal components and direct signal components is achieved.
- The direct signal components are then positioned entirely in front of the user, and the ambient signal components entirely behind the user.
- However, ambient signal components can typically also be introduced, to a smaller percentage, into the front channels, so that, e.g., the direct/ambient scenario shown in Fig. 5b results, in which ambient signals are output not only by the surround channels but also by the front speakers L, C, R.
- In the in-band scenario shown in Fig. 5c, ambient signal components are likewise mainly output by the front speakers, e.g. L, R, C, but direct signal components are also at least partially fed into the two rear speakers Ls, Rs.
- In Fig. 5c, the proportion of the source 1100 in the speaker L will be approximately as large as in the speaker Ls, so that, according to a typical panning rule, the source 1100 can be placed midway between L and Ls. A sketch of such a panning rule is given below.
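- The panning rule referred to above can be illustrated by a constant-power pan between a speaker pair, e.g. L and Ls; this is a generic sketch, not a rule taken from the patent.

```python
# Constant-power panning between two speakers: equal gains (position = 0.5)
# place the phantom source midway between the pair, e.g. between L and Ls.
import numpy as np

def pan_pair(source, position):
    """position: 0.0 = first speaker only, 1.0 = second only, 0.5 = midway."""
    theta = position * np.pi / 2
    g1, g2 = np.cos(theta), np.sin(theta)    # g1**2 + g2**2 = 1, constant power
    return g1 * source, g2 * source

# e.g. sig_L, sig_Ls = pan_pair(drum, 0.5)  # source 1100 midway between L, Ls
```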
- The loudspeaker signal output device 22 can thus directly forward a channel fed to it on the input side, or it can map the ambient channels and the direct channels, for example according to an in-band concept or a direct/ambient concept, such that a distribution of the channels to the individual loudspeakers takes place and, finally, a summation of the components from the individual channels can take place in order to produce the actual loudspeaker signal.
- Fig. 2 shows a time / frequency division of an analysis signal in the upper portion and an ambient channel or input signal in a lower portion.
- the time is plotted along the horizontal axis and the frequency is plotted along the vertical axis.
- If the speech detector 18 detects a speech signal in a section 22, the signal modifier 20 processes the corresponding section of the ambient channel/input signal in some way, e.g. attenuates it, eliminates it completely, or substitutes a synthesis signal that has no speech property.
- The division need not be as selective as shown in Fig. 2. Instead, even a purely temporal detection can already provide a satisfactory effect: a certain time section of the analysis signal, for example from second 2 to second 2.1, is detected as containing speech, and then the corresponding section of the ambient channel or of the input signal between second 2 and second 2.1 is processed in order to achieve speech suppression.
- Alternatively, an orthogonal decomposition can be performed, e.g. by means of a principal component analysis, in which case the same component decomposition is used both in the ambient channel or input signal and in the analysis signal. Certain components that have been detected as speech components in the analysis signal are then attenuated or completely suppressed or eliminated in the ambient channel or input signal.
- Depending on the implementation, a section is detected in the analysis signal, and this section is not necessarily processed in the analysis signal itself, but possibly in another signal.
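- The section-wise modification can be sketched as follows, assuming the detector delivers a boolean time/frequency mask aligned with the STFT of the signal to be modified; the mask format and the attenuation value are assumptions.

```python
# Attenuate those time/frequency sections of the ambient channel (or input
# signal) that correspond to sections flagged as speech in the analysis signal.
import numpy as np
from scipy.signal import stft, istft

def modify_sections(signal, speech_mask, fs, n_fft=1024, atten_db=-12.0):
    _, _, S = stft(signal, fs, nperseg=n_fft)
    # speech_mask: boolean array of the same shape as S (assumed interface)
    gain = np.where(speech_mask, 10.0 ** (atten_db / 20.0), 1.0)
    _, out = istft(S * gain, fs, nperseg=n_fft)
    return out
```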
- Fig. 3 shows an implementation of a speech detector in cooperation with an ambient channel modifier, wherein the speech detector provides only time information; that is, considering Fig. 2, it identifies only broadband the first, second, third, fourth or fifth time section and communicates this information to the ambient channel modifier 20 via a control line 18d (Fig. 1).
- The speech detector 18 and the ambient channel modifier 20, operating synchronously or in buffered fashion, together achieve that the speech signal is attenuated in the signal to be modified, which may be, for example, the signal 12 or the signal 16, while ensuring that such an attenuation does not occur, or occurs only to a lesser extent, in the corresponding section of the direct channel.
- The direct signal thus obtained is then supplied to the output device 22 without any further processing, while the ambient signal is processed with regard to speech suppression.
- Alternatively, the upmixer 14 may, so to speak, operate twice, extracting the direct channel component on the basis of the original input signal, but extracting the modified ambient channel 16' on the basis of the modified input signal 20b.
- In this case, the same upmix algorithm would run twice, but with different input signals: in one input signal the speech component is attenuated, and in the other it is not.
- Depending on the implementation, the ambient channel modifier has a broadband attenuation functionality or a high-pass filtering functionality, as set forth below.
- Various implementations of the device according to the invention will be explained below with reference to Figs. 6a, 6b, 6c and 6d.
- In Fig. 6a, the ambient signal a is extracted from the input signal x, this extraction being part of the functionality of the upmixer 14.
- The occurrence of speech is detected in the ambient signal a.
- The detection result d is used in the ambient channel modifier 20, which computes the modified ambient signal 21, in which speech components are suppressed.
- Fig. 6b shows a configuration that differs from Fig. 6a in that the input signal, rather than the ambient signal, is supplied to the speech detector 18 as the analysis signal 18a.
- The modified ambient channel signal a_s is calculated similarly to the configuration in Fig. 6a, but the speech is detected in the input signal. This is motivated by the fact that the speech components can generally be found more easily in the input signal x than in the ambient signal a. Thus, a higher reliability can be achieved by the configuration shown in Fig. 6b.
- In Fig. 6c, the speech-modified ambient signal a_s is extracted from a version x_s of the input signal which has already been subjected to speech signal suppression. Since the speech components typically emerge more prominently in x than in an extracted ambient signal, their suppression there is safer and more sustainable than in Fig. 6a.
- The disadvantage of the configuration shown in Fig. 6c compared to the configuration in Fig. 6a is that possible artifacts of the speech suppression could be further amplified by the ambience extraction process, depending on the type of extraction process.
- In this configuration, the functionality of the ambient channel extractor 14 is thus only used to extract the ambient channel from the modified audio signal.
- The direct channel, however, is not extracted from the modified audio signal x_s (20b), but on the basis of the original input signal x (12).
- In Fig. 6d, the ambient signal a is extracted from the input signal x by the upmixer.
- the occurrence of speech is detected in the input signal x.
- In addition, side information e, which additionally controls the functionality of the ambient channel modifier 20, is calculated by a speech analyzer 30. This side information is calculated directly from the input signal and may be the location of speech components in a time/frequency representation, for example in the form of a spectrogram as in Fig. 2, or may be additional information, which will be discussed in more detail below.
- the functionality of the speech detector 18 will be discussed in greater detail below.
- The task of speech detection is to analyze a mixture of audio signals in order to estimate a likelihood that speech is present.
- The input signal may be composed of a variety of different types of audio signals, such as a music signal, noise, or special sound effects as known from movies.
- One way to detect speech is to use a pattern recognition system. Pattern recognition is understood to mean analyzing raw data and performing special processing based on the category of a pattern discovered in the raw data. In particular, the term "pattern" describes an underlying similarity that can be found between the measurements of objects of the same categories (classes).
- The basic operations of a pattern recognition system are capture, i.e. recording the data using a transducer, preprocessing, feature extraction and classification, where these basic operations can be performed in the given order.
- microphones are used as sensors for a speech capture system.
- Preprocessing may include A/D conversion, resampling, or noise reduction.
- Feature extraction is the calculation of characteristic features for each object from the measurements. The features are chosen to be similar among objects of the same class, so that good intra-class compactness is achieved, and to differ between objects of different classes, so that inter-class separability is achieved.
- A third requirement is that the features should be robust with respect to noise, ambient conditions, and transformations of the input signal that are irrelevant to human perception.
- Feature extraction can be split into two stages. The first stage is the feature calculation; the second stage is the projection or transformation of the features onto a generally orthogonal basis in order to minimize the correlation between feature vectors and to reduce the dimensionality of the features by not using low-energy elements.
- Classification is the process of deciding whether speech is present or not, based on the extracted features and a trained classifier. For this purpose, a set of training vectors Ω_{X×Y} is defined, where the feature vectors are denoted by x_i and the set of classes by Y. In the present case, Y has two values, namely {speech, non-speech}.
- In the training phase, the features x_i are calculated from labeled data, i.e. from audio signals whose class y is known.
- After completing the training, the classifier has learned the characteristics of all classes.
- In the application phase, the features are computed from the unknown data as in the training phase, projected, and classified by the classifier on the basis of the knowledge about the class characteristics gained during training.
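- A minimal sketch of these training and classification stages is given below; the choice of scaler, projection and classifier is an assumption, since the text does not prescribe a particular classifier.

```python
# Train/apply sketch for the two-class problem Y = {speech, non-speech}.
# X_train: feature vectors x_i computed from labeled audio; y_train: labels.
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.svm import SVC

clf = make_pipeline(
    StandardScaler(),
    PCA(n_components=0.95),   # feature projection onto an orthogonal basis
    SVC(),                    # any two-class classifier would do here
)
# Training phase:        clf.fit(X_train, y_train)
# Classification phase:  y_hat = clf.predict(X_unknown)  # same features
```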
- Furthermore, approaches to speech enhancement and noise reduction are known which attenuate or amplify the coefficients of a time/frequency representation according to an estimate of the degree of noise contained in each time/frequency coefficient. Such a noise estimate in the time/frequency representation is obtained from a noisy measurement using, for example, special minimum statistics techniques.
- A noise suppression rule then calculates an attenuation factor using the noise estimate. This principle is known as short-term spectral attenuation or spectral weighting, as described, for example, in G. Schmidt, "Single-channel noise suppression based on spectral weighting", Eurasip Newsletter 2004.
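- As a sketch of such a spectral weighting rule, assuming the per-bin noise power estimate is already available (e.g. from minimum statistics), a Wiener-style variant could look as follows; the exact rule and the gain floor are assumptions.

```python
# Short-term spectral attenuation: per-bin gains computed from a noise estimate.
import numpy as np
from scipy.signal import stft, istft

def spectral_weighting(x, noise_psd, fs, n_fft=1024, floor=0.1):
    _, _, X = stft(x, fs, nperseg=n_fft)
    # noise_psd: array with one power value per frequency bin (assumed given)
    snr = np.maximum(np.abs(X) ** 2 / (noise_psd[:, None] + 1e-12) - 1.0, 0.0)
    gain = np.maximum(snr / (1.0 + snr), floor)   # Wiener-style weighting rule
    _, y = istft(gain * X, fs, nperseg=n_fft)
    return y
```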
- However, speech enhancement and noise reduction techniques can introduce audible artifacts into the output signal.
- An example of such an artifact is known as musical noise or musical tones; it results from erroneous estimation of the noise floor and fluctuating subband attenuation factors.
- Alternatively, blind source separation techniques may be used to separate the speech signal components from the ambient signal and then manipulate the two separately.
- One method consists in broadband attenuation, as indicated at 20 in Fig. 3. The audio signal is attenuated in the intervals where speech is present.
- Suitable attenuation factors range between -12 dB and -3 dB, with a preferred attenuation of 6 dB. Since other signal components are suppressed to the same extent, one might expect the total loss of audio signal energy to be clearly perceptible.
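- In its simplest form, such a broadband attenuation reduces to a single gain applied during speech intervals; the per-sample flag interface below is an assumption.

```python
# Broadband attenuation: apply a fixed gain (-6 dB, the preferred value above)
# to the ambient signal wherever the speech detector reports speech.
import numpy as np

def broadband_attenuate(ambience, speech_active, atten_db=-6.0):
    # speech_active: boolean array with one flag per sample (assumed interface)
    g = 10.0 ** (atten_db / 20.0)
    return np.where(speech_active, g * ambience, ambience)
```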
- An alternative method, also indicated at 20 in Fig. 3, consists in high-pass filtering. The audio signal is high-pass filtered where speech is present, with a cutoff frequency in the range between 600 Hz and 3000 Hz.
- This choice of the cutoff frequency results from the signal characteristics of speech that are relevant to the present invention.
- The long-term power spectrum of a speech signal is concentrated in the region below 2.5 kHz.
- the preferred range of the fundamental frequency of voiced speech is in the range between 75 Hz and 330 Hz.
- a range between 60 Hz and 250 Hz results for male adults.
- Mean values are 120 Hz for male speakers and 215 Hz for female speakers. Due to the resonances in the vocal tract certain signal frequencies are amplified.
- Furthermore, speech exhibits a 1/F nature, i.e. the spectral energy decreases with increasing frequency. Therefore, for the purposes of the present invention, speech components can be effectively suppressed by high-pass filtering with a cutoff frequency in the specified range.
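- A sketch of this high-pass variant follows; the Butterworth design and the filter order are assumptions, while the cutoff follows the 600 Hz to 3000 Hz range given above.

```python
# High-pass filter the ambient signal in segments where speech is present.
from scipy.signal import butter, sosfilt

def highpass_suppress(segment, fs, cutoff_hz=1200.0, order=4):
    sos = butter(order, cutoff_hz, btype='highpass', fs=fs, output='sos')
    return sosfilt(sos, segment)   # apply only to speech-active segments
```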
- In a first step 40, the fundamental wave of a speech signal is detected; this detection may take place in the speech detector 18 or, as shown in Fig. 6d, in the speech analyzer 30.
- In a step 41, an examination is performed in order to find the harmonics belonging to the fundamental wave.
- This functionality can be performed in the speech detector / speech analyzer or even in the ambient signal modifier.
- A spectrogram is calculated for the ambient signal on the basis of a block-wise forward transform, as set forth at 42.
- the actual speech suppression is performed in a step 43, in which the fundamental wave and the harmonics are attenuated in the spectrogram.
- Finally, the modified ambient signal, in which the fundamental wave and the harmonics are attenuated or eliminated, is inverse-transformed in order to obtain the modified ambient signal or the modified input signal.
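- Steps 40 to 43 can be sketched as below, assuming a per-frame fundamental frequency estimate is already available (step 40) and attenuating a fixed band around each harmonic; the bandwidth and attenuation values are illustrative.

```python
# Attenuate the fundamental wave and its harmonics in the spectrogram.
import numpy as np
from scipy.signal import stft, istft

def suppress_harmonics(a, fs, f0_per_frame, n_fft=2048, bw_hz=40.0, atten=0.1):
    f, _, A = stft(a, fs, nperseg=n_fft)          # step 42: forward transform
    for n, f0 in enumerate(f0_per_frame):         # one f0 estimate per frame
        if f0 <= 0:
            continue                              # frame without tonal speech
        for k in range(1, int(fs / 2 / f0) + 1):  # step 41: harmonics of f0
            bins = np.abs(f - k * f0) < bw_hz / 2
            A[bins, n] *= atten                   # step 43: attenuate partials
    _, out = istft(A, fs, nperseg=n_fft)          # inverse transform
    return out
```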
- The method just described relies on sinusoidal signal modeling, which is often used for tone synthesis, audio coding, source separation, tone manipulation and noise suppression.
- a signal is represented as a composition of sine waves with time-varying amplitudes and frequencies.
- Tonal speech signal components are manipulated by identifying and modifying the partial tones, i.e. the fundamental wave and its harmonics.
- the partial tones are identified by means of a partial tone finder, as shown at 41.
- Partial tone finding is performed in the time / frequency domain.
- Typically, a spectrogram is computed by means of a short-time Fourier transform, as indicated at 42. Local maxima in each spectrum of the spectrogram are detected, and trajectories are determined from local maxima of neighboring spectra.
- An estimate of the fundamental frequency may support the peak picking process, where this estimate of the fundamental frequency is performed at 40.
- A sinusoidal signal is then obtained from the trajectories. It should be noted that the order of steps 40, 41 and 42 can also be varied, so that the forward transform 42, which takes place in the speech analyzer 30 of Fig. 6d, is performed first.
- In speech enhancement, an improved speech signal is obtained by amplifying the sinusoidal components.
- The speech suppression according to the invention aims to achieve exactly the opposite, namely to suppress the partial tones, the partial tones comprising the fundamental wave and its harmonics, for a speech segment containing tonal speech.
- Typically, the high-energy speech components are tonal.
- speech is spoken at a level of 60-75 dB for vowels and about 20-30 dB lower for consonants.
- For voiced speech, the excitation is a periodic, pulse-like signal.
- The excitation signal is filtered by the vocal tract. Consequently, almost all the energy of a tonal speech segment is concentrated in the fundamental wave and its harmonics.
- FIGS. 7 and 8 illustrate the basic principle of short-term spectral attenuation or spectral weighting.
- The illustrated method estimates the amount of speech contained in a time/frequency tile using so-called low-level features that provide a measure of the "speech-likeness" of a signal in a particular frequency band.
- Low-level features are "low-level" with regard to the interpretation of their meaning and the cost of their computation.
- The audio signal is decomposed into a number of frequency bands by means of a filter bank or a short-time Fourier transform, as shown at 70 in Fig. 7.
- Time-varying gains for all subbands are then computed from such low-level features in order to attenuate the subband signals in proportion to the amount of speech they contain.
- Suitable low-level features are the spectral flatness measure (SFM) and the 4 Hz modulation energy (4HzME).
- The SFM measures the degree of tonality of an audio signal and, for a band, results as the quotient of the geometric mean and the arithmetic mean of all spectral values in the band. The 4HzME is motivated by the fact that speech has a characteristic energy modulation peak at about 4 Hz, corresponding to the typical syllable rate.
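- Both features can be sketched per band as follows; the SFM is the quotient given above, while the concrete 4HzME formulation (here: envelope energy near 4 Hz relative to total envelope energy) is an assumption.

```python
# Low-level features for one frequency band.
import numpy as np

def sfm(band_mag, eps=1e-12):
    """Geometric mean of the spectral values divided by their arithmetic mean."""
    return np.exp(np.log(band_mag + eps).mean()) / (band_mag.mean() + eps)

def mod4hz_energy(band_env, frame_rate):
    """Relative energy of the band envelope around 4 Hz (illustrative window)."""
    spec = np.abs(np.fft.rfft(band_env - band_env.mean())) ** 2
    freqs = np.fft.rfftfreq(len(band_env), d=1.0 / frame_rate)
    near_4hz = (freqs > 2.0) & (freqs < 8.0)
    return spec[near_4hz].sum() / (spec.sum() + 1e-12)
```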
- Fig. 8 shows a more detailed illustration of the gain calculation blocks 71a and 71b of Fig. 7.
- On the basis of a subband signal, a plurality of different low-level features, i.e. LLF1, ..., LLFn, are calculated. These features are then combined in a combiner 80 in order to arrive at a gain g_i for a subband.
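- The combiner 80 could, for example, map a weighted sum of the low-level features to a subband gain; the linear combination and the gain floor below are assumptions, since the text leaves the combination rule open.

```python
# Combine the low-level features LLF1..LLFn into a gain g_i for one subband.
import numpy as np

def subband_gain(features, weights, min_gain=0.25):
    # features, weights: 1-D arrays; a higher score means "more speech-like"
    speech_score = np.clip(np.dot(features, weights), 0.0, 1.0)
    return 1.0 - (1.0 - min_gain) * speech_score  # gain shrinks as speech rises
```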
- the method according to the invention can be implemented in hardware or in software.
- the implementation may be on a digital storage medium, in particular a floppy disk or CD with electronically readable control signals, which may interact with a programmable computer system such that the method is performed.
- The invention thus also consists in a computer program product with a program code, stored on a machine-readable carrier, for performing the method according to the invention when the computer program product runs on a computer.
- the invention can thus be realized as a computer program with a program code for carrying out the method when the computer program runs on a computer.
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- Acoustics & Sound (AREA)
- Signal Processing (AREA)
- Health & Medical Sciences (AREA)
- Quality & Reliability (AREA)
- Computational Linguistics (AREA)
- Audiology, Speech & Language Pathology (AREA)
- Human Computer Interaction (AREA)
- Multimedia (AREA)
- Stereophonic System (AREA)
- Stereo-Broadcasting Methods (AREA)
Description
Claims
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
PL08802737T PL2206113T3 (pl) | 2007-10-12 | 2008-10-01 | Urządzenie i sposób wytwarzania sygnału wielokanałowego z przetwarzaniem sygnałów mowy |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
DE102007048973A DE102007048973B4 (de) | 2007-10-12 | 2007-10-12 | Vorrichtung und Verfahren zum Erzeugen eines Multikanalsignals mit einer Sprachsignalverarbeitung |
PCT/EP2008/008324 WO2009049773A1 (de) | 2007-10-12 | 2008-10-01 | Vorrichtung und verfahren zum erzeugen eines multikanalsignals mit einer sprachsignalverarbeitung |
Publications (2)
Publication Number | Publication Date |
---|---|
EP2206113A1 true EP2206113A1 (de) | 2010-07-14 |
EP2206113B1 EP2206113B1 (de) | 2011-04-27 |
Family
ID=40032822
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
EP08802737A Active EP2206113B1 (de) | 2007-10-12 | 2008-10-01 | Vorrichtung und verfahren zum erzeugen eines multikanalsignals mit einer sprachsignalverarbeitung |
Country Status (16)
Country | Link |
---|---|
US (1) | US8731209B2 (de) |
EP (1) | EP2206113B1 (de) |
JP (1) | JP5149968B2 (de) |
KR (1) | KR101100610B1 (de) |
CN (1) | CN101842834B (de) |
AT (1) | ATE507555T1 (de) |
AU (1) | AU2008314183B2 (de) |
BR (1) | BRPI0816638B1 (de) |
CA (1) | CA2700911C (de) |
DE (2) | DE102007048973B4 (de) |
ES (1) | ES2364888T3 (de) |
HK (1) | HK1146424A1 (de) |
MX (1) | MX2010003854A (de) |
PL (1) | PL2206113T3 (de) |
RU (1) | RU2461144C2 (de) |
WO (1) | WO2009049773A1 (de) |
Families Citing this family (42)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP5577787B2 (ja) | 2009-05-14 | 2014-08-27 | ヤマハ株式会社 | 信号処理装置 |
US20110078224A1 (en) * | 2009-09-30 | 2011-03-31 | Wilson Kevin W | Nonlinear Dimensionality Reduction of Spectrograms |
TWI459828B (zh) * | 2010-03-08 | 2014-11-01 | Dolby Lab Licensing Corp | 在多頻道音訊中決定語音相關頻道的音量降低比例的方法及系統 |
JP5299327B2 (ja) * | 2010-03-17 | 2013-09-25 | ソニー株式会社 | 音声処理装置、音声処理方法、およびプログラム |
WO2011121782A1 (ja) * | 2010-03-31 | 2011-10-06 | 富士通株式会社 | 帯域拡張装置および帯域拡張方法 |
EP2581904B1 (de) * | 2010-06-11 | 2015-10-07 | Panasonic Intellectual Property Corporation of America | Gerät und Verfahren zur Audiokodierung/-dekodierung |
EP2661746B1 (de) * | 2011-01-05 | 2018-08-01 | Nokia Technologies Oy | Mehrkanalige kodierung und/oder dekodierung |
EP2523473A1 (de) | 2011-05-11 | 2012-11-14 | Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. | Vorrichtung und Verfahren zur Erzeugung eines Ausgabesignals mithilfe einer Dekompositionsvorrichtung |
JP5057535B1 (ja) | 2011-08-31 | 2012-10-24 | 国立大学法人電気通信大学 | ミキシング装置、ミキシング信号処理装置、ミキシングプログラム及びミキシング方法 |
KR101803293B1 (ko) | 2011-09-09 | 2017-12-01 | 삼성전자주식회사 | 입체 음향 효과를 제공하는 신호 처리 장치 및 신호 처리 방법 |
US9280984B2 (en) | 2012-05-14 | 2016-03-08 | Htc Corporation | Noise cancellation method |
MY181365A (en) * | 2012-09-12 | 2020-12-21 | Fraunhofer Ges Forschung | Apparatus and method for providing enhanced guided downmix capabilities for 3d audio |
JP6054142B2 (ja) * | 2012-10-31 | 2016-12-27 | 株式会社東芝 | 信号処理装置、方法およびプログラム |
WO2014112792A1 (ko) * | 2013-01-15 | 2014-07-24 | 한국전자통신연구원 | 사운드 바를 위한 오디오 신호 처리 장치 및 방법 |
SG11201507066PA (en) * | 2013-03-05 | 2015-10-29 | Fraunhofer Ges Forschung | Apparatus and method for multichannel direct-ambient decomposition for audio signal processing |
EP2830061A1 (de) | 2013-07-22 | 2015-01-28 | Fraunhofer Gesellschaft zur Förderung der angewandten Forschung e.V. | Vorrichtung und Verfahren zur Codierung und Decodierung eines codierten Audiosignals unter Verwendung von zeitlicher Rausch-/Patch-Formung |
EP3503095A1 (de) | 2013-08-28 | 2019-06-26 | Dolby Laboratories Licensing Corp. | Hybride wellenformcodierte und parametercodierte spracherweiterung |
EP2866227A1 (de) | 2013-10-22 | 2015-04-29 | Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. | Verfahren zur Dekodierung und Kodierung einer Downmix-Matrix, Verfahren zur Darstellung von Audioinhalt, Kodierer und Dekodierer für eine Downmix-Matrix, Audiokodierer und Audiodekodierer |
US10176818B2 (en) * | 2013-11-15 | 2019-01-08 | Adobe Inc. | Sound processing using a product-of-filters model |
KR101808810B1 (ko) * | 2013-11-27 | 2017-12-14 | 한국전자통신연구원 | 음성/무음성 구간 검출 방법 및 장치 |
CN104683933A (zh) | 2013-11-29 | 2015-06-03 | 杜比实验室特许公司 | 音频对象提取 |
KR101841380B1 (ko) * | 2014-01-13 | 2018-03-22 | 노키아 테크놀로지스 오와이 | 다중-채널 오디오 신호 분류기 |
JP6274872B2 (ja) * | 2014-01-21 | 2018-02-07 | キヤノン株式会社 | 音処理装置、音処理方法 |
WO2016019130A1 (en) * | 2014-08-01 | 2016-02-04 | Borne Steven Jay | Audio device |
US20160071524A1 (en) * | 2014-09-09 | 2016-03-10 | Nokia Corporation | Audio Modification for Multimedia Reversal |
CN104409080B (zh) * | 2014-12-15 | 2018-09-18 | 北京国双科技有限公司 | 语音端点检测方法和装置 |
PL3257270T3 (pl) * | 2015-03-27 | 2019-07-31 | Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. | Urządzenie i sposób przetwarzania sygnałów stereo do odtwarzania w samochodach dla uzyskania indywidualnego dźwięku trójwymiarowego przez przednie głośniki |
CN106205628B (zh) * | 2015-05-06 | 2018-11-02 | 小米科技有限责任公司 | 声音信号优化方法及装置 |
WO2017136573A1 (en) * | 2016-02-02 | 2017-08-10 | Dts, Inc. | Augmented reality headphone environment rendering |
WO2017202680A1 (en) * | 2016-05-26 | 2017-11-30 | Telefonaktiebolaget Lm Ericsson (Publ) | Method and apparatus for voice or sound activity detection for spatial audio |
CN109416912B (zh) * | 2016-06-30 | 2023-04-11 | 杜塞尔多夫华为技术有限公司 | 一种对多声道音频信号进行编码和解码的装置和方法 |
CN106412792B (zh) * | 2016-09-05 | 2018-10-30 | 上海艺瓣文化传播有限公司 | 对原立体声文件重新进行空间化处理并合成的系统及方法 |
CA3179080A1 (en) * | 2016-09-19 | 2018-03-22 | Pindrop Security, Inc. | Channel-compensated low-level features for speaker recognition |
EP3382702A1 (de) * | 2017-03-31 | 2018-10-03 | Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. | Vorrichtung und verfahren zur bestimmung einer im voraus bestimmten eigenschaft bezüglich der künstlichen bandbreitenbeschränkungsverarbeitung eines audiosignals |
US9820073B1 (en) | 2017-05-10 | 2017-11-14 | Tls Corp. | Extracting a common signal from multiple audio signals |
BR112020010819A2 (pt) | 2017-12-18 | 2020-11-10 | Dolby International Ab | método e sistema para tratar transições locais entre posições de escuta em um ambiente de realidade virtual |
US11019201B2 (en) | 2019-02-06 | 2021-05-25 | Pindrop Security, Inc. | Systems and methods of gateway detection in a telephone network |
US12015637B2 (en) | 2019-04-08 | 2024-06-18 | Pindrop Security, Inc. | Systems and methods for end-to-end architectures for voice spoofing detection |
US20230215456A1 (en) * | 2019-12-31 | 2023-07-06 | Brainsoft Inc. | Sound processing method using dj transform |
KR102164306B1 (ko) * | 2019-12-31 | 2020-10-12 | 브레인소프트주식회사 | 디제이변환에 기초한 기본주파수 추출 방법 |
CN111654745B (zh) * | 2020-06-08 | 2022-10-14 | 海信视像科技股份有限公司 | 多声道的信号处理方法及显示设备 |
CN114630057B (zh) * | 2022-03-11 | 2024-01-30 | 北京字跳网络技术有限公司 | 确定特效视频的方法、装置、电子设备及存储介质 |
Family Cites Families (24)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPH03236691A (ja) | 1990-02-14 | 1991-10-22 | Hitachi Ltd | テレビジョン受信機用音声回路 |
JPH07110696A (ja) | 1993-10-12 | 1995-04-25 | Mitsubishi Electric Corp | 音声再生装置 |
JP3412209B2 (ja) | 1993-10-22 | 2003-06-03 | 日本ビクター株式会社 | 音響信号処理装置 |
CN1116737C (zh) | 1998-04-14 | 2003-07-30 | 听觉增强有限公司 | 用户可调节的适应听力的音量控制 |
US6928169B1 (en) * | 1998-12-24 | 2005-08-09 | Bose Corporation | Audio signal processing |
JP2001069597A (ja) | 1999-06-22 | 2001-03-16 | Yamaha Corp | 音声処理方法及び装置 |
FR2797343B1 (fr) * | 1999-08-04 | 2001-10-05 | Matra Nortel Communications | Procede et dispositif de detection d'activite vocale |
JP4463905B2 (ja) | 1999-09-28 | 2010-05-19 | 隆行 荒井 | 音声処理方法、装置及び拡声システム |
US6351733B1 (en) * | 2000-03-02 | 2002-02-26 | Hearing Enhancement Company, Llc | Method and apparatus for accommodating primary content audio and secondary content remaining audio capability in the digital audio production process |
US7177808B2 (en) * | 2000-11-29 | 2007-02-13 | The United States Of America As Represented By The Secretary Of The Air Force | Method for improving speaker identification by determining usable speech |
US20040086130A1 (en) * | 2002-05-03 | 2004-05-06 | Eid Bradley F. | Multi-channel sound processing systems |
US7567845B1 (en) * | 2002-06-04 | 2009-07-28 | Creative Technology Ltd | Ambience generation for stereo signals |
US7257231B1 (en) * | 2002-06-04 | 2007-08-14 | Creative Technology Ltd. | Stream segregation for stereo signals |
EP1618763B1 (de) * | 2003-04-17 | 2007-02-28 | Koninklijke Philips Electronics N.V. | Audiosignalsynthese |
WO2004093494A1 (en) | 2003-04-17 | 2004-10-28 | Koninklijke Philips Electronics N.V. | Audio signal generation |
SE0400997D0 (sv) * | 2004-04-16 | 2004-04-16 | Cooding Technologies Sweden Ab | Efficient coding of multi-channel audio |
SE0400998D0 (sv) | 2004-04-16 | 2004-04-16 | Cooding Technologies Sweden Ab | Method for representing multi-channel audio signals |
SE0402652D0 (sv) | 2004-11-02 | 2004-11-02 | Coding Tech Ab | Methods for improved performance of prediction based multi- channel reconstruction |
JP2007028065A (ja) | 2005-07-14 | 2007-02-01 | Victor Co Of Japan Ltd | サラウンド再生装置 |
WO2007034806A1 (ja) | 2005-09-22 | 2007-03-29 | Pioneer Corporation | 信号処理装置、信号処理方法、信号処理プログラムおよびコンピュータに読み取り可能な記録媒体 |
JP4940671B2 (ja) * | 2006-01-26 | 2012-05-30 | ソニー株式会社 | オーディオ信号処理装置、オーディオ信号処理方法及びオーディオ信号処理プログラム |
WO2007096792A1 (en) * | 2006-02-22 | 2007-08-30 | Koninklijke Philips Electronics N.V. | Device for and a method of processing audio data |
KR100773560B1 (ko) | 2006-03-06 | 2007-11-05 | 삼성전자주식회사 | 스테레오 신호 생성 방법 및 장치 |
DE102006017280A1 (de) * | 2006-04-12 | 2007-10-18 | Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. | Vorrichtung und Verfahren zum Erzeugen eines Umgebungssignals |
-
2007
- 2007-10-12 DE DE102007048973A patent/DE102007048973B4/de active Active
-
2008
- 2008-10-01 AU AU2008314183A patent/AU2008314183B2/en active Active
- 2008-10-01 JP JP2010528297A patent/JP5149968B2/ja active Active
- 2008-10-01 BR BRPI0816638-2A patent/BRPI0816638B1/pt active IP Right Grant
- 2008-10-01 AT AT08802737T patent/ATE507555T1/de active
- 2008-10-01 CA CA2700911A patent/CA2700911C/en active Active
- 2008-10-01 CN CN2008801112350A patent/CN101842834B/zh active Active
- 2008-10-01 KR KR1020107007771A patent/KR101100610B1/ko active IP Right Grant
- 2008-10-01 WO PCT/EP2008/008324 patent/WO2009049773A1/de active Application Filing
- 2008-10-01 PL PL08802737T patent/PL2206113T3/pl unknown
- 2008-10-01 RU RU2010112890/08A patent/RU2461144C2/ru active
- 2008-10-01 US US12/681,809 patent/US8731209B2/en active Active
- 2008-10-01 EP EP08802737A patent/EP2206113B1/de active Active
- 2008-10-01 DE DE502008003378T patent/DE502008003378D1/de active Active
- 2008-10-01 ES ES08802737T patent/ES2364888T3/es active Active
- 2008-10-01 MX MX2010003854A patent/MX2010003854A/es active IP Right Grant
-
2011
- 2011-01-12 HK HK11100278.0A patent/HK1146424A1/xx unknown
Non-Patent Citations (1)
Title |
---|
See references of WO2009049773A1 * |
Also Published As
Publication number | Publication date |
---|---|
AU2008314183B2 (en) | 2011-03-31 |
AU2008314183A1 (en) | 2009-04-23 |
ATE507555T1 (de) | 2011-05-15 |
KR101100610B1 (ko) | 2011-12-29 |
DE502008003378D1 (de) | 2011-06-09 |
US20100232619A1 (en) | 2010-09-16 |
CA2700911A1 (en) | 2009-04-23 |
BRPI0816638A2 (pt) | 2015-03-10 |
RU2461144C2 (ru) | 2012-09-10 |
CN101842834A (zh) | 2010-09-22 |
DE102007048973A1 (de) | 2009-04-16 |
BRPI0816638B1 (pt) | 2020-03-10 |
KR20100065372A (ko) | 2010-06-16 |
CN101842834B (zh) | 2012-08-08 |
US8731209B2 (en) | 2014-05-20 |
MX2010003854A (es) | 2010-04-27 |
RU2010112890A (ru) | 2011-11-20 |
EP2206113B1 (de) | 2011-04-27 |
WO2009049773A1 (de) | 2009-04-23 |
ES2364888T3 (es) | 2011-09-16 |
PL2206113T3 (pl) | 2011-09-30 |
HK1146424A1 (en) | 2011-06-03 |
JP2011501486A (ja) | 2011-01-06 |
CA2700911C (en) | 2014-08-26 |
DE102007048973B4 (de) | 2010-11-18 |
JP5149968B2 (ja) | 2013-02-20 |
Similar Documents
Publication | Title |
---|---|
EP2206113B1 (de) | Device and method for generating a multi-channel signal with speech signal processing |
DE102006050068B4 (de) | Device and method for generating an ambient signal from an audio signal, device and method for deriving a multi-channel audio signal from an audio signal, and computer program |
EP2402943B1 (de) | Device and method for generating an ambient signal |
DE602005005186T2 (de) | Method and system for sound source separation |
EP2064699B1 (de) | Method and device for extracting and changing the reverberant content of an input signal |
DE60311794T2 (de) | Signal synthesis |
EP1854334B1 (de) | Device and method for generating an encoded stereo signal of an audio piece or an audio data stream |
DE69827775T2 (de) | Audio channel mixing |
EP2730102B1 (de) | Method and device for decomposing a stereo recording by means of frequency-domain processing using a spectral weight generator |
EP2891334B1 (de) | Generating multi-channel sound from stereo audio signals |
RU2663345C2 (ru) | Device and method for scaling a center signal and improving stereophony based on a signal-to-downmix ratio |
EP2917908A1 (de) | Non-linear inverse coding of multi-channel signals |
DE10148351B4 (de) | Method and device for selecting a sound algorithm |
Lopatka et al. | Improving listeners' experience for movie playback through enhancing dialogue clarity in soundtracks |
DE102017121876A1 (de) | Method and device for format conversion of a multi-channel audio signal |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PUAI | Public reference made under article 153(3) epc to a published international application that has entered the european phase |
Free format text: ORIGINAL CODE: 0009012 |
|
17P | Request for examination filed |
Effective date: 20100407 |
|
AK | Designated contracting states |
Kind code of ref document: A1 Designated state(s): AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MT NL NO PL PT RO SE SI SK TR |
|
AX | Request for extension of the european patent |
Extension state: AL BA MK RS |
|
GRAP | Despatch of communication of intention to grant a patent |
Free format text: ORIGINAL CODE: EPIDOSNIGR1 |
|
DAX | Request for extension of the european patent (deleted) | ||
GRAS | Grant fee paid |
Free format text: ORIGINAL CODE: EPIDOSNIGR3 |
|
GRAA | (expected) grant |
Free format text: ORIGINAL CODE: 0009210 |
|
AK | Designated contracting states |
Kind code of ref document: B1 Designated state(s): AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MT NL NO PL PT RO SE SI SK TR |
|
REG | Reference to a national code |
Ref country code: GB Ref legal event code: FG4D Free format text: NOT ENGLISH |
|
REG | Reference to a national code |
Ref country code: CH Ref legal event code: EP |
|
REG | Reference to a national code |
Ref country code: IE Ref legal event code: FG4D Free format text: LANGUAGE OF EP DOCUMENT: GERMAN |
|
REG | Reference to a national code |
Ref country code: HK Ref legal event code: DE Ref document number: 1146424 Country of ref document: HK |
|
REF | Corresponds to: |
Ref document number: 502008003378 Country of ref document: DE Date of ref document: 20110609 Kind code of ref document: P |
|
REG | Reference to a national code |
Ref country code: DE Ref legal event code: R096 Ref document number: 502008003378 Country of ref document: DE Effective date: 20110609 |
|
REG | Reference to a national code |
Ref country code: NL Ref legal event code: T3 |
|
REG | Reference to a national code |
Ref country code: ES Ref legal event code: FG2A Ref document number: 2364888 Country of ref document: ES Kind code of ref document: T3 Effective date: 20110916 |
|
LTIE | Lt: invalidation of european patent or patent extension |
Effective date: 20110427 |
|
REG | Reference to a national code |
Ref country code: PL Ref legal event code: T3 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: NO Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20110727 Ref country code: LT Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20110427 Ref country code: PT Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20110829 Ref country code: SE Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20110427 Ref country code: HR Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20110427 |
|
REG | Reference to a national code |
Ref country code: IE Ref legal event code: FD4D |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: CY Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20110427 Ref country code: SI Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20110427 Ref country code: LV Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20110427 Ref country code: IS Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20110827 Ref country code: FI Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20110427 Ref country code: GR Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20110728 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: EE Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20110427 Ref country code: IE Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20110427 Ref country code: CZ Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20110427 |
|
REG | Reference to a national code |
Ref country code: HK Ref legal event code: GR Ref document number: 1146424 Country of ref document: HK |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: RO Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20110427 Ref country code: DK Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20110427 Ref country code: SK Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20110427 |
|
PLBE | No opposition filed within time limit |
Free format text: ORIGINAL CODE: 0009261 |
|
STAA | Information on the status of an ep patent application or granted ep patent |
Free format text: STATUS: NO OPPOSITION FILED WITHIN TIME LIMIT |
|
26N | No opposition filed |
Effective date: 20120130 |
|
BERE | Be: lapsed |
Owner name: FRAUNHOFER-GESELLSCHAFT ZUR FÖRDERUNG DER ANGEWANDTEN FORSCHUNG E.V. Effective date: 20111031 |
|
REG | Reference to a national code |
Ref country code: DE Ref legal event code: R097 Ref document number: 502008003378 Country of ref document: DE Effective date: 20120130 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: MC Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES Effective date: 20111031 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: BE Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES Effective date: 20111031 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: MT Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20110427 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: LU Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES Effective date: 20111001 |
|
REG | Reference to a national code |
Ref country code: CH Ref legal event code: PL |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: BG Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20110727 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: CH Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES Effective date: 20121031 Ref country code: LI Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES Effective date: 20121031 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: HU Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20110427 |
|
REG | Reference to a national code |
Ref country code: AT Ref legal event code: MM01 Ref document number: 507555 Country of ref document: AT Kind code of ref document: T Effective date: 20131001 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: AT Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES Effective date: 20131001 |
|
REG | Reference to a national code |
Ref country code: FR Ref legal event code: PLFP Year of fee payment: 8 |
|
REG | Reference to a national code |
Ref country code: FR Ref legal event code: PLFP Year of fee payment: 9 |
|
REG | Reference to a national code |
Ref country code: FR Ref legal event code: PLFP Year of fee payment: 10 |
|
REG | Reference to a national code |
Ref country code: FR Ref legal event code: PLFP Year of fee payment: 11 |
|
P01 | Opt-out of the competence of the unified patent court (upc) registered |
Effective date: 20230512 |
|
PGFP | Annual fee paid to national office [announced via postgrant information from national office to epo] |
Ref country code: TR Payment date: 20230919 Year of fee payment: 16 |
|
PGFP | Annual fee paid to national office [announced via postgrant information from national office to epo] |
Ref country code: PL Payment date: 20230919 Year of fee payment: 16 Ref country code: NL Payment date: 20231023 Year of fee payment: 16 |
|
PGFP | Annual fee paid to national office [announced via postgrant information from national office to epo] |
Ref country code: GB Payment date: 20231025 Year of fee payment: 16 |
|
PGFP | Annual fee paid to national office [announced via postgrant information from national office to epo] |
Ref country code: ES Payment date: 20231117 Year of fee payment: 16 |
|
PGFP | Annual fee paid to national office [announced via postgrant information from national office to epo] |
Ref country code: IT Payment date: 20231031 Year of fee payment: 16 Ref country code: FR Payment date: 20231023 Year of fee payment: 16 Ref country code: DE Payment date: 20231018 Year of fee payment: 16 |