US20130010983A1 - Device and method for manipulating an audio signal having a transient event
- Publication number
- US20130010983A1 (application US 13/465,958)
- Authority
- US
- United States
- Prior art keywords
- audio signal
- signal
- transient
- time
- transient event
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Abandoned
Classifications
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L21/00—Speech or voice signal processing techniques to produce another audible or non-audible signal, e.g. visual or tactile, in order to modify its quality or its intelligibility
- G10L21/04—Time compression or expansion
- G10L19/00—Speech or audio signals analysis-synthesis techniques for redundancy reduction, e.g. in vocoders; Coding or decoding of speech or audio signals, using source filter models or psychoacoustic analysis
- G10L19/02—Speech or audio signals analysis-synthesis techniques using spectral analysis, e.g. transform vocoders or subband vocoders
- G10L19/022—Blocking, i.e. grouping of samples in time; Choice of analysis windows; Overlap factoring
- G10L19/025—Detection of transients or attacks for time/frequency resolution switching
Definitions
- the present invention relates to audio signal processing and, particularly, to audio signal manipulation in the context of applying audio effects to a signal containing transient events.
- known manipulation methods comprise phase vocoders and (pitch-synchronous) overlap-add, (P)SOLA, as, for example, described in J. L. Flanagan and R. M. Golden, The Bell System Technical Journal, November 1966, pp. 1493 to 1509; U.S. Pat. No. 6,549,884 (Laroche, J. & Dolson, M.: Phase-vocoder pitch-shifting); Jean Laroche and Mark Dolson, "New Phase-Vocoder Techniques for Pitch-Shifting, Harmonizing And Other Exotic Effects", Proc.
- audio signals can be subjected to a transposition using such methods, i.e., phase vocoders or (P)SOLA. A special property of this kind of transposition is that the transposed audio signal has the same reproduction/replay length as the original audio signal before transposition, while the pitch is changed.
- This is obtained by an accelerated reproduction of the stretched signals where the acceleration factor for performing the accelerated reproduction depends on the stretching factor for stretching the original audio signal in time.
- this procedure corresponds to a down-sampling of the stretched signal or decimation of the stretched signal by a factor equal to the stretching factor where the sampling frequency is maintained.
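The relationship between stretching and decimation described above can be sketched numerically. This is an illustrative example with arbitrary values (8 kHz sampling rate, a 250 Hz tone, factor 2), not the patented implementation: an ideally time-stretched tone, decimated by the stretching factor while the sampling frequency is maintained, regains the original length with its pitch raised by that factor.

```python
import numpy as np

fs = 8000          # sampling rate in Hz, kept constant throughout
f0 = 250.0         # original tone frequency (arbitrary example value)
stretch = 2        # integer stretching factor

# Ideal time-stretched version of the tone: same pitch, twice the duration.
stretched = np.sin(2 * np.pi * f0 * np.arange(stretch * fs) / fs)

# Accelerated reproduction = decimation by the stretching factor while
# the sampling frequency is maintained: original length, raised pitch.
transposed = stretched[::stretch]

# Locate the dominant frequency of the decimated tone via an FFT peak.
spectrum = np.abs(np.fft.rfft(transposed))
peak_hz = np.argmax(spectrum) * fs / len(transposed)
print(peak_hz)     # 500.0: the pitch moved up by the stretching factor
```

A real time stretcher (phase vocoder or (P)SOLA) replaces the idealized `stretched` signal here; the decimation step is the same.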
- Transient events are events in a signal in which the energy of the signal in the whole band or in a certain frequency range is rapidly changing, i.e. rapidly increasing or rapidly decreasing.
- A characteristic feature of transients is the distribution of signal energy across the spectrum. Typically, the energy of the audio signal during a transient event is distributed over the whole frequency range while, in non-transient signal portions, the energy is normally concentrated in the low frequency portion of the audio signal or in specific bands. This means that a non-transient signal portion, which is also called a stationary or tonal signal portion, has a spectrum which is non-flat.
- the energy of the signal is included in a comparatively small number of spectral lines/spectral bands, which are strongly raised over a noise floor of an audio signal.
- the energy of the audio signal will be distributed over many different frequency bands and, specifically, will be distributed into the high frequency portion, so that a spectrum for a transient portion of the audio signal will be comparatively flat and will, in any event, be flatter than a spectrum of a tonal portion of the audio signal.
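The flatness contrast described above suggests a simple indicator. The following sketch (an illustrative measure, not taken from the patent) computes the spectral flatness, i.e. the ratio of the geometric to the arithmetic mean of the power spectrum, which approaches 1 for a flat, transient-like spectrum and 0 for a tonal one:

```python
import numpy as np

def spectral_flatness(frame):
    """Geometric mean / arithmetic mean of the power spectrum (0..1)."""
    power = np.abs(np.fft.rfft(frame)) ** 2 + 1e-12   # floor avoids log(0)
    return np.exp(np.mean(np.log(power))) / np.mean(power)

rng = np.random.default_rng(1)
tonal = np.sin(2 * np.pi * 50 * np.arange(512) / 512)   # single spectral line
click = rng.standard_normal(512) * 0.01
click[256] = 1.0                                        # impulse-like transient

print(spectral_flatness(tonal) < 0.1)   # True: energy in few bins, low flatness
print(spectral_flatness(click) > 0.5)   # True: flat spectrum, high flatness
```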
- a transient event is a strong change in time, which means that the signal will include many higher harmonics when a Fourier decomposition is performed.
- An important feature of these many higher harmonics is that the phases of these higher harmonics are in a very specific mutual relationship so that a superposition of all these sine waves will result in a rapid change of signal energy. In other words, there exists a strong correlation across the spectrum.
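This phase relationship can be demonstrated directly (a small numerical illustration, not part of the disclosed apparatus): summing harmonics with aligned phases yields a sharp energy peak, while the same magnitudes with randomized phases yield a temporally dispersed signal, visible in the crest factor (peak-to-RMS ratio):

```python
import numpy as np

rng = np.random.default_rng(0)
n = np.arange(1024)
harmonics = np.arange(1, 65)                   # 64 harmonic components

# Phase-aligned harmonics: their superposition piles up into a sharp transient.
aligned = sum(np.cos(2 * np.pi * k * n / 1024) for k in harmonics)

# Same magnitudes, random phases: the "vertical coherence" is destroyed
# and the energy is smeared, i.e. temporally dispersed.
dispersed = sum(np.cos(2 * np.pi * k * n / 1024 + rng.uniform(0, 2 * np.pi))
                for k in harmonics)

def crest(x):
    return np.max(np.abs(x)) / np.sqrt(np.mean(x ** 2))

print(crest(aligned) > 2 * crest(dispersed))   # True: aligned phases peak sharply
```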
- this phase situation among all harmonics can also be termed "vertical coherence".
- This “vertical coherence” is related to a time/frequency spectrogram representation of the signal where a horizontal direction corresponds to the development of the signal over time and where the vertical dimension describes the interdependence over the frequency of the spectral components (transform frequency bins) in one short-time spectrum over frequency.
- When the vertical coherence of transients is destroyed by an audio signal processing method, the manipulated signal will be very similar to the original signal in stationary or non-transient portions, but the transient portions will have a reduced quality in the manipulated signal.
- the uncontrolled manipulation of the vertical coherence of a transient results in temporal dispersion of the same, since many harmonic components contribute to a transient event and changing the phases of all these components in an uncontrolled manner inevitably results in such artifacts.
- transient portions are extremely important for the dynamics of an audio signal, such as a music signal or a speech signal where sudden changes of energy in a specific time represent a great deal of the subjective user impression on the quality of the manipulated signal.
- transient events in an audio signal are typically quite remarkable “milestones” of an audio signal, which have an over-proportional influence on the subjective quality impression.
- Manipulated transients in which the vertical coherence has been destroyed by a signal processing operation or has been degraded with respect to the transient portion of the original signal will sound distorted, reverberant and unnatural to the listener.
- transient signal portions are “blurred” by dispersion, since the so-called vertical coherence of the signal is impaired.
- Methods using so-called overlap-add methods, like (P)SOLA may generate disturbing pre- and post-echoes of transient sound events.
- an apparatus for manipulating an audio signal having a transient event may have a signal processor for processing a transient reduced audio signal in which a first time portion having the transient event is removed, or for processing an audio signal having the transient event, to acquire a processed audio signal; and a signal inserter for inserting a second time portion into the processed audio signal at a signal location where the first portion was removed or where the transient event is located in the processed audio signal, wherein the second time portion has a transient event not influenced by the processing performed by the signal processor, so that a manipulated audio signal is acquired.
- an apparatus for generating a meta data signal for an audio signal having a transient event may have a transient detector for detecting a transient event in the audio signal; a meta data calculator for generating the meta data indicating a time position of the transient event in the audio signal or indicating a start-time instant before the transient event or a stop-time instant subsequent to the transient event or a duration of a time portion of the audio signal including the transient event; and a signal output interface for generating the meta data signal either having the meta data or having the audio signal and the meta data for transmission or storage.
- a method of manipulating an audio signal having a transient event may have the steps of processing a transient reduced audio signal in which a first time portion having the transient event is removed or for processing an audio signal having the transient event to acquire a processed audio signal; inserting a second time portion into the processed audio signal at a signal location, where the first portion was removed or where the transient event is located in the processed audio signal, wherein the second time portion has a transient event not influenced by the processing so that a manipulated audio signal is acquired.
- a method of generating a meta data signal for an audio signal having a transient event may have the steps of detecting a transient event in the audio signal; generating the meta data indicating a time position of the transient event in the audio signal or indicating a start-time instant before the transient event or a stop-time instant subsequent to the transient event or a duration of a time portion of the audio signal including the transient event; and generating the meta data signal either having the meta data or having the audio signal and the meta data for transmission or storage.
- a meta data signal for an audio signal having a transient event may have information indicating a time position of the transient event in the audio signal or indicating a start-time instant before the transient event or a stop-time instant subsequent to the transient event or a duration of a time portion of the audio signal including the transient event, and information on the position of the time portion in the audio signal.
- a computer program may have a program code for performing, when running on a computer, the method of manipulating an audio signal having a transient event, which may have the steps of processing a transient reduced audio signal in which a first time portion having the transient event is removed or of processing an audio signal having the transient event to acquire a processed audio signal, and inserting a second time portion into the processed audio signal at a signal location where the first portion was removed or where the transient event is located in the processed audio signal, wherein the second time portion has a transient event not influenced by the processing so that a manipulated audio signal is acquired; or the method of generating a meta data signal for an audio signal having a transient event, which may have the steps of detecting a transient event in the audio signal, generating the meta data indicating a time position of the transient event in the audio signal or indicating a start-time instant before the transient event or a stop-time instant subsequent to the transient event or a duration of a time portion of the audio signal including the transient event, and generating the meta data signal either having the meta data or having the audio signal and the meta data for transmission or storage.
- the present invention makes sure that transient portions are not processed at all in a detrimental way, i.e. are removed before processing and are reinserted after processing or the transient events are processed, but are removed from the processed signal and replaced by non-processed transient events.
- the transient portions inserted into the processed signal are copies of corresponding transient portions in the original audio signal so that the manipulated signal consists of a processed portion not including a transient and a non- or differently processed portion including the transient.
- the original transient can be subjected to decimation or any kind of weighting or parameterized processing.
- transient portions can be replaced by synthetically-created transient portions, which are synthesized in such a way that the synthesized transient portion is similar to the original transient portion with respect to some transient parameters such as the amount of energy change in a certain time or any other measure characterizing a transient event.
- the present application provides a novel method for a perceptually favorable treatment of transient sound events within the framework of such processing, which would otherwise generate a temporal "blurring" by dispersion of the signal.
- This method essentially comprises removing the transient sound events prior to the signal manipulation for the purpose of time stretching and, subsequently, adding the unprocessed transient signal portion to the modified (stretched) signal at an accurate position, taking the stretching into account.
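The bookkeeping of this removal and re-insertion can be sketched as follows. This is a deliberately crude illustration: the helper name and the sample-repetition "stretcher" are assumptions for the example, not the disclosed phase vocoder processing.

```python
import numpy as np

def reinsert_transient(signal, start, stop, stretch, stretch_fn):
    """Sketch (hypothetical helper): cut out the transient portion
    [start:stop), stretch the remainder, then splice the *unprocessed*
    transient back in at the correspondingly scaled signal location."""
    transient = signal[start:stop]                       # first time portion
    remainder = np.concatenate([signal[:start], signal[stop:]])
    stretched = stretch_fn(remainder)                    # e.g. a phase vocoder
    insert_at = int(start * stretch)                     # scaled location
    return np.concatenate([stretched[:insert_at],
                           transient,                    # untouched transient
                           stretched[insert_at:]])

# Crude stand-in for a real time stretcher: sample repetition by factor 2.
x = np.arange(10, dtype=float)
y = reinsert_transient(x, start=4, stop=6, stretch=2,
                       stretch_fn=lambda s: np.repeat(s, 2))
print(len(y))   # 18: 8 remaining samples * 2, plus 2 transient samples
```

In a real system the stretch function would be a vocoder and the splice points would be cross-faded, as described further below in the text.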
- FIG. 1 illustrates an embodiment of an inventive apparatus or method for manipulating an audio signal having a transient;
- FIG. 2 illustrates an implementation of the transient signal remover of FIG. 1;
- FIG. 3a illustrates an implementation of the signal processor of FIG. 1;
- FIG. 3b illustrates a further embodiment for implementing the signal processor of FIG. 1;
- FIG. 4 illustrates an implementation of the signal inserter of FIG. 1;
- FIG. 5a illustrates an overview of the implementation of a vocoder to be used in the signal processor of FIG. 1;
- FIG. 5b shows an implementation of parts (analysis) of the signal processor of FIG. 1;
- FIG. 5c illustrates other parts (stretching) of the signal processor of FIG. 1;
- FIG. 6 illustrates a transform implementation of a phase vocoder to be used in the signal processor of FIG. 1;
- FIG. 7a illustrates an encoder side of a bandwidth extension processing scheme;
- FIG. 7b illustrates a decoder side of a bandwidth extension scheme;
- FIG. 8a illustrates an energy representation of an audio input signal with a transient event;
- FIG. 8b illustrates the signal of FIG. 8a, but with a windowed transient;
- FIG. 8c illustrates a signal without the transient portion prior to being stretched;
- FIG. 8d illustrates the signal of FIG. 8c subsequent to being stretched;
- FIG. 8e illustrates the manipulated signal after the corresponding portion of the original signal has been inserted; and
- FIG. 9 illustrates an apparatus for generating side information for an audio signal.
- FIG. 1 illustrates an apparatus for manipulating an audio signal having a transient event.
- the apparatus comprises a transient signal remover 100 having an input 101 for an audio signal with a transient event.
- the output 102 of the transient signal remover is connected to a signal processor 110 .
- the signal processor output 111 is connected to a signal inserter 120 .
- the signal inserter output 121, on which a manipulated audio signal with an unprocessed "natural" or synthesized transient is available, may be connected to a further device such as a signal conditioner 130, which can perform any further processing of the manipulated signal, such as a down-sampling/decimation as may be needed for bandwidth extension purposes, as discussed in connection with FIGS. 7a and 7b.
- the signal conditioner 130 need not be used at all if the manipulated audio signal obtained at the output of the signal inserter 120 is used as it is, i.e., is stored for further processing, is transmitted to a receiver, or is transmitted to a digital/analog converter which, in the end, is connected to loudspeaker equipment to finally generate a sound signal representing the manipulated audio signal.
- the signal on line 121 can already be the high band signal.
- the signal processor has generated the high band signal from the input low band signal, and the lowband transient portion extracted from the audio signal 101 would have to be put into the frequency range of the high band, which is done by a signal processing not disturbing the vertical coherence, such as a decimation. This decimation would be performed before the signal inserter so that the decimated transient portion is inserted in the high band signal at the output of block 110 .
- the signal conditioner would perform any further processing of the high band signal such as envelope shaping, noise addition, inverse filtering or adding of harmonics etc. as done e.g. in MPEG 4 Spectral Band Replication.
- the signal inserter 120 receives side information from the remover 100 via line 123 in order to choose the right portion from the unprocessed signal to be inserted into the processed signal 111.
- a signal sequence as discussed in connection with FIGS. 8 a to 8 e may be obtained.
- the transient signal remover 100 is not needed and the signal inserter 120 determines a signal portion to be cut out from the processed signal at output 111 and replaces this cut-out signal by a portion of the original signal, as schematically illustrated by line 121, or by a synthesized signal, as illustrated by line 141, where this synthesized signal can be generated in a transient signal generator 140.
- the signal inserter 120 is configured to communicate transient description parameters to the transient signal generator. Therefore, the connection between blocks 140 and 120 as indicated by item 141 is illustrated as a two-way connection.
- the transient signal generator may be implemented to have transient samples, which can directly be used or to have pre-stored transient samples, which can be weighted using transient parameters in order to actually generate/synthesize a transient to be used by the signal inserter 120 .
- the transient signal remover 100 is configured for removing a first time portion from the audio signal to obtain a transient-reduced audio signal, wherein the first time portion comprises the transient event.
- the signal processor is configured for processing the transient-reduced audio signal in which a first time portion comprising the transient event is removed or for processing the audio signal including the transient event to obtain the processed audio signal on line 111 .
- the signal inserter 120 is configured for inserting a second time portion into the processed audio signal at a signal location where the first time portion has been removed or where the transient event is located in the audio signal, wherein the second time portion comprises a transient event not influenced by the processing performed by the signal processor 110 so that the manipulated audio signal at output 121 is obtained.
- FIG. 2 illustrates an embodiment of the transient signal remover 100 .
- the transient signal remover 100 comprises a transient detector 103 , a fade-out/fade-in calculator 104 and a first portion remover 105 .
- the transient signal remover 100 comprises a side information extractor 106 , which extracts the side information attached to the audio signal as indicated by line 107 .
- the information on the transient time may be provided to the fade-out/fade-in calculator 104 as illustrated by line 107 .
- if the audio signal includes, as meta information, not (only) the transient time, i.e. the accurate time at which the transient event occurs, but the start/stop time of the portion to be excluded from the audio signal, i.e. the start time and the stop time of the "first portion" of the audio signal, then the fade-out/fade-in calculator 104 is not needed either, and the start/stop time information can be directly forwarded to the first portion remover 105, as illustrated by line 108.
- Line 108 illustrates an option and all other lines, which are indicated by broken lines, are optional as well.
- the fade-in/fade-out calculator 104 outputs side information 109 .
- This side information 109 is different from the start/stop times of the first portion, since the nature of the processing in the processor 110 of FIG. 1 is taken into account.
- the input audio signal is fed into the remover 105 .
- the fade-out/fade-in calculator 104 provides the start/stop times of the first portion. These times are calculated based on the transient time so that not only the transient event, but also some samples surrounding the transient event are removed by the first portion remover 105. Furthermore, it is advantageous not to simply cut out the transient portion with a rectangular time domain window, but to perform the extraction with a fade-out portion and a fade-in portion. For performing a fade-out or a fade-in, any window having a smoother transition than a rectangular window, such as a raised cosine window, can be applied, so that the frequency response of this extraction is less problematic than it would be if a rectangular window were applied, although this is also an option. This time domain windowing operation outputs the remainder of the windowing operation, i.e., the audio signal without the windowed portion.
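The windowed extraction may look as follows (a sketch with illustrative lengths; the Hann-shaped raised cosine flank is one possible window choice, not mandated by the text):

```python
import numpy as np

def remove_portion(signal, start, stop, fade):
    """Suppress the first time portion with smooth raised-cosine
    fade-out/fade-in flanks instead of a rectangular cut (sketch;
    lengths and window choice are illustrative assumptions)."""
    window = np.ones(len(signal))
    ramp = 0.5 * (1 + np.cos(np.pi * np.arange(fade) / fade))  # 1 -> near 0
    window[start:start + fade] = ramp            # fade-out into the cut
    window[stop - fade:stop] = ramp[::-1]        # fade-in out of the cut
    window[start + fade:stop - fade] = 0.0       # fully suppressed middle
    return signal * window

x = np.ones(100)                                 # stand-in audio signal
y = remove_portion(x, start=40, stop=60, fade=5)
print(y[50], y[30])   # 0.0 1.0: middle suppressed, surroundings untouched
```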
- any transient suppression method can be applied in this context including such transient suppression methods leaving a transient-reduced or fully non-transient residual signal after the transient removal.
- the transient suppression is advantageous in situations, in which a further processing of the audio signal would suffer from portions set to zero, since such portions set to zero are very unnatural for an audio signal.
- the transient detector 103 and the fade-out/fade-in calculator 104 can also be applied on the encoding side, as discussed in connection with FIG. 9, as long as the results of these calculations, such as the transient time and/or the start/stop times of the first portion, are transmitted to a signal manipulator either as side information or meta information together with the audio signal, or separately from the audio signal, such as within a separate audio meta data signal to be transmitted via a separate transmission channel.
- FIG. 3 a illustrates an implementation of the signal processor 110 of FIG. 1 .
- This implementation comprises a frequency selective analyzer 112 and a subsequently-connected frequency-selective processing device 113 .
- the frequency-selective processing device 113 is implemented such that it applies a negative influence on the vertical coherence of the original audio signal. Examples of this processing are the stretching of a signal in time or the shortening of a signal in time, where this stretching or shortening is applied in a frequency-selective manner so that, for example, the processing introduces phase shifts into the processed audio signal which are different for different frequency bands.
- a phase vocoder comprises a sub-band/transform analyzer 114, a subsequently-connected processor 115 for performing a frequency-selective processing of the plurality of output signals provided by item 114 and, subsequently, a sub-band/transform combiner 116, which combines the signals processed by item 115 in order to finally obtain a processed time domain signal at output 117. This processed signal is again a full bandwidth signal, or a lowpass filtered signal, as long as its bandwidth is larger than the bandwidth represented by a single branch between items 115 and 116, since the sub-band/transform combiner 116 performs a combination of frequency-selective signals.
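The analyzer/processor/combiner structure can be illustrated with a short overlap-add round trip. This is a generic STFT sketch, not the patented processor; the frequency-selective stage (item 115) is left as a no-op, so the combiner reconstructs the input where the analysis windows overlap:

```python
import numpy as np

def stft_roundtrip(x, win_len=256, hop=64):
    """Analysis (item 114) -> per-band processing (item 115, here a no-op)
    -> overlap-add synthesis (item 116). With a Hann window and 75% overlap
    the normalized window sum lets the input be reconstructed (sketch)."""
    win = np.hanning(win_len)
    frames = [np.fft.rfft(win * x[i:i + win_len])
              for i in range(0, len(x) - win_len + 1, hop)]
    # ... a frequency-selective manipulation of each frame would go here ...
    y = np.zeros(len(x))
    norm = np.zeros(len(x))
    for k, frame in enumerate(frames):
        i = k * hop
        y[i:i + win_len] += win * np.fft.irfft(frame, win_len)
        norm[i:i + win_len] += win ** 2
    return y / np.maximum(norm, 1e-12)   # compensate the window overlap

x = np.sin(2 * np.pi * 5 * np.arange(2048) / 2048)
y = stft_roundtrip(x)
print(np.allclose(x[256:-256], y[256:-256], atol=1e-6))   # True (interior)
```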
- Further details on the phase vocoder are subsequently discussed in connection with FIGS. 5a, 5b, 5c and 6.
- the signal inserter 120 of FIG. 1 comprises a calculator 122 for calculating the length of the second time portion.
- the length of the removed first portion and the time stretching factor (or the time shortening factor) are needed so that the length of the second time portion can be calculated in item 122.
- the length of the second time portion is calculated by multiplying the length of the first portion by the stretching factor.
- the length of the second time portion is forwarded to a calculator 123 for calculating the first border and the second border of the second time portion in the audio signal.
- the calculator 123 may be implemented to perform a cross-correlation processing between the processed audio signal without the transient event, supplied at input 124, and the audio signal with the transient event, which provides the second portion, supplied at input 125.
- the calculator 123 is controlled by a further control input 126 so that a positive shift of the transient event within the second time portion is advantageous versus a negative shift of the transient event as discussed later.
- the first border and the second border of the second time portion are provided to an extractor 127 .
- the extractor 127 cuts the second time portion out of the original audio signal provided at input 125. Since a subsequent cross-fader 128 is used, the cut-out can take place using a rectangular window.
- the start portion of the second time portion is weighted by an increasing weight from 0 to 1 and/or the end portion by a decreasing weight from 1 to 0, so that, in this cross-fade region, the end portion of the processed signal and the start portion of the extracted signal, when added together, result in a useful signal.
- a similar processing is performed in the cross-fader 128 for the end of the second time portion and the beginning of the processed audio signal after the extraction.
- the cross-fading makes sure that no time domain artifacts occur which would otherwise be perceivable as clicking artifacts when the borders of the processed audio signal without the transient portion and the second time portion borders do not perfectly match together.
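A minimal cross-fade over the splice borders might look like this (the linear fade and overlap length are assumed example choices; constant-power fades are a common alternative):

```python
import numpy as np

def crossfade(tail, head, overlap):
    """Splice two segments with a linear cross-fade over `overlap` samples
    so that mismatched borders do not produce audible clicks (sketch)."""
    fade = np.linspace(0.0, 1.0, overlap)
    mixed = tail[-overlap:] * (1 - fade) + head[:overlap] * fade
    return np.concatenate([tail[:-overlap], mixed, head[overlap:]])

a = np.ones(20)          # end of the processed signal
b = np.full(20, 3.0)     # start of the extracted second time portion
out = crossfade(a, b, overlap=4)
print(len(out), out[0], out[-1])   # 36 1.0 3.0: smooth transition between
```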
- Reference is now made to FIGS. 5a, 5b, 5c and 6 in order to illustrate an implementation of the signal processor 110 in the context of a phase vocoder.
- FIG. 5a shows a filterbank implementation of a phase vocoder, wherein an audio signal is fed in at an input 500 and obtained at an output 510.
- each channel of the schematic filterbank illustrated in FIG. 5a includes a bandpass filter 501 and a downstream oscillator 502.
- Output signals of all oscillators from every channel are combined by a combiner, which is for example implemented as an adder and indicated at 503 , in order to obtain the output signal.
- Each filter 501 is implemented such that it provides an amplitude signal on the one hand and a frequency signal on the other hand.
- the amplitude signal and the frequency signal are time signals, wherein the amplitude signal illustrates a development of the amplitude in a filter 501 over time, while the frequency signal represents a development of the frequency of the signal filtered by a filter 501 over time.
- A schematic setup of filter 501 is illustrated in FIG. 5 b .
- Each filter 501 of FIG. 5 a may be set up as in FIG. 5 b , wherein, however, only the frequencies f i supplied to the two input mixers 551 and the adder 552 are different from channel to channel.
- the mixer output signals are both lowpass filtered by lowpasses 553 , wherein the lowpass signals are different insofar as they were generated by local oscillator frequencies (LO frequencies), which are out of phase by 90°.
- the upper lowpass filter 553 provides a quadrature signal 554 , while the lower filter 553 provides an in-phase signal 555 .
- a phase unwrapper 558 follows. At the output of the element 558 , there is no longer a phase value which is between 0 and 360°, but a phase value which increases linearly.
- This “unwrapped” phase value is supplied to a phase/frequency converter 559 which may for example be implemented as a simple phase difference former which subtracts a phase of a previous point in time from a phase at a current point in time to obtain a frequency value for the current point in time.
- This frequency value is added to the constant frequency value f i of the filter channel i to obtain a temporarily varying frequency value at the output 560 .
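The chain of phase unwrapper 558 and phase/frequency converter 559 can be sketched for one channel as follows. This Python fragment is illustrative rather than part of the patent; the function name and the hop-time parameter are assumptions:

```python
import numpy as np

def channel_frequency(phase_wrapped, f_i, hop_time):
    """Estimate the time-varying frequency of one filter channel: unwrap the
    wrapped phase (element 558), form the phase difference between successive
    points in time (element 559), and add the channel's constant frequency f_i."""
    # Phase unwrapping: remove the 2*pi jumps so the phase increases linearly
    # instead of wrapping between 0 and 360 degrees.
    phase = np.unwrap(phase_wrapped)
    # Phase/frequency conversion: phase difference per hop, expressed in Hz.
    freq_deviation = np.diff(phase) / (2 * np.pi * hop_time)
    # Add the constant channel frequency to obtain the varying frequency.
    return f_i + freq_deviation

# A tone 2 Hz above the channel centre should yield f_i + 2 Hz everywhere.
t = np.arange(200) * 0.01                       # hop_time = 10 ms
wrapped = np.angle(np.exp(1j * 2 * np.pi * 2.0 * t))
f = channel_frequency(wrapped, f_i=440.0, hop_time=0.01)
```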
- the phase vocoder achieves a separation of the spectral information and time information.
- the spectral information is in the specific channel or in the frequency f i , which provides the direct portion of the frequency for each channel, while the time information is contained in the frequency deviation or the magnitude over time, respectively.
- FIG. 5 c shows a manipulation as it is executed for the bandwidth increase according to the invention, in particular in the vocoder and, in particular, at the location of the circuit plotted in dashed lines in FIG. 5 a.
- the amplitude signals A(t) in each channel or the frequencies of the signals f(t) in each channel may be decimated or interpolated, respectively.
- an interpolation, i.e. a temporal extension or spreading of the signals A(t) and f(t), is performed to obtain spread signals A′(t) and f′(t), wherein the interpolation is controlled by a spread factor in a bandwidth extension scenario.
- the phase variation, i.e. the value before the addition of the constant frequency by the adder 552 , and thus the frequency of each individual oscillator 502 in FIG. 5 a , is not changed.
- the temporal change of the overall audio signal is slowed down, however, i.e. by the factor 2.
- the result is a temporally spread tone having the original pitch, i.e. the original fundamental wave with its harmonics.
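The interpolation of A(t) and f(t) and the subsequent oscillator resynthesis can be sketched for one channel as follows. This Python fragment is for illustration only; linear interpolation, the spread factor of 2 and the function name `spread_channel` are assumptions:

```python
import numpy as np

def spread_channel(A, f, factor, sample_rate):
    """Sketch of the FIG. 5c manipulation for one channel: interpolate the
    amplitude signal A(t) and frequency signal f(t) by 'factor' (temporal
    spreading), then drive an oscillator (502) with the spread signals."""
    n = len(A)
    t_old = np.arange(n)
    t_new = np.linspace(0, n - 1, int(n * factor))
    A_spread = np.interp(t_new, t_old, A)        # A'(t)
    f_spread = np.interp(t_new, t_old, f)        # f'(t)
    # Oscillator: integrate the instantaneous frequency into a phase.
    phase = 2 * np.pi * np.cumsum(f_spread) / sample_rate
    return A_spread * np.cos(phase)

# A constant 440 Hz channel spread by 2 doubles in length but keeps its pitch.
fs = 8000
out = spread_channel(np.ones(1000), np.full(1000, 440.0), factor=2.0, sample_rate=fs)
```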
- a transform implementation of a phase vocoder may also be used as depicted in FIG. 6 .
- the audio signal 100 is fed into an FFT processor, or more generally, into a Short-Time-Fourier-Transform-Processor 600 as a sequence of time samples.
- the FFT processor 600 , illustrated schematically in FIG. 6 , is implemented to perform a time windowing of the audio signal in order to then, by means of an FFT, calculate magnitude and phase of the spectrum, wherein this calculation is performed for successive spectra which are related to blocks of the audio signal that are strongly overlapping.
- a new spectrum may be calculated for each new sample of the audio signal, or a new spectrum may be calculated only for, e.g., each twentieth new sample.
- This distance a in samples between two spectra is given by a controller 602 .
- the controller 602 is further implemented to feed an IFFT processor 604 which is implemented to operate in an overlapping operation.
- the IFFT processor 604 is implemented such that it performs an inverse short-time Fourier Transformation by performing one IFFT per spectrum based on magnitude and phase of a modified spectrum, in order to then perform an overlap add operation, from which the resulting time signal is obtained.
- the overlap add operation eliminates the effects of the analysis window.
- a spreading of the time signal is achieved by the distance b between two spectra, as they are processed by the IFFT processor 604 , being greater than the distance a between the spectra in the generation of the FFT spectra.
- the basic idea is to spread the audio signal by the inverse FFTs simply being spaced apart further than the analysis FFTs. As a result, temporal changes in the synthesized audio signal occur more slowly than in the original audio signal.
- Without a phase rescaling in block 606 , this would, however, lead to artifacts.
- consider, for illustration, an FFT spectral value whose phase increases by 45° per time interval, the time interval here being the time interval between successive FFTs. Since the inverse FFTs are spaced farther apart from each other, the 45° phase increase occurs across a longer time interval.
- the phase is rescaled by exactly the same factor by which the audio signal was spread in time.
- the phase of each FFT spectral value is thus multiplied by the factor b/a, so that this mismatch is eliminated.
- the spreading in FIG. 6 is achieved by the distance between two IFFT spectra being greater than the distance between two FFT spectra, i.e. b being greater than a, wherein, however, for artifact prevention a phase rescaling according to b/a is executed.
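The complete FIG. 6 scheme — analysis FFTs every a samples, inverse FFTs every b samples, phases rescaled by b/a — can be sketched as follows. This is an illustrative Python implementation of a generic phase-vocoder time stretch, not the patented method itself; the window choice, hop sizes and phase-propagation details are assumptions:

```python
import numpy as np

def stretch(x, a=256, b=512, nfft=1024):
    """Analysis hop 'a', synthesis hop 'b'; per-frame phase advances are
    rescaled by b/a (block 606) so the phases match the wider IFFT spacing."""
    win = np.hanning(nfft)
    frames = [np.fft.rfft(win * x[i:i + nfft])
              for i in range(0, len(x) - nfft, a)]
    out = np.zeros(b * len(frames) + nfft)
    omega = 2 * np.pi * np.arange(nfft // 2 + 1) * a / nfft  # expected advance
    phase = np.angle(frames[0])
    prev = frames[0]
    for k, F in enumerate(frames):
        if k:
            # Deviation of the measured phase advance from the expected one,
            # wrapped to (-pi, pi], then rescaled by b/a.
            d = np.angle(F) - np.angle(prev) - omega
            d -= 2 * np.pi * np.round(d / (2 * np.pi))
            phase += (omega + d) * b / a
            prev = F
        y = np.fft.irfft(np.abs(F) * np.exp(1j * phase))
        out[k * b:k * b + nfft] += win * y      # overlap add (processor 604)
    return out

fs = 8000
x = np.sin(2 * np.pi * 440 * np.arange(2 * fs) / fs)
y = stretch(x)        # roughly b/a = 2x longer, pitch preserved
```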
- Details of such phase vocoders are described in “The Phase Vocoder: A Tutorial”, Mark Dolson, Computer Music Journal, vol. 10, no. 4, pp. 14-27, 1986; “New Phase Vocoder Techniques for Pitch-Shifting, Harmonizing and Other Exotic Effects”, J. Laroche and M. Dolson, Proceedings 1999 IEEE Workshop on Applications of Signal Processing to Audio and Acoustics, New Paltz, N.Y., Oct. 17-20, 1999, pages 91 to 94; and “A New Approach to Transient Processing in the Phase Vocoder”, A. Röbel, Proceedings of the 6th International Conference on Digital Audio Effects (DAFx-03), London, UK, Sep. 8-11, 2003.
- Pitch Synchronous Overlap Add, in short PSOLA, is a synthesis method in which recordings of speech signals are located in a database. As far as these are periodic signals, they are provided with information on the fundamental frequency (pitch), and the beginning of each period is marked. In the synthesis, these periods are cut out with a certain environment by means of a window function and added to the signal to be synthesized at a suitable location. Depending on whether the desired fundamental frequency is higher or lower than that of the database entry, they are combined more densely or less densely than in the original. For adjusting the duration of the audio signal, periods may be omitted or output twice.
- This method is also called TD-PSOLA, wherein TD stands for time domain and emphasizes that the method operates in the time domain.
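The TD-PSOLA principle — cutting two-period windowed grains at pitch marks and overlap-adding them at a modified spacing — can be sketched as follows. This Python fragment is a minimal illustration, not a production synthesizer; in particular, real systems use detected pitch marks, whereas here the marks are simply placed every `period` samples (an assumption):

```python
import numpy as np

def td_psola(x, period, ratio):
    """Cut two-period Hann-windowed grains at each pitch mark and overlap-add
    them at a spacing of period/ratio: ratio > 1 packs the periods more
    densely (higher pitch), ratio < 1 less densely (lower pitch)."""
    grain = 2 * period
    win = np.hanning(grain)
    marks = np.arange(period, len(x) - period, period)   # assumed pitch marks
    new_period = int(round(period / ratio))              # synthesis spacing
    out = np.zeros(new_period * len(marks) + grain)
    for k, m in enumerate(marks):
        g = win * x[m - period:m + period]               # one windowed grain
        pos = k * new_period
        out[pos:pos + grain] += g                        # overlap add
    return out

# A pulse train with 80-sample period, resynthesized at half the spacing:
x = np.zeros(8000)
x[::80] = 1.0
y = td_psola(x, period=80, ratio=2.0)
```

The output pulses now occur every 40 samples, i.e. the fundamental frequency is doubled while the shape of each pulse (the spectral envelope) is kept.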
- A further development of this method is the MultiBand Resynthesis OverLap Add method, in short MBROLA.
- there, the segments in the database are brought to a uniform fundamental frequency by pre-processing, and the phase position of the harmonics is normalized. By this, fewer perceptible interferences result in the synthesis of a transition from one segment to the next, and the achieved speech quality is higher.
- the audio signal is already bandpass filtered before spreading, so that the signal after spreading and decimation already contains the desired portions and the subsequent bandpass filtering may be omitted.
- the bandpass filter is set so that the portion of the audio signal which would have been filtered out after bandwidth extension is still contained in the output signal of the bandpass filter.
- the bandpass filter thus contains a frequency range which is not contained in the audio signal after spreading and decimation.
- the signal with this frequency range is the desired signal forming the synthesized high-frequency signal.
- the signal manipulator as illustrated in FIG. 1 may, additionally, comprise the signal conditioner 130 for further processing the audio signal with the unprocessed “natural” or synthesized transient on line 121 .
- This signal conditioner can be a signal decimator within a bandwidth extension application, which, at its output, generates a high-band signal, which can then be further adapted to closely resemble the characteristics of the original highband signal by using high frequency (HF) parameters to be transmitted together with an HFR (high frequency reconstruction) datastream.
- FIGS. 7 a and 7 b illustrate a bandwidth extension scenario, which can advantageously use the output signal of the signal conditioner within the bandwidth extension coder 720 of FIG. 7 b .
- An audio signal is fed into a lowpass/highpass combination at an input 700 .
- the lowpass/highpass combination on the one hand includes a lowpass (LP), to generate a lowpass filtered version of the audio signal 700 , illustrated at 703 in FIG. 7 a .
- This lowpass filtered audio signal is encoded with an audio encoder 704 .
- the audio encoder is, for example, an MP3 encoder (MPEG1 Layer 3) or an AAC encoder, also known as an MP4 encoder and described in the MPEG4 Standard.
- Alternative audio encoders providing a transparent or, advantageously, a perceptually transparent representation of the band-limited audio signal 703 may be used in the encoder 704 to generate a completely encoded or a perceptually transparently encoded audio signal 705 , respectively.
- the upper band of the audio signal is output at an output 706 by the highpass portion of the filter 702 , designated by “HP”.
- the highpass portion of the audio signal i.e. the upper band or HF band, also designated as the HF portion, is supplied to a parameter calculator 707 which is implemented to calculate the different parameters.
- These parameters are, for example, the spectral envelope of the upper band 706 in a relatively coarse resolution, for example, by representation of a scale factor for each psychoacoustic frequency group or for each Bark band on the Bark scale, respectively.
- a further parameter which may be calculated by the parameter calculator 707 is the noise floor in the upper band, whose energy per band may be related to the energy of the envelope in this band.
- Further parameters which may be calculated by the parameter calculator 707 include a tonality measure for each partial band of the upper band, which indicates how the spectral energy is distributed in a band, i.e. whether the spectral energy in the band is distributed relatively uniformly, in which case a non-tonal signal exists in this band, or whether the energy in this band is relatively strongly concentrated at a certain location in the band, in which case a rather tonal signal exists for this band.
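The envelope and tonality parameters described above can be sketched as follows. This Python fragment is illustrative only: equal-width bands stand in for the Bark-scale frequency groups, and spectral flatness (geometric over arithmetic mean of the power spectrum, near 1 for noise-like and near 0 for tonal bands) is used as one possible tonality measure — both are assumptions, not the patented parameterization:

```python
import numpy as np

def hf_parameters(highband, n_bands=4):
    """Coarse spectral envelope (one energy per band) and a per-band tonality
    measure for a highband signal, in the spirit of parameter calculator 707."""
    power = np.abs(np.fft.rfft(highband)) ** 2 + 1e-12   # regularized power spectrum
    bands = np.array_split(power, n_bands)               # equal-width stand-in bands
    envelope = np.array([b.mean() for b in bands])       # energy per band
    # Spectral flatness: geometric mean / arithmetic mean per band.
    flatness = np.array([np.exp(np.mean(np.log(b))) / b.mean() for b in bands])
    return envelope, flatness

rng = np.random.default_rng(0)
tone = np.sin(2 * np.pi * 0.25 * np.arange(1024))   # tonal "highband"
noise = rng.standard_normal(1024)                   # noise-like "highband"
env_t, flat_t = hf_parameters(tone)
env_n, flat_n = hf_parameters(noise)
```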
- the parameter calculator 707 is implemented to generate only parameters 708 for the upper band which may be subjected to similar entropy reduction steps as they may also be performed in the audio encoder 704 for quantized spectral values, such as for example differential encoding, prediction or Huffman encoding, etc.
- the parameter representation 708 and the audio signal 705 are then supplied to a datastream formatter 709 which is implemented to provide an output side datastream 710 which will typically be a bitstream according to a certain format as it is for example standardized in the MPEG4 standard.
- the decoder side is illustrated in the following with regard to FIG. 7 b .
- the datastream 710 enters a datastream interpreter 711 which is implemented to separate the bandwidth extension related parameter portion 708 from the audio signal portion 705 .
- the parameter portion 708 is decoded by a parameter decoder 712 to obtain decoded parameters 713 .
- the audio signal portion 705 is decoded by an audio decoder 714 to obtain an audio signal.
- the audio signal 100 may be output via a first output 715 .
- an audio signal with a small bandwidth and thus also a low quality may then be obtained.
- the inventive bandwidth extension 720 is performed to obtain the audio signal 712 on the output side with an extended or high bandwidth, respectively, and thus a high quality.
- the synthesis filterbank belonging to a special analysis filterbank receives bandpass signals of the audio signal in the lower band and envelope-adjusted bandpass signals of the lower band which were harmonically patched in the upper band.
- the output signal of the synthesis filterbank is an audio signal extended with regard to its bandwidth, which was transmitted from the encoder side to the decoder side with a very low data rate.
- filterbank calculations and patching in the filterbank domain may involve a high computational effort.
- the method presented here solves the problems mentioned.
- the inventive novelty of the method consists in that in contrast to existing methods, a windowed portion, which contains the transient, is removed from the signal to be manipulated, and in that from the original signal, a second windowed portion (generally different from the first portion) is additionally selected which may be reinserted into the manipulated signal such that the temporal envelope is preserved as much as possible in the environment of the transient.
- This second portion is selected such that it will accurately fit into the recess changed by the time-stretching operation.
- the accurate fitting-in is performed by calculating the maximum of the cross-correlation of the edges of the resulting recess with the edges of the original transient portion.
- Precise determination of the position of the transient for the purpose of selecting a suitable portion may be performed, e.g., using a moving centroid calculation of the energy over a suitable period of time.
- the size of the first portion determines the needed size of the second portion. This size is to be selected such that more than one transient is accommodated by the second portion used for reinsertion only if the time interval between the closely adjacent transients is below the threshold for human perceptibility of individual temporal events.
- Optimum fitting-in of the transient in accordance with the maximum cross-correlation may need a slight offset in time relative to the original position of same.
- the position of the reinserted transient need not precisely match the original position. Due to the extended period of action of the post-masking, a shift of the transient in the positive time direction is advantageous.
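The fitting-in by maximum cross-correlation, with the stated preference for a positive shift, can be sketched as follows. This Python fragment is illustrative; the function name, the normalization and the tie-breaking rule are assumptions:

```python
import numpy as np

def best_offset(edge, candidate, max_shift):
    """Slide the candidate portion by up to +/- max_shift samples against the
    edge region of the recess and return the shift with maximal normalized
    cross-correlation. Ties are broken towards positive shifts, since a shift
    in the positive time direction is hidden by post-masking."""
    n = len(edge)
    best, best_c = 0, -np.inf
    for s in range(-max_shift, max_shift + 1):
        seg = candidate[max_shift + s:max_shift + s + n]
        c = np.dot(edge, seg) / (np.linalg.norm(edge) * np.linalg.norm(seg) + 1e-12)
        if c >= best_c:          # ">=" prefers the later (positive) shift
            best, best_c = s, c
    return best

rng = np.random.default_rng(1)
x = rng.standard_normal(200)
edge = x[67:97]                          # edge region of the recess
shift = best_offset(edge, x[60:100], max_shift=5)
```

For this construction the candidate window centred at sample 65 matches the edge exactly at a shift of +2, which the search recovers.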
- the timbre or pitch of the reinserted transient will be changed when the sampling rate is changed by a subsequent decimation step.
- this is masked by the transient itself by means of psychoacoustic temporal masking mechanisms.
- the method is suitable for any audio applications wherein the reproduction speeds of audio signals or their pitches are to be changed.
- FIG. 8 a illustrates a representation of the audio signal; in contrast to a straightforward time domain audio sample sequence, FIG. 8 a illustrates an energy envelope representation, which can, for example, be obtained when each audio sample in a time domain sample illustration is squared.
- FIG. 8 a illustrates an audio signal 800 having a transient event 801 where the transient event is characterized by a sharp increase and decrease of energy over time.
- a transient would also be a sharp increase of energy when this energy remains on a certain high level or a sharp decrease of energy when the energy has been on a high level for a certain time before the decrease.
- a specific pattern for a transient is, for example, a clapping of hands or any other tone generated by a percussion instrument.
- transients are rapid attacks of an instrument which starts playing a tone loudly, i.e. which provides sound energy into a certain band or a plurality of bands above a certain threshold level within a certain threshold time.
- other energy fluctuations such as the energy fluctuation 802 of the audio signal 800 in FIG. 8 a are not detected as transients.
- Transient detectors are known in the art and are extensively described in the literature and rely on many different algorithms, which may comprise frequency-selective processing and a comparison of a result of a frequency-selective processing to a threshold and a subsequent decision whether there was a transient or not.
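As noted, many detector algorithms are possible; a minimal energy-ratio detector can be sketched as follows. This Python fragment is illustrative only — the patent merely requires some transient detector, and the frame size, threshold and running-average smoothing used here are assumptions:

```python
import numpy as np

def detect_transients(x, frame=256, threshold=4.0):
    """Flag a frame as a transient when its energy exceeds 'threshold' times
    the running average of the preceding frames, i.e. on a rapid increase of
    energy over time. Returns transient times in samples."""
    n = len(x) // frame
    energy = np.array([np.sum(x[i * frame:(i + 1) * frame] ** 2) for i in range(n)])
    hits = []
    avg = energy[0] + 1e-12
    for i in range(1, n):
        if energy[i] > threshold * avg:
            hits.append(i * frame)            # transient time, cf. time 803
        avg = 0.9 * avg + 0.1 * energy[i]     # slow running average
    return hits

rng = np.random.default_rng(2)
x = 0.01 * rng.standard_normal(8192)
x[4096:4160] += 1.0                           # a clap-like burst
hits = detect_transients(x)
```

Slow fluctuations like 802 raise the running average along with the frame energies and are therefore not flagged; only the sharp burst is.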
- FIG. 8 b illustrates a windowed transient.
- the area delimited by the solid line, weighted by the depicted window shape, is subtracted from the signal.
- the area marked by the dashed line is added again after processing.
- the transient occurring at a certain transient time 803 has to be cut out from the audio signal 800 .
- the first time portion 804 is determined, where the first time portion extends from a starting time instant 805 to a stop time instant 806 .
- the first time portion 804 is selected so that the transient time 803 is included within the first time portion 804 .
- FIG. 8 c illustrates a signal without a transient prior to being stretched.
- the first time portion is not just cut out by a rectangular filter/windower, but a windowing is performed to have slowly-decaying edges or flanks of the audio signal.
- FIG. 8 c now illustrates the audio signal on line 102 of FIG. 1 , i.e. subsequent to the transient signal removal.
- the slowly-decaying/increasing flanks 807 , 808 provide the fade-in or fade-out region to be used by the cross fader 128 of FIG. 4 .
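The windowed removal of the first time portion, leaving the flanks 807 , 808 for the later cross-fade, can be sketched as follows. This Python fragment is illustrative; raised-cosine flanks and the function name are assumptions:

```python
import numpy as np

def remove_transient(x, start, stop, flank):
    """Sketch of FIGS. 8b/8c: instead of a rectangular cut, the first time
    portion [start, stop) is removed with slowly decaying/increasing flanks of
    'flank' samples, which later serve as fade-out/fade-in regions."""
    w = np.ones(len(x))
    down = 0.5 + 0.5 * np.cos(np.pi * np.arange(flank) / flank)   # 1 -> 0
    w[start:start + flank] = down             # slowly decaying flank (807)
    w[start + flank:stop - flank] = 0.0       # transient fully removed
    w[stop - flank:stop] = down[::-1]         # slowly increasing flank (808)
    return w * x

# Remove samples 30..70 of a constant signal, with 10-sample flanks:
y = remove_transient(np.ones(100), start=30, stop=70, flank=10)
```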
- FIG. 8 d illustrates the signal of FIG. 8 c , but in a stretched state, i.e. subsequent to the processing applied by the signal processor 110 .
- the signal in FIG. 8 d is the signal on line 111 of FIG. 1 . Due to the stretching operation, the first portion 804 has become much longer.
- the second time portion 809 has been entered into FIG. 8 e .
- the start time instant 812 , i.e. the first border of the second time portion 809 in the original audio signal, and the stop time instant 813 of the second time portion, i.e. the second border of the second time portion in the original audio signal, do not necessarily have to be symmetrical with respect to the transient event time 803 , 803 ′ so that the transient 801 is located at exactly the same time instant as it was in the original signal.
- the time instants 812 , 813 of FIG. 8 b can be slightly varied so that the signal shape at these borders in the original signal is, as much as possible, similar to the corresponding portions in the stretched signal, as determined via the cross-correlation results.
- the actual position of the transient 803 can be moved out of the center of the second time portion to a certain degree, which is indicated in FIG. 8 e by reference number 803 ′ indicating a certain time with respect to the second time portion which deviates from the corresponding time 803 with respect to the second time portion in FIG. 8 b.
- FIG. 8 e additionally illustrates the crossover/transition regions 813 a , 813 b in which the cross-fader 128 performs a cross-fade between the stretched signal without the transient and the copy of the original signal including the transient.
- the calculator for calculating the length of the second time portion 122 is configured for receiving the length of the first time portion and the stretching factor.
- the calculator 122 can also receive information on the allowability of neighboring transients to be included within one and the same first time portion. Based on this allowability, the calculator may determine the length of the first time portion 804 by itself and, depending on the stretching/shortening factor, then calculate the length of the second time portion 809 .
- the functionality of the signal inserter is that the signal inserter removes, from the original signal, a suitable area for the gap in FIG. 8 e , which is enlarged within the stretched signal, and fits this suitable area, i.e. the second time portion, into the processed signal using a cross-correlation calculation for determining the time instants 812 and 813 and performing a cross-fading operation in the cross-fade regions 813 a and 813 b as well.
- FIG. 9 illustrates an apparatus for generating side information for an audio signal, which can be used in the context of the present invention when the transient detection is performed on the encoder side and side information regarding this transient detection is calculated and transmitted to a signal manipulator, which then would represent the decoder side.
- a transient detector similar to the transient detector 103 in FIG. 2 is applied for analyzing the audio signal including a transient event.
- the transient detector calculates a transient time, i.e. time 803 in FIG. 8 a , and forwards this transient time to a meta data calculator 104 ′, which can be structured similarly to the fade-out/fade-in calculator 104 in FIG. 2 .
- the meta data calculator 104 ′ can calculate meta data to be forwarded to a signal output interface 900 where this meta data may comprise borders for the transient removal, i.e. borders for the first time portion, i.e. borders 805 and 806 of FIG. 8 b or borders for the transient insertion (second time portion) as illustrated at 812 , 813 in FIG. 8 b or the transient event time instant 803 or even 803 ′. Even in the latter case, the signal manipulator would be in the position to determine all needed data, i.e. the first time portion data, the second time portion data, etc. based on a transient event time instant 803 .
- the meta data as generated by item 104 ′ are forwarded to the signal output interface so that the signal output interface generates a signal, i.e. an output signal for transmission or storage.
- the output signal may include only the meta data or may include the meta data and the audio signal where, in the latter case, the meta data would represent side information for the audio signal.
- the audio signal can be forwarded to the signal output interface 900 via line 901 .
- the output signal generated by the signal output interface 900 can be stored on any kind of storage medium or can be transmitted via any kind of transmission channel to a signal manipulator or any other device requiring transient information.
- the inventive methods can be implemented in hardware or in software.
- the implementation can be performed using a digital storage medium, in particular, a disc, a DVD or a CD having electronically-readable control signals stored thereon, which co-operate with programmable computer systems such that the inventive methods are performed.
- the present invention can therefore be implemented as a computer program product with a program code stored on a machine-readable carrier, the program code being operative for performing the inventive methods when the computer program product runs on a computer.
- in other words, the invention is, therefore, a computer program having a program code for performing at least one of the inventive methods when the computer program runs on a computer.
- the inventive meta data signal can be stored on any machine readable storage medium such as a digital storage medium.
Abstract
A signal manipulator for manipulating an audio signal having a transient event may have a transient remover, a signal processor and a signal inserter for inserting a time portion into a processed audio signal at a signal location where the transient event was removed before processing by the transient remover, so that a manipulated audio signal has a transient event not influenced by the processing, whereby the vertical coherence of the transient event is maintained despite any processing performed in the signal processor that would otherwise destroy the vertical coherence of a transient.
Description
- This application is a Divisional of U.S. patent application Ser. No. 12/921,550, filed Jan. 5, 2011, which is a U.S. National Phase entry of PCT/EP2009/001108, filed Feb. 17, 2009, and claims priority to U.S. Patent Application No. 61/035,317, filed Mar. 10, 2008, each of which is incorporated herein by reference.
- The present invention relates to audio signal processing and, particularly, to audio signal manipulation in the context of applying audio effects to a signal containing transient events.
- It is known to manipulate audio signals such that the reproduction speed is changed while the pitch is maintained. Known methods for such a procedure are implemented by phase vocoders or by methods like (pitch synchronous) overlap-add, (P)SOLA, as, for example, described in J. L. Flanagan and R. M. Golden, The Bell System Technical Journal, November 1966, pp. 1493 to 1509; U.S. Pat. No. 6,549,884, Laroche, J. & Dolson, M.: Phase-vocoder pitch-shifting; Jean Laroche and Mark Dolson, “New Phase-Vocoder Techniques for Pitch-Shifting, Harmonizing and Other Exotic Effects”, Proc. 1999 IEEE Workshop on Applications of Signal Processing to Audio and Acoustics, New Paltz, N.Y., Oct. 17-20, 1999; and Zölzer, U.: DAFX: Digital Audio Effects; Wiley & Sons; Edition: 1 (Feb. 26, 2002); pp. 201-298.
- Additionally, audio signals can be subjected to a transposition using such methods, i.e. phase vocoders or (P)SOLA, where the particular feature of this kind of transposition is that the transposed audio signal has the same reproduction/replay length as the original audio signal before transposition, while the pitch is changed. This is obtained by an accelerated reproduction of the stretched signals, where the acceleration factor for performing the accelerated reproduction depends on the stretching factor for stretching the original audio signal in time. When one has a time-discrete signal representation, this procedure corresponds to a down-sampling of the stretched signal or decimation of the stretched signal by a factor equal to the stretching factor, where the sampling frequency is maintained.
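The decimation step of this transposition can be illustrated as follows. This Python fragment is for illustration only; a readily time-stretched 440 Hz tone (twice the original duration, pitch preserved) stands in for the output of a phase vocoder or (P)SOLA stage:

```python
import numpy as np

fs = 8000
# Assume 'stretched' came out of a time-stretching stage with factor 2:
# twice the original one-second duration, pitch preserved at 440 Hz.
stretched = np.sin(2 * np.pi * 440 * np.arange(2 * fs) / fs)

# Accelerated reproduction: keep the sampling frequency but take every 2nd
# sample. The duration returns to the original length while the pitch doubles.
transposed = stretched[::2]
```

Discarding every second sample while keeping the sampling frequency halves the duration back to the original length and raises the pitch by the stretching factor of 2, i.e. to 880 Hz.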
- A specific challenge in such audio signal manipulations are transient events. Transient events are events in a signal in which the energy of the signal in the whole band or in a certain frequency range is rapidly changing, i.e. rapidly increasing or rapidly decreasing. A characteristic feature of transients (transient events) is the distribution of signal energy in the spectrum. Typically, the energy of the audio signal during a transient event is distributed over the whole frequency range while, in non-transient signal portions, the energy is normally concentrated in the low-frequency portion of the audio signal or in specific bands. This means that a non-transient signal portion, which is also called a stationary or tonal signal portion, has a spectrum which is non-flat. In other words, the energy of the signal is included in a comparatively small number of spectral lines/spectral bands which are strongly raised over the noise floor of the audio signal. In a transient portion, however, the energy of the audio signal will be distributed over many different frequency bands and, specifically, will be distributed in the high-frequency portion, so that a spectrum for a transient portion of the audio signal will be comparatively flat and will, in any event, be flatter than a spectrum of a tonal portion of the audio signal. Typically, a transient event is a strong change in time, which means that the signal will include many higher harmonics when a Fourier decomposition is performed. An important feature of these many higher harmonics is that their phases are in a very specific mutual relationship so that a superposition of all these sine waves results in a rapid change of signal energy. In other words, there exists a strong correlation across the spectrum.
- The specific phase situation among all harmonics can also be termed as a “vertical coherence”. This “vertical coherence” is related to a time/frequency spectrogram representation of the signal where a horizontal direction corresponds to the development of the signal over time and where the vertical dimension describes the interdependence over the frequency of the spectral components (transform frequency bins) in one short-time spectrum over frequency.
- Due to the typical processing steps, which are performed in order to time stretch or shorten an audio signal, this vertical coherence is destroyed, which means that a transient is “smeared” over time when a transient is subjected to a time stretching or time shortening operation as e.g. performed by a phase vocoder or any other method, which performs a frequency-dependent processing introducing phase shifts into the audio signal, which are different for different frequency coefficients.
- When the vertical coherence of transients is destroyed by an audio signal processing method, the manipulated signal will be very similar to the original signal in stationary or non-transient portions, but the transient portions will have a reduced quality in the manipulated signal. The uncontrolled manipulation of the vertical coherence of a transient results in temporal dispersion of the same, since many harmonic components contribute to a transient event and changing the phases of all these components in an uncontrolled manner inevitably results in such artifacts.
- However, transient portions are extremely important for the dynamics of an audio signal, such as a music signal or a speech signal where sudden changes of energy in a specific time represent a great deal of the subjective user impression on the quality of the manipulated signal. In other words, transient events in an audio signal are typically quite remarkable “milestones” of an audio signal, which have an over-proportional influence on the subjective quality impression. Manipulated transients in which the vertical coherence has been destroyed by a signal processing operation or has been degraded with respect to the transient portion of the original signal will sound distorted, reverberant and unnatural to the listener.
- Some current methods stretch the time around the transients to a higher extent so as to have to perform, during the duration of the transient, no or only minor time stretching. Such known references and patents describe methods for time and/or pitch manipulation. Known references are: Laroche, J., Dolson, M.: “Improved phase vocoder time-scale modification of audio”, IEEE Trans. Speech and Audio Processing, vol. 7, no. 3, pp. 323-332; Emmanuel Ravelli, Mark Sandler and Juan P. Bello: “Fast implementation for non-linear time-scaling of stereo audio”, Proc. of the 8th Int. Conference on Digital Audio Effects (DAFx'05), Madrid, Spain, Sep. 20-22, 2005; Duxbury, C., M. Davies, and M. Sandler (2001, December): “Separation of transient information in musical audio using multiresolution analysis techniques”, in Proceedings of the COST G-6 Conference on Digital Audio Effects (DAFX-01), Limerick, Ireland; and Röbel, A.: “A New Approach to Transient Processing in the Phase Vocoder”, Proc. of the 6th Int. Conference on Digital Audio Effects (DAFx-03), London, UK, Sep. 8-11, 2003.
- During time stretching of audio signals by phase vocoders, transient signal portions are “blurred” by dispersion, since the so-called vertical coherence of the signal is impaired. Methods using so-called overlap-add techniques, like (P)SOLA, may generate disturbing pre- and post-echoes of transient sound events. These problems may actually be addressed by increased time stretching in the environment of transients; however, if a transposition is to occur, the transposition factor will no longer be constant in the environment of the transients, i.e. the pitch of superimposed (possibly tonal) signal components will change and will be perceived as a disturbance.
- According to an embodiment, an apparatus for manipulating an audio signal having a transient event may have: a signal processor for processing a transient-reduced audio signal in which a first time portion having the transient event is removed, or for processing an audio signal having the transient event, to acquire a processed audio signal; and a signal inserter for inserting a second time portion into the processed audio signal at a signal location where the first portion was removed or where the transient event is located in the processed audio signal, wherein the second time portion has a transient event not influenced by the processing performed by the signal processor, so that a manipulated audio signal is acquired.
- According to another embodiment, an apparatus for generating a meta data signal for an audio signal having a transient event may have a transient detector for detecting a transient event in the audio signal; a meta data calculator for generating the meta data indicating a time position of the transient event in the audio signal or indicating a start-time instant before the transient event or a stop-time instant subsequent to the transient event or a duration of a time portion of the audio signal including the transient event; and a signal output interface for generating the meta data signal either having the meta data or having the audio signal and the meta data for transmission or storage.
- According to another embodiment, a method of manipulating an audio signal having a transient event may have the steps of: processing a transient-reduced audio signal in which a first time portion having the transient event is removed, or processing an audio signal having the transient event, to acquire a processed audio signal; and inserting a second time portion into the processed audio signal at a signal location where the first portion was removed or where the transient event is located in the processed audio signal, wherein the second time portion has a transient event not influenced by the processing, so that a manipulated audio signal is acquired.
- According to another embodiment, a method of generating a meta data signal for an audio signal having a transient event may have the steps of detecting a transient event in the audio signal; generating the meta data indicating a time position of the transient event in the audio signal or indicating a start-time instant before the transient event or a stop-time instant subsequent to the transient event or a duration of a time portion of the audio signal including the transient event; and generating the meta data signal either having the meta data or having the audio signal and the meta data for transmission or storage.
- According to another embodiment, a meta data signal for an audio signal having a transient event may have: information indicating a time position of the transient event in the audio signal, or indicating a start-time instant before the transient event, or a stop-time instant subsequent to the transient event, or a duration of a time portion of the audio signal including the transient event; and information on the position of the time portion in the audio signal.
- According to another embodiment, a computer program may have a program code for performing, when running on a computer, the method of manipulating an audio signal having a transient event, which may have the steps of: processing a transient-reduced audio signal in which a first time portion having the transient event is removed, or processing an audio signal having the transient event, to acquire a processed audio signal; and inserting a second time portion into the processed audio signal at a signal location where the first portion was removed or where the transient event is located in the processed audio signal, wherein the second time portion has a transient event not influenced by the processing, so that a manipulated audio signal is acquired; or the method of generating a meta data signal for an audio signal having a transient event, which may have the steps of: detecting a transient event in the audio signal; generating the meta data indicating a time position of the transient event in the audio signal or indicating a start-time instant before the transient event or a stop-time instant subsequent to the transient event or a duration of a time portion of the audio signal including the transient event; and generating the meta data signal either having the meta data or having the audio signal and the meta data for transmission or storage.
- For addressing the quality problems occurring in an uncontrolled processing of transient portions, the present invention makes sure that transient portions are not processed in a detrimental way at all: either they are removed before processing and reinserted after processing, or the transient events are processed but are then removed from the processed signal and replaced by non-processed transient events.
- The transient portions inserted into the processed signal are copies of corresponding transient portions in the original audio signal, so that the manipulated signal consists of a processed portion not including a transient and a non- or differently processed portion including the transient. Exemplarily, the original transient can be subjected to decimation or any kind of weighting or parameterized processing. Alternatively, however, transient portions can be replaced by synthetically-created transient portions, which are synthesized in such a way that the synthesized transient portion is similar to the original transient portion with respect to some transient parameters, such as the amount of energy change in a certain time or any other measure characterizing a transient event. Thus, one could even characterize a transient portion in the original audio signal and remove this transient before processing, or replace the processed transient by a synthesized transient, which is synthetically created based on transient parametric information. For efficiency reasons, however, it is advantageous to copy a portion of the original audio signal before manipulation and to insert this copy into the processed audio signal, since this procedure guarantees that the transient portion in the processed signal is identical to the transient of the original signal. This procedure makes sure that the specifically high influence of transients on sound signal perception is maintained in the processed signal compared to the original signal before processing. Thus, a subjective or objective quality with respect to the transients is not degraded by any kind of audio signal processing for manipulating an audio signal.
- In embodiments, the present application provides a novel method for a perceptually favorable treatment of transient sound events within the framework of such processing, which would otherwise generate a temporal “blurring” by dispersion of the signal. This method essentially comprises removing the transient sound events prior to the signal manipulation for the purpose of time stretching and, subsequently, adding the unprocessed transient signal portion to the modified (stretched) signal in a temporally accurate manner, while taking the stretching into account.
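As an illustration only (this sketch is not part of the patent disclosure; all function and parameter names are invented), the remove/stretch/reinsert scheme described above can be written in a few lines, with a naive sample-repetition stretch standing in for a real phase vocoder:

```python
import numpy as np

def manipulate_with_transient_reinsertion(x, t_start, t_stop, stretch, processor):
    """Cut the transient portion out, process (e.g. time-stretch) the
    transient-reduced remainder, then paste the unprocessed transient
    copy back in at the correspondingly stretched position."""
    transient_copy = x[t_start:t_stop].copy()              # unprocessed transient portion
    remainder = np.concatenate([x[:t_start], x[t_stop:]])  # transient-reduced signal
    processed = processor(remainder, stretch)
    insert_at = int(round(t_start * stretch))              # insertion point moves with the stretch
    return np.concatenate([processed[:insert_at], transient_copy, processed[insert_at:]])

def naive_stretch(x, factor):
    """Crude placeholder for a vocoder: stretch by repeating samples."""
    idx = np.minimum((np.arange(int(len(x) * factor)) / factor).astype(int), len(x) - 1)
    return x[idx]

x = np.arange(100, dtype=float)
y = manipulate_with_transient_reinsertion(x, 40, 50, 2.0, naive_stretch)
```

The reinserted samples are bit-identical to the original transient portion, which is the efficiency argument made above for copying rather than synthesizing the transient.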
- Embodiments of the present invention are subsequently explained with reference to the accompanying drawings, in which:
-
FIG. 1 illustrates an embodiment of an inventive apparatus or method for manipulating an audio signal having a transient; -
FIG. 2 illustrates an implementation of a transient signal remover of FIG. 1; -
FIG. 3 a illustrates an implementation of a signal processor of FIG. 1; -
FIG. 3 b illustrates a further embodiment for implementing the signal processor of FIG. 1; -
FIG. 4 illustrates an implementation of the signal inserter of FIG. 1; -
FIG. 5 a illustrates an overview of the implementation of a vocoder to be used in the signal processor of FIG. 1; -
FIG. 5 b shows an implementation of parts (analysis) of a signal processor of FIG. 1; -
FIG. 5 c illustrates other parts (stretching) of a signal processor of FIG. 1; -
FIG. 6 illustrates a transform implementation of a phase vocoder to be used in the signal processor of FIG. 1; -
FIG. 7 a illustrates an encoder side of a bandwidth extension processing scheme; -
FIG. 7 b illustrates a decoder side of a bandwidth extension scheme; -
FIG. 8 a illustrates an energy representation of an audio input signal with a transient event; -
FIG. 8 b illustrates the signal of FIG. 8 a, but with a windowed transient; -
FIG. 8 c illustrates a signal without the transient portion prior to being stretched; -
FIG. 8 d illustrates the signal of FIG. 8 c subsequent to being stretched; and -
FIG. 8 e illustrates the manipulated signal after the corresponding portion of the original signal has been inserted. -
FIG. 9 illustrates an apparatus for generating side information for an audio signal. -
FIG. 1 illustrates an apparatus for manipulating an audio signal having a transient event. The apparatus comprises a transient signal remover 100 having an input 101 for an audio signal with a transient event. The output 102 of the transient signal remover is connected to a signal processor 110. The signal processor output 111 is connected to a signal inserter 120. The signal inserter output 121, on which a manipulated audio signal with an unprocessed “natural” or synthesized transient is available, may be connected to a further device such as a signal conditioner 130, which can perform any further processing of the manipulated signal such as a down-sampling/decimation needed for bandwidth extension purposes as discussed in connection with FIGS. 7A and 7B. - However, the signal conditioner 130 need not be used at all if the manipulated audio signal obtained at the output of the signal inserter 120 is used as it is, i.e. is stored for further processing, is transmitted to a receiver or is transmitted to a digital/analog converter which, in the end, is connected to loudspeaker equipment to finally generate a sound signal representing the manipulated audio signal. - In the case of bandwidth extension, the signal on line 121 can already be the high band signal. Then, the signal processor has generated the high band signal from the input low band signal, and the low band transient portion extracted from the audio signal 101 would have to be put into the frequency range of the high band, which is done by a signal processing operation not disturbing the vertical coherence, such as a decimation. This decimation would be performed before the signal inserter so that the decimated transient portion is inserted in the high band signal at the output of block 110. In this embodiment, the signal conditioner would perform any further processing of the high band signal such as envelope shaping, noise addition, inverse filtering or adding of harmonics etc., as done e.g. in MPEG-4 Spectral Band Replication. - The signal inserter 120 receives side information from the remover 100 via line 123 in order to choose the right portion from the unprocessed signal to be inserted into the processed signal on line 111. - When the embodiment having devices 100, 110, 120 is used, the signals illustrated in FIGS. 8 a to 8 e may be obtained. However, it is not necessarily needed to remove the transient portion before performing the signal processing operation in the signal processor 110. In this embodiment, the transient signal remover 100 is not needed and the signal inserter 120 determines a signal portion to be cut out from the processed signal on output 111 and replaces this cut-out signal by a portion of the original signal, as schematically illustrated by line 121, or by a synthesized signal, as illustrated by line 141, where this synthesized signal can be generated in a transient signal generator 140. In order to be able to generate a suitable transient, the signal inserter 120 is configured to communicate transient description parameters to the transient signal generator. Therefore, the connection between blocks 120 and 140 at item 141 is illustrated as a two-way connection. When a specific transient detector is provided in the apparatus for manipulating, the information on the transient can be provided from this transient detector (not shown in FIG. 1) to the transient signal generator 140. The transient signal generator may be implemented to have transient samples, which can directly be used, or to have pre-stored transient samples, which can be weighted using transient parameters in order to actually generate/synthesize a transient to be used by the signal inserter 120. - In one embodiment, the transient signal remover 100 is configured for removing a first time portion from the audio signal to obtain a transient-reduced audio signal, wherein the first time portion comprises the transient event. -
line 111. - The
signal inserter 120 is configured for inserting a second time portion into the processed audio signal at a signal location where the first time portion has been removed or where the transient event is located in the audio signal, wherein the second time portion comprises a transient event not influenced by the processing performed by the signal processor 110, so that the manipulated audio signal at output 121 is obtained. -
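The decimation-based transposition of the low band transient portion mentioned above for the bandwidth extension case can be illustrated as follows. This is a hedged sketch under the assumption of an integer factor; the function name is invented, and a real system would band-limit first to avoid aliasing:

```python
import numpy as np

def decimate_to_highband(lowband, factor=2):
    """Keep every `factor`-th sample: this halves the duration and, at an
    unchanged playback rate, multiplies every frequency by `factor`.
    Because all partials are scaled by the same factor, the vertical
    coherence of a transient is preserved by this operation."""
    return lowband[::factor]

# a 100 Hz tone sampled at 8 kHz becomes a 200 Hz tone after decimation by 2
fs = 8000
t = np.arange(fs) / fs
x = np.sin(2 * np.pi * 100 * t)
y = decimate_to_highband(x, 2)
peak_hz = np.argmax(np.abs(np.fft.rfft(y))) * fs / len(y)
```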
FIG. 2 illustrates an embodiment of the transient signal remover 100. In one embodiment, in which the audio signal does not include any side information/meta information on transients, the transient signal remover 100 comprises a transient detector 103, a fade-out/fade-in calculator 104 and a first portion remover 105. In an alternative embodiment, in which information on transients in the audio signal has been collected and attached to the audio signal by an encoding device as discussed later on with respect to FIG. 9, the transient signal remover 100 comprises a side information extractor 106, which extracts the side information attached to the audio signal as indicated by line 107. The information on the transient time may be provided to the fade-out/fade-in calculator 104 as illustrated by line 107. When, however, the audio signal includes, as meta information, not (only) the transient time, i.e. the accurate time at which the transient event occurs, but the start/stop time of the portion to be excluded from the audio signal, i.e. the start time and the stop time of the “first portion” of the audio signal, then the fade-out/fade-in calculator 104 is not needed either and the start/stop time information can be directly forwarded to the first portion remover 105 as illustrated by line 108. Line 108 illustrates an option, and all other lines indicated by broken lines are optional as well. - In FIG. 2, the fade-out/fade-in calculator 104 outputs side information 109. This side information 109 is different from the start/stop times of the first portion, since the nature of the processing in the processor 110 of FIG. 1 is taken into account. Furthermore, the input audio signal is fed into the remover 105. - The fade-out/fade-in calculator 104 provides the start/stop times of the first portion. These times are calculated based on the transient time so that not only the transient event, but also some samples surrounding the transient event, are removed by the first portion remover 105. Furthermore, it is advantageous not to just cut out the transient portion by a time domain rectangular window, but to perform the extraction with a fade-out portion and a fade-in portion. For performing a fade-out or a fade-in, any kind of window having a smoother transition compared to a rectangular filter, such as a raised cosine window, can be applied so that the frequency response of this extraction is not as problematic as it would be if a rectangular window were applied, although this is also an option. This time domain windowing operation outputs the remainder of the windowing operation, i.e. the audio signal without the windowed portion. -
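A minimal sketch of a transient detector 103 and a first portion remover 105 with raised-cosine transitions might look as follows. The energy-threshold detection rule and all parameter values are assumptions for illustration; the patent does not prescribe a particular detector:

```python
import numpy as np

def detect_transient(x, frame=256, hop=128, ratio=4.0):
    """Energy-based transient detector (an assumed rule, not the patent's):
    return the start index of the first frame whose short-time energy
    exceeds `ratio` times a running average of preceding frames, or None."""
    avg = None
    for start in range(0, len(x) - frame, hop):
        e = float(np.sum(x[start:start + frame] ** 2))
        if avg is not None and avg > 0.0 and e > ratio * avg:
            return start
        avg = e if avg is None else 0.9 * avg + 0.1 * e
    return None

def remove_first_portion(x, start, stop, fade=64):
    """Cut out x[start:stop] and apply raised-cosine fades on both sides
    of the cut instead of a rectangular window, so the extraction has a
    more benign frequency response."""
    y = np.concatenate([x[:start], x[stop:]])
    ramp = 0.5 * (1.0 - np.cos(np.pi * np.arange(fade) / fade))  # 0 -> 1
    y[start - fade:start] *= ramp[::-1]   # fade out toward the cut
    y[start:start + fade] *= ramp          # fade back in after the cut
    return y
```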
- Naturally, all calculations performed by the
transient detector 103 and the fade-out/fade-in calculator 104 can be applied as well on the encoding side, as discussed in connection with FIG. 9, as long as the results of these calculations, such as the transient time and/or the start/stop times of the first portion, are transmitted to a signal manipulator either as side information or meta information together with the audio signal, or separately from the audio signal, such as within a separate audio meta data signal to be transmitted via a separate transmission channel. -
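For illustration, side information of the kind described (transient time plus the start/stop times of the first portion) could be serialized as follows. The field names and the JSON container are invented for this sketch and are not taken from the patent:

```python
from dataclasses import dataclass, asdict
import json

@dataclass
class TransientMetaData:
    transient_time: int   # sample index of the transient event
    start_time: int       # first sample of the first portion to be removed
    stop_time: int        # first sample after the first portion

# encoder side: detect once, transmit with (or separately from) the audio signal
meta = TransientMetaData(transient_time=4821, start_time=4700, stop_time=5000)
payload = json.dumps(asdict(meta))

# decoder side: with this meta data, the manipulator can skip both the
# transient detector 103 and the fade-out/fade-in calculator 104
restored = json.loads(payload)
```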
FIG. 3 a illustrates an implementation of the signal processor 110 of FIG. 1. This implementation comprises a frequency-selective analyzer 112 and a subsequently-connected frequency-selective processing device 113. The frequency-selective processing device 113 is implemented such that it applies a negative influence on the vertical coherence of the original audio signal. Examples of this processing are the stretching of a signal in time or the shortening of a signal in time, where this stretching or shortening is applied in a frequency-selective manner so that, for example, the processing introduces phase shifts into the processed audio signal which are different for different frequency bands. - A way of processing is illustrated in FIG. 3B in the context of a phase vocoder processing. Generally, a phase vocoder comprises a sub-band/transform analyzer 114, a subsequently-connected processor 115 for performing a frequency-selective processing of a plurality of output signals provided by item 114 and, subsequently, a sub-band/transform combiner 116, which combines the signals processed by item 115 in order to finally obtain a processed signal in the time domain at output 117, where this processed signal in the time domain, again, is a full bandwidth signal or a lowpass filtered signal, as long as the bandwidth of the processed signal 117 is larger than the bandwidth represented by a single branch between items 114 and 116. Hence, the sub-band/transform combiner 116 performs a combination of frequency-selective signals. - Further details on the phase vocoder are subsequently discussed in connection with FIGS. 5A, 5B, 5C and 6. -
signal inserter 120 ofFIG. 1 is discussed and is depicted inFIG. 4 . The signal inserter comprises acalculator 122 for calculating the length of the second time portion. In order to be able to calculate the length for the second time portion in the embodiment in which the transient portion has been removed before the signal processing in thesignal processor 110 inFIG. 1 , the length of the removed first portion and the time stretching factor (or the time shortening factor) are needed so that the length of the second time portion is calculated initem 122. These data items can be input from outside as discussed in connection withFIGS. 1 and 2 . - Exemplarily, the length of the second time portion is calculated by multiplying the length of the first portion by the stretching factor.
- The length of the second time portion is forwarded to a
calculator 123 for calculating the first border and the second border of the second time portion in the audio signal. In particular, the calculator 133 may be implemented to perform a cross-correlation processing between the processed audio signal without the transient event supplied atinput 124 and the audio signal with the transient event, which provides the second portion as supplied at input 125.Thecalculator 123 is controlled by afurther control input 126 so that a positive shift of the transient event within the second time portion is advantageous versus a negative shift of the transient event as discussed later. - The first border and the second border of the second time portion are provided to an extractor 127.The
extractor 127 cuts out the portion, i.e. the second time portion out of the original audio signal provided atinput 125. Since asubsequent cross-fader 128 is used, the cut-out takes place using a rectangular filter. In thecross-fader 128, the start portion of the second time portion and the stop portion of the second time portion are weighted by an increasing weight from 0 to 1 for the start portion and/or decreasing weight from 1 to 0 in the end portion so that in this cross-fade region, the end portion of the processed signal together with the start portion of the extracted signal, when added together, result in a useful signal. A similar processing is performed in thecross-fader 128 for the end of the second time portion and the beginning of the processed audio signal after the extraction. The cross-fading makes sure that no time domain artifacts occur which would otherwise be perceivable as clicking artifacts when the borders of the processed audio signal without the transient portion and the second time portion borders do not perfectly match together. - Subsequently, reference is made to
FIGS. 5 a, 5 b, 5 c and 6 in order to illustrate an implementation of thesignal processor 110 in the context of a phase vocoder. - In the following, with reference to
FIGS. 5 and 6 , implementations for a vocoder are illustrated according to the present invention.FIG. 5 a shows a filterbank implementation of a phase vocoder, wherein an audio signal is fed in at aninput 500 and obtained at anoutput 510. In particular, each channel of the schematic filterbank illustrated inFIG. 5 a includes abandpass filter 501 and adownstream oscillator 502. Output signals of all oscillators from every channel are combined by a combiner, which is for example implemented as an adder and indicated at 503, in order to obtain the output signal. Eachfilter 501 is implemented such that it provides an amplitude signal on the one hand and a frequency signal on the other hand. The amplitude signal and the frequency signal are time signals illustrating a development of the amplitude in afilter 501 over time, while the frequency signal represents a development of the frequency of the signal filtered by afilter 501. - A schematical setup of
filter 501 is illustrated inFIG. 5 b. Eachfilter 501 ofFIG. 5 a may be set up as inFIG. 5 b, wherein, however, only the frequencies fi supplied to the twoinput mixers 551 and theadder 552 are different from channel to channel. The mixer output signals are both lowpass filtered by lowpasses 553, wherein the lowpass signals are different insofar as they were generated by local oscillator frequencies (LO frequencies), which are out of phase by 90°. The upper lowpass filter 553 provides aquadrature signal 554, while the lower filter 553 provides an in-phase signal 555. These two signals, i.e. I and Q, are supplied to a coordinatetransformer 556 which generates a magnitude phase representation from the rectangular representation. The magnitude signal or amplitude signal, respectively, ofFIG. 5 a over time is output at anoutput 557. The phase signal is supplied to aphase unwrapper 558. At the output of theelement 558, there is no phase value present any more which is between 0 and 360°, but a phase value which increases linearly. This “unwrapped” phase value is supplied to a phase/frequency converter 559 which may for example be implemented as a simple phase difference former which subtracts a phase of a previous point in time from a phase at a current point in time to obtain a frequency value for the current point in time. This frequency value is added to the constant frequency value fi of the filter channel i to obtain a temporarily varying frequency value at theoutput 560. The frequency value at theoutput 560 has a direct component=fi and an alternating component=the frequency deviation by which a current frequency of the signal in the filter channel deviates from the average frequency fi. - Thus, as illustrated in
FIGS. 5 a and 5 b, the phase vocoder achieves a separation of the spectral information and time information. The spectral information is in the special channel or in the frequency fi which provides the direct portion of the frequency for each channel, while the time information is contained in the frequency deviation or the magnitude over time, respectively. -
FIG. 5 c shows a manipulation as it is executed for the bandwidth increase according to the invention, in particular in the vocoder and, in particular, at the location of the circuit plotted in dashed lines in FIG. 5 a. - For time scaling, e.g. the amplitude signals A(t) in each channel or the frequency signals f(t) in each channel may be decimated or interpolated, respectively. For purposes of transposition, as it is useful for the present invention, an interpolation, i.e. a temporal extension or spreading of the signals A(t) and f(t), is performed to obtain spread signals A′(t) and f′(t), wherein the interpolation is controlled by a spread factor in a bandwidth extension scenario. By the interpolation of the phase variation, i.e. the value before the addition of the constant frequency by the adder 552, the frequency of each individual oscillator 502 in FIG. 5 a is not changed. The temporal change of the overall audio signal is slowed down, however, e.g. by the factor 2. The result is a temporally spread tone having the original pitch, i.e. the original fundamental wave with its harmonics. - By performing the signal processing illustrated in FIG. 5 c, wherein such a processing is executed in every filter band channel in FIG. 5 a, and by the resulting temporal signal then being decimated in a decimator, the audio signal is shrunk back to its original duration while all frequencies are doubled simultaneously. This leads to a pitch transposition by the factor 2, wherein, however, an audio signal is obtained which has the same length as the original audio signal, i.e. the same number of samples. -
FIG. 5 a, a transform implementation of a phase vocoder may also be used as depicted inFIG. 6 . Here, theaudio signal 100 is fed into an FFT processor, or more generally, into a Short-Time-Fourier-Transform-Processor 600 as a sequence of time samples. TheFFT processor 600 is implemented schematically inFIG. 6 to perform a time windowing of an audio signal in order to then, by means of an FFT, calculate magnitude and phase of the spectrum, wherein this calculation is performed for successive spectra which are related to blocks of the audio signal, which are strongly overlapping. - In an extreme case, for every new audio signal sample a new spectrum may be calculated, wherein a new spectrum may be calculated also e.g. only for each twentieth new sample. This distance a in samples between two spectra is given by a
controller 602. Thecontroller 602 is further implemented to feed anIFFT processor 604 which is implemented to operate in an overlapping operation. In particular, theIFFT processor 604 is implemented such that it performs an inverse short-time Fourier Transformation by performing one IFFT per spectrum based on magnitude and phase of a modified spectrum, in order to then perform an overlap add operation, from which the resulting time signal is obtained. The overlap add operation eliminates the effects of the analysis window. - A spreading of the time signal is achieved by the distance b between two spectra, as they are processed by the
IFFT processor 604, being greater than the distance a between the spectrums in the generation of the FFT spectrums. The basic idea is to spread the audio signal by the inverse FFTs simply being spaced apart further than the analysis FFTs. As a result, temporal changes in the synthesized audio signal occur more slowly than in the original audio signal. - Without a phase resealing in
block 606, this would, however, lead to artifacts. When, for example, one single frequency bin is considered for which successive phase values by 45° are implemented, this implies that the signal within this filterbank increases in the phase with a rate of ⅛ of a cycle, i.e. by 45° per time interval, wherein the time interval here is the time interval between successive FFTs. If now the inverse FFTs are being spaced farther apart from each other, this means that the 45° phase increase occurs across a longer time interval. This means that due to the phase shift a mismatch in the subsequent overlap-add process occurs leading to unwanted signal cancellation. To eliminate this artifact, the phase is resealed by exactly the same factor by which the audio signal was spread in time. The phase of each FFT spectral value is thus increased by the factor b/a, so that this mismatch is eliminated. - While in the embodiment illustrated in
FIG. 5 c the spreading by interpolation of the amplitude/frequency control signals was achieved for one signal oscillator in the filterbank implementation ofFIG. 5 a, the spreading inFIG. 6 is achieved by the distance between two IFFT spectra being greater than the distance between two FFT spectra, i.e. b being greater than a, wherein, however, for an artifact prevention a phase resealing is executed according to b/a. - With regard to a detailed description of phase-vocoders reference is made to the following documents:
- “The phase Vocoder: A tutorial”, Mark Dolson, Computer Music Journal, vol. 10, no. 4, pp. 14-27, 1986, or “New phase Vocoder techniques for pitch-shifting, harmonizing and other exotic effects”, L. Laroche and M. Dolson, Proceedings 1999 IEEE Workshop on applications of signal processing to audio and acoustics, New Paltz, N.Y., Oct. 17-20, 1999, pages 91 to 94; “New approached to transient processing interphase vocoder”, A. Röbel, Proceeding of the 6th international conference on digital audio effects (DAFx-03), London, UK, Sep. 8-11, 2003, pages DAFx-1 to DAFx-6; “Phase-locked Vocoder”, Meller Puckette, Proceedings 1995, IEEE ASSP, Conference on applications of signal processing to audio and acoustics, or U.S. Pat. No. 6,549,884.
- Alternatively, other methods for signal spreading are available, such as, for example, the ‘Pitch Synchronous Overlap Add’ method. Pitch Synchronous Overlap Add, in short PSOLA, is a synthesis method in which recordings of speech signals are located in the database. As far as these are periodic signals, the same are provided with information on the fundamental frequency (pitch) and the beginning of each period is marked. In the synthesis, these periods are cut out with a certain environment by means of a window function, and added to the signal to be synthesized at a suitable location: Depending on whether the desired fundamental frequency is higher or lower than that of the database entry, they are combined accordingly denser or less dense than in the original. For adjusting the duration of the audible, periods may be omitted or output in double. This method is also called TD-PSOLA, wherein TD stands for time domain and emphasizes that the methods operate in the time domain. A further development is the MultiBand Resynthesis OverLap Add Method, in short MBROLA. Here the segments in the database are brought to a uniform fundamental frequency by a pre-processing and the phase position of the harmonic is normalized. By this, in the synthesis of a transition from a segment to the next, less perceptive interferences result and the achieved speech quality is higher.
- In a further alternative, the audio signal is bandpass filtered already before spreading, so that the signal after spreading and decimation already contains the desired portions and the subsequent bandpass filtering may be omitted. In this case, the bandpass filter is set so that the portion of the audio signal which would otherwise have been filtered out after bandwidth extension is still contained in the output signal of the bandpass filter. The bandpass filter thus passes a frequency range which is not contained in the audio signal after spreading and decimation. The signal within this frequency range is the desired signal forming the synthesized high-frequency signal.
- The signal manipulator as illustrated in
FIG. 1 may, additionally, comprise the signal conditioner 130 for further processing the audio signal with the unprocessed “natural” or synthesized transient on line 121. This signal conditioner can be a signal decimator within a bandwidth extension application, which, at its output, generates a high-band signal, which can then be further adapted to closely resemble the characteristics of the original high-band signal by using high frequency (HF) parameters to be transmitted together with an HFR (high frequency reconstruction) datastream. -
FIGS. 7a and 7b illustrate a bandwidth extension scenario, which can advantageously use the output signal of the signal conditioner within the bandwidth extension coder 720 of FIG. 7b. An audio signal is fed into a lowpass/highpass combination at an input 700. The lowpass/highpass combination on the one hand includes a lowpass (LP) to generate a lowpass filtered version of the audio signal 700, illustrated at 703 in FIG. 7a. This lowpass filtered audio signal is encoded with an audio encoder 704. The audio encoder is, for example, an MP3 encoder (MPEG1 Layer 3) or an AAC encoder, also known as an MP4 encoder and described in the MPEG4 standard. Alternative audio encoders providing a transparent or advantageously perceptually transparent representation of the band-limited audio signal 703 may be used in the encoder 704 to generate a completely encoded or perceptually encoded and perceptually transparently encoded audio signal 705, respectively. - The upper band of the audio signal is output at an
output 706 by the highpass portion of the filter 702, designated by “HP”. The highpass portion of the audio signal, i.e. the upper band or HF band, also designated as the HF portion, is supplied to a parameter calculator 707 which is implemented to calculate the different parameters. These parameters are, for example, the spectral envelope of the upper band 706 in a relatively coarse resolution, for example by representing a scale factor for each psychoacoustic frequency group or for each Bark band on the Bark scale, respectively. A further parameter which may be calculated by the parameter calculator 707 is the noise floor in the upper band, whose energy per band may be related to the energy of the envelope in this band. Further parameters which may be calculated by the parameter calculator 707 include a tonality measure for each partial band of the upper band, which indicates how the spectral energy is distributed in a band: if the spectral energy in the band is distributed relatively uniformly, a non-tonal signal exists in this band; if the energy in the band is relatively strongly concentrated at a certain location, rather a tonal signal exists for this band. - A further set of parameters consists in explicitly encoding peaks which protrude relatively strongly in the upper band, with regard to their height and their frequency, since the bandwidth extension concept, in a reconstruction without such an explicit encoding of prominent sinusoidal portions in the upper band, will recover the same only very rudimentarily, or not at all.
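To illustrate the kind of quantities the parameter calculator 707 might derive, the sketch below computes one scale factor (band RMS) per band of an assumed band partition, plus a spectral-flatness measure as a simple tonality proxy. The band edges, the RMS choice, and the flatness measure are illustrative assumptions, not the patent's exact Bark-scale bands or formulas.

```python
import numpy as np

def band_scale_factors(highband, sr, band_edges_hz):
    # Coarse spectral envelope: one RMS value per assumed band.
    power = np.abs(np.fft.rfft(highband)) ** 2 / len(highband)
    freqs = np.fft.rfftfreq(len(highband), d=1.0 / sr)
    factors = []
    for lo, hi in zip(band_edges_hz[:-1], band_edges_hz[1:]):
        band = power[(freqs >= lo) & (freqs < hi)]
        factors.append(float(np.sqrt(band.mean())) if band.size else 0.0)
    return factors

def spectral_flatness(band_power):
    # Geometric / arithmetic mean of the band power: near 1 for a
    # uniformly distributed (non-tonal) band, near 0 when the energy
    # is concentrated at one location (tonal band).
    p = np.asarray(band_power, dtype=float) + 1e-12
    return float(np.exp(np.mean(np.log(p))) / np.mean(p))
```

A decoder-side envelope shaper would scale each synthesized band so its RMS matches the transmitted factor.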
- In any case, the
parameter calculator 707 is implemented to generate only parameters 708 for the upper band, which may be subjected to similar entropy reduction steps as may also be performed in the audio encoder 704 for quantized spectral values, such as, for example, differential encoding, prediction or Huffman encoding. The parameter representation 708 and the audio signal 705 are then supplied to a datastream formatter 709 which is implemented to provide an output-side datastream 710, which will typically be a bitstream according to a certain format as standardized, for example, in the MPEG4 standard. - The decoder side, as it is especially suitable for the present invention, is illustrated in the following with regard to
FIG. 7b. The datastream 710 enters a datastream interpreter 711 which is implemented to separate the bandwidth-extension-related parameter portion 708 from the audio signal portion 705. The parameter portion 708 is decoded by a parameter decoder 712 to obtain decoded parameters 713. In parallel to this, the audio signal portion 705 is decoded by an audio decoder 714 to obtain an audio signal. - Depending on the implementation, the
audio signal 100 may be output via a first output 715. At the output 715, an audio signal with a small bandwidth and thus also a low quality may then be obtained. For a quality improvement, however, the inventive bandwidth extension 720 is performed to obtain the audio signal 712 on the output side with an extended or high bandwidth, respectively, and thus a high quality. - It is known from WO 98/57436 to subject the audio signal to a band limiting in such a situation on the encoder side and to encode only a lower band of the audio signal by means of a high-quality audio encoder. The upper band, however, is only very coarsely characterized, i.e. by a set of parameters which reproduces the spectral envelope of the upper band. On the decoder side, the upper band is then synthesized. For this purpose, a harmonic transposition is proposed, wherein the lower band of the decoded audio signal is supplied to a filterbank. Filterbank channels of the lower band are connected to filterbank channels of the upper band, or are “patched”, and each patched bandpass signal is subjected to an envelope adjustment. The synthesis filterbank belonging to a special analysis filterbank here receives bandpass signals of the audio signal in the lower band and envelope-adjusted bandpass signals of the lower band which were harmonically patched into the upper band. The output signal of the synthesis filterbank is an audio signal extended with regard to its bandwidth, which was transmitted from the encoder side to the decoder side with a very low data rate. In particular, the filterbank calculations and the patching in the filterbank domain may involve a high computational effort.
- The method presented here solves the problems mentioned. In contrast to existing methods, a windowed portion containing the transient is removed from the signal to be manipulated, and from the original signal a second windowed portion (generally different from the first portion) is additionally selected, which may be reinserted into the manipulated signal such that the temporal envelope in the environment of the transient is preserved as much as possible. This second portion is selected such that it accurately fits into the recess changed by the time-stretching operation. The accurate fitting-in is performed by calculating the maximum of the cross-correlation of the edges of the resulting recess with the edges of the original transient portion.
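The fitting-in step can be pictured as a search for the insertion offset maximizing the correlation between the edges of the copied transient portion and the edges of the recess in the stretched signal. The function below is an illustrative sketch; the edge length and search range are assumed values, not the patent's exact procedure.

```python
import numpy as np

def best_insert_offset(stretched, gap_start, portion, search=64):
    """Slide the copied portion over a small range around the recess
    position `gap_start` and return the offset whose edge regions
    correlate best with the stretched signal."""
    edge = len(portion) // 8                 # edge region used for matching
    best, best_score = 0, -np.inf
    for off in range(-search, search + 1):
        s = gap_start + off
        if s < 0 or s + len(portion) > len(stretched):
            continue
        left = np.dot(stretched[s:s + edge], portion[:edge])
        right = np.dot(stretched[s + len(portion) - edge:s + len(portion)],
                       portion[-edge:])
        if left + right > best_score:
            best, best_score = off, left + right
    return best
```

Only the overlapping edges enter the score, mirroring the idea that the middle of the portion (the transient itself) replaces, rather than matches, the recess.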
- Thus, the subjective audio quality of the transient is no longer impaired by dispersion and echo effects.
- Precise determination of the position of the transient for the purpose of selecting a suitable portion may be performed, e.g., using a moving centroid calculation of the energy over a suitable period of time.
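A minimal version of such a centroid calculation, with an assumed analysis window around the detected event, might look as follows:

```python
import numpy as np

def energy_centroid(x, start, stop):
    """Transient position as the centroid of the squared signal
    (energy) over an assumed window [start, stop); falls back to the
    window centre for an all-zero window."""
    e = np.asarray(x[start:stop], dtype=float) ** 2
    if e.sum() == 0.0:
        return (start + stop) // 2
    return start + int(round(np.dot(np.arange(len(e)), e) / e.sum()))
```

Sliding the window across a coarse detection region and tracking where the centroid stabilizes gives the "moving" variant of the calculation.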
- Along with the time-stretching factor, the size of the first portion determines the needed size of the second portion. This size is to be selected such that more than one transient is accommodated by the second portion used for reinsertion only if the time interval between the closely adjacent transients is below the threshold for human perceptibility of individual temporal events.
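In code form, the size relation and the grouping of closely adjacent transients might be expressed as below; the ~20 ms perceptibility threshold is an assumed illustrative value, not one specified by the text.

```python
def second_portion_length(first_len, stretch_factor):
    # The recess left by removing the first portion grows with the
    # stretch, so the reinserted portion must cover the stretched length.
    return int(round(first_len * stretch_factor))

def group_transients(times, sr, merge_ms=20.0):
    # Merge transient positions (sample indices) closer than an assumed
    # ~20 ms threshold, so only imperceptibly close events share one portion.
    groups, merge = [], merge_ms * sr / 1000.0
    for t in times:
        if groups and t - groups[-1][-1] < merge:
            groups[-1].append(t)
        else:
            groups.append([t])
    return groups
```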
- Optimum fitting-in of the transient in accordance with the maximum cross-correlation may need a slight offset in time relative to the original position of same. However, due to the existence of temporal pre- and, particularly, post-masking effects, the position of the reinserted transient need not precisely match the original position. Due to the extended period of action of the post-masking, a shift of the transient in the positive time direction is advantageous.
- By inserting the original signal portion, its timbre or pitch will be changed when the sampling rate is changed by a subsequent decimation step. Generally, however, this is masked by the transient itself by means of psychoacoustic temporal masking mechanisms. In particular, if stretching by an integer factor occurs, the timbre will only be changed slightly, since outside the environment of the transient, only every n-th (n = stretching factor) harmonic will be occupied.
- Using the new method, artifacts (dispersion, pre- and post-echoes) which result during processing of transients by means of time-stretching and transposition methods are effectively prevented. Potential impairment of the quality of superposed (possibly tonal) signal portions is avoided.
- The method is suitable for any audio applications wherein the reproduction speeds of audio signals or their pitches are to be changed.
- Subsequently, an embodiment in the context of
FIGS. 8a to 8e is discussed. FIG. 8a illustrates a representation of the audio signal, but in contrast to a straightforward time domain audio sample sequence, FIG. 8a illustrates an energy envelope representation, which can, for example, be obtained when each audio sample in a time domain sample illustration is squared. Specifically, FIG. 8a illustrates an audio signal 800 having a transient event 801, where the transient event is characterized by a sharp increase and decrease of energy over time. Naturally, a transient would also be a sharp increase of energy when this energy remains on a certain high level, or a sharp decrease of energy when the energy has been on a high level for a certain time before the decrease. A specific pattern for a transient is, for example, a clapping of hands or any other tone generated by a percussion instrument. Additionally, transients are rapid attacks of an instrument which starts playing a tone loudly, i.e. which provides sound energy into a certain band or a plurality of bands above a certain threshold level within a certain threshold time. Naturally, other energy fluctuations, such as the energy fluctuation 802 of the audio signal 800 in FIG. 8a, are not detected as transients. Transient detectors are known in the art and are extensively described in the literature; they rely on many different algorithms, which may comprise frequency-selective processing, a comparison of the result of the frequency-selective processing to a threshold, and a subsequent decision whether there was a transient or not. -
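A toy energy-ratio detector in the spirit of this description is sketched below; the frame size, threshold factor, and running-average update are illustrative choices, not values taken from the patent.

```python
import numpy as np

def detect_transients(x, frame=256, thresh=8.0):
    """Flag frame start indices whose short-time energy exceeds a
    slowly adapting background energy by a factor `thresh`."""
    energies = [float(np.sum(x[s:s + frame] ** 2))
                for s in range(0, len(x) - frame + 1, frame)]
    hits, avg = [], energies[0]
    for i, e in enumerate(energies[1:], start=1):
        if e > thresh * avg:
            hits.append(i * frame)          # sample index of the onset frame
        else:
            avg = 0.9 * avg + 0.1 * e       # update background on quiet frames only
    return hits
```

Updating the background estimate only on non-flagged frames keeps the burst itself from inflating the threshold immediately after the transient.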
FIG. 8b illustrates a windowed transient. The area delimited by the solid line is subtracted from the signal, weighted by the depicted window shape. The area marked by the dashed line is added again after processing. Specifically, the transient occurring at a certain transient time 803 has to be cut out from the audio signal 800. To be on the safe side, not only the transient, but also some adjacent/neighboring samples are cut out from the original signal. Therefore, the first time portion 804 is determined, where the first time portion extends from a start time instant 805 to a stop time instant 806. Generally, the first time portion 804 is selected so that the transient time 803 is included within the first time portion 804. FIG. 8c illustrates the signal without the transient prior to being stretched, as can be seen from its slowly-decaying edges. - Importantly,
FIG. 8c now illustrates the audio signal on line 102 of FIG. 1, i.e. subsequent to the transient signal removal. The slowly-decaying/increasing flanks have been generated by the cross fader 128 of FIG. 4. FIG. 8d illustrates the signal of FIG. 8c, but in a stretched state, i.e. subsequent to the processing applied by the signal processor 110. Thus, the signal in FIG. 8d is the signal on line 111 of FIG. 1. Due to the stretching operation, the first portion 804 has become much longer. Thus, the first portion 804 of FIG. 8d has been stretched to the second time portion 809, which has a second time portion start instant 810 and a second time portion stop instant 811. By stretching the signal, the flanks 807′, 808′ have been stretched as well. This stretching has to be accounted for when calculating the length of the second time portion, as performed by the calculator 122 of FIG. 4. - As soon as the length of the second time portion is determined, a portion corresponding to the length of the second time portion is cut out from the original audio signal illustrated at
FIG. 8a, as indicated by the broken line in FIG. 8b. To this end, the second time portion 809 has been entered into FIG. 8e. As discussed, the start time instant 812, i.e. the first border of the second time portion 809 in the original audio signal, and the stop time instant 813 of the second time portion, i.e. the second border of the second time portion in the original audio signal, do not necessarily have to be symmetrical with respect to the transient event time 803. The time instants 812, 813 of FIG. 8b can be slightly varied so that the cross correlation between the signal shape at these borders in the original signal and the corresponding portions in the stretched signal is as high as possible. Thus, the actual position of the transient 803 can be moved out of the center of the second time portion to a certain degree, which is indicated in FIG. 8e by reference number 803′, indicating a certain time with respect to the second time portion which deviates from the corresponding time 803 with respect to the second time portion in FIG. 8b. As discussed in connection with FIG. 4, item 126, a positive shift of the transient to a time 803′ with respect to a time 803 is advantageous due to the post-masking effect, which is more pronounced than the pre-masking effect. FIG. 8e additionally illustrates the crossover/transition regions in which the cross-fader 128 provides a cross-fade between the stretched signal without the transient and the copy of the original signal including the transient. - As illustrated in
FIG. 4, the calculator 122 for calculating the length of the second time portion is configured for receiving the length of the first time portion and the stretching factor. Alternatively, the calculator 122 can also receive information on the allowability of neighboring transients being included within one and the same first time portion. Based on this allowability, the calculator may determine the length of the first time portion 804 by itself and, depending on the stretching/shortening factor, then calculate the length of the second time portion 809. - As discussed above, the functionality of the signal inserter is that it takes, from the original signal, a suitable area for the gap in FIG. 8e, which is enlarged within the stretched signal, and fits this suitable area, i.e. the second time portion, into the processed signal, using a cross-correlation calculation for determining the time instants and performing a cross-fade within the cross-fade regions. -
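The insertion with cross-fades at both borders might be sketched as below; the raised-cosine ramps and the fade length are illustrative assumptions, and the insertion position would come from the cross-correlation fitting described above.

```python
import numpy as np

def insert_with_crossfade(stretched, portion, pos, fade=128):
    """Write the copied transient portion into the stretched signal at
    `pos`, blending `fade` samples at each border with raised-cosine
    cross-fade ramps."""
    out = stretched.copy()
    ramp = 0.5 - 0.5 * np.cos(np.pi * np.arange(fade) / fade)   # 0 -> 1
    mid = portion.astype(float).copy()
    # fade in: stretched signal -> copied portion
    mid[:fade] = ramp * mid[:fade] + (1.0 - ramp) * out[pos:pos + fade]
    # fade out: copied portion -> stretched signal
    tail = out[pos + len(mid) - fade:pos + len(mid)]
    mid[-fade:] = (1.0 - ramp) * mid[-fade:] + ramp * tail
    out[pos:pos + len(mid)] = mid
    return out
```

The middle of the portion, containing the transient itself, is copied unmodified, so the event is reproduced free of the dispersion introduced by the stretching.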
FIG. 9 illustrates an apparatus for generating side information for an audio signal, which can be used in the context of the present invention when the transient detection is performed on the encoder side and side information regarding this transient detection is calculated and transmitted to a signal manipulator, which then would represent the decoder side. To this end, a transient detector similar to the transient detector 103 in FIG. 2 is applied for analyzing the audio signal including a transient event. The transient detector calculates a transient time, i.e. time 803 in FIG. 1, and forwards this transient time to a meta data calculator 104′, which can be structured similarly to the fade-out/fade-in calculator 104′ in FIG. 2. Generally, the meta data calculator 104′ can calculate meta data to be forwarded to a signal output interface 900, where this meta data may comprise borders for the transient removal, i.e. borders for the first time portion, i.e. borders 805 and 806 of FIG. 8b, or borders for the transient insertion (second time portion), as illustrated at 812, 813 in FIG. 8b, or the transient event time instant 803 or even 803′. Even in the latter case, the signal manipulator would be in the position to determine all needed data, i.e. the first time portion data, the second time portion data, etc., based on a transient event time instant 803. - The meta data as generated by
item 104′ are forwarded to the signal output interface so that the signal output interface generates a signal, i.e. an output signal for transmission or storage. The output signal may include only the meta data or may include the meta data and the audio signal, where, in the latter case, the meta data would represent side information for the audio signal. To this end, the audio signal can be forwarded to the signal output interface 900 via line 901. The output signal generated by the signal output interface 900 can be stored on any kind of storage medium or can be transmitted via any kind of transmission channel to a signal manipulator or any other device requiring transient information. - It is to be noted that although the present invention has been described in the context of block diagrams where the blocks represent actual or logical hardware components, the present invention can also be implemented by a computer-implemented method. In the latter case, the blocks represent corresponding method steps where these steps stand for the functionalities performed by corresponding logical or physical hardware blocks.
- The described embodiments are merely illustrative for the principles of the present invention. It is understood that modifications and variations of the arrangements and the details described herein will be apparent to others skilled in the art. It is the intent, therefore, to be limited only by the scope of the appended patent claims and not by the specific details presented by way of description and explanation of the embodiments herein.
- Depending on certain implementation requirements of the inventive methods, the inventive methods can be implemented in hardware or in software. The implementation can be performed using a digital storage medium, in particular a disc, a DVD or a CD having electronically readable control signals stored thereon, which co-operate with programmable computer systems such that the inventive methods are performed. Generally, the present invention can therefore be implemented as a computer program product with a program code stored on a machine-readable carrier, the program code being operative for performing the inventive methods when the computer program product runs on a computer. In other words, the inventive methods are, therefore, a computer program having a program code for performing at least one of the inventive methods when the computer program runs on a computer. The inventive meta data signal can be stored on any machine-readable storage medium, such as a digital storage medium.
- While this invention has been described in terms of several embodiments, there are alterations, permutations, and equivalents which fall within the scope of this invention. It should also be noted that there are many alternative ways of implementing the methods and compositions of the present invention. It is therefore intended that the following appended claims be interpreted as including all such alterations, permutations and equivalents as fall within the true spirit and scope of the present invention.
Claims (9)
1. Apparatus for manipulating an audio signal comprising a transient event, comprising:
a signal processor for processing a transient-reduced audio signal in which a first time portion comprising the transient event is removed, or for processing an audio signal comprising the transient event, to acquire a processed audio signal;
a signal inserter for inserting a second time portion into the processed audio signal at a signal location, where the first portion was removed or where the transient event is located in the processed audio signal, wherein the second time portion comprises a transient event not influenced by the processing performed by the signal processor so that a manipulated audio signal is acquired; and
a side information extractor for extracting and interpreting a side information associated with the audio signal, the side information indicating a time position of the transient event or indicating a start time instant or a stop time instant of the first time portion or the second time portion.
2. Apparatus in accordance with claim 1 , further comprising a transient signal remover for removing the first time portion from the audio signal to acquire the transient-reduced audio signal, the first time portion comprising the transient event.
3. Apparatus in accordance with claim 1 , in which the signal processor is configured to process the transient-reduced audio signal in a frequency-dependent way so that the processing introduces phase shifts into the transient-reduced audio signal, which are different for different spectral components.
4. Apparatus in accordance with claim 1 , in which the signal inserter is configured to generate the second time portion by copying at least the first time portion so that the second time portion comprises at least a copy of the first time portion from the audio signal comprising the transient event.
5. Apparatus in accordance with claim 1, in which the signal processor comprises a vocoder, a phase vocoder or a (P)SOLA processor.
6. Apparatus in accordance with claim 1 , further comprising a signal conditioner for conditioning the manipulated audio signal by decimation or interpolation of a time-discrete version of the manipulated audio signal.
7. Apparatus in accordance with claim 1 , further comprising a transient detector for detecting the transient event in the audio signal, or
further comprising a side information extractor for extracting and interpreting a side information associated with the audio signal, the side information indicating a time position of the transient event or indicating a start time instant or a stop time instant of the first time portion or the second time portion.
8. Method of manipulating an audio signal comprising a transient event, comprising:
processing a transient-reduced audio signal in which a first time portion comprising the transient event is removed, or processing an audio signal comprising the transient event, to acquire a processed audio signal;
inserting a second time portion into the processed audio signal at a signal location, where the first portion was removed or where the transient event is located in the processed audio signal, wherein the second time portion comprises a transient event not influenced by the processing so that a manipulated audio signal is acquired; and
extracting and interpreting a side information associated with the audio signal, the side information indicating a time position of the transient event or indicating a start time instant or a stop time instant of the first time portion or the second time portion.
9. Computer program comprising a program code for performing, when running on a computer, the method of manipulating an audio signal comprising a transient event, comprising:
processing a transient-reduced audio signal in which a first time portion comprising the transient event is removed, or processing an audio signal comprising the transient event, to acquire a processed audio signal;
inserting a second time portion into the processed audio signal at a signal location, where the first portion was removed or where the transient event is located in the processed audio signal, wherein the second time portion comprises a transient event not influenced by the processing so that a manipulated audio signal is acquired, or
the method of generating a meta data signal for an audio signal comprising a transient event, comprising:
detecting a transient event in the audio signal;
generating the meta data indicating a time position of the transient event in the audio signal or indicating a start-time instant before the transient event or a stop-time instant subsequent to the transient event or a duration of a time portion of the audio signal including the transient event; and
generating the meta data signal either comprising the meta data or comprising the audio signal and the meta data for transmission or storage.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US13/465,958 US20130010983A1 (en) | 2008-03-10 | 2012-05-07 | Device and method for manipulating an audio signal having a transient event |
Applications Claiming Priority (4)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US3531708P | 2008-03-10 | 2008-03-10 | |
PCT/EP2009/001108 WO2009112141A1 (en) | 2008-03-10 | 2009-02-17 | Device and method for manipulating an audio signal having a transient event |
US92155011A | 2011-01-05 | 2011-01-05 | |
US13/465,958 US20130010983A1 (en) | 2008-03-10 | 2012-05-07 | Device and method for manipulating an audio signal having a transient event |
Related Parent Applications (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/EP2009/001108 Division WO2009112141A1 (en) | 2008-03-10 | 2009-02-17 | Device and method for manipulating an audio signal having a transient event |
US92155011A Division | 2008-03-10 | 2011-01-05 |
Publications (1)
Publication Number | Publication Date |
---|---|
US20130010983A1 true US20130010983A1 (en) | 2013-01-10 |
Family
ID=40613146
Family Applications (4)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US12/921,550 Active 2031-11-16 US9275652B2 (en) | 2008-03-10 | 2009-02-17 | Device and method for manipulating an audio signal having a transient event |
US13/465,936 Active US9230558B2 (en) | 2008-03-10 | 2012-05-07 | Device and method for manipulating an audio signal having a transient event |
US13/465,946 Active 2029-09-29 US9236062B2 (en) | 2008-03-10 | 2012-05-07 | Device and method for manipulating an audio signal having a transient event |
US13/465,958 Abandoned US20130010983A1 (en) | 2008-03-10 | 2012-05-07 | Device and method for manipulating an audio signal having a transient event |
Family Applications Before (3)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US12/921,550 Active 2031-11-16 US9275652B2 (en) | 2008-03-10 | 2009-02-17 | Device and method for manipulating an audio signal having a transient event |
US13/465,936 Active US9230558B2 (en) | 2008-03-10 | 2012-05-07 | Device and method for manipulating an audio signal having a transient event |
US13/465,946 Active 2029-09-29 US9236062B2 (en) | 2008-03-10 | 2012-05-07 | Device and method for manipulating an audio signal having a transient event |
Country Status (14)
Country | Link |
---|---|
US (4) | US9275652B2 (en) |
EP (4) | EP2250643B1 (en) |
JP (4) | JP5336522B2 (en) |
KR (4) | KR101230479B1 (en) |
CN (4) | CN102881294B (en) |
AU (1) | AU2009225027B2 (en) |
BR (4) | BR122012006270B1 (en) |
CA (4) | CA2897278A1 (en) |
ES (3) | ES2747903T3 (en) |
MX (1) | MX2010009932A (en) |
RU (4) | RU2565009C2 (en) |
TR (1) | TR201910850T4 (en) |
TW (4) | TWI505266B (en) |
WO (1) | WO2009112141A1 (en) |
Cited By (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20120330670A1 (en) * | 2009-10-20 | 2012-12-27 | Fraunhofer-Gesellschaft Zur Foerderung Der Angewandten Forschung E.V. | Audio encoder, audio decoder, method for encoding an audio information, method for decoding an audio information and computer program using an iterative interval size reduction |
US8645145B2 (en) | 2010-01-12 | 2014-02-04 | Fraunhoffer-Gesellschaft Zur Foerderung Der Angewandten Forschung E.V. | Audio encoder, audio decoder, method for encoding and audio information, method for decoding an audio information and computer program using a hash table describing both significant state values and interval boundaries |
US9626986B2 (en) | 2013-12-19 | 2017-04-18 | Telefonaktiebolaget Lm Ericsson (Publ) | Estimation of background noise in audio signals |
US9640157B1 (en) * | 2015-12-28 | 2017-05-02 | Berggram Development Oy | Latency enhanced note recognition method |
US20170186413A1 (en) * | 2015-12-28 | 2017-06-29 | Berggram Development Oy | Latency enhanced note recognition method in gaming |
Families Citing this family (48)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR101230479B1 (en) * | 2008-03-10 | 2013-02-06 | 프라운호퍼 게젤샤프트 쭈르 푀르데룽 데어 안겐반텐 포르슝 에. 베. | Device and method for manipulating an audio signal having a transient event |
USRE47180E1 (en) * | 2008-07-11 | 2018-12-25 | Fraunhofer-Gesellschaft Zur Foerderung Der Angewandten Forschung E.V. | Apparatus and method for generating a bandwidth extended signal |
PL4231290T3 (en) * | 2008-12-15 | 2024-04-02 | Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. | Audio bandwidth extension decoder, corresponding method and computer program |
RU2493618C2 (en) | 2009-01-28 | 2013-09-20 | Долби Интернешнл Аб | Improved harmonic conversion |
BRPI1007528B1 (en) | 2009-01-28 | 2020-10-13 | Dolby International Ab | SYSTEM FOR GENERATING AN OUTPUT AUDIO SIGNAL FROM AN INPUT AUDIO SIGNAL USING A T TRANSPOSITION FACTOR, METHOD FOR TRANSPORTING AN INPUT AUDIO SIGNAL BY A T TRANSPOSITION FACTOR AND STORAGE MEDIA |
EP2214165A3 (en) | 2009-01-30 | 2010-09-15 | Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. | Apparatus, method and computer program for manipulating an audio signal comprising a transient event |
JP5433022B2 (en) | 2009-09-18 | 2014-03-05 | ドルビー インターナショナル アーベー | Harmonic conversion |
DE102010001147B4 (en) | 2010-01-22 | 2016-11-17 | Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. | Multi-frequency band receiver based on path overlay with control options |
EP2362375A1 (en) * | 2010-02-26 | 2011-08-31 | Fraunhofer-Gesellschaft zur Förderung der Angewandten Forschung e.V. | Apparatus and method for modifying an audio signal using harmonic locking |
PL2545551T3 (en) | 2010-03-09 | 2018-03-30 | Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. | Improved magnitude response and temporal alignment in phase vocoder based bandwidth extension for audio signals |
KR101412117B1 (en) * | 2010-03-09 | 2014-06-26 | 프라운호퍼 게젤샤프트 쭈르 푀르데룽 데어 안겐반텐 포르슝 에. 베. | Apparatus and method for handling transient sound events in audio signals when changing the replay speed or pitch |
ES2522171T3 (en) | 2010-03-09 | 2014-11-13 | Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. | Apparatus and method for processing an audio signal using patching edge alignment |
CN102436820B (en) | 2010-09-29 | 2013-08-28 | 华为技术有限公司 | High frequency band signal coding and decoding methods and devices |
JP5807453B2 (en) * | 2011-08-30 | 2015-11-10 | 富士通株式会社 | Encoding method, encoding apparatus, and encoding program |
KR101833463B1 (en) * | 2011-10-12 | 2018-04-16 | 에스케이텔레콤 주식회사 | Audio signal quality improvement system and method thereof |
US9286942B1 (en) * | 2011-11-28 | 2016-03-15 | Codentity, Llc | Automatic calculation of digital media content durations optimized for overlapping or adjoined transitions |
EP2631906A1 (en) | 2012-02-27 | 2013-08-28 | Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. | Phase coherence control for harmonic signals in perceptual audio codecs |
WO2013189528A1 (en) * | 2012-06-20 | 2013-12-27 | Widex A/S | Method of sound processing in a hearing aid and a hearing aid |
US9064318B2 (en) | 2012-10-25 | 2015-06-23 | Adobe Systems Incorporated | Image matting and alpha value techniques |
US10638221B2 (en) | 2012-11-13 | 2020-04-28 | Adobe Inc. | Time interval sound alignment |
US9355649B2 (en) * | 2012-11-13 | 2016-05-31 | Adobe Systems Incorporated | Sound alignment using timing information |
US9201580B2 (en) | 2012-11-13 | 2015-12-01 | Adobe Systems Incorporated | Sound alignment user interface |
US9076205B2 (en) | 2012-11-19 | 2015-07-07 | Adobe Systems Incorporated | Edge direction and curve based image de-blurring |
US10249321B2 (en) | 2012-11-20 | 2019-04-02 | Adobe Inc. | Sound rate modification |
US9451304B2 (en) | 2012-11-29 | 2016-09-20 | Adobe Systems Incorporated | Sound feature priority alignment |
US10455219B2 (en) | 2012-11-30 | 2019-10-22 | Adobe Inc. | Stereo correspondence and depth sensors |
US9135710B2 (en) | 2012-11-30 | 2015-09-15 | Adobe Systems Incorporated | Depth map stereo correspondence techniques |
US9208547B2 (en) | 2012-12-19 | 2015-12-08 | Adobe Systems Incorporated | Stereo correspondence smoothness tool |
US10249052B2 (en) | 2012-12-19 | 2019-04-02 | Adobe Systems Incorporated | Stereo correspondence model fitting |
US9214026B2 (en) | 2012-12-20 | 2015-12-15 | Adobe Systems Incorporated | Belief propagation and affinity measures |
JPWO2014136628A1 (en) | 2013-03-05 | 2017-02-09 | NEC Corporation | Signal processing apparatus, signal processing method, and signal processing program |
WO2014136629A1 (en) * | 2013-03-05 | 2014-09-12 | NEC Corporation | Signal processing device, signal processing method, and signal processing program |
US20140355769A1 (en) | 2013-05-29 | 2014-12-04 | Qualcomm Incorporated | Energy preservation for decomposed representations of a sound field |
EP2838086A1 (en) | 2013-07-22 | 2015-02-18 | Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. | Reduction of comb filter artifacts in multi-channel downmix with adaptive phase alignment |
JP6242489B2 (en) * | 2013-07-29 | 2017-12-06 | Dolby Laboratories Licensing Corporation | System and method for mitigating temporal artifacts for transient signals in a decorrelator |
US9812150B2 (en) | 2013-08-28 | 2017-11-07 | Accusonus, Inc. | Methods and systems for improved signal decomposition |
CN105706166B (en) * | 2013-10-31 | 2020-07-14 | 弗劳恩霍夫应用研究促进协会 | Audio decoder apparatus and method for decoding a bitstream |
US9502045B2 (en) | 2014-01-30 | 2016-11-22 | Qualcomm Incorporated | Coding independent frames of ambient higher-order ambisonic coefficients |
US9922656B2 (en) | 2014-01-30 | 2018-03-20 | Qualcomm Incorporated | Transitioning of ambient higher-order ambisonic coefficients |
US10468036B2 (en) * | 2014-04-30 | 2019-11-05 | Accusonus, Inc. | Methods and systems for processing and mixing signals using signal decomposition |
US10770087B2 (en) | 2014-05-16 | 2020-09-08 | Qualcomm Incorporated | Selecting codebooks for coding vectors decomposed from higher-order ambisonic audio signals |
US9852737B2 (en) | 2014-05-16 | 2017-12-26 | Qualcomm Incorporated | Coding vectors decomposed from higher-order ambisonics audio signals |
EP2963646A1 (en) * | 2014-07-01 | 2016-01-06 | Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. | Decoder and method for decoding an audio signal, encoder and method for encoding an audio signal |
US9747910B2 (en) | 2014-09-26 | 2017-08-29 | Qualcomm Incorporated | Switching between predictive and non-predictive quantization techniques in a higher order ambisonics (HOA) framework |
IL313348A (en) | 2018-04-25 | 2024-08-01 | Dolby Int Ab | Integration of high frequency reconstruction techniques with reduced post-processing delay |
IL278223B2 (en) | 2018-04-25 | 2023-12-01 | Dolby Int Ab | Integration of high frequency audio reconstruction techniques |
US11158297B2 (en) * | 2020-01-13 | 2021-10-26 | International Business Machines Corporation | Timbre creation system |
CN112562703B (en) * | 2020-11-17 | 2024-07-26 | 普联国际有限公司 | Audio high-frequency optimization method, device and medium |
Citations (35)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5754427A (en) * | 1995-06-14 | 1998-05-19 | Sony Corporation | Data recording method |
US6049766A (en) * | 1996-11-07 | 2000-04-11 | Creative Technology Ltd. | Time-domain time/pitch scaling of speech or audio signals with transient handling |
US6316712B1 (en) * | 1999-01-25 | 2001-11-13 | Creative Technology Ltd. | Method and apparatus for tempo and downbeat detection and alteration of rhythm in a musical segment |
US20040122662A1 (en) * | 2002-02-12 | 2004-06-24 | Crockett Brett Greham | High quality time-scaling and pitch-scaling of audio signals |
US6766300B1 (en) * | 1996-11-07 | 2004-07-20 | Creative Technology Ltd. | Method and apparatus for transient detection and non-distortion time scaling |
US20040165730A1 (en) * | 2001-04-13 | 2004-08-26 | Crockett Brett G | Segmenting audio signals into auditory events |
US20040196989A1 (en) * | 2003-04-04 | 2004-10-07 | Sol Friedman | Method and apparatus for expanding audio data |
US6982377B2 (en) * | 2003-12-18 | 2006-01-03 | Texas Instruments Incorporated | Time-scale modification of music signals based on polyphase filterbanks and constrained time-domain processing |
US20060002572A1 (en) * | 2004-07-01 | 2006-01-05 | Smithers Michael J | Method for correcting metadata affecting the playback loudness and dynamic range of audio information |
US20060092295A1 (en) * | 2004-10-29 | 2006-05-04 | Microsoft Corporation | Features such as titles, transitions, and/or effects which vary according to positions |
US20060100885A1 (en) * | 2004-10-26 | 2006-05-11 | Yoon-Hark Oh | Method and apparatus to encode and decode an audio signal |
US7096481B1 (en) * | 2000-01-04 | 2006-08-22 | Emc Corporation | Preparation of metadata for splicing of encoded MPEG video and audio |
US20070040934A1 (en) * | 2004-04-07 | 2007-02-22 | Arun Ramaswamy | Data insertion apparatus and methods for use with compressed audio/video data |
US20070186235A1 (en) * | 2006-01-30 | 2007-08-09 | Jarman Matthew T | Synchronizing filter metadata with a multimedia presentation |
US20070255739A1 (en) * | 2006-03-16 | 2007-11-01 | Sony Corporation | Method and apparatus for attaching metadata |
US20080047414A1 (en) * | 2006-08-25 | 2008-02-28 | Sol Friedman | Method for shifting pitches of audio signals to a desired pitch relationship |
US20080097750A1 (en) * | 2005-06-03 | 2008-04-24 | Dolby Laboratories Licensing Corporation | Channel reconfiguration with side information |
US20080181298A1 (en) * | 2007-01-26 | 2008-07-31 | Apple Computer, Inc. | Hybrid scalable coding |
US20080221876A1 (en) * | 2007-03-08 | 2008-09-11 | Universität für Musik und Darstellende Kunst | Method for processing audio data into a condensed version |
US20090024234A1 (en) * | 2007-07-19 | 2009-01-22 | Archibald Fitzgerald J | Apparatus and method for coupling two independent audio streams |
US20090056526A1 (en) * | 2006-01-25 | 2009-03-05 | Sony Corporation | Beat extraction device and beat extraction method |
US20090216353A1 (en) * | 2005-12-13 | 2009-08-27 | Nxp B.V. | Device for and method of processing an audio data stream |
US20090220109A1 (en) * | 2006-04-27 | 2009-09-03 | Dolby Laboratories Licensing Corporation | Audio Gain Control Using Specific-Loudness-Based Auditory Event Detection |
US20090276069A1 (en) * | 2005-09-30 | 2009-11-05 | Apple Inc. | Echo Avoidance in Audio Time Stretching |
US20090272253A1 (en) * | 2005-12-09 | 2009-11-05 | Sony Corporation | Music edit device and music edit method |
US20090304082A1 (en) * | 2006-11-30 | 2009-12-10 | Regunathan Radhakrishnan | Extracting features of video & audio signal content to provide reliable identification of the signals |
US7917358B2 (en) * | 2005-09-30 | 2011-03-29 | Apple Inc. | Transient detection by power weighted average |
US20110075729A1 (en) * | 2006-12-28 | 2011-03-31 | Gokce Dane | Method and apparatus for automatic visual artifact analysis and artifact reduction |
US7933768B2 (en) * | 2003-03-24 | 2011-04-26 | Roland Corporation | Vocoder system and method for vocal sound synthesis |
US20110112670A1 (en) * | 2008-03-10 | 2011-05-12 | Sascha Disch | Device and Method for Manipulating an Audio Signal Having a Transient Event |
US8046749B1 (en) * | 2006-06-27 | 2011-10-25 | The Mathworks, Inc. | Analysis of a sequence of data in object-oriented environments |
US8121836B2 (en) * | 2005-07-11 | 2012-02-21 | Lg Electronics Inc. | Apparatus and method of processing an audio signal |
US8239190B2 (en) * | 2006-08-22 | 2012-08-07 | Qualcomm Incorporated | Time-warping frames of wideband vocoder |
US20120215546A1 (en) * | 2009-10-30 | 2012-08-23 | Dolby International Ab | Complexity Scalable Perceptual Tempo Estimation |
US8379868B2 (en) * | 2006-05-17 | 2013-02-19 | Creative Technology Ltd | Spatial audio coding based on universal spatial cues |
Family Cites Families (31)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5933801A (en) * | 1994-11-25 | 1999-08-03 | Fink; Flemming K. | Method for transforming a speech signal using a pitch manipulator |
JPH08223049A (en) * | 1995-02-14 | 1996-08-30 | Sony Corp | Signal coding method and device, signal decoding method and device, information recording medium and information transmission method |
SE512719C2 (en) | 1997-06-10 | 2000-05-02 | Lars Gustaf Liljeryd | A method and apparatus for reducing data flow based on harmonic bandwidth expansion |
JP3017715B2 (en) * | 1997-10-31 | 2000-03-13 | Matsushita Electric Industrial Co., Ltd. | Audio playback device |
US6266003B1 (en) * | 1998-08-28 | 2001-07-24 | Sigma Audio Research Limited | Method and apparatus for signal processing for time-scale and/or pitch modification of audio signals |
US6266644B1 (en) * | 1998-09-26 | 2001-07-24 | Liquid Audio, Inc. | Audio encoding apparatus and methods |
SE9903553D0 (en) | 1999-01-27 | 1999-10-01 | Lars Liljeryd | Enhancing conceptual performance of SBR and related coding methods by adaptive noise addition (ANA) and noise substitution limiting (NSL) |
JP2001075571A (en) * | 1999-09-07 | 2001-03-23 | Roland Corp | Waveform generator |
US6549884B1 (en) | 1999-09-21 | 2003-04-15 | Creative Technology Ltd. | Phase-vocoder pitch-shifting |
US6978236B1 (en) * | 1999-10-01 | 2005-12-20 | Coding Technologies Ab | Efficient spectral envelope coding using variable time/frequency resolution and time/frequency switching |
GB2357683A (en) * | 1999-12-24 | 2001-06-27 | Nokia Mobile Phones Ltd | Voiced/unvoiced determination for speech coding |
US7447639B2 (en) * | 2001-01-24 | 2008-11-04 | Nokia Corporation | System and method for error concealment in digital audio transmission |
US6876968B2 (en) | 2001-03-08 | 2005-04-05 | Matsushita Electric Industrial Co., Ltd. | Run time synthesizer adaptation to improve intelligibility of synthesized speech |
EP1377967B1 (en) | 2001-04-13 | 2013-04-10 | Dolby Laboratories Licensing Corporation | High quality time-scaling and pitch-scaling of audio signals |
MXPA03010237A (en) * | 2001-05-10 | 2004-03-16 | Dolby Lab Licensing Corp | Improving transient performance of low bit rate audio coding systems by reducing pre-noise. |
DK1504445T3 (en) * | 2002-04-25 | 2008-12-01 | Landmark Digital Services Llc | Robust and invariant sound pattern matching |
WO2003104924A2 (en) * | 2002-06-05 | 2003-12-18 | Sonic Focus, Inc. | Acoustical virtual reality engine and advanced techniques for enhancing delivered sound |
TW594674B (en) * | 2003-03-14 | 2004-06-21 | Mediatek Inc | Encoder and a encoding method capable of detecting audio signal transient |
SE0301273D0 (en) | 2003-04-30 | 2003-04-30 | Coding Technologies Sweden Ab | Advanced processing based on a complex exponential-modulated filter bank and adaptive time signaling methods |
ATE527654T1 (en) | 2004-03-01 | 2011-10-15 | Dolby Lab Licensing Corp | MULTI-CHANNEL AUDIO CODING |
CN1930607B (en) * | 2004-03-05 | 2010-11-10 | 松下电器产业株式会社 | Error conceal device and error conceal method |
JP4355745B2 (en) * | 2004-03-17 | 2009-11-04 | Koninklijke Philips Electronics N.V. | Audio encoding |
US8843378B2 (en) | 2004-06-30 | 2014-09-23 | Fraunhofer-Gesellschaft Zur Foerderung Der Angewandten Forschung E.V. | Multi-channel synthesizer and method for generating a multi-channel output signal |
WO2006079348A1 (en) * | 2005-01-31 | 2006-08-03 | Sonorit Aps | Method for generating concealment frames in communication system |
US7742914B2 (en) * | 2005-03-07 | 2010-06-22 | Daniel A. Kosek | Audio spectral noise reduction method and apparatus |
US7983922B2 (en) | 2005-04-15 | 2011-07-19 | Fraunhofer-Gesellschaft Zur Foerderung Der Angewandten Forschung E.V. | Apparatus and method for generating multi-channel synthesizer control signal and apparatus and method for multi-channel synthesizing |
US8270439B2 (en) * | 2005-07-08 | 2012-09-18 | Activevideo Networks, Inc. | Video game system using pre-encoded digital audio mixing |
US8473298B2 (en) * | 2005-11-01 | 2013-06-25 | Apple Inc. | Pre-resampling to achieve continuously variable analysis time/frequency resolution |
DE102006017280A1 (en) * | 2006-04-12 | 2007-10-18 | Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. | Ambience signal generating device for loudspeaker, has synthesis signal generator generating synthesis signal, and signal substituter substituting testing signal in transient period with synthesis signal to obtain ambience signal |
US8380331B1 (en) * | 2008-10-30 | 2013-02-19 | Adobe Systems Incorporated | Method and apparatus for relative pitch tracking of multiple arbitrary sounds |
RU2493618C2 (en) * | 2009-01-28 | 2013-09-20 | Долби Интернешнл Аб | Improved harmonic conversion |
- 2009
- 2009-02-17 KR KR1020127005832A patent/KR101230479B1/en active IP Right Grant
- 2009-02-17 EP EP09719651.3A patent/EP2250643B1/en active Active
- 2009-02-17 US US12/921,550 patent/US9275652B2/en active Active
- 2009-02-17 BR BR122012006270-7A patent/BR122012006270B1/en active IP Right Grant
- 2009-02-17 RU RU2012113092/08A patent/RU2565009C2/en active IP Right Revival
- 2009-02-17 BR BRPI0906142-8A patent/BRPI0906142B1/en active IP Right Grant
- 2009-02-17 CN CN201210261998.1A patent/CN102881294B/en active Active
- 2009-02-17 ES ES10194088T patent/ES2747903T3/en active Active
- 2009-02-17 KR KR1020107020270A patent/KR101291293B1/en active IP Right Grant
- 2009-02-17 CN CN201210262760.0A patent/CN102789785B/en active Active
- 2009-02-17 ES ES10194086T patent/ES2739667T3/en active Active
- 2009-02-17 ES ES09719651T patent/ES2738534T3/en active Active
- 2009-02-17 CA CA2897278A patent/CA2897278A1/en active Pending
- 2009-02-17 KR KR1020127005834A patent/KR101230481B1/en active IP Right Grant
- 2009-02-17 WO PCT/EP2009/001108 patent/WO2009112141A1/en active Application Filing
- 2009-02-17 CN CN2009801081751A patent/CN101971252B/en active Active
- 2009-02-17 CA CA2897276A patent/CA2897276C/en active Active
- 2009-02-17 KR KR1020127005833A patent/KR101230480B1/en active IP Right Grant
- 2009-02-17 RU RU2010137429/08A patent/RU2487429C2/en active
- 2009-02-17 EP EP10194095A patent/EP2293295A3/en not_active Withdrawn
- 2009-02-17 EP EP10194086.4A patent/EP2296145B1/en active Active
- 2009-02-17 TR TR2019/10850T patent/TR201910850T4/en unknown
- 2009-02-17 CN CN201210262522.XA patent/CN102789784B/en active Active
- 2009-02-17 MX MX2010009932A patent/MX2010009932A/en active IP Right Grant
- 2009-02-17 AU AU2009225027A patent/AU2009225027B2/en active Active
- 2009-02-17 RU RU2012113087/08A patent/RU2565008C2/en active
- 2009-02-17 BR BR122012006269-3A patent/BR122012006269A2/en not_active Application Discontinuation
- 2009-02-17 JP JP2010550054A patent/JP5336522B2/en active Active
- 2009-02-17 CA CA2717694A patent/CA2717694C/en active Active
- 2009-02-17 BR BR122012006265-0A patent/BR122012006265B1/en active IP Right Grant
- 2009-02-17 CA CA2897271A patent/CA2897271C/en active Active
- 2009-02-17 EP EP10194088.0A patent/EP2293294B1/en active Active
- 2009-02-23 TW TW101114956A patent/TWI505266B/en active
- 2009-02-23 TW TW101114948A patent/TWI505264B/en active
- 2009-02-23 TW TW098105710A patent/TWI380288B/en active
- 2009-02-23 TW TW101114952A patent/TWI505265B/en active
- 2012
- 2012-03-12 JP JP2012055128A patent/JP5425249B2/en active Active
- 2012-03-12 JP JP2012055129A patent/JP5425250B2/en active Active
- 2012-03-12 JP JP2012055130A patent/JP5425952B2/en active Active
- 2012-04-03 RU RU2012113063/08A patent/RU2598326C2/en active IP Right Revival
- 2012-05-07 US US13/465,936 patent/US9230558B2/en active Active
- 2012-05-07 US US13/465,946 patent/US9236062B2/en active Active
- 2012-05-07 US US13/465,958 patent/US20130010983A1/en not_active Abandoned
Non-Patent Citations (1)
Title |
---|
ID3 Draft Specification; c1998 * |
Cited By (21)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US9978380B2 (en) | 2009-10-20 | 2018-05-22 | Fraunhofer-Gesellschaft Zur Foerderung Der Angewandten Forschung E.V. | Audio encoder, audio decoder, method for encoding an audio information, method for decoding an audio information and computer program using a detection of a group of previously-decoded spectral values |
US8612240B2 (en) | 2009-10-20 | 2013-12-17 | Fraunhofer-Gesellschaft Zur Foerderung Der Angewandten Forschung E.V. | Audio encoder, audio decoder, method for encoding an audio information, method for decoding an audio information and computer program using a region-dependent arithmetic coding mapping rule |
US8655669B2 (en) * | 2009-10-20 | 2014-02-18 | Fraunhofer-Gesellschaft Zur Foerderung Der Angewandten Forschung E.V. | Audio encoder, audio decoder, method for encoding an audio information, method for decoding an audio information and computer program using an iterative interval size reduction |
US8706510B2 (en) | 2009-10-20 | 2014-04-22 | Fraunhofer-Gesellschaft Zur Foerderung Der Angewandten Forschung E.V. | Audio encoder, audio decoder, method for encoding an audio information, method for decoding an audio information and computer program using a detection of a group of previously-decoded spectral values |
US12080300B2 (en) | 2009-10-20 | 2024-09-03 | Fraunhofer-Gesellschaft Zur Foerderung Der Angewandten Forschung E.V. | Audio encoder, audio decoder, method for encoding an audio information, method for decoding an audio information and computer program using a detection of a group of previously-decoded spectral values |
US11443752B2 (en) | 2009-10-20 | 2022-09-13 | Fraunhofer-Gesellschaft Zur Foerderung Der Angewandten Forschung E.V. | Audio encoder, audio decoder, method for encoding an audio information, method for decoding an audio information and computer program using a detection of a group of previously-decoded spectral values |
US20120330670A1 (en) * | 2009-10-20 | 2012-12-27 | Fraunhofer-Gesellschaft Zur Foerderung Der Angewandten Forschung E.V. | Audio encoder, audio decoder, method for encoding an audio information, method for decoding an audio information and computer program using an iterative interval size reduction |
US8645145B2 (en) | 2010-01-12 | 2014-02-04 | Fraunhoffer-Gesellschaft Zur Foerderung Der Angewandten Forschung E.V. | Audio encoder, audio decoder, method for encoding and audio information, method for decoding an audio information and computer program using a hash table describing both significant state values and interval boundaries |
US8682681B2 (en) | 2010-01-12 | 2014-03-25 | Fraunhofer-Gesellschaft Zur Foerderung Der Angewandten Forschung E.V. | Audio encoder, audio decoder, method for encoding and decoding an audio information, and computer program obtaining a context sub-region value on the basis of a norm of previously decoded spectral values |
US8898068B2 (en) | 2010-01-12 | 2014-11-25 | Fraunhofer-Gesellschaft Zur Foerderung Der Angewandten Forschung E.V. | Audio encoder, audio decoder, method for encoding and audio information, method for decoding an audio information and computer program using a modification of a number representation of a numeric previous context value |
US9633664B2 (en) | 2010-01-12 | 2017-04-25 | Fraunhofer-Gesellschaft Zur Foerderung Der Angewandten Forschung E.V. | Audio encoder, audio decoder, method for encoding and audio information, method for decoding an audio information and computer program using a modification of a number representation of a numeric previous context value |
US10311890B2 (en) | 2013-12-19 | 2019-06-04 | Telefonaktiebolaget Lm Ericsson (Publ) | Estimation of background noise in audio signals |
US9818434B2 (en) | 2013-12-19 | 2017-11-14 | Telefonaktiebolaget Lm Ericsson (Publ) | Estimation of background noise in audio signals |
US10573332B2 (en) | 2013-12-19 | 2020-02-25 | Telefonaktiebolaget Lm Ericsson (Publ) | Estimation of background noise in audio signals |
US11164590B2 (en) | 2013-12-19 | 2021-11-02 | Telefonaktiebolaget Lm Ericsson (Publ) | Estimation of background noise in audio signals |
US9626986B2 (en) | 2013-12-19 | 2017-04-18 | Telefonaktiebolaget Lm Ericsson (Publ) | Estimation of background noise in audio signals |
US20170316769A1 (en) * | 2015-12-28 | 2017-11-02 | Berggram Development Oy | Latency enhanced note recognition method in gaming |
US9711121B1 (en) * | 2015-12-28 | 2017-07-18 | Berggram Development Oy | Latency enhanced note recognition method in gaming |
US20170186413A1 (en) * | 2015-12-28 | 2017-06-29 | Berggram Development Oy | Latency enhanced note recognition method in gaming |
US10360889B2 (en) * | 2015-12-28 | 2019-07-23 | Berggram Development Oy | Latency enhanced note recognition method in gaming |
US9640157B1 (en) * | 2015-12-28 | 2017-05-02 | Berggram Development Oy | Latency enhanced note recognition method |
Also Published As
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US9236062B2 (en) | Device and method for manipulating an audio signal having a transient event | |
CA2820996A1 (en) | Device and method for manipulating an audio signal having a transient event | |
AU2012216538B2 (en) | Device and method for manipulating an audio signal having a transient event |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
STCB | Information on status: application discontinuation |
Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION |