EP2484127B1 - Method, computer program and apparatus for processing audio signals - Google Patents
Method, computer program and apparatus for processing audio signals
- Publication number
- EP2484127B1 (application EP10819956.3A)
- Authority
- EP
- European Patent Office
- Prior art keywords
- groups
- difference
- interaural
- group
- determining
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications (all under H—ELECTRICITY; H04—ELECTRIC COMMUNICATION TECHNIQUE; H04S—STEREOPHONIC SYSTEMS)
- H04S5/00—Pseudo-stereo systems, e.g. in which additional channel signals are derived from monophonic signals by means of phase shifting, time delay or reverberation
- H04S2400/05—Generation or adaptation of centre channel in multi-channel audio systems (details of stereophonic systems covered by H04S but not provided for in its groups)
- H04S2420/01—Enhancing the perception of the sound image or of the spatial distribution using head related transfer functions [HRTF's] or equivalents thereof, e.g. interaural time difference [ITD] or interaural level difference [ILD]
- H04S2420/03—Application of parametric coding in stereophonic audio systems
- H04S2420/07—Synergistic effects of band splitting and sub-band processing
Definitions
- the present invention relates to apparatus for processing of audio signals.
- the invention further relates to, but is not limited to, apparatus for processing audio and speech signals in audio playback devices.
- Audio rendering and sound virtualization have been growing areas in recent years. There are various playback techniques, such as mono, stereo, 5.1 surround and ambisonics.
- apparatus or signal processing integrated within apparatus or signal processing performed prior to the final playback apparatus has been designed to allow a virtual sound image to be created in many applications such as music playback, movie sound tracks, 3D audio, and gaming applications.
- The standard for commercial audio content until recently, for music or movies, was stereo audio signal generation. Signals from different musical instruments, speech or voice, and other audio sources creating the sound scene were combined to form a stereo signal.
- Commercially available playback devices would typically have two loudspeakers placed at a suitable distance in front of the listener. The goal of stereo rendering was limited to creating phantom images at a position between the two speakers and is known as panned stereo.
- the same content could be played on portable playback devices as well, as it relied on a headphone or an earplug which uses 2 channels.
- stereo widening and 3D audio applications have recently become more popular especially for portable devices with audio playback capabilities. There are various techniques for these applications that provide user spatial feeling and 3D audio content.
- the techniques employ various signal processing algorithms and filters. It is known that the effectiveness of spatial audio is stronger over headphone playback
- Commercial audio today boasts of 5.1, 7.1 and 10.1 multichannel content where 5, 7 or 10 channels are used to generate surrounding audio scenery.
- An example of a 5.1 multichannel system is shown in Figure 2, where the user 211 is surrounded by a front left channel speaker 251, a front right channel speaker 253, a centre channel speaker 255, a left surround channel speaker 257 and a right surround channel speaker 259. Using this type of setup, phantom images can be created lying anywhere on the circle 271 shown in Figure 2.
- a channel in multichannel audio is not necessarily unique: the audio signals for one channel can, after frequency dependent phase shifts and magnitude modifications, become the audio signal for a different channel.
- the multichannel audio signals are matrix downmixed.
- the original multi-channel content is no longer available in its component form (each component being each channel in say 5.1). All of the channels from 5.1 are present in the down-mixed stereo.
- the phantom images lie on an imaginary line joining the left and right ears. This line is known as the interaural axis and the experience is often called inside-the-head feeling or lateralization.
- Each of these extracted audio signals may be then virtualized to different virtual locations.
- a virtualizer typically introduces frequency dependent relative delays and amplification or attenuation to the signals before the signals are sent to headphone speakers. Typical virtualization would pan certain sources away from the mid plane, where the user does not have any control over how loud or quiet these sources could be.
- the user may be interested in a vocalist located centre stage rather than the audience located off-centre stage and the stereo audio signals may easily mask the key sections of the vocalist by the background noise from the audience.
- the sources that appear to be originating from the centre can often be at higher or lower audio levels relative to the rest of the sources in the audio scene. Listeners typically do not have any control over this level and often want to amplify or attenuate these central sources depending on their perceptual preference. Lack of this feature often results in a poor audio experience.
- This invention proceeds from the consideration that prior art solutions for centre channel extraction do not produce good quality centre channel audio signals.
- listening to centre channel audio signals produces a poor listening experience.
- the poor quality centre channel audio signals produce poor quality listening experiences when virtualized.
- Embodiments of the present invention aim to address the above problem.
- EP1784048 discloses a signal processing apparatus generating, from left-channel and right-channel stereo signals, a centre-channel signal.
- the stereo signals are split into different frequency bands by two identical filter banks.
- the phase difference between the stereo signals is determined by a phase difference detector.
- a gain is calculated by a gain generator as a function of the phase difference.
- the gain is set to 0 for phase differences of ±180° and to 1 for phase differences of 0°.
- This gain is applied by a multiplier to the average of the stereo signals within each frequency band.
- the resulting outputs of all frequency bands are synthesised by a signal synthesiser to form the resulting centre-channel signal.
- US2005169482 discloses an audio spatial environment engine for converting from an N channel audio system to an M channel audio system, where N is an integer greater than M.
- the audio spatial environment engine includes one or more correlators receiving two or more of the N channels of audio data and eliminating delays between the channels that are irrelevant to an average human listener.
- One or more Hilbert transform systems each perform a Hilbert transform on one or more of the correlated channels of audio data.
- One or more summers receive at least one of the correlated channels of audio data and at least one of the Hilbert transformed correlated channels of audio data and generate one of the M channels of audio data.
- JP2002078100 discloses a system for processing a stereophonic signal provided with frequency band division sections that divide a stereophonic signal into frequency bands in each channel, a similarity calculation section that calculates the similarity between channels for each frequency band, an attenuation coefficient calculation section that calculates an attenuation coefficient to suppress or emphasize a sound source signal localized around the middle on the basis of the similarity, a multiplier that multiplies the attenuation coefficient with each frequency band signal, and a sound source signal synthesis section and an output section that resynthesize each frequency band signal in each channel after the multiplication of the attenuation coefficient and provide an output of the result.
- In a prior listening study, the stimuli were manipulated by delaying or attenuating the signal to one ear (by up to 600 μs or 20 dB) or by altering the spectral cues at one or both ears. Listener weighting of the manipulated cues was determined by examining the resulting localization response biases. In accordance with the Duplex Theory defined for pure tones, listeners gave high weight to ITD and low weight to ILD for low-pass stimuli, and high weight to ILD for high-pass stimuli. Most (but not all) listeners gave low weight to ITD for high-pass stimuli. This weight could be increased by amplitude-modulating the stimuli or reduced by lengthening stimulus onsets.
- the ITD weight was greater than or equal to that given to ILD.
- Manipulations of monaural spectral cues and the interaural level spectrum had little influence on lateral angle judgements.
- An example of duplex-theory of sound localization and use of ITD and ILD values is given by SENGPIEL: 'Die Duplex-Theorie von Lord Rayleigh', 30 April 2008, pages 1 - 1, XP055099399 .
- an apparatus for processing audio signals comprising the set of features according to claim 8.
- a computer-readable medium encoded with instructions that, when executed by a computer, perform the method of any of claims 1 to 6.
- An electronic device or a chipset may comprise the apparatus as described above.
- FIG. 1 shows a schematic block diagram of an exemplary electronic device 10 or apparatus, which may incorporate a centre channel extractor.
- the centre channel extracted by the centre channel extractor in some embodiments is suitable for an up-mixer.
- the electronic device 10 may for example be a mobile terminal or user equipment for a wireless communication system.
- the electronic device may be a Television (TV) receiver, a portable digital versatile disc (DVD) player, or an audio player such as an iPod.
- the electronic device 10 comprises a processor 21 which may be linked via a digital-to-analogue converter 32 to a headphone connector for receiving a headphone or headset 33.
- the processor 21 is further linked to a transceiver (TX/RX) 13, to a user interface (UI) 15 and to a memory 22.
- the processor 21 may be configured to execute various program codes.
- the implemented program codes comprise a channel extractor for extracting a centre channel audio signal from a stereo audio signal.
- the implemented program codes 23 may be stored for example in the memory 22 for retrieval by the processor 21 whenever needed.
- the memory 22 could further provide a section 24 for storing data, for example data that has been processed in accordance with the embodiments.
- the channel extracting code may in embodiments be implemented in hardware or firmware.
- the user interface 15 enables a user to input commands to the electronic device 10, for example via a keypad, and/or to obtain information from the electronic device 10, for example via a display.
- the transceiver 13 enables a communication with other electronic devices, for example via a wireless communication network.
- the apparatus 10 may in some examples further comprise at least two microphones for inputting audio or speech that is to be processed according to embodiments of the application or transmitted to some other electronic device or stored in the data section 24 of the memory 22.
- a corresponding application to capture stereo audio signals using the at least two microphones may be activated to this end by the user via the user interface 15.
- the apparatus 10 in such examples may further comprise an analogue-to-digital converter configured to convert the input analogue audio signal into a digital audio signal and provide the digital audio signal to the processor 21.
- the apparatus 10 may in some examples also receive a bit stream with correspondingly encoded stereo audio data from another electronic device via the transceiver 13.
- the processor 21 may execute the channel extraction program code stored in the memory 22.
- the processor 21 in these examples may process the received stereo audio signal data, and output the extracted channel data.
- the headphone connector 33 may be configured to communicate to a headphone set or earplugs wirelessly, for example by a Bluetooth profile, or using a conventional wired connection.
- the received stereo audio data may in some examples also be stored, instead of being processed immediately, in the data section 24 of the memory 22, for instance for enabling a later processing and presentation or forwarding to still another electronic device.
- FIG. 3 shows in further detail an up-mixer 106 suitable for the implementation of some examples of the application.
- the up-mixer is configured to receive a stereo audio signal and generate a left channel audio signal L", a centre channel audio signal C' and a right channel audio signal R".
- the up-mixer 106 is configured to receive the left channel audio signal at the left input 451 and the right channel audio signal at the right input 453.
- the up-mixer 106 furthermore comprises a centre channel extractor 455 which receives the left channel audio signal L and the right channel audio signal R and generates a centre channel audio signal C.
- the centre channel audio signal C is in some examples furthermore passed to a first amplifier 461 which applies a gain A 1 to the signal and outputs the amplified signal to the left channel modifier 465.
- the left channel audio signal L is further passed to a left channel filter 454 which applies a delay to the audio signal substantially equal to the time required to generate the centre channel audio signal C.
- the left channel filter 454 in some examples may be implemented by an all pass filter.
- the filtered left channel audio signal is passed to the left channel modifier 465.
- the left channel modifier 465 is configured to subtract the amplified centre channel audio signal A 1 C from the filtered left channel audio signal to generate a modified left channel audio signal L'.
- the modified left channel audio signal in some embodiments is passed to the left channel amplifier 487.
- the centre channel audio signal C is furthermore in some examples passed to a second amplifier 463 which applies a gain A 2 to the signal and outputs the amplified signal to the right channel modifier 467.
- the right channel audio signal R is further passed to a right channel filter 456 which applies a delay to the audio signal substantially equal to the time required to generate the centre channel audio signal C.
- the right channel filter 456 in some examples may be implemented by an all pass filter.
- the filtered right channel audio signal is passed to the right channel modifier 467.
- the right channel modifier 467 is configured to subtract the amplified centre channel audio signal A 2 C from the filtered right channel audio signal to generate a modified right channel audio signal R'.
- the modified right channel audio signal in some examples is passed to the right channel amplifier 491.
- the left channel amplifier 487 in some examples is configured to receive the modified left channel audio signal L', amplify the modified left channel audio signal and output the amplified left channel signal L".
- the up-mixer 106 furthermore is configured in some examples to comprise a centre channel amplifier 489 configured to receive the centre channel audio signal C, amplify the centre channel audio signal and output an amplified centre channel signal C' .
- the up-mixer 106 in the same examples comprises a right channel amplifier 491 configured to receive the modified right channel audio signal R', amplify the modified right channel audio signal and output the amplified right channel signal R".
- the gain of the left channel amplifier 487, centre channel amplifier 489 and right channel amplifier 491 in some examples may be determined by the user for example using the user interface 15 so as to control the importance of the 'centre' stage audio components with respect to the 'left' and 'right' stage audio components.
- the user may control the gain of the 'centre' over the 'left' and 'right' components so that the user may emphasise the vocalist over the instruments or audience audio components according to the earlier examples.
- the gains may be controlled or determined automatically or semiautomatically. Such examples may be implemented for applications such as Karaoke.
- the extraction of the centre channel uses both magnitude and phase information for lower frequency components and magnitude information only for higher frequencies. More specifically, the centre channel extractor 455 in some embodiments uses frequency dependent magnitude and phase difference information between the stereo signals and compares this information against the user's interaural level difference (ILD) and interaural phase difference (IPD) to decide whether the signal is located at the centre, i.e. in the median plane (the vertical plane passing through the midpoint between the two ears and the nose).
- the proposed method can in some embodiments be customized according to the user's own head related transfer function. It can in some other embodiments be used to extract sources in the median plane for a binaurally recorded signal.
- the methods and apparatus as described hereafter may extract the centre channel using at least one of the interaural level difference (ILD), the interaural phase difference (IPD) and the interaural time difference (ITD).
- the selection of the at least one difference used may differ according to the frequency being analysed. In the example described above and in the following, there is a first selection for a first, lower frequency range, where the interaural level difference and interaural phase difference are used, and a second selection for a higher frequency range, where only the interaural level difference is used.
- the centre channel audio signal components are present in both the left and right stereo audio signals where the components have the same intensity and zero delay i.e. no phase difference.
- When listening over headphones, a listener would perceive this sound to be on the median plane (the vertical plane passing through the midpoint of the two ears and the nose). The absence of finer frequency specific cues would mean that a listener would often perceive this signal at the centre of the head. In other words the listener may not be able to determine whether the signal is at the front or back or up or down on that plane.
- the stereo right channel audio signal does not contain any significant components of the front left audio channel signal. As a result, the user perceives this signal to be at the left ear.
- the principle of identifying the centre channel components for extraction from such a downmixed stereo audio signal is to determine a selection of at least one of the ITD, IPD and ILD in the stereo signal and compare these values to the listener's accustomed ILD, IPD and ITD values in order to evaluate the direction.
- This approach may be termed as Perceptual Basis from here on.
- for centre channel components the overall level difference is minimal, there should be minimal interaural time delay (in other words the ITD is small), and furthermore minimal interaural phase delay (in other words the IPD is small).
- the analysis may be carried out on a time domain basis for example where ITD is the selected difference and in some other embodiments on a spectral domain basis.
- the spatial analysis may in some embodiments be done on a frequency sub-band basis.
- the analysis may employ time domain analysis such that, in these other examples, instead of calculating the relative phase, the time difference between the envelopes of signal pairs in the time domain is calculated.
- the frequency sub-band based analysis is in some embodiments based on the superimposition of signals from all the sources in that given frequency band.
- the extraction in some embodiments uses the differences in different frequency sub-bands (such as level, time or phase differences, or a selection or combination of differences) to estimate the direction of the source in that frequency sub-band.
- the net differences are compared to the differences (ILD, IPD and ITD cues) that are unique to that particular listener. These values are obtained from the Head Related Transfer Function (HRTF) for that particular listener.
- more than one of the cues may be used to estimate the source direction in the lower frequency ranges (<1.5 kHz) but a single cue (for example the ILD or in other embodiments the ITD) may be the dominant cue at a higher frequency range (>1.5 kHz).
- the determination of the use of a dominant cue such as the use of the ILD for higher frequency ranges in some embodiments is because a high frequency source signal may see multiple phase wraparounds before reaching the contralateral ear.
- a crude or basic estimator for the centre channel is 0.5*(L(n)+R(n)). This average of samples in the time domain may perfectly preserve the original centre channel, but all of the remaining channels may also leak into the extracted centre channel. This leakage may be controlled by applying frequency specific gating or gains.
- a weighting may be applied to the components of the band or sub-band to prevent leakage of non-centre components into the extracted centre channel audio signal.
- a beam pattern may be formed to gate or filter unwanted leakage from other channels. This may be considered to be forming a perceptual beam pattern to pass signals that are located in the median plane, as sketched below.
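A minimal sketch of the crude estimator just described and where the per-band gating fits; the function and variable names are illustrative, not taken from the patent:

```python
import numpy as np

def crude_centre(left, right):
    """Time-domain average 0.5*(L(n)+R(n)).

    This preserves the original centre channel perfectly, but content panned
    left or right also leaks in at -6 dB, which is why the per-band gating
    described in the following sections is needed.
    """
    return 0.5 * (np.asarray(left, dtype=float) + np.asarray(right, dtype=float))
```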
- the centre channel extractor 455 may receive the left channel audio signal L and the right channel audio signal R.
- the audio signals may be described with respect to a time and thus at time n the left channel audio signal may be labelled as L(n) and the right channel audio signal may be labelled as R(n).
- This operation of receiving the left and right channel audio signals may be shown with respect to Figure 6 in step 651.
- the centre channel extractor 455 may comprise a sub-band generator 601 which is configured to receive the left and right channel audio signals and output for each channel a number of frequency sub-band signals.
- the number of sub-bands may be N+1 and thus the output of the sub-band generator 601 comprises N+1 left channel sub-band audio signals L 0 (n),...,L N (n) and N+1 right channel sub-band audio signals R 0 (n),...,R N (n).
- the frequency range for each sub-band may be any suitable frequency division design.
- the sub-bands in some embodiments may be regular whilst in some other embodiments the sub-bands may be determined according to psychoacoustical principles.
- the sub-bands may have overlapping frequency ranges; in some other embodiments at least some sub-bands may have abutting or separated frequency ranges.
- the sub-band generator is shown as a filterbank comprising a pair of first filters 603 (one left channel low pass filter 603 L and one right channel low pass filter 603 R ) with cut-off frequency of 150Hz, a pair of second filters 605 (one left channel band pass filter 605 L and one right channel band pass filter 605 R ) with a centre frequency of 200Hz and a bandwidth of 150Hz, a pair of third filters 607 (one left channel band pass filter 607 L and one right channel band pass filter 607 R ) with a centre frequency of 400Hz and a bandwidth of 200Hz, and on to a pair of N+1th filters 609 (one left channel band pass filter 609 L and one right channel band pass filter 609 R ) with centre frequency of 2500Hz and bandwidth of 500Hz.
- Any suitable filter design may be used in embodiments of the application to implement the filters.
- gammatone or gammachirp filterbank models, which model the filters of the human hearing system particularly well, may be used.
- a suitable finite impulse response (FIR) filter design may be used to generate the sub-bands.
- the filtering process may be configured in some embodiments to be carried out in the frequency domain and thus the sub-band generator 601 may in these embodiments comprise a time to frequency domain converter, a frequency domain filtering and a frequency to time domain converter.
- The operation of generating sub-bands is shown in Figure 6 by step 653.
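A sketch of one possible sub-band generator along these lines, using Butterworth filters from SciPy. The band layout echoes the example bands named in the text (which continue up to the N+1th band); the sample rate, filter order and exact band edges are assumptions for illustration:

```python
import numpy as np
from scipy.signal import butter, sosfilt

FS = 44100  # assumed sample rate in Hz

# (centre frequency Hz, bandwidth Hz) pairs echoing the bands in the text;
# the first entry is realised as a 150 Hz low-pass instead of a band-pass.
BANDS = [(None, 150), (200, 150), (400, 200), (2500, 500)]

def make_filterbank(bands=BANDS, fs=FS, order=4):
    """Build one second-order-sections filter per band."""
    sos_list = []
    for fc, bw in bands:
        if fc is None:                       # low-pass band
            sos = butter(order, bw, btype='low', fs=fs, output='sos')
        else:                                # band-pass band
            lo, hi = fc - bw / 2, fc + bw / 2
            sos = butter(order, [lo, hi], btype='band', fs=fs, output='sos')
        sos_list.append(sos)
    return sos_list

def split_subbands(x, sos_list):
    """Apply every band filter to signal x, returning one signal per band."""
    return [sosfilt(sos, x) for sos in sos_list]
```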
- the centre channel extractor 455 in some embodiments may further comprise a gain determiner 604.
- the gain determiner 604 is configured in some embodiments to receive the left and right channel sub-band audio signals from the sub-band generator 601 and determine a gain function value to be passed to a combined signal amplifier 610.
- the gain determiner 604 is, for clarity, partially shown as separate gain determiner apparatus for the first sub-band (a first sub-band gain determiner 604 0 ) and the N+1th sub-band (an N+1th sub-band gain determiner 604 N ).
- This separation of the gain determination into sub-band apparatus allows the gain determination to be carried out in parallel or substantially in parallel.
- the same operation may be carried out serially for each sub-band in some embodiments of the application and as such may employ a number of separate sub-band gain determiner apparatus fewer than the number of sub-bands.
- the gain determiner 604 in some embodiments may comprise a gain estimator 633 and a threshold determiner 614.
- the gain estimator 633 in some embodiments receives the left and right channel sub-band audio signal values, and the threshold values for each sub-band from the threshold determiner 614 and determines the gain function value for each sub-band.
- the threshold determiner 614 is configured in some embodiments to generate the threshold values for each sub-band. In some embodiments the threshold determiner generates or stores two thresholds for each sub-band, a lower threshold value threshold 1 and a higher threshold value threshold 2 . The thresholds for each sub-band, such as threshold 1 and threshold 2 , are generated based on the listener's head related transfer function (HRTF). In some embodiments the HRTF for the specific listener may be determined using any suitable method. For example, in some embodiments the HRTF may be generated by selecting a suitable HRTF from the Centre for Image Processing and Integrated Computing (CIPIC) database or any other suitable HRTF database.
- a suitable HRTF may be retrieved from an earlier determined HRTF for a user, measured using an HRTF measuring device.
- the threshold determiner 614 generates sub-band threshold values dependent on an idealized or modelled HRTF function such as a dummy head model HRTF.
- Figure 8a shows a sample HRTF for the left and right ears for frequencies from 20 Hz to 20 kHz for an azimuth of 0 degrees, in other words with a source directly in front of the listener. From this plot, it can be seen that the interaural level difference (ILD) for most frequencies up to about 5 kHz is less than 6 dB. This would be true for sources that are directly in front of the listener.
- Figure 8b shows, for the same listener, a sample HRTF for the left and right ears for frequencies from 20 Hz to 20 kHz for a source azimuth of -65 degrees. The level differences in this example are now much greater at higher frequencies.
- Figures 9a and 9b show the signal level HRTF for the left and right ears for 200 Hz and 2 kHz signals for a sample listener, for different azimuth angles all around the listener.
- in order for the centre channel extractor to perceive a signal as being in the median plane (0 or 180 degrees azimuth), the threshold determiner 614 may have to determine threshold values for which the left and right levels of a stereo signal (in other words the difference between the two traces for that azimuth angle) are very close at lower as well as higher frequencies.
- This closeness metric is a function of frequency and tolerance around the intended azimuth angle (e.g. +/-15 degrees from 0 degree azimuth).
- phase differences may in some examples also be checked at lower frequencies and limits can be established.
- the threshold values generated by the threshold determiner thus specify the differences allowed between the left and right channels to enable the extraction of the centre channel for each frequency band.
- the selected or generated HRTF may be associated with a number of predetermined threshold values for each sub-band.
- the thresholds may be determined by determining the ILD between the left and right HRTF for the user at +/- 15 degree range from the centre.
- the thresholds may be determined by examining the total power in a frequency band or sub-band (for example, in some examples this may be an indicated or selected critical band).
- a band filtered Head Related Impulse Response (HRIR) may be cross correlated to determine the difference between the left and right ear responses in terms of phase/time differences.
- the threshold determiner 614 in these embodiments may use these Interaural Level Difference (ILD) values, Interaural Time Difference (ITD) and/or Interaural Phase Difference (IPD) values to set the threshold values for each band/sub-band accordingly.
- in embodiments where the differences selected for the lower frequency range are the Interaural Level Difference (ILD) and Interaural Phase Difference (IPD) values, the HRTF or HRIR values for the ILD and IPD may be used to set the threshold values for the lower frequency range.
- where the difference selected for the higher frequency range is based on the Interaural Level Difference (ILD) values only, the HRTF or HRIR values for the ILD may be used to set the threshold values for the higher frequency ranges.
- in other words, the threshold is set based on the selected difference or differences shown in the HRTF or HRIR. The operation of determining threshold values for sub-bands is shown in Figure 6 by step 656.
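The following sketch illustrates one way a threshold determiner could derive per-band ILD thresholds from an HRTF, following the ±15 degree tolerance mentioned above. The HRTF data layout (a dict from azimuth in degrees to a complex frequency response array) is hypothetical; real databases such as CIPIC use their own formats:

```python
import numpy as np

def ild_thresholds(hrtf_left, hrtf_right, band_bins):
    """Derive per-band ILD thresholds from HRTF magnitude responses.

    hrtf_left/hrtf_right: dict mapping azimuth (deg) -> complex frequency
                          response array (hypothetical layout).
    band_bins:            list of frequency-bin index arrays, one per sub-band.
    Returns, for each band, the largest ILD (dB) seen within +/-15 degrees
    of centre, usable as the outer gating threshold for that band.
    """
    thresholds = []
    for bins in band_bins:
        worst = 0.0
        for az in (-15, -10, -5, 0, 5, 10, 15):
            l = np.abs(hrtf_left[az][bins]).mean()
            r = np.abs(hrtf_right[az][bins]).mean()
            worst = max(worst, abs(20.0 * np.log10(l / r)))
        thresholds.append(worst)
    return thresholds
```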
- the gain estimator 633 in some examples, and as shown in Figure 4 , comprises a Discrete Fourier Transformer (DFT) computation block 606 and a coefficient comparator 608.
- the DFT computation block 606 in some examples receives the left and right channel sub-band audio signal values.
- the DFT computation block 606 generates complex frequency domain values for each sub-band for both the left and right channel.
- any suitable time to frequency domain transformer may be used to generate complex frequency domain values such as a discrete cosine transform (DCT), Fast Fourier Transform (FFT), or wavelet transform.
- the DFT computation block 606 may thus in these examples compute, for each new input sample x(n), the recursion v k (n) = 2cos(2πk/M)·v k (n-1) - v k (n-2) + x(n).
- Values of M and k may in some examples be chosen for each sub-band independently to approximately capture the frequency range of the given sub-band filter.
- W M k = e^(-j2πk/M) and cos(2πk/M) are constants that may be precomputed for each sub-band.
- the DFT computation block 606 in these examples sets the values of v k (n-2) and v k (n-1) to zero initially, and also resets them after every M samples. After doing the above processing for M samples, y k (n) = v k (n) - W M k ·v k (n-1) is the required DFT coefficient; this recursive formulation is the Goertzel algorithm. The DFT computation block computes these coefficients for all the sub-bands for both left and right channel signals.
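A compact sketch of this Goertzel recursion; the block handling and function naming are assumptions:

```python
import cmath
import math

def goertzel_dft(x, k, M):
    """Goertzel recursion for the k-th DFT coefficient of an M-sample block.

    Runs v(n) = 2*cos(2*pi*k/M)*v(n-1) - v(n-2) + x(n) with the state zeroed
    at the start of each block; after M samples, y = v(M-1) - W*v(M-2) equals
    the k-th DFT coefficient up to a fixed phase factor, which is common to
    both channels and therefore cancels in the interaural comparisons.
    """
    coeff = 2.0 * math.cos(2.0 * math.pi * k / M)  # real recursion constant
    w = cmath.exp(-2j * math.pi * k / M)           # W_M^k
    v1 = v2 = 0.0                                  # v(n-1) and v(n-2)
    for xn in x[:M]:
        v1, v2 = xn + coeff * v1 - v2, v1
    return v1 - w * v2
```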
- the DFT coefficients determined by the DFT computation block 606 are complex numbers.
- the left channel DFT coefficients are represented as H L (k), and the right channel DFT coefficients are represented as H R (k) where k represents the sub-band number.
- the DFT coefficients are passed to the coefficient comparator 608.
- the operation of generating the DFT coefficients is shown in figure 6 by step 655.
- the coefficient comparator 608 receives the DFT coefficients from the DFT computation block 606 and the threshold values for each sub-band from the threshold determiner 614 to determine the gain function value for each sub-band.
- the coefficient comparator 608 is configured in some examples to determine how close the sub-band Interaural difference (for example at least one of the Interaural level difference - ILD, the Interaural time difference ITD, and the Interaural phase difference - IPD) values are with respect to the ILD, IPD and ITD values for the centre of head (front or back) localization. In other words where the signal component was a part of the original centre channel there would be virtually no Interaural difference (or put another way the ILD, IPD and ITD values would be expected to be close to zero). The coefficient comparator 608 thus attempts to find closeness in H L (k) and H R (k) values.
- this 'closeness' can be measured by determining the Euclidean distance between the H L (k) and H R (k) points on the complex plane. In other examples other distance metrics may be applied.
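A minimal sketch of this closeness measure on level-normalised coefficients (names are illustrative):

```python
def closeness(hl, hr):
    """Euclidean distance between normalised H_L(k) and H_R(k).

    hl and hr are the complex sub-band DFT coefficients; normalising by the
    larger magnitude makes the metric depend only on the relative level and
    phase of the two coefficients.
    """
    m = max(abs(hl), abs(hr)) or 1.0   # guard against silence in both channels
    return abs(hl / m - hr / m)
```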
- the pure phase difference value may be determined by calculating the minimum phase impulse response for the sub-band. For example, if head related impulse responses for the left and right channel signals are determined and converted to minimum phase form, the difference between the phase responses of the minimum phase impulse responses may be treated as the IPD value.
- a graphical representation of the selection of the difference and the thresholds, for some embodiments in which the selected differences are level and phase differences, is shown in Figure 7: the normalized H L (k) value H L (k)/max(|H L (k)|,|H R (k)|) 711 is shown with an orientation of θ L from the real axis, and the normalized H R (k) value H R (k)/max(|H L (k)|,|H R (k)|) 713 with an orientation of θ R from the real axis. Furthermore the vector difference distance 705 is shown. It would be understood that non-normalized differences and values may be determined in some other embodiments.
- the coefficient comparator 608 may in some embodiments determine the distance of the difference vector (or scalar) 705 for the sub-band and compare the distance against the defined/generated threshold values for the sub-band. For example, in the embodiments described above where the differences selected for the lower frequency range are the Interaural Level Difference (ILD) and Interaural Phase Difference (IPD) values, the difference is the vector difference, which is compared against a vector threshold - which may be represented by the circle around the end of one of the vectors in Figure 7.
- where the difference selected for the higher frequency range is based on the Interaural Level Difference (ILD) values only, the difference is the scalar difference produced by rotating one of the left or right normalized vectors onto the other vector.
- the threshold or thresholds themselves may further be vector in nature (in other words the level difference may be weighted more significantly than the phase difference).
- two threshold values are determined/generated and passed to the coefficient comparator 608 to be checked against the sub-band difference vector distance.
- in some embodiments only one threshold value is determined/generated and checked against, while in some other embodiments more than two threshold values may be used.
- the coefficient comparator 608 may determine that if the two DFT vectors H L (k) and H R (k) for a specific sub-band k are close, in other words less than the smaller threshold (threshold 1 ) value, or mathematically: difference vector distance < threshold 1 , then a gain g k of 1 (0 dB) is assigned to that sub-band. This is represented by a first region 721 in Figure 7.
- in other words, where the difference values between the two channels (such as a selection of the ILD, IPD and ITD; for example, for the lower frequency range a selection of the Interaural Level Difference (ILD) and Interaural Phase Difference (IPD) values, and for the higher frequency range the Interaural Level Difference (ILD) values only) are small, the comparator 608 has determined with a high confidence level that this sub-band comprises audio information which was originally centre channel audio signal.
- The comparison operation against the first threshold value is shown in Figure 6 by step 657. Furthermore the operation of assigning the gain g k of 1 where the difference is less than the threshold is shown in step 659. Following step 659 the method progresses to the operation of combining the left and right channel audio signals.
- the coefficient comparator 608 furthermore determines that if the difference between the vectors (for the IPD and ILD lower frequency range) or scalars (for the ILD only higher frequency range) shown as the two DFT vectors, H L (k) and, H R (k) in Figure 7 for a specific sub-band k are greater than the lower threshold (threshold 1 ) value but less than a higher threshold (threshold 2 ) then a gain, g k , which is less than 1 but greater than 0 is assigned to that sub-band.
- This area is represented in Figure 7 by a second region 723.
- in other words, where the difference values between the two channels (such as a selection of at least one of the ILD, IPD and ITD, as seen from the vector or scalar distances between the left and right channel sub-band values H L and H R ) are moderate, the comparator 608 has determined with a moderate confidence level that this sub-band comprises audio information which was originally part of the centre channel audio signal.
- the assigned gain is a function of the difference distance and the threshold values.
- the assigned gain may be an interpolation of a value between 0 and 1 where the assigned gain is higher the nearer the difference value is to the lower threshold value. This interpolation may in some examples be a linear interpolation and may in some other examples be a non-linear interpolation.
- the coefficient comparator 608 furthermore determines that if the distance of the vector (for the IPD and ILD lower frequency range) or of the scalar (for the ILD only higher frequency range) is greater than the higher threshold (threshold 2 ) value then the gain, g k assigned for the sub-band is 0. This is represented in Figure 7 by a third region 725.
- in other words, where the difference values between the two channels (such as at least one of the ILD, IPD and ITD) are large, the comparator 608 has determined that this sub-band comprises audio information which, with low or no confidence, was originally centre channel audio signal.
- The comparison operation against the second, higher threshold value (threshold 2 ) is shown in Figure 6 by step 661. Furthermore the operation of assigning a gain of between 1 and 0 where the difference is less than the higher threshold (but implicitly greater than the lower threshold) is shown in step 665. Following step 665 the method progresses to the operation of combining the left and right channel audio signals.
- Furthermore the operation of assigning a gain of 0 where the difference is greater than the higher threshold (and implicitly greater than the lower threshold) is shown in step 663. Following step 663 the method progresses to the operation of combining the left and right channel audio signals.
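Steps 657 to 665 amount to a piecewise gain function of the difference distance. A sketch using the linear interpolation option mentioned above; the exact boundary handling is an assumed detail:

```python
def subband_gain(distance, threshold1, threshold2):
    """Three-region gain of Figure 7.

    distance < threshold1              -> 1.0 (region 721, confident centre)
    threshold1 <= d <= threshold2      -> linear taper (region 723)
    distance > threshold2              -> 0.0 (region 725, confident non-centre)
    """
    if distance < threshold1:
        return 1.0
    if distance > threshold2:
        return 0.0
    return (threshold2 - distance) / (threshold2 - threshold1)
```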
- the coefficient comparator 608 may for some sub-bands compare a non-vector (scalar) difference distance against the threshold value or values.
- the non-vector difference is the difference between the magnitudes of the left and right channel coefficients.
- the magnitude or level (ILD) difference is compared against the threshold values in the same way as described above.
- the coefficient comparator 608 determines both vector and scalar differences and selects the result dependent on the sub-band being analysed.
- the magnitude (scalar) difference may be determined and compared for the higher frequency sub-bands and the vector (phase and level) difference values may be determined for the lower frequency sub-bands.
- the coefficient comparator 608 may in some examples compare the magnitude difference against the threshold values for sub-bands in the frequency range >1500 Hz and the vector difference against the threshold values for sub-bands in the frequency range <1500 Hz.
- although the above examples describe difference thresholds or 'cue' values defined by the IPD and ILD, it would be appreciated that other cues, such as the inter-aural time difference (ITD), where the relative time difference between the right and left signals is determined and compared against a time threshold value or values, may be used in some other examples.
- the ILD and ITD differences, which together describe a vector difference, may be employed in lower frequency ranges or sub-bands, and the ILD difference only, which describes a scalar difference, in higher frequency ranges or sub-bands.
- the differences selected may be all three of the differences IPD, ILD and ITD which define a three dimensional vector.
- the distance between the left and right channels may then define a three dimensional space and be tested against at least one three dimensional threshold.
- the ILD may be employed for the whole frequency range being analysed, with the IPD and ITD being selected dependent on the frequency range being analysed.
- a schematic view of a gain determiner 604 configured to determine a gain based on the selection of the ILD and ITD is shown.
- the sub-band signals for the left and right channels are passed to the cross correlator 1201 and the level difference calculator.
- the cross correlator 1201 may determine a cross correlation between the filterbank pairs, for example the cross correlation for the first band or sub-band may be determined between the output of the first band or sub-band of the left channel audio signal with the output of the first band or sub-band of the right signal.
- the cross correlation would in these examples reveal a maximum peak which would occur at the time delay between the two signals, or in other words generating a result similar to the ITD which is passed to the coefficient comparator 608.
- the group delays of each of the filtered signals may be calculated and the ITD between the right and left signals after the filterbanks be determined from these group delay values.
- the level difference calculator 1203 may determine the magnitudes of the sub-band components, determine the difference between the magnitudes of the components, and pass these values to the coefficient comparator 608, as sketched below.
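A sketch of the cross correlator 1201 and level difference calculator 1203 for one sub-band; the RMS-based ILD measure and the exact lag convention are assumptions:

```python
import numpy as np

def itd_ild(left_band, right_band, fs):
    """Estimate the ITD (cross-correlation peak) and ILD for one sub-band."""
    left_band = np.asarray(left_band, dtype=float)
    right_band = np.asarray(right_band, dtype=float)
    xc = np.correlate(left_band, right_band, mode='full')
    lag = int(np.argmax(np.abs(xc))) - (len(right_band) - 1)
    itd = lag / fs                                   # seconds; sign gives lead/lag
    rms_l = np.sqrt(np.mean(np.square(left_band))) + 1e-12
    rms_r = np.sqrt(np.mean(np.square(right_band))) + 1e-12
    ild = 20.0 * np.log10(rms_l / rms_r)             # level difference in dB
    return itd, ild
```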
- the threshold determiner 614 in these embodiments may determine at least one threshold value for each of the ILD value and the ITD value. In other words two sets of thresholds are determined, received or generated, one for the level difference and one for the time difference.
- the coefficient comparator 608 may then compare the determined ITD and ILD values against the associated set of threshold values to generate the associated gain or pass value.
- the coefficient comparator 608 in some examples may generate the gain values by using a lookup table. For example, in the examples where the difference is the selection of the ITD and ILD values, a two dimensional look-up table is used with delay on one axis and level difference on the other axis. The gain is then read from the look-up table based on the input delay and level difference values for that sub-band.
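A sketch of such a two dimensional look-up; the table contents, quantisation edges and taper below are invented for illustration only:

```python
import numpy as np

# Hypothetical gain table: rows index quantised |ITD|, columns quantised |ILD|;
# values taper from 1 (centre-like cues) down to 0.
GAIN_TABLE = np.array([[1.0, 0.6, 0.0],
                       [0.6, 0.3, 0.0],
                       [0.0, 0.0, 0.0]])
ITD_EDGES = [100e-6, 300e-6]   # seconds (assumed bin edges)
ILD_EDGES = [2.0, 6.0]         # dB (assumed bin edges)

def lut_gain(itd, ild):
    """Read the sub-band gain from the 2D table given the measured cues."""
    i = int(np.searchsorted(ITD_EDGES, abs(itd)))
    j = int(np.searchsorted(ILD_EDGES, abs(ild)))
    return float(GAIN_TABLE[i, j])
```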
- one difference or cue may be used for one frequency range (or sub-band) and a second difference or cue for a different frequency range (or sub-band).
- the ITD cue may be used for higher frequency signals because ITD is effective at higher frequencies whereas the IPD is used at lower frequencies.
- the ITD can be thought of as the time difference between the envelopes of signal pairs, whereas the IPD is the difference between the signal contents (in other words inside the envelope).
- the IPD and ITD may be determined at lower frequencies.
- any suitable combination of the IPD, ITD, and/or ILD cues may be used to determine or identify the sub-band components which may be used to generate the centre channel audio signal, by comparing the difference value against one or more threshold values.
- the ILD may be used on sub-bands analysed above 1500 Hz
- the IPD may be used on sub-bands analysed below 1500 Hz
- the ITD may be used for sub-bands analysed from 0 to 5000 Hz (from the viewpoint of frequency ranges, this could be seen as a lower frequency range <1500 Hz where the IPD and ITD differences are selected, and a higher frequency range >1500 Hz where the ILD and ITD are selected).
- each of the differences may be used for different analysis ranges which may overlap or abut or be separated.
- the IPD is selected for a first frequency range from 0 Hz to 500 Hz
- the ITD is selected for a second frequency range from 501 Hz to 1500 Hz
- the ILD is selected for a third frequency range from 1501 Hz to 5000 Hz.
- two regions may be defined with different gain values.
- two regions may be defined with one region being a pass region (i.e. the switch is on or gain is equal to one) where the cue value is less than the threshold, and the second region being a block region (i.e. the switch is off or gain is zero) where the cue value is greater than the threshold.
- more than two thresholds would produce more than three regions.
- the comparator 608 applies an additional 1st order low pass smoothing function to reduce any perceptible distortion caused by the time varying nature of the gain.
- the comparator 608 may apply a higher order smoothing function or any suitable smoothing function to the output gain values in order to attempt to reduce perceptible distortion.
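A 1st order (one-pole) low pass smoother of the kind described; the smoothing coefficient is an assumed value:

```python
def smooth_gain(g_new, g_prev, alpha=0.9):
    """One-pole low-pass smoothing of the time varying gain g_k.

    Larger alpha smooths more strongly; alpha = 0.9 is an assumption.
    """
    return alpha * g_prev + (1.0 - alpha) * g_new
```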
- the gain values are in some embodiments output to the amplifier 610.
- the centre channel extractor 455 in some examples and embodiments comprises a sub-band combiner 602 which receives the left and right channel sub-band audio signals and outputs combined left and right sub-band audio signals.
- the sub-band combiner 602 is shown to comprise an array of adders. Each of the adders is in some examples shown to receive one sub-band left channel audio signal and the same sub-band right channel audio signal and output a combined signal for that sub-band.
- first adder 623 adding the left and right channel audio signals for the sub-band 0
- second adder 625 adding the left and right channel audio signals for the sub-band 1
- third adder 627 adding the left and right channel audio signals for the sub-band 2
- N+1th adder 629 adding the left and right channel audio signals for the sub-band N.
- the fourth to N'th adders are not shown in Figure 5 for clarity reasons.
- the combination is an averaging of the left and right channel audio signals for a specific sub-band.
- The process of combining the sub-band left and right channel audio signals is shown in Figure 6 by step 667.
- the centre channel extractor 455 comprises in some embodiments an amplifier 610 for amplifying the combined left and right channel audio signals for each sub-band by the assigned gain value for the sub-band and outputting an amplified value of the combined audio signal to the sub-band combiner 612.
- the amplifier 610 in some examples may comprise as shown in Figure 5 an array of variable gain amplifiers where the gain is set from a control signal from the gain determiner 604.
- a first variable gain amplifier 633 amplifying the sub-band 0 combined audio signal B 0 by the sub-band 0 assigned gain value g 0
- a second variable gain amplifier 635 amplifying the sub-band 1 combined audio signal B 1 by the sub-band 1 assigned gain value g 1
- a third variable gain amplifier 637 amplifying the sub-band 2 combined audio signal B 2 by the sub-band 2 assigned gain value g 2
- a N+1th variable gain amplifier 639 amplifying the sub-band N combined audio signal B N by the sub-band N assigned gain value g N .
- the fourth to Nth variable gain amplifiers are not shown in Figure 5 for clarity reasons.
- The operation of amplifying the combined values by the assigned gains is shown in Figure 6 by step 669.
- the centre channel extractor 455 may further in some examples comprise a sub-band combiner 612.
- the sub-band combiner 612 in some embodiments receives for each sub-band the amplified combined sub-band audio signal value and combines them to generate an extracted centre channel audio signal.
- the sub-band combiner 612 comprises an adder 651 for performing a summing of the amplified combined sub-band audio signals.
- The operation of combining the sub-bands is shown in Figure 6 by step 673.
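Putting the sub-band combiner 602, the amplifier 610 and the sub-band combiner 612 together, the extraction reduces to a gain-weighted sum of per-band averages. A sketch under the assumption that the sub-band signals are equal-length arrays:

```python
import numpy as np

def extract_centre(left_bands, right_bands, gains):
    """Average each left/right sub-band pair (adders 623-629), scale by the
    assigned gain g_k (amplifiers 633-639) and sum across bands (adder 651).
    """
    centre = np.zeros_like(np.asarray(left_bands[0], dtype=float))
    for lb, rb, g in zip(left_bands, right_bands, gains):
        centre += g * 0.5 * (np.asarray(lb, dtype=float) + np.asarray(rb, dtype=float))
    return centre
```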
- The difference between the basic averaging of the combined left and right channel signals and a centre channel signal extracted according to some of the embodiments is shown in Figures 10a and 10b.
- the basic averaging of the left and the right channel signals does not suppress the audio components where the signal is clearly in only the left or right channel, and thus audio sources which originated in the right or left sound stage 'bleed' into the extracted centre channel signal.
- the example embodiment where the allocated gain is applied to the combination of the left and right channel signals produces an extracted centre channel signal which is far less susceptible to audio signals originating from other channels.
- centre channel extraction may in further examples provide further use cases so that the user may control the centre channel depending on user preferences.
- although the centre channel extractor has been described with respect to an upmixing and virtualisation process for headphones, the centre channel extraction apparatus and method are suitable for many different audio signal processing operations.
- the apparatus may be employed to extract audio signals from pairs of channels at various directions to the pairs of channels.
- the same centre channel extraction process may be used to extract so called unknown sources.
- a device such as a camera, with microphones mounted on opposite sides to record stereo sound, may generate a pair of audio signals from which, using the channel extraction apparatus or methods, a centre channel audio signal may then be produced for presentation.
- a centre channel signal may be determined in order to isolate an audio source located in the 'centre'.
- the vocalist audio components may be extracted from the signal containing the instruments and audience component signals.
- the extracted centre channel may be subtracted from the left L' and right R' channel audio signals to generate a modified left L" and right R" channel audio signal.
- this output stereo signal would thus have the vocalist removed, rendering the resultant stereo audio signal suitable for Karaoke.
- the process and apparatus may be implemented by an electronic device (such as a mobile phone) or through a server/database.
- the centre channel extractor 455 further comprises a pre-processor.
- a left channel audio signal pre-processor part 1151 is shown. It would be appreciated that the pre-processor would further comprise a mirror image right channel audio signal pre-processor part which is not shown in order to clarify the figure.
- the pre-processor is implemented prior to the sub-band generator 601 and the output of the pre-processor in such embodiments is input to the sub-band generator 601.
- the pre-processor is configured to apply pre-processing to the signal to remove some of the uncorrelated signal in the left and right channels. Therefore in some examples the pre-processor attempts to remove these uncorrelated signals from the left and right channel audio signals before the generation of the sub-band audio signals.
- the left channel audio signal may be expressed as a combination of two components: a component S(n) that is coherent with the right channel audio signal and an uncorrelated component N_1(n). Similarly the right channel signal may also be expressed as a combination of two components, the coherent component S(n) and an uncorrelated component N_2(n).
- the left channel audio signal pre-processor part 1151 comprises a Least Mean Square (LMS) processor 1109 to estimate the uncorrelated component.
- the left channel audio signal is input into a delay 1101 with a length of T+1 and then passed to a first pre-processor combiner 1105 and a second pre-processor combiner 1107.
- the right channel audio signal is in these examples input to a filter W 1103 with a length of 2T+1 and whose filter parameters are controlled by the LMS processor 1109.
- the output of the filter is furthermore in these examples passed to the first pre-processing combiner 1105, where it is subtracted from the delayed left channel audio signal to generate an estimate of the uncorrelated component N_1'; this estimate is passed to the second pre-processing combiner 1107 to be subtracted from the delayed left channel audio signal in an attempt to remove the uncorrelated information.
- the LMS processor 1109 in examples such as these receives both the N_1' estimate of the uncorrelated information and the right channel audio signal, and chooses the filter parameters such that the correlated information is output from the filter to be subtracted at the first pre-processing combiner 1105.
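The following is a hedged sketch of the described left channel pre-processor, using a plain sample-by-sample LMS update with an assumed small step size mu (a normalised LMS would typically be more robust in practice; all parameter values are illustrative):

```python
import numpy as np

def lms_preprocess_left(left, right, T=16, mu=1e-3):
    """Delay the left channel by T+1 samples (delay 1101), adapt a filter W
    of length 2T+1 on the right channel so that its output approximates the
    correlated component S, form the uncorrelated estimate
    N1' = delayed_left - W*right (combiner 1105), and subtract N1' from the
    delayed left channel (combiner 1107)."""
    L = 2 * T + 1
    w = np.zeros(L)                           # filter W 1103
    out = np.zeros(len(left))
    for n in range(L, len(left)):
        x = right[n - L + 1:n + 1][::-1]      # most recent 2T+1 right samples
        d = left[n - (T + 1)]                 # delayed left channel sample
        n1 = d - w @ x                        # uncorrelated estimate N1' (combiner 1105)
        w += mu * n1 * x                      # LMS update (processor 1109)
        out[n] = d - n1                       # remove uncorrelated part (combiner 1107)
    return out
```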
- embodiments of the invention may operate within an electronic device 10 or apparatus.
- the invention as described above may be implemented as part of any audio processor.
- embodiments of the invention may be implemented in an audio processor which may implement audio processing over fixed or wired communication paths.
- user equipment may comprise an audio processor such as those described in embodiments of the invention above.
- the terms electronic device and user equipment are intended to cover any suitable type of wireless user equipment, such as mobile telephones, portable data processing devices or portable web browsers.
- the various embodiments of the invention may be implemented in hardware or special purpose circuits, software, logic or any combination thereof.
- some aspects may be implemented in hardware, while other aspects may be implemented in firmware or software which may be executed by a controller, microprocessor or other computing device, although the invention is not limited thereto.
- While various aspects of the invention may be illustrated and described as block diagrams, flow charts, or using some other pictorial representation, it is well understood that these blocks, apparatus, systems, techniques or methods described herein may be implemented in, as non-limiting examples, hardware, software, firmware, special purpose circuits or logic, general purpose hardware or controller or other computing devices, or some combination thereof.
- the embodiments of this invention may be implemented by computer software executable by a data processor of the mobile device, such as in the processor entity, or by hardware, or by a combination of software and hardware.
- any blocks of the logic flow as in the Figures may represent program steps, or interconnected logic circuits, blocks and functions, or a combination of program steps and logic circuits, blocks and functions.
- the software may be stored on such physical media as memory chips, or memory blocks implemented within the processor, magnetic media such as hard disks or floppy disks, and optical media such as, for example, DVD and the data variants thereof, and CD.
- At least some embodiments may be a computer-readable medium encoded with instructions that, when executed by a computer, perform: filtering at least two audio signals to generate at least two groups of audio components per audio signal; determining a difference between the at least two audio signals for each group of audio components; and generating a further audio signal by selectively combining the at least two audio signals for each group of audio components dependent on the difference between the at least two audio signals for each group of audio components.
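As a hedged end-to-end sketch of these steps, using Butterworth band-pass filters for the grouping and an interaural level difference as the per-group difference measure; the threshold and gain values below are illustrative assumptions, whereas the described embodiments derive the thresholds from the listener's left and right head-related transfer functions:

```python
import numpy as np
from scipy.signal import butter, sosfilt

def extract_centre(left, right, bands, fs, thr1=1.0, thr2=6.0,
                   gains=(1.0, 0.5, 0.0)):
    """Band-split both signals, compare the per-band interaural level
    difference (in dB) against two thresholds, assign one of three gains,
    and sum the gain-weighted band combinations into a centre signal."""
    left = np.asarray(left, dtype=float)
    right = np.asarray(right, dtype=float)
    centre = np.zeros_like(left)
    for lo, hi in bands:
        sos = butter(4, [lo, hi], btype='bandpass', fs=fs, output='sos')
        l_b, r_b = sosfilt(sos, left), sosfilt(sos, right)
        ild = abs(10 * np.log10((np.sum(l_b ** 2) + 1e-12) /
                                (np.sum(r_b ** 2) + 1e-12)))
        if ild < thr1:
            g = gains[0]      # group dominated by central content
        elif ild < thr2:
            g = gains[1]      # partially lateral content
        else:
            g = gains[2]      # clearly lateral content: excluded
        centre += g * (l_b + r_b) / 2.0  # gain times (scaled) sum of components
    return centre

# e.g. extract_centre(left, right, bands=[(100, 400), (400, 1600), (1600, 6400)], fs=48000)
```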
- the memory may be of any type suitable to the local technical environment and may be implemented using any suitable data storage technology, such as semiconductor-based memory devices, magnetic memory devices and systems, optical memory devices and systems, fixed memory and removable memory.
- the data processors may be of any type suitable to the local technical environment, and may include one or more of general purpose computers, special purpose computers, microprocessors, digital signal processors (DSPs), application specific integrated circuits (ASIC), gate level circuits and processors based on multi-core processor architecture, as non-limiting examples.
- Embodiments of the invention may be practiced in various components such as integrated circuit modules.
- the design of integrated circuits is by and large a highly automated process.
- Complex and powerful software tools are available for converting a logic level design into a semiconductor circuit design ready to be etched and formed on a semiconductor substrate.
- Programs such as those provided by Synopsys, Inc. of Mountain View, California and Cadence Design, of San Jose, California automatically route conductors and locate components on a semiconductor chip using well established rules of design as well as libraries of pre-stored design modules.
- the resultant design in a standardized electronic format (e.g., Opus, GDSII, or the like) may be transmitted to a semiconductor fabrication facility or "fab" for fabrication.
- circuitry refers to all of the following: (a) hardware-only circuit implementations, such as implementations in only analogue and/or digital circuitry; (b) combinations of circuits and software and/or firmware, such as a combination of processor(s), or portions of processor(s)/software including digital signal processor(s), software and memory(ies) that work together to cause an apparatus to perform various functions; and (c) circuits, such as a microprocessor(s) or a portion of a microprocessor(s), that require software or firmware for operation, even if the software or firmware is not physically present.
- This definition of 'circuitry' applies to all uses of this term in this application, including any claims.
- the term 'circuitry' would also cover an implementation of merely a processor (or multiple processors) or portion of a processor and its (or their) accompanying software and/or firmware.
- the term 'circuitry' would also cover, for example and if applicable to the particular claim element, a baseband integrated circuit or applications processor integrated circuit for a mobile phone, or a similar integrated circuit in a server, a cellular network device, or other network device.
Landscapes
- Physics & Mathematics (AREA)
- Engineering & Computer Science (AREA)
- Acoustics & Sound (AREA)
- Signal Processing (AREA)
- Stereophonic System (AREA)
Claims (13)
- A method for processing audio signals, comprising: filtering at least two audio signals to generate at least two groups of at least two audio components; determining at least one interaural difference between respective pairs of the at least two groups, wherein the at least one interaural difference is determined based on at least one audio component of each group of the at least two groups; comparing the determined at least one interaural difference between the respective pairs of the at least two groups with at least one difference threshold for each respective pair of the at least two groups, wherein the at least one difference threshold is generated from a left and right head-related transfer function for a listener; determining a gain value for each group of the at least two groups based on the comparison; determining a sum of the at least two audio components for each group of the at least two groups; determining a product for each group of the at least two groups by multiplying the gain value and the sum determined for each group of the at least two groups; and generating a centre audio signal by combining the product determined for each group of the at least two groups.
- A method as claimed in claim 1, wherein filtering the at least two audio signals comprises filtering the at least two audio signals according to at least one of: groups with overlapping frequency ranges; groups with adjacent frequency ranges; groups with linear-interval frequency ranges; and groups with non-linear-interval frequency ranges.
- A method as claimed in claim 1 or 2, wherein the at least one interaural difference comprises at least one of: an interaural level difference value; an interaural phase difference value; and an interaural time difference value.
- A method as claimed in claim 1, wherein determining the gain value further comprises: determining a first gain value for each group of the at least two groups having an interaural difference less than a first difference threshold; determining a second gain value for each group of the at least two groups having an interaural difference greater than or equal to a first difference threshold of the at least one difference threshold and less than a second difference threshold of the at least one difference threshold; and determining a third gain value for each group of the at least two groups having an interaural difference greater than or equal to the second difference threshold of the at least one difference threshold.
- A method as claimed in claim 1, further comprising determining the left and right head-related transfer function for the listener dependent on at least one of: a measured head-related transfer function; a measured head-related impulse response; a chosen head-related transfer function; a chosen head-related impulse response; a modified head-related transfer function; and a modified head-related impulse response.
- A method as claimed in claim 1, wherein the at least two audio signals comprise a left channel audio signal and a right channel audio signal.
- A computer program product comprising instructions which, when executed by at least one processor, cause the processor to perform the method for processing audio signals comprising: filtering at least two audio signals to generate at least two groups of at least two audio components; determining at least one interaural difference between respective pairs of the at least two groups, wherein the at least one interaural difference is determined based on at least one audio component of each group of the at least two groups; comparing the determined at least one interaural difference between respective pairs of the at least two groups with at least one difference threshold for each respective pair of the at least two groups, wherein the at least one difference threshold is generated from a left and right head-related transfer function for a listener; determining a gain value for each group of the at least two groups based on the comparison; determining a sum of the at least two audio components for each group of the at least two groups; determining a product for each group of the at least two groups by multiplying the gain value and the sum determined for each group of the at least two groups; and generating a centre audio signal by combining the product determined for each group of the at least two groups.
- An apparatus for processing audio signals, comprising means for: filtering at least two audio signals to generate at least two groups of at least two audio components; determining at least one interaural difference between respective pairs of the at least two groups, wherein the at least one interaural difference is determined based on at least one audio component of each group of the at least two groups; comparing the determined at least one interaural difference between respective pairs of the at least two groups with at least one difference threshold for each respective pair of the at least two groups, wherein the at least one difference threshold is generated from a left and right head-related transfer function for a listener; determining a gain value for each group of the at least two groups based on the comparison; determining a sum of the at least two audio components for each group of the at least two groups; determining a product for each group of the at least two groups by multiplying the gain value and the sum determined for each group of the at least two groups; and generating a centre audio signal by combining the product determined for each group of the at least two groups.
- An apparatus as claimed in claim 8, wherein the means for filtering the at least two audio signals is further configured to filter the at least two audio signals according to at least one of: groups with overlapping frequency ranges; groups with adjacent frequency ranges; groups with linear-interval frequency ranges; and groups with non-linear-interval frequency ranges.
- An apparatus as claimed in claim 8 or 9, wherein the at least one interaural difference comprises at least one of: an interaural level difference value; an interaural phase difference value; and an interaural time difference value.
- An apparatus as claimed in claim 8, wherein the means for determining the gain value is further configured to: determine a first gain value for each group of the at least two groups having an interaural difference less than a first difference threshold of the at least one difference threshold; determine a second gain value for each group of the at least two groups having an interaural difference greater than or equal to the first difference threshold of the at least one difference threshold and less than a second difference threshold of the at least one difference threshold; and determine a third gain value for each group of the at least two groups having an interaural difference greater than or equal to the second difference threshold of the at least one difference threshold.
- An apparatus as claimed in claim 8, further comprising determining the left and right head-related transfer function for the listener dependent on at least one of: a measured head-related transfer function; a measured head-related impulse response; a chosen head-related transfer function; a chosen head-related impulse response; a modified head-related transfer function; and a modified head-related impulse response.
- An apparatus as claimed in claim 8, wherein the at least two audio signals comprise a left channel audio signal and a right channel audio signal.
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
IN2055DE2009 | 2009-09-30 | ||
PCT/FI2010/050709 WO2011039413A1 (en) | 2009-09-30 | 2010-09-15 | An apparatus |
Publications (3)
Publication Number | Publication Date |
---|---|
EP2484127A1 EP2484127A1 (de) | 2012-08-08 |
EP2484127A4 EP2484127A4 (de) | 2013-06-19 |
EP2484127B1 true EP2484127B1 (de) | 2020-02-12 |
Family
ID=43825606
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
EP10819956.3A Active EP2484127B1 (de) | 2009-09-30 | 2010-09-15 | Verfahren, computer-programm und vorrichtung zur verarbeitung von audiosignalen |
Country Status (3)
Country | Link |
---|---|
EP (1) | EP2484127B1 (de) |
CN (1) | CN102550048B (de) |
WO (1) | WO2011039413A1 (de) |
Families Citing this family (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US9794715B2 (en) | 2013-03-13 | 2017-10-17 | Dts Llc | System and methods for processing stereo audio content |
DE102013217367A1 (de) * | 2013-05-31 | 2014-12-04 | Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. | Vorrichtung und verfahren zur raumselektiven audiowiedergabe |
CN104468991A (zh) * | 2014-11-24 | 2015-03-25 | 广东欧珀移动通信有限公司 | 一种移动终端及其音频收发方法 |
EP3373595A1 (de) | 2017-03-07 | 2018-09-12 | Thomson Licensing | Audiowiedergabe mit heimkinosystem und fernsehen |
EP3499915B1 (de) * | 2017-12-13 | 2023-06-21 | Oticon A/s | Hörgerät und binaurales hörsystem mit einem binauralen rauschunterdrückungssystem |
KR102531634B1 (ko) * | 2018-08-10 | 2023-05-11 | 삼성전자주식회사 | 오디오 장치 및 그 제어방법 |
CN108989688B (zh) * | 2018-09-14 | 2019-05-31 | 成都数字天空科技有限公司 | 虚拟相机防抖方法、装置、电子设备及可读存储介质 |
KR102613035B1 (ko) * | 2022-03-23 | 2023-12-18 | 주식회사 알머스 | 위치보정 기능의 이어폰 및 이를 이용하는 녹음방법 |
Family Cites Families (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP0593128B1 (de) * | 1992-10-15 | 1999-01-07 | Koninklijke Philips Electronics N.V. | System zur Ableitung eines Mittelkanalsignals aus einem Stereotonsignal |
US6853732B2 (en) * | 1994-03-08 | 2005-02-08 | Sonics Associates, Inc. | Center channel enhancement of virtual sound images |
US7929708B2 (en) * | 2004-01-12 | 2011-04-19 | Dts, Inc. | Audio spatial environment engine |
KR101283741B1 (ko) * | 2004-10-28 | 2013-07-08 | 디티에스 워싱턴, 엘엘씨 | N채널 오디오 시스템으로부터 m채널 오디오 시스템으로 변환하는 오디오 공간 환경 엔진 및 그 방법 |
JP4479644B2 (ja) * | 2005-11-02 | 2010-06-09 | ソニー株式会社 | 信号処理装置および信号処理方法 |
WO2007106324A1 (en) * | 2006-03-13 | 2007-09-20 | Dolby Laboratories Licensing Corporation | Rendering center channel audio |
US8180062B2 (en) * | 2007-05-30 | 2012-05-15 | Nokia Corporation | Spatial sound zooming |
- 2010
- 2010-09-15 EP EP10819956.3A patent/EP2484127B1/de active Active
- 2010-09-15 CN CN201080044113.1A patent/CN102550048B/zh active Active
- 2010-09-15 WO PCT/FI2010/050709 patent/WO2011039413A1/en active Application Filing
Patent Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2002078100A (ja) * | 2000-09-05 | 2002-03-15 | Nippon Telegr & Teleph Corp <Ntt> | ステレオ音響信号処理方法及び装置並びにステレオ音響信号処理プログラムを記録した記録媒体 |
WO2008031611A1 (en) * | 2006-09-14 | 2008-03-20 | Lg Electronics Inc. | Dialogue enhancement techniques |
WO2011116839A1 (en) * | 2010-03-26 | 2011-09-29 | Bang & Olufsen A/S | Multichannel sound reproduction method and device |
Non-Patent Citations (2)
Title |
---|
MACPHERSON EWAN A ET AL: "Listener weighting of cues for lateral angle: The duplex theory of sound localization revisiteda)", THE JOURNAL OF THE ACOUSTICAL SOCIETY OF AMERICA, AMERICAN INSTITUTE OF PHYSICS FOR THE ACOUSTICAL SOCIETY OF AMERICA, NEW YORK, NY, US, vol. 111, no. 5, 1 May 2002 (2002-05-01), pages 2219 - 2236, XP012002885, ISSN: 0001-4966, DOI: 10.1121/1.1471898 * |
SENGPIEL: "Die Duplex-Theorie von Lord Rayleigh", 30 April 2008 (2008-04-30), pages 1 - 1, XP055099399, Retrieved from the Internet <URL:http://www.sengpielaudio.com/Duplex-Theorie.pdf> [retrieved on 20140130] * |
Also Published As
Publication number | Publication date |
---|---|
EP2484127A4 (de) | 2013-06-19 |
CN102550048B (zh) | 2015-03-25 |
EP2484127A1 (de) | 2012-08-08 |
CN102550048A (zh) | 2012-07-04 |
WO2011039413A1 (en) | 2011-04-07 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
KR101567461B1 (ko) | 다채널 사운드 신호 생성 장치 | |
EP2484127B1 (de) | Verfahren, computer-programm und vorrichtung zur verarbeitung von audiosignalen | |
US10313813B2 (en) | Apparatus and method for sound stage enhancement | |
KR101341523B1 (ko) | 스테레오 신호들로부터 멀티 채널 오디오 신호들을생성하는 방법 | |
JP6198800B2 (ja) | 少なくとも2つの出力チャネルを有する出力信号を生成するための装置および方法 | |
JP6377249B2 (ja) | オーディオ信号の強化のための装置と方法及び音響強化システム | |
JP5298199B2 (ja) | モノフォニック対応およびラウドスピーカ対応のバイノーラルフィルタ | |
US9743215B2 (en) | Apparatus and method for center signal scaling and stereophonic enhancement based on a signal-to-downmix ratio | |
CN112313970B (zh) | 增强具有左输入通道和右输入通道的音频信号的方法和系统 | |
EP3613221A1 (de) | Verbesserung der lautsprecherwiedergabe unter verwendung eines im räumlichen ausmass verarbeiteten audiosignals | |
JP2022536169A (ja) | 音場関連レンダリング | |
JP4810621B1 (ja) | 音声信号変換装置、方法、プログラム、及び記録媒体 | |
US9794717B2 (en) | Audio signal processing apparatus and audio signal processing method | |
JP2010217268A (ja) | 音源方向知覚が可能な両耳信号を生成する低遅延信号処理装置 | |
JP2015065551A (ja) | 音声再生システム | |
JP2020039168A (ja) | サウンドステージ拡張のための機器及び方法 | |
AU2012252490A1 (en) | Apparatus and method for generating an output signal employing a decomposer |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PUAI | Public reference made under article 153(3) epc to a published international application that has entered the european phase |
Free format text: ORIGINAL CODE: 0009012 |
|
17P | Request for examination filed |
Effective date: 20120420 |
|
AK | Designated contracting states |
Kind code of ref document: A1 Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO SE SI SK SM TR |
|
DAX | Request for extension of the european patent (deleted) | ||
A4 | Supplementary search report drawn up and despatched |
Effective date: 20130517 |
|
RIC1 | Information provided on ipc code assigned before grant |
Ipc: H04S 5/00 20060101ALI20130513BHEP Ipc: H04S 1/00 20060101ALI20130513BHEP Ipc: H04S 3/00 20060101AFI20130513BHEP |
|
17Q | First examination report despatched |
Effective date: 20140213 |
|
RAP1 | Party data changed (applicant data changed or rights of an application transferred) |
Owner name: NOKIA CORPORATION |
|
RAP1 | Party data changed (applicant data changed or rights of an application transferred) |
Owner name: NOKIA TECHNOLOGIES OY |
|
GRAP | Despatch of communication of intention to grant a patent |
Free format text: ORIGINAL CODE: EPIDOSNIGR1 |
|
STAA | Information on the status of an ep patent application or granted ep patent |
Free format text: STATUS: GRANT OF PATENT IS INTENDED |
|
INTG | Intention to grant announced |
Effective date: 20171205 |
|
GRAJ | Information related to disapproval of communication of intention to grant by the applicant or resumption of examination proceedings by the epo deleted |
Free format text: ORIGINAL CODE: EPIDOSDIGR1 |
|
STAA | Information on the status of an ep patent application or granted ep patent |
Free format text: STATUS: EXAMINATION IS IN PROGRESS |
|
GRAJ | Information related to disapproval of communication of intention to grant by the applicant or resumption of examination proceedings by the epo deleted |
Free format text: ORIGINAL CODE: EPIDOSDIGR1 |
|
STAA | Information on the status of an ep patent application or granted ep patent |
Free format text: STATUS: GRANT OF PATENT IS INTENDED |
|
INTC | Intention to grant announced (deleted) | ||
GRAP | Despatch of communication of intention to grant a patent |
Free format text: ORIGINAL CODE: EPIDOSNIGR1 |
|
INTG | Intention to grant announced |
Effective date: 20180525 |
|
GRAJ | Information related to disapproval of communication of intention to grant by the applicant or resumption of examination proceedings by the epo deleted |
Free format text: ORIGINAL CODE: EPIDOSDIGR1 |
|
STAA | Information on the status of an ep patent application or granted ep patent |
Free format text: STATUS: EXAMINATION IS IN PROGRESS |
|
GRAP | Despatch of communication of intention to grant a patent |
Free format text: ORIGINAL CODE: EPIDOSNIGR1 |
|
STAA | Information on the status of an ep patent application or granted ep patent |
Free format text: STATUS: GRANT OF PATENT IS INTENDED |
|
INTC | Intention to grant announced (deleted) | ||
INTG | Intention to grant announced |
Effective date: 20181119 |
|
GRAJ | Information related to disapproval of communication of intention to grant by the applicant or resumption of examination proceedings by the epo deleted |
Free format text: ORIGINAL CODE: EPIDOSDIGR1 |
|
STAA | Information on the status of an ep patent application or granted ep patent |
Free format text: STATUS: EXAMINATION IS IN PROGRESS |
|
INTC | Intention to grant announced (deleted) | ||
GRAP | Despatch of communication of intention to grant a patent |
Free format text: ORIGINAL CODE: EPIDOSNIGR1 |
|
STAA | Information on the status of an ep patent application or granted ep patent |
Free format text: STATUS: GRANT OF PATENT IS INTENDED |
|
GRAJ | Information related to disapproval of communication of intention to grant by the applicant or resumption of examination proceedings by the epo deleted |
Free format text: ORIGINAL CODE: EPIDOSDIGR1 |
|
GRAP | Despatch of communication of intention to grant a patent |
Free format text: ORIGINAL CODE: EPIDOSNIGR1 |
|
INTG | Intention to grant announced |
Effective date: 20190802 |
|
RAP1 | Party data changed (applicant data changed or rights of an application transferred) |
Owner name: NOKIA TECHNOLOGIES OY |
|
INTG | Intention to grant announced |
Effective date: 20190814 |
|
INTG | Intention to grant announced |
Effective date: 20190822 |
|
GRAS | Grant fee paid |
Free format text: ORIGINAL CODE: EPIDOSNIGR3 |
|
GRAA | (expected) grant |
Free format text: ORIGINAL CODE: 0009210 |
|
STAA | Information on the status of an ep patent application or granted ep patent |
Free format text: STATUS: THE PATENT HAS BEEN GRANTED |
|
AK | Designated contracting states |
Kind code of ref document: B1 Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO SE SI SK SM TR |
|
REG | Reference to a national code |
Ref country code: GB Ref legal event code: FG4D |
|
REG | Reference to a national code |
Ref country code: CH Ref legal event code: EP |
|
REG | Reference to a national code |
Ref country code: AT Ref legal event code: REF Ref document number: 1233718 Country of ref document: AT Kind code of ref document: T Effective date: 20200215 |
|
REG | Reference to a national code |
Ref country code: IE Ref legal event code: FG4D |
|
REG | Reference to a national code |
Ref country code: DE Ref legal event code: R096 Ref document number: 602010063076 Country of ref document: DE |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: NO Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20200512 Ref country code: FI Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20200212 |
|
REG | Reference to a national code |
Ref country code: LT Ref legal event code: MG4D |
|
REG | Reference to a national code |
Ref country code: NL Ref legal event code: MP Effective date: 20200212 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: LV Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20200212 Ref country code: SE Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20200212 Ref country code: GR Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20200513 Ref country code: IS Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20200612 Ref country code: HR Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20200212 Ref country code: BG Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20200512 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: NL Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20200212 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: LT Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20200212 Ref country code: EE Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20200212 Ref country code: SK Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20200212 Ref country code: ES Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20200212 Ref country code: PT Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20200705 Ref country code: CZ Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20200212 Ref country code: RO Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20200212 Ref country code: DK Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20200212 Ref country code: SM Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20200212 |
|
REG | Reference to a national code |
Ref country code: DE Ref legal event code: R097 Ref document number: 602010063076 Country of ref document: DE |
|
REG | Reference to a national code |
Ref country code: AT Ref legal event code: MK05 Ref document number: 1233718 Country of ref document: AT Kind code of ref document: T Effective date: 20200212 |
|
PLBE | No opposition filed within time limit |
Free format text: ORIGINAL CODE: 0009261 |
|
STAA | Information on the status of an ep patent application or granted ep patent |
Free format text: STATUS: NO OPPOSITION FILED WITHIN TIME LIMIT |
|
26N | No opposition filed |
Effective date: 20201113 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: AT Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20200212 Ref country code: IT Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20200212 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: SI Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20200212 Ref country code: PL Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20200212 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: MC Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20200212 |
|
REG | Reference to a national code |
Ref country code: CH Ref legal event code: PL |
|
GBPC | Gb: european patent ceased through non-payment of renewal fee |
Effective date: 20200915 |
|
REG | Reference to a national code |
Ref country code: BE Ref legal event code: MM Effective date: 20200930 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: LU Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES Effective date: 20200915 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: FR Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES Effective date: 20200930 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: CH Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES Effective date: 20200930 Ref country code: BE Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES Effective date: 20200930 Ref country code: LI Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES Effective date: 20200930 Ref country code: GB Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES Effective date: 20200915 Ref country code: IE Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES Effective date: 20200915 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: TR Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20200212 Ref country code: MT Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20200212 Ref country code: CY Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20200212 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: MK Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20200212 Ref country code: AL Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20200212 |
|
P01 | Opt-out of the competence of the unified patent court (upc) registered |
Effective date: 20230527 |
|
PGFP | Annual fee paid to national office [announced via postgrant information from national office to epo] |
Ref country code: DE Payment date: 20240730 Year of fee payment: 15 |