EP2474975B1 - Method for estimating speech quality - Google Patents

Method for estimating speech quality

Info

Publication number
EP2474975B1
Authority
EP
European Patent Office
Prior art keywords
signal
speech
spectrum
test
parts
Prior art date
Legal status
Active
Application number
EP12000483.3A
Other languages
English (en)
French (fr)
Other versions
EP2474975A1 (de)
Inventor
Raphael Ullmann
Current Assignee
Swissqual License AG
Original Assignee
Swissqual License AG
Priority date
Filing date
Publication date
Application filed by Swissqual License AG filed Critical Swissqual License AG
Priority to EP12000483.3A priority Critical patent/EP2474975B1/de
Publication of EP2474975A1 publication Critical patent/EP2474975A1/de
Application granted granted Critical
Publication of EP2474975B1 publication Critical patent/EP2474975B1/de
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L25/00 Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00
    • G10L25/48 Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 specially adapted for particular use
    • G10L25/69 Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 specially adapted for particular use for evaluating synthetic or decoded voice signals

Definitions

  • the invention relates to a method for estimating speech quality.
  • providers of telecommunication network services have an interest in monitoring the transmission quality of the telecommunication network as perceived by the end-user, in particular with respect to the transmission of speech.
  • instrumental or objective methods for speech quality estimation may be used that compare a reference speech signal, i.e. an undistorted high-quality speech signal that enters the telecommunication network, with a test speech signal, i.e. the speech signal to be tested and analysed, respectively, that results from the reference speech signal after transmission via and/or processing by the telecommunication network (including a simulation of a telecommunication network, i.e. a simulated telecommunication network) and possible distortion.
  • spectral representations of the respective signals are usually used.
  • the aim of the comparison of the reference speech signal with the test speech signal is the determination of perceptually relevant differences between the reference speech signal and the test speech signal.
  • the spectral representations of the reference speech signal and of the test speech signal can be highly influenced by effects that have basically no or little disturbing character for the perception of the end-user such as time differences, e.g. signal delay, or differences in intensity (e.g. power, level or loudness) between the respective speech signals.
  • such differences are compensated by means of delay/time and intensity alignment procedures before the actual differences between the spectral representations of the reference speech signal and the test speech signal are computed.
  • Both the delay/time and the intensity alignment procedures are not restricted to the compensation of a fixed bias, but can also be applied for time-varying compensation.
  • the remaining differences in the spectral representations of corresponding sections of the reference speech signal and of the test speech signal are used to derive an estimate of their similarity.
  • similarity estimations are computed for a number of short segments of the reference speech signal and the test speech signal.
  • the similarity estimations computed for the respective segments are then aggregated.
  • the aggregated similarity estimations represent a raw estimation of the overall speech quality of the test signal (i.e. of the network transmission quality) with the raw estimation usually being transformed to a common quality scale such as the known so-called MOS scale (mean opinion score scale) ranging from 1 to 5.
  • the similarity estimation can be given by a weighted spectral difference or by a kind of spectrum correlation between the reference speech signal and the test speech signal.
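  • As an illustration of such a segment-wise similarity computation and its aggregation to a MOS estimate, a minimal sketch follows; the function names, the uniform band weights and the linear mapping to the 1-to-5 scale are assumptions for illustration, not the scheme of any particular standardized method.

```python
import numpy as np

def segment_difference(ref_spec, test_spec, weights=None):
    """Weighted spectral difference for one pair of aligned segment spectra."""
    if weights is None:
        weights = np.ones_like(ref_spec)  # perceptual band weights, assumed uniform here
    return np.sum(weights * np.abs(ref_spec - test_spec))

def estimate_mos(ref_segments, test_segments, a=4.5, b=0.1):
    """Aggregate per-segment differences and map the raw score to the MOS scale.

    The linear map and its coefficients a, b are placeholders; real methods
    fit this transformation on listening-test data.
    """
    raw = np.mean([segment_difference(r, t)
                   for r, t in zip(ref_segments, test_segments)])
    return float(np.clip(a - b * raw, 1.0, 5.0))
```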
  • the spectral representation of a signal segment is often derived from the common short-term Fourier transform that has been further transformed to a perceptual representation of its frequency content, e.g. by applying common loudness models as described in Paulus, E. and Zwicker, E., "Programme zur automatischen Bestimmung der Lautheit aus Terzpegeln oder Frequenzgruppenpegeln", Acustica, Vol. 27, No. 5, 1972.
  • a reference speech signal 1 enters a telecommunication network 3, e.g. by means of a mobile phone 2, resulting in a usually degraded test speech signal 4 that is received e.g. by a mobile phone 5, or tapped/scanned after receipt e.g. by the mobile phone 5, and that shall be perceived by an end-user 6.
  • Box 7 illustrates the method for speech quality estimation.
  • First time/delay and intensity differences are compensated by alignment procedures (box 8).
  • the spectral representations of the aligned speech signals 1, 4 are computed and compared to give similarity estimations (box 9), wherein the computation and the comparison is typically performed for short segments of the respective signals during their entire duration (illustrated by arrow 10). From the similarity estimations the speech quality of the test speech signal is estimated in box 11.
  • Rix, A.W., Hollier, M.P., Hekstra, A.P., Beerends, J.G., "Perceptual Evaluation of Speech Quality (PESQ), the new ITU standard for end-to-end speech quality assessment. Part I - Time-delay compensation", J. Audio Eng. Soc., Vol. 50, No. 10, October 2002
  • Beerends, J.G., Hekstra, A.P., Rix, A.W., Hollier, M.P., "Perceptual Evaluation of Speech Quality (PESQ), the new ITU standard for end-to-end speech quality assessment. Part II - Psychoacoustic model", J. Audio Eng. Soc., Vol. 50, No. 10, October 2002
  • the difference between the known methods lies mainly in the implementation details of the steps corresponding to the boxes 8, 9 and 10 in Figure 1, as well as in the way the spectral representations of the respective signals are transformed to a perceptual representation of their frequency content. The known methods also use different strategies for post-processing and weighting of the raw estimations (i.e. the above-mentioned similarity estimations).
  • the aim of each known method is to achieve a high prediction accuracy of the computed overall speech quality when compared to speech quality values obtained in listening tests with human participants.
  • changes to the speech spectrum intensity that are constant over all frequency bands usually contribute to the perceived speech quality only to a limited extent. Rather, changes/modifications in the relative intensity of the frequency bands within the speech spectrum have been found to have a more significant effect on the perceived speech quality.
  • Figure 2 shows an example of a reference speech spectrum and a test speech spectrum, wherein the test speech spectrum is uniformly attenuated over all frequency bands when compared with the reference speech spectrum.
  • the calculation of the difference D1(f) between the reference speech spectrum X(f) and the test speech spectrum Y(f) may yield a large absolute intensity difference despite the limited impairment of the perceived speech quality.
  • Figure 3 shows a further example of a reference speech spectrum and a test speech spectrum, wherein the test speech spectrum Y(f) differs from the reference speech spectrum X(f) only in a single frequency band fi.
  • in this case the calculation of the difference D1(f) yields the desired measure of the perceived intensity difference, as the only non-zero result for D1(f) is obtained for the frequency band f being equal to fi.
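  • The two cases of Figures 2 and 3 can be reproduced with a few lines of code; the spectra below are arbitrary illustrative values, with D1(f) taken as the plain difference between reference and test spectrum.

```python
import numpy as np

X = np.array([10.0, 20.0, 30.0, 20.0, 10.0])  # reference spectrum X(f), arbitrary units

Y_uniform = 0.5 * X         # Figure 2: uniform attenuation over all frequency bands
Y_single = X.copy()
Y_single[2] = 10.0          # Figure 3: change in a single frequency band f_i

D1_uniform = X - Y_uniform  # large at every band, despite limited perceived impairment
D1_single = X - Y_single    # non-zero only at f_i, matching the perceived difference
```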
  • known approaches are not just capable of computing the overall time difference between the reference speech signal and the test speech signal in the time domain; they can also determine the time differences between individual parts of the respective signals. For this, corresponding parts of the reference speech signal and of the test speech signal are matched. The signal parts are matched and their respective time differences are typically computed in the order of the temporal/chronological occurrence of the signal parts in the respective speech signals, i.e. signal parts occurring at the end of the respective signals are matched after signal parts occurring at the beginning have already been matched.
  • Figure 6 shows a corresponding example with a reference speech signal 201 and a test speech signal 202 in time domain.
  • the test speech signal 202 exhibits a positive time difference, i.e. it starts later in time, when compared to the reference speech signal 201.
  • the known matching procedure starts at the beginning of the signals 201, 202 and progresses monotonously in time, yielding e.g. the matched signal parts 203 and 204.
  • signal parts of the reference speech signal 201 following the match 204 can be matched with any signal part of the test speech signal 202 that lies chronologically after the match 204.
  • the already matched signal parts 203 and 204 thus limit the number of possible matches by not taking into account signal parts chronologically occurring before the signal part that is currently matched. This approach can therefore lead to incorrect matching as illustrated by the erroneous match 205.
  • Such a matching procedure that starts at the beginning of the speech signals 201 and 202 and progresses monotonously in time may therefore lead to correct matching of later occurring signal parts only to a limited extent.
  • if, for example, the beginning of the test speech signal has been muted, a miscalculation in known approaches for time/delay compensation may lead to the non-muted beginning of the reference speech signal being matched with an intact, non-muted part of the test speech signal that shares some similarities with the beginning of the reference speech signal but occurs chronologically later in the test speech signal.
  • the part of the reference speech signal that actually corresponds to this wrongly matched intact part of the test speech signal can then, with the above-described known approach, only be matched with signal parts of the test speech signal occurring after the already matched intact part (cf. Figure 6).
  • this results from each signal part being matchable only once and from the matching typically being performed such that the temporal order of matched signal parts in either signal is preserved. Therefore, one incorrect match may bias or deteriorate, respectively, the matching of signal parts occurring chronologically later in the respective speech signals.
  • One of the typical problems of speech signal transmission is the interruption or loss of speech.
  • Known approaches rate the portions of the test speech signal with missed speech by comparing the test speech signal with the reference speech signal and measuring the amount of missed speech intensity, wherein the amount of missed speech intensity can be computed from perceptual representations of the speech signals such as loudness spectra.
  • the amount of missed speech is related to the part of the reference speech signal that has actually been missed.
  • this approach might be disadvantageous as a human listener who listens to the test speech signal does not rate missed speech in such a manner.
  • the human listener has no knowledge of the reference speech signal, he has no possibility to compare the test speech signal with the reference speech signal, and he hence has no knowledge of what is actually missing.
  • the actually perceived distortion that is caused by an interruption or loss of speech is rather related to the knowledge and expectations of the human listener formed on the basis of the received portions of the test speech signal.
  • EP 1 104 924 A1 discloses signal alignment on the basis of a comparison of energies within time windows of a certain duration.
  • a reference speech signal enters a telecommunication network, in particular a mobile network, resulting in a test speech signal, and the method comprises the steps of aligning the reference speech signal and the test speech signal by matching signal parts of the reference speech signal with signal parts of the test speech signal, wherein matched signal parts of the respective signals are of similar length in time domain and have similar intensity summed over their length or relative to their length, and of computing and comparing the speech spectra of the aligned reference speech signal and the aligned test speech signal, the comparison resulting in a difference measure that is indicative of the speech quality of the test speech signal.
  • For matching signal parts of the reference speech signal with corresponding signal parts of the test speech signal, first the one or more signal parts of the reference speech signal that have the highest intensity summed over their length or relative to their length, respectively, are matched with signal parts of the test speech signal. The matching then continues with signal parts of the reference speech signal whose intensity, summed over or relative to their length, decreases with each subsequent match, as sketched below.
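  • A minimal sketch of this intensity-ordered matching follows; it uses signal energy as the intensity measure and a plain cross-correlation search, and the data layout (lists of (start, samples) tuples and per-part search ranges) is an assumption for illustration.

```python
import numpy as np

def best_offset(part, region):
    # position of the maximum cross-correlation of `part` within `region`
    # (assumes len(region) >= len(part))
    return int(np.argmax(np.correlate(region, part, mode="valid")))

def match_parts(ref_parts, test_signal, search_ranges):
    """Match reference signal parts in order of decreasing summed intensity.

    ref_parts: list of (start, samples) tuples cut from the reference signal.
    search_ranges: admissible (lo, hi) range in the test signal per part,
    which would be narrowed as matches accumulate (cf. Figure 7).
    """
    order = sorted(range(len(ref_parts)),
                   key=lambda i: np.sum(ref_parts[i][1] ** 2),  # energy over length
                   reverse=True)
    matches = {}
    for i in order:
        _, part = ref_parts[i]
        lo, hi = search_ranges[i]
        matches[i] = lo + best_offset(part, test_signal[lo:hi])
    return matches
```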
  • a performance measure is preferably computed for each pair of matched signal parts.
  • the performance measure is in particular given by the maximum of the cross-correlation of the matched signal parts that is normalized by the signal powers of the matched signal parts. If the performance measure of a pair of matched signal parts lies beneath a pre-set threshold value, then the pair of matched signal parts is preferentially deemed to have insufficient performance. Alternatively, a pair of matched signal parts is deemed to have insufficient performance if its performance measure is significantly lower than the performance measures of other pairs of matched signal parts. If a pair of matched signal parts is deemed to have insufficient performance, then its signal parts are preferably re-matched, i.e. matched again with other signal parts of the respective signal.
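  • The performance measure described above could be sketched as follows, assuming time-domain sample arrays; the normalization by the geometric mean of the signal energies and the example threshold value are assumptions.

```python
import numpy as np

def match_performance(ref_part, test_part):
    """Maximum of the cross-correlation of two matched signal parts,
    normalized by their signal powers; values near 1 indicate a good
    waveform match."""
    xc = np.correlate(test_part, ref_part, mode="full")
    norm = np.sqrt(np.sum(ref_part ** 2) * np.sum(test_part ** 2))
    return float(np.max(xc) / norm) if norm > 0 else 0.0

PERFORMANCE_THRESHOLD = 0.45  # hypothetical pre-set threshold value

def is_sufficient(ref_part, test_part):
    # a match is deemed to have insufficient performance below the threshold
    return match_performance(ref_part, test_part) >= PERFORMANCE_THRESHOLD
```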
  • a pair of matched signal parts that is deemed to have insufficient performance may be un-matched, i.e. the corresponding respective signal parts may be made available again for matching with other signal parts that have not yet been matched.
  • Re-matching of the now again unmatched reference speech signal part may be performed after further other reference speech signal parts have been matched.
  • employing the performance measure may result in a matching order of the reference speech signal parts that differs from the order obtained by arranging the reference speech signal parts by decreasing intensity summed over, or relative to, their respective length.
  • the method of the invention may also comprise the further steps of identifying a number of perceptually dominant frequency sub-bands in one of the reference speech spectrum and the test speech spectrum, with the reference speech signal having a reference speech spectrum and the test speech signal having a test speech spectrum, computing an intensity scaling factor for each identified sub-band by minimizing a measure of the intensity difference between those parts of the reference speech spectrum and the test speech spectrum that correspond to the respective sub-band, multiplying the test speech spectrum with each intensity scaling factor thus generating a number of scaled test speech spectra, selecting one scaled test speech spectrum, and computing the difference between the selected scaled test speech spectrum and the reference speech spectrum. This difference is indicative of the speech quality of the test speech signal.
  • the measure of the intensity difference is preferably given by the squared intensity difference or the global maximum of the intensity difference between those parts of the reference speech spectrum and of the test speech spectrum that correspond to the respective sub-band.
  • the number of perceptually dominant sub-bands of one of the reference speech spectrum and the test speech spectrum is preferably identified by determining the local maxima in a perceptual representation of the respective spectrum and by selecting a predetermined frequency range around each local maximum, wherein the predetermined frequency range is preferentially determined by the local minima bordering the respective local maximum, with one local minimum on each side (in frequency domain) of the respective local maximum.
  • the predetermined frequency range shall be smaller than or equal to 4 Bark.
  • the perceptual representation of the respective spectrum is preferably obtained by transforming the respective spectrum to a loudness spectrum as e.g. defined in Paulus, E. and Zwicker, E., "Programme zur automatischen Bestimmung der Lautheit aus Terzpegeln oder Frequenzgruppenpegeln", Acustica, Vol. 27, No. 5, 1972.
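  • A sketch of the sub-band identification follows, operating on a loudness spectrum given as one value per frequency band; the optional width cap stands in for the 4 Bark limit (the band-to-Bark conversion is assumed to happen elsewhere).

```python
def dominant_subbands(loudness, max_width=None):
    """Perceptually dominant sub-bands as index ranges around local maxima,
    bounded by the adjacent local minima on each side."""
    bands = []
    for k in range(1, len(loudness) - 1):
        if loudness[k] > loudness[k - 1] and loudness[k] > loudness[k + 1]:
            lo = k
            while lo > 0 and loudness[lo - 1] < loudness[lo]:
                lo -= 1  # walk downhill to the local minimum on the left
            hi = k
            while hi < len(loudness) - 1 and loudness[hi + 1] < loudness[hi]:
                hi += 1  # walk downhill to the local minimum on the right
            if max_width is not None and hi - lo + 1 > max_width:
                half = max_width // 2
                lo, hi = max(k - half, 0), min(k + half, len(loudness) - 1)
            bands.append((lo, hi))
    return bands
```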
  • the selected scaled test speech spectrum is preferably given by the scaled test speech spectrum yielding the lowest measure of the intensity difference between the reference speech spectrum and a scaled test speech spectrum with the intensity difference being computed for each scaled test speech spectrum.
  • the measure of the intensity difference is preferably given by the squared intensity difference or alternatively the global maximum of the intensity difference between the reference speech spectrum and a respective scaled test speech spectrum.
  • in this way a difference between the reference speech spectrum and the test speech spectrum can be computed that is close to human perception: it essentially disregards amplifications and attenuations, respectively, that are constant over all frequency bands, but places emphasis on modifications in the relative intensity of single frequency bands that contribute to a qualitative impairment as perceived by a human listener.
  • the method of the invention may also comprise the further steps of: for at least one missed or interrupted signal part in the test speech signal, computing the signal intensities of the signal parts of the test speech signal that are adjacent to the missed or interrupted signal part; deriving an expected signal intensity for the at least one missed or interrupted signal part from the computed signal intensities of the adjacent signal parts; computing a measure of the perceived distortion by comparing the actual intensity of the at least one missed or interrupted signal part in the test speech signal with the derived expected intensity; computing a measure of the actual distortion by comparing the reference speech signal with the test speech signal; and combining the measure of the perceived distortion with the measure of the actual distortion to generate a combined measure of distortion that is indicative of the speech quality of the test speech signal.
  • the expected signal intensity of the at least one missed or interrupted signal part in the test speech signal is preferably derived from the computed signal intensities of the adjacent signal parts of the test speech signal by means of interpolation, in particular by means of linear interpolation and/or spline interpolation.
  • the method of the invention can advantageously be combined with existing methods for speech quality estimation that in particular have the structure depicted in and described with respect to Figure 1 to improve and extend the existing methods.
  • Figure 4 shows a flow chart of a first example of a method for estimating speech quality.
  • a certain number N of perceptually dominant frequency sub-bands b1...N is identified in and selected from one of the reference speech spectrum X(f) and the test speech spectrum Y(f), for example from the undistorted reference speech spectrum X(f), or from both spectra.
  • the reference speech spectrum X(f) and the test speech spectrum Y(f) represent exemplary speech spectra of a reference speech signal and a test speech signal, respectively, which can both comprise several speech spectra.
  • the respective spectrum may be transformed to a perceptual representation of the respective spectrum, the perceptual representation corresponding to the frequency content that is actually received by a human auditory system. Then the local maxima of the perceptual representation are identified and a predetermined range of frequencies around each local maximum gives the perceptually dominant frequency sub-bands.
  • the limiting values of each predetermined range are preferentially given by the local minima adjacent to the respective local maximum, with the condition that the entire range is in particular smaller than or equal to 4 Bark.
  • the loudness spectrum is one example of a perceptual representation that has been found to represent to a high degree the human subjective response to auditory stimuli.
  • an intensity scaling factor ci is preferably computed for each identified sub-band bi.
  • the respective intensity scaling factor ci is computed such that the squared intensity difference between the reference speech spectrum X(f) and the scaled test speech spectrum ci·Y(f), summed over the frequencies of the respective sub-band bi, is minimized.
  • one scaled test speech spectrum Ysel(f) is then selected from the N generated scaled test speech spectra Yi(f); a sketch of the scaling and selection follows below.
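  • A minimal sketch of the per-sub-band scaling and the selection of Ysel(f): it uses the closed-form least-squares solution ci = Σ X(f)·Y(f) / Σ Y(f)² over the sub-band, and the squared intensity difference as the selection measure; both follow from the minimization criterion stated above, while variable names are illustrative.

```python
import numpy as np

def scaling_factor(X, Y, band):
    """Least-squares ci for sub-band bi = (lo, hi): minimizes the squared
    intensity difference, summed over f in bi, of (X(f) - ci * Y(f))**2."""
    lo, hi = band
    den = np.sum(Y[lo:hi + 1] ** 2)
    return np.sum(X[lo:hi + 1] * Y[lo:hi + 1]) / den if den > 0 else 1.0

def spectral_difference(X, Y, bands):
    """Scale Y once per sub-band, keep the scaled spectrum Ysel with the
    lowest squared difference to X, and return D(f) = Ysel(f) - X(f), so
    that positive values correspond to amplified spectrum portions."""
    candidates = [scaling_factor(X, Y, b) * Y for b in bands]
    Y_sel = min(candidates, key=lambda Yc: np.sum((X - Yc) ** 2))
    return Y_sel - X
```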
  • the spectral difference function D(f) contains non-zero values for frequency bands that have been amplified and attenuated, respectively, when compared with the reference speech spectrum X(f). Positive values of D(f) correspond to amplified spectrum portions and negative values of D(f) correspond to attenuated spectrum portions.
  • the spectral difference function D(f) normally contains small absolute values.
  • the spectral difference function D(f) constitutes an estimate of the speech quality.
  • in the case of the uniformly attenuated test speech spectrum of Figure 2, the computation of the spectral difference function D(f), which includes the computation and selection of a scaled test speech spectrum, fully compensates the difference between the reference speech spectrum X(f) and the test speech spectrum Y(f), yielding a spectral difference function D(f) that is zero at all frequencies f.
  • if, as in the example of Figure 3, the test speech spectrum Y(f) differs from the reference speech spectrum X(f) only in a single frequency band fi, the values of the computed spectral difference function D(f) depend on whether the frequency band fi is part of the selected sub-band bsel, i.e. the sub-band whose intensity scaling factor is the scaling factor of the selected scaled test speech spectrum Ysel(f). If fi lies outside the selected sub-band bsel, then the calculated scaling factor is equal to 1 and the spectral difference function D(f) has non-zero values only for f being equal to fi.
  • if fi lies within the selected sub-band bsel, the value of the scaling factor depends on the modified intensity at the frequency fi (modified in comparison to the reference speech intensity) and the selected scaled test speech spectrum Ysel(f) differs from the reference speech spectrum X(f) at frequencies other than fi.
  • the spectral difference function D(f) hence has a large number of non-zero values in this case, thereby reflecting the expected larger impact of a modification of intensities at a frequency band that belongs to a perceptually dominant sub-band.
  • the example of the method for estimating speech quality thus computes a difference indicative of the speech quality of the test speech signal, in the form of the spectral difference function D(f), that approximates a human listener's perception of speech spectrum intensity changes better than existing methods.
  • Figure 5 illustrates a possible application of the example of the method for estimating speech quality.
  • An example of a perceptual representation of a reference speech spectrum 101 is shown along with an example of a perceptual representation of a test speech spectrum 102.
  • the perceptual representation of the test speech spectrum 102 features an amplification of intensities at lower frequencies, as well as a limitation of the bandwidth leading to a strong attenuation of the intensities at higher frequencies.
  • the slight amplification at lower frequencies is of rather limited influence on the perception of speech quality by a human listener.
  • the change in the relative intensities of the various frequency bands (when compared to each other) within the perceptual representation of the test speech spectrum 102, as well as the limitation of the bandwidth have a much higher impact on the perceived speech quality.
  • the perceptually dominant frequency sub-bands are identified in the perceptual representation of the reference speech spectrum 101 as described above, i.e. by determining the local maxima and selecting a predetermined frequency range around each local maximum.
  • Highlighted area 104 corresponds to one such perceptually dominant sub-band.
  • Each identified perceptually dominant sub-band gives rise to an intensity scaling factor and a correspondingly scaled test speech spectrum.
  • the dotted curve 103 in Figure 5 represents a perceptual representation of a scaled test speech spectrum that has been scaled with the intensity scaling factor associated with the sub-band 104.
  • An embodiment of the method of the invention avoids this disadvantage in that it attempts to first match those signal parts of the reference speech signal with corresponding signal parts of the test speech signal that are least likely to result in erroneous matches. This is achieved by first matching the one or more parts of the reference speech signal with the highest intensity summed over their length, e.g. the parts of the reference speech signal with the highest signal energy or loudness summed over their length. For the matching, cross-correlation may be employed. Instead of the highest intensity summed over the respective length, the highest intensity relative to the respective length may be used, and hence the highest signal energy or loudness relative to the respective length.
  • Degradations in the test speech signal such as introduced e.g. by packet loss concealment routines in packetized transmission systems often result in decreased signal energies of correspondingly degraded signal parts in the test speech signal.
  • High-energy signal parts of the reference signal are therefore more likely to be still present with sufficiently high energy in the test speech signal in comparison to low energy parts.
  • the length of signal parts to be matched can be in the range of 0.05 to 0.5 seconds.
  • the embodiment of the method of the invention attempts to match signal parts of the reference signal with decreasing intensity summed over the length (or relative to their length, respectively), i.e. it attempts to match signal parts in order of decreasing expected match accuracy rather than monotonously progressing from the beginning of the reference speech signal to the end of the reference speech signal.
  • Thus the possibility of erroneous matching decreases with each further matched signal part of the reference speech signal, since the remaining amount of matchable signal parts in the test speech signal is limited by the already matched signal parts, which normally surround the signal parts still to be matched in the time domain.
  • the reference speech signal and the test speech signal are preferably pre-filtered by a bandpass filter to filter out irrelevant signal parts such as background noise.
  • the bandpass filter is preferably configured such that it passes frequencies within the audio band, in particular in the range of 700 Hz to 3000 Hz, and rejects or at least attenuates frequencies outside the thus defined range.
  • the reference speech signal and the test speech signal are further preferably thresholded, i.e. limited by a predefined threshold, and normalized with respect to their respective signal energies/signal powers to compensate for differences between corresponding signal parts of the reference speech signal and the test speech signal that are considered irrelevant.
  • Such irrelevant differences may, for example, be caused by varying gain properties of the transmission channel/telecommunication network in question.
  • the computational operations for the thresholding and normalization are preferably configured and performed such that a sliding window of preferably 26.625 ms length is moved over the entire length of both speech signals in the time domain, that for each speech signal the average signal power within the sliding window is computed while the window is moved over the respective signal, and that the average signal power within each window is re-scaled to a first predefined value if it exceeds a pre-set threshold value, or otherwise is set to a second predefined value.
  • This pre-set threshold value is preferentially set equal to (S + 3*N)/4 with S being the average signal level of the speech content within the respective speech signal and N being the signal level of the background noise in the respective speech signal.
  • the value for S may, for example, be computed as described in ITU-T Recommendation P.56 "Objective measurement of active speech level", Geneva, 1993.
  • the second predefined value is chosen smaller than the first predefined value.
  • the second predefined value may e.g. be equal to 0.
  • when computing the intensity of a signal part summed over its length, the respective intensities are preferably compared with a second pre-set threshold value, and only those intensities that exceed this second pre-set threshold value are taken into account and summed up.
  • the second pre-set threshold value lies preferentially slightly above the above-mentioned first threshold value (S + 3*N)/4.
  • the second pre-set threshold value is preferably given by 0.4*S + 0.6*N with S and N as defined in the last paragraph.
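  • The sliding-window thresholding and normalization could look roughly as follows; the output values for the two predefined levels and the treatment of the window edges are assumptions.

```python
import numpy as np

def threshold_normalize(signal, fs, S, N, win_ms=26.625, high=1.0, low=0.0):
    """Re-scale windowed average power to `high` where it exceeds the
    threshold (S + 3*N)/4, and to `low` elsewhere.

    S: average active speech level, N: background noise level, both in the
    same units as the windowed power.
    """
    win = max(1, int(round(win_ms * 1e-3 * fs)))
    threshold = (S + 3.0 * N) / 4.0
    power = np.convolve(signal ** 2, np.ones(win) / win, mode="valid")
    return np.where(power > threshold, high, low)
```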
  • FIG. 7 shows a diagram illustrating the embodiment of the method of the invention.
  • a reference speech signal 301 and a test speech signal 302 are shown in the time domain.
  • the speech signals 301 and 302 are subdivided into smaller sections. In Figure 7 these sections are given by the respective speech signals 301 and 302 without the already matched signal parts 303 and 304.
  • Signal parts within the remaining sections of the reference speech signal 301 can only be matched with signal parts of the test speech signal 302 that occur in the corresponding section of the test speech signal 302, with the temporal locations of the already matched signal parts surrounding or limiting, respectively, the sections.
  • the signal part of the reference speech signal 301 between the matches 303 and 304 can only be matched with a corresponding signal part of the test speech signal 302, i.e. with a signal part of the test speech signal 302 that lies between the signal parts of the test speech signal 302 of the matches 303 and 304 in the time domain.
  • the embodiment of the method of the invention thus reduces the possibility of incorrect matching by subdividing the reference speech signal and the test speech signal into smaller sections, the sections being separated by already performed matches.
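  • The narrowing of the admissible search range by already performed matches can be sketched in a few lines; the representation of matches as (ref_start, test_start) pairs is an assumption.

```python
def admissible_range(ref_start, matched_pairs, test_len):
    """Restrict the test-signal search range for a reference part starting at
    `ref_start` to the section between the surrounding existing matches."""
    lo, hi = 0, test_len
    for r, t in matched_pairs:
        if r < ref_start:
            lo = max(lo, t)  # a match before: search only after its position
        else:
            hi = min(hi, t)  # a match after: search only before its position
    return lo, hi
```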
  • a performance measure (also called performance metric) is computed for each matched pair 303, 304 of signal parts.
  • the performance measure may for example be given by the maximum of the waveform cross-correlation of the matched signal parts of the reference speech signal and the test speech signal, the waveform cross-correlation being normalized by the signal powers of the respective signal parts.
  • a decision unit may be provided to assess the performance of each pair of matched signal parts by evaluating their associated performance measure. The decision unit evaluates whether the performance measure equals or exceeds a pre-set threshold value. If it does neither, the decision unit interprets this as the pair of matched signal parts having insufficient performance, i.e. as the match being poor.
  • the decision unit may also compare the performance measure for a particular pair of matched signal parts with the performance measures computed for other pairs of matched signal parts or with the average value of the performance measures computed for other pairs of matched signal parts, respectively. If the performance measure of the particular pair of matched signals is significantly lower than the performance measures (or the average of the performance measures) of the other pairs of matched signal parts, i.e. if the difference between the performance measure of the particular pair of matched signals and the performance measures (or the average of the performance measures) of the other pairs of matched signal parts exceeds a pre-defined threshold value, then the decision unit may assess the particular pair of matched signal parts as having insufficient performance.
  • the decision unit may reject the particular pair of matched signal parts and skip those signal parts, so that the signal parts may be used for later matching, i.e. may be re-matched later.
  • the matching is then preferably first continued for different signal parts, thus subdividing the reference speech signal and the test speech signal into smaller sections. Matching of the skipped signal parts is then preferably reattempted when the possibility of erroneous matching has been further reduced by further subdivision of the reference speech signal and the test speech signal, or when no other unmatched signal parts of the reference signal are left for matching.
  • distortions may occur due to interruptions of the speech signal or missed speech (missed parts of the speech signal) caused for example by a temporary dead spot within the telecommunication network.
  • Common approaches calculate the amount of distortion caused by such an interruption or loss of speech based on the signal intensity (e.g. the power, level or loudness) that is missing or decreased in the test speech signal when compared to the corresponding signal part(s) in the reference speech signal.
  • these common approaches do not take into account that a human listener who listens to the test speech signal has no knowledge of the reference speech signal as such and thus does not know how much signal intensity is actually missing.
  • the test speech signal is analysed shortly before and shortly after the location of an occurrence of an interruption or a speech loss, i.e. the analysing takes place at instances (i.e. signal parts) in the test speech signal that are known to a human listener. It is expected that low signal intensities (e.g. power, level or loudness) in these signal parts of the test speech signal lead to a relatively low perceived distortion for a human listener.
  • even if the actually missed or interrupted signal part was of higher signal intensity than the surrounding signal parts, it is assumed that a human listener does not perceive the interruption or speech loss as strongly, since he does not expect the signal intensity to be high, his expectation being based on the lower signal intensity of the surrounding signal parts in the test speech signal.
  • Figure 9 depicts an example of a reference speech signal 401 and a test speech signal 402 in the time domain, wherein a signal part 403 with high signal intensity is lost during transmission and thus missing in the test speech signal 402.
  • the corresponding signal part 404 in the test speech signal has in comparison extremely low signal intensity.
  • Figure 8 depicts a flow chart of the example of the method for estimating speech quality.
  • In a first step 30 of the example of the method for estimating speech quality, the signal intensities of the signal parts of the test speech signal that lie adjacent to the missed or interrupted signal part, i.e. that surround the interruption or speech loss, are computed.
  • In a second step 31, the expected signal intensity at the location of the interruption or speech loss in the test speech signal is computed. This expected signal intensity is derived from the signal intensities of the adjacent signal parts that have been computed in step 30.
  • the expected signal intensity at the interruption or speech loss may be derived from the computed signal intensities of the adjacent signal parts by means of interpolation, in particular by means of linear and/or spline interpolation.
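  • For the linear-interpolation variant, a minimal sketch over per-frame intensities follows; it assumes the gap does not touch the signal boundaries, and the frame-based layout is illustrative.

```python
import numpy as np

def expected_intensity(intensity, gap_start, gap_end):
    """Linearly interpolate the expected per-frame intensity across a gap
    from the frames adjacent to the interruption or speech loss."""
    x = [gap_start - 1, gap_end + 1]                     # adjacent frames
    y = [intensity[gap_start - 1], intensity[gap_end + 1]]
    return np.interp(np.arange(gap_start, gap_end + 1), x, y)
```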
  • In a step 32, a measure of the perceived distortion is computed by comparing a test speech signal, in which the interruption or loss has been replaced by the derived expected signal intensity of the missed or interrupted signal part, with the actual test speech signal.
  • in other words, the measure of the perceived distortion is computed by comparing the actual intensity of the at least one missed or interrupted signal part in the test speech signal with the derived expected intensity for the at least one missed or interrupted signal part.
  • the computed measure of the perceived distortion preferably lies in the range of 0 to 1.
  • In a step 33, a measure of the actual distortion is computed by comparing the reference speech signal with the actual test speech signal.
  • the order of steps 32 and 33 can be interchanged. Steps 32 and 33 may also be performed concurrently.
  • Finally, the computed measure of the perceived distortion is combined with the computed measure of the actual distortion, yielding a combined measure of distortion that may be used to assess the speech quality impairment caused by the interruption or speech loss.
  • the measure of the perceived distortion may be multiplied with the measure of the actual distortion to compute the combined measure of distortion.
  • the combined measure of distortion may be given by the measure of the actual distortion limited to the measure of the perceived distortion if the measure of the actual distortion exceeds the measure of the perceived distortion.
  • the combined measure of distortion may be given by the measure of the actual distortion exponentially weighted by the measure of the perceived distortion, i.e. the measure of the actual distortion raised to the power of the measure of the perceived distortion.
  • the combined measure of distortion may be given by the difference (computed through subtraction) between the measure of the perceived distortion and the measure of the actual distortion.
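  • The four combination variants listed above can be summarized in one small function; the reading of "exponentially weighted" as actual**perceived is an assumption, and `perceived` is taken to lie in the range 0 to 1 as stated above.

```python
def combine_distortion(perceived, actual, mode="multiply"):
    """Combine the perceived and the actual distortion measures."""
    if mode == "multiply":
        return actual * perceived
    if mode == "limit":      # cap the actual distortion at the perceived one
        return min(actual, perceived)
    if mode == "exponent":   # actual distortion exponentially weighted
        return actual ** perceived
    if mode == "subtract":
        return perceived - actual
    raise ValueError(f"unknown mode: {mode}")
```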

Landscapes

  • Engineering & Computer Science (AREA)
  • Computational Linguistics (AREA)
  • Signal Processing (AREA)
  • Health & Medical Sciences (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • Acoustics & Sound (AREA)
  • Multimedia (AREA)
  • Mobile Radio Communication Systems (AREA)

Claims (15)

  1. Method for estimating speech quality, wherein a reference speech signal (301) enters a telecommunication network, resulting in a test speech signal (302), the method comprising the following steps:
    - aligning the reference speech signal (301) and the test speech signal (302) by matching signal parts of the reference speech signal (301) with signal parts of the test speech signal (302), wherein matched signal parts (303, 304) are of similar length in the time domain and have similar intensity summed over their length, and
    - computing and comparing the speech spectra of the aligned reference speech signal (301) and the aligned test speech signal (302), resulting in a difference measure, the difference measure being indicative of the speech quality of the test speech signal,
    wherein, for matching the signal parts of the reference speech signal (301) with signal parts of the test speech signal (302), first the one or more signal parts of the reference speech signal (301) with the highest intensity summed over their length are matched with corresponding signal parts of the test speech signal (302), and the matching is then continued with signal parts of the reference speech signal (301) of decreasing intensity summed over their length.
  2. Method according to claim 1, wherein the reference speech signal (301) and the test speech signal (302) are each pre-filtered by means of a bandpass filter, in particular by means of a bandpass filter with a frequency range corresponding to the audio band.
  3. Method according to claim 1 or 2, wherein a performance measure is computed for each pair of matched signal parts (303, 304), the performance measure in particular being the maximum of the cross-correlation of the matched signal parts (303, 304) normalized by the signal powers of the matched signal parts (303, 304).
  4. Method according to claim 3, wherein a pair of matched signal parts is deemed to have insufficient performance if its performance measure lies below a pre-set threshold value.
  5. Method according to claim 3, wherein a pair of matched signal parts is deemed to have insufficient performance if its performance measure is significantly lower than the performance measures of other pairs of matched signal parts (303, 304).
  6. Method according to claim 4 or 5, wherein each signal part of a pair of matched signal parts deemed to have insufficient performance is matched again.
  7. Method according to any one of the preceding claims, wherein the reference speech signal (101) has a reference speech spectrum (X) and the test speech signal (102) has a test speech spectrum (Y), and wherein the method further comprises the following steps:
    - identifying a number of perceptually dominant sub-bands (bi) in the reference speech spectrum (X) or in the test speech spectrum (Y),
    - computing an intensity scaling factor (ci) for each identified sub-band (bi) by minimizing a measure of the intensity difference between those parts of the reference speech spectrum (X) and of the test speech spectrum (Y) that correspond to the respective sub-band,
    - multiplying the test speech spectrum (Y) with each intensity scaling factor (ci), thereby generating a number of scaled test speech spectra (Yi),
    - selecting one scaled test speech spectrum, and
    - computing the difference between the selected scaled test speech spectrum (Ysel) and the reference speech spectrum (X), the difference being indicative of the speech quality of the test speech signal (102).
  8. Method according to claim 7, wherein the measure of the intensity difference is given by the squared intensity difference or the global maximum of the intensity difference between those parts of the reference speech spectrum (X) and of the test speech spectrum (Y) that correspond to the respective sub-band (bi).
  9. Method according to claim 7 or 8, wherein the number of perceptually dominant sub-bands (bi) of the reference speech spectrum (X) and of the test speech spectrum (Y) is identified by determining the local maxima in a perceptual representation of the respective spectrum and by selecting a predetermined range of frequencies around each local maximum.
  10. Method according to claim 9, wherein the predetermined range of frequencies is determined by the local minima bordering the respective local maximum, the predetermined range of frequencies in particular being smaller than or equal to 4 Bark.
  11. Method according to claim 9 or 10, wherein the perceptual representation of the respective spectrum is obtained by transforming the respective spectrum into a loudness spectrum.
  12. Method according to any one of the preceding claims, wherein a measure of the intensity difference between the reference speech spectrum (X) and a respective scaled test speech spectrum (Yi) is computed for each scaled test speech spectrum (Yi), and wherein the scaled test speech spectrum (Yi) yielding the lowest measure of the intensity difference is selected.
  13. Method according to claim 12, wherein the measure of the intensity difference is given by the squared intensity difference or by the global maximum of the intensity difference between the reference speech spectrum (X) and a respective scaled test speech spectrum (Yi).
  14. Method according to any one of the preceding claims, further comprising the following steps:
    - for at least one missed or interrupted signal part in the test speech signal (402), computing the signal intensities of the signal parts adjacent to the missed or interrupted signal part,
    - deriving an expected signal intensity for the at least one missed or interrupted signal part from the computed signal intensities of the adjacent signal parts of the test speech signal (402),
    - computing a measure of the perceived distortion by comparing the actual intensity of the at least one missed or interrupted signal part in the test speech signal (402) with the derived expected intensity for the at least one missed or interrupted signal part,
    - computing a measure of the actual distortion by comparing the reference speech signal (401) with the test speech signal (402), and
    - combining the measure of the perceived distortion with the measure of the actual distortion to generate a combined measure of distortion that is indicative of the speech quality of the test speech signal (402).
  15. Method according to claim 14, wherein the expected signal intensity of the at least one missed or interrupted signal part is derived from the computed signal intensities of the adjacent signal parts of the test speech signal (402) by means of interpolation, in particular by means of linear interpolation and/or spline interpolation.
EP12000483.3A 2010-05-21 2010-05-21 Method for estimating speech quality Active EP2474975B1 (de)

Priority Applications (1)

Application Number Priority Date Filing Date Title
EP12000483.3A EP2474975B1 (de) 2010-05-21 2010-05-21 Method for estimating speech quality

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
EP10005327A EP2388779B1 (de) 2010-05-21 2010-05-21 Method for estimating speech quality
EP12000483.3A EP2474975B1 (de) 2010-05-21 2010-05-21 Method for estimating speech quality

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
EP10005327.1 Division 2010-05-21

Publications (2)

Publication Number Publication Date
EP2474975A1 EP2474975A1 (de) 2012-07-11
EP2474975B1 true EP2474975B1 (de) 2013-05-01

Family

ID=42938506

Family Applications (2)

Application Number Title Priority Date Filing Date
EP12000483.3A Active EP2474975B1 Method for estimating speech quality
EP10005327A Active EP2388779B1 Method for estimating speech quality

Family Applications After (1)

Application Number Title Priority Date Filing Date
EP10005327A Active EP2388779B1 Method for estimating speech quality

Country Status (1)

Country Link
EP (2) EP2474975B1 (de)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109903752B (zh) * 2018-05-28 2021-04-20 Huawei Technologies Co., Ltd. Method and device for aligning speech
JP7212925B2 (ja) * 2018-10-30 2023-01-26 Kyushu University (National University Corporation) Voice transmission environment evaluation system and sensory stimulation presentation device
CN113409820B (zh) * 2021-06-09 2022-03-15 Hefei Qunyin Information Service Co., Ltd. Quality evaluation method based on voice data

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
ATE376295T1 (de) * 1998-03-27 2007-11-15 Ascom Schweiz Ag Method and device for assessing transmission quality
DE19840548C2 (de) * 1998-08-27 2001-02-15 Deutsche Telekom Ag Method for instrumental speech quality determination
EP1104924A1 (de) * 1999-12-02 2001-06-06 Koninklijke KPN N.V. Determining the time relation between speech signals affected by time shift

Also Published As

Publication number Publication date
EP2388779A1 (de) 2011-11-23
EP2474975A1 (de) 2012-07-11
EP2388779B1 (de) 2013-02-20

Similar Documents

Publication Publication Date Title
DK2465113T3 (en) Method, computer program product and system for determining a perceived quality of a sound system
EP2465112B1 (de) Method, computer program product and system for determining the perceived quality of an audio system
CN106663450B (zh) Method and device for evaluating the quality of a degraded speech signal
EP2780909B1 (de) Method and device for evaluating the intelligibility of a noisy speech signal
CN104919525B (zh) Method and device for evaluating the intelligibility of a degraded speech signal
EP2410517B1 (de) Method and system for integral and diagnostic testing of the quality of heard speech
EP1465156A1 (de) Method and system for determining the quality of a speech signal
EP2474975B1 (de) Method for estimating speech quality

Legal Events

Date Code Title Description
PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

AC Divisional application: reference to earlier application

Ref document number: 2388779

Country of ref document: EP

Kind code of ref document: P

AK Designated contracting states

Kind code of ref document: A1

Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO SE SI SK SM TR

AX Request for extension of the european patent

Extension state: BA ME RS

17P Request for examination filed

Effective date: 20120705

GRAP Despatch of communication of intention to grant a patent

Free format text: ORIGINAL CODE: EPIDOSNIGR1

GRAS Grant fee paid

Free format text: ORIGINAL CODE: EPIDOSNIGR3

GRAA (expected) grant

Free format text: ORIGINAL CODE: 0009210

AC Divisional application: reference to earlier application

Ref document number: 2388779

Country of ref document: EP

Kind code of ref document: P

AK Designated contracting states

Kind code of ref document: B1

Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO SE SI SK SM TR

REG Reference to a national code

Ref country code: GB

Ref legal event code: FG4D

REG Reference to a national code

Ref country code: AT

Ref legal event code: REF

Ref document number: 610368

Country of ref document: AT

Kind code of ref document: T

Effective date: 20130515

Ref country code: CH

Ref legal event code: NV

Representative=s name: E. BLUM AND CO. AG PATENT- UND MARKENANWAELTE , CH

Ref country code: CH

Ref legal event code: EP

REG Reference to a national code

Ref country code: IE

Ref legal event code: FG4D

REG Reference to a national code

Ref country code: DE

Ref legal event code: R096

Ref document number: 602010006859

Country of ref document: DE

Effective date: 20130627

REG Reference to a national code

Ref country code: SE

Ref legal event code: TRGR

RIN2 Information on inventor provided after grant (corrected)

Inventor name: ULLMANN, RAPHAEL

REG Reference to a national code

Ref country code: AT

Ref legal event code: MK05

Ref document number: 610368

Country of ref document: AT

Kind code of ref document: T

Effective date: 20130501

REG Reference to a national code

Ref country code: NL

Ref legal event code: VDEP

Effective date: 20130501

REG Reference to a national code

Ref country code: LT

Ref legal event code: MG4D

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: AT

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20130501

Ref country code: GR

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20130802

Ref country code: NO

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20130801

Ref country code: SI

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20130501

Ref country code: IS

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20130901

Ref country code: ES

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20130812

Ref country code: LT

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20130501

Ref country code: PT

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20130902

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: PL

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20130501

Ref country code: CY

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20130501

Ref country code: HR

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20130501

Ref country code: BG

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20130801

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: LV

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20130501

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: EE

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20130501

Ref country code: BE

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20130501

Ref country code: MC

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20130501

Ref country code: SK

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20130501

Ref country code: CZ

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20130501

Ref country code: DK

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20130501

REG Reference to a national code

Ref country code: IE

Ref legal event code: MM4A

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: RO

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20130501

Ref country code: IT

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20130501

Ref country code: NL

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20130501

PLBE No opposition filed within time limit

Free format text: ORIGINAL CODE: 0009261

REG Reference to a national code

Ref country code: FR

Ref legal event code: ST

Effective date: 20140131

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: NO OPPOSITION FILED WITHIN TIME LIMIT

26N No opposition filed

Effective date: 20140204

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: IE

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20130521

REG Reference to a national code

Ref country code: DE

Ref legal event code: R097

Ref document number: 602010006859

Country of ref document: DE

Effective date: 20140204

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: FR

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20130701

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: MT

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20130501

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: SM

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20130501

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: TR

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20130501

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: MK

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20130501

Ref country code: HU

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT; INVALID AB INITIO

Effective date: 20100521

Ref country code: LU

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20130521

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: AL

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20130501

P01 Opt-out of the competence of the unified patent court (upc) registered

Effective date: 20230419

PGFP Annual fee paid to national office [announced via postgrant information from national office to epo]

Ref country code: DE

Payment date: 20230519

Year of fee payment: 14

Ref country code: CH

Payment date: 20230602

Year of fee payment: 14

PGFP Annual fee paid to national office [announced via postgrant information from national office to epo]

Ref country code: SE

Payment date: 20230519

Year of fee payment: 14

Ref country code: FI

Payment date: 20230523

Year of fee payment: 14

PGFP Annual fee paid to national office [announced via postgrant information from national office to epo]

Ref country code: GB

Payment date: 20230524

Year of fee payment: 14