WO2006033570A1 - Frequency compensation for perceptual speech analysis - Google Patents
Frequency compensation for perceptual speech analysis
- Publication number
- WO2006033570A1, PCT/NL2005/000683
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- density function
- loudness
- frequency
- input
- output
- Prior art date
Links
- 230000006870 function Effects 0.000 claims abstract description 180
- 230000004044 response Effects 0.000 claims abstract description 44
- 230000005540 biological transmission Effects 0.000 claims abstract description 24
- 238000000034 method Methods 0.000 claims description 43
- 238000001228 spectrum Methods 0.000 claims description 36
- 230000003595 spectral effect Effects 0.000 claims description 22
- 238000012935 Averaging Methods 0.000 claims description 21
- 238000012545 processing Methods 0.000 claims description 15
- 238000013507 mapping Methods 0.000 claims description 9
- 230000008447 perception Effects 0.000 claims description 9
- 230000001131 transforming effect Effects 0.000 claims description 9
- 238000005303 weighing Methods 0.000 claims description 4
- 230000010354 integration Effects 0.000 abstract description 5
- 238000011156 evaluation Methods 0.000 abstract description 3
- 238000012360 testing method Methods 0.000 description 8
- 238000010586 diagram Methods 0.000 description 5
- 239000012634 fragment Substances 0.000 description 4
- 238000005259 measurement Methods 0.000 description 4
- 238000004364 calculation method Methods 0.000 description 3
- 230000006872 improvement Effects 0.000 description 3
- 230000015556 catabolic process Effects 0.000 description 2
- 230000006835 compression Effects 0.000 description 2
- 238000007906 compression Methods 0.000 description 2
- 238000006731 degradation reaction Methods 0.000 description 2
- 230000001419 dependent effect Effects 0.000 description 2
- 238000009499 grossing Methods 0.000 description 2
- 230000002776 aggregation Effects 0.000 description 1
- 238000004220 aggregation Methods 0.000 description 1
- 238000013459 approach Methods 0.000 description 1
- 230000008901 benefit Effects 0.000 description 1
- 238000006243 chemical reaction Methods 0.000 description 1
- 238000004891 communication Methods 0.000 description 1
- 230000000694 effects Effects 0.000 description 1
- 238000001914 filtration Methods 0.000 description 1
- 230000007774 longterm Effects 0.000 description 1
- 238000007781 pre-processing Methods 0.000 description 1
- 238000001303 quality assessment method Methods 0.000 description 1
- 238000011002 quantification Methods 0.000 description 1
- 238000005070 sampling Methods 0.000 description 1
Classifications
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L25/00—Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00
- G10L25/48—Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 specially adapted for particular use
- G10L25/69—Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 specially adapted for particular use for evaluating synthetic or decoded voice signals
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04M—TELEPHONIC COMMUNICATION
- H04M3/00—Automatic or semi-automatic exchanges
- H04M3/22—Arrangements for supervision, monitoring or testing
- H04M3/2236—Quality of speech transmission monitoring
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L19/00—Speech or audio signals analysis-synthesis techniques for redundancy reduction, e.g. in vocoders; Coding or decoding of speech or audio signals, using source filter models or psychoacoustic analysis
- G10L19/04—Speech or audio signals analysis-synthesis techniques for redundancy reduction, e.g. in vocoders; Coding or decoding of speech or audio signals, using source filter models or psychoacoustic analysis using predictive techniques
- G10L19/08—Determination or coding of the excitation function; Determination or coding of the long-term prediction parameters
- G10L19/09—Long term prediction, i.e. removing periodical redundancies, e.g. by using adaptive codebook or pitch predictor
Definitions
- the invention relates to a method for establishing a frequency compensated input pitch power density function of a time framed input signal for application to an audio transmission system having an input and an output, and the output of which yields a time framed output signal.
- the invention also relates to a processing system for establishing a frequency compensated input pitch power density function.
- the invention also relates to a computer readable medium comprising computer executable software code.
- the method and system to which the invention relates may be used for example as part of a method or system for analysing the perceived quality of an audio transmission system.
- Such method and system for analysing a perceptual quality measure for the impact of linear frequency distortion are known from a previously published European patent application no
- EP1343145 and are also disclosed in references [1] ... [8].
- the disclosed system and method and its predecessors provide for perceptual speech evaluation as part of ITU-T recommendation P.862 (further referred to as P.862), whereby a single overall measure for the perceived quality of a degraded output signal with respect to an input signal is obtained.
- the disclosed method and system are based on the insight that speech and audio quality measurement should be carried out in the perceptual domain (see fig. 1). This goal is achieved by comparing a reference speech signal X n , that is applied to the system under test (1), with its degraded output signal Y n . By establishing the internal perceptual representations of these signals (0.1), (0.2) and comparing (0.3) them, an estimate can be made about the perceived quality by mapping (0.4) the result to a perceived quality scale, yielding a perceived quality measure PESQ.
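- a minimal sketch of this comparison loop is given below, purely for illustration; the helper functions to_internal_representation, compare and map_to_quality_scale are hypothetical stand-ins for steps 0.1 to 0.4 of fig. 1 and do not reproduce the actual P.862 processing.

```python
def perceived_quality(x_frames, y_frames,
                      to_internal_representation, compare, map_to_quality_scale):
    """Illustrative outline of fig. 1: transform reference and degraded signals
    to internal perceptual representations (0.1, 0.2), compare them (0.3) and
    map the comparison result to a perceived quality scale (0.4)."""
    rx = [to_internal_representation(frame) for frame in x_frames]  # step 0.1
    ry = [to_internal_representation(frame) for frame in y_frames]  # step 0.2
    difference = compare(rx, ry)                                    # step 0.3
    return map_to_quality_scale(difference)                         # step 0.4 -> PESQ-like score
```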
- a perceived quality scale, also known as a mean opinion scale (MOS), is established in the prior art by empirical estimation: persons are asked to judge the quality of degraded or distorted speech fragments.
- the scores are then matched to the actual distortion and laid down in a perceptual scale.
- This scale can be used to predict the perceptual score depending on the distortion present in a signal.
- Currently available processing systems for determining the perceived quality of an audio transmission system, including P.862, suffer from the fact that a single number is output that represents the overall quality. This makes it impossible to find underlying causes for the perceived degradations.
- Classical measurements like signal to noise ratio, frequency response distortion, total harmonic distortion, etc. pre-suppose a certain type of degradation and then quantify this by performing a certain type of quality measurement. This classical approach finds one or more underlying causes for bad performance of the system under test but is not able to quantify the impact of the linear frequency response distortion in relation to the other types of distortion with regard to the overall perceived quality.
- the above methods utilise frequency compensation of an input power density function, derived from the input signal, in order to account for the fact that linear frequency response distortions have less impact on the perceived speech quality than non-linear distortions.
- the known methods of frequency compensation fail because they use either a hard clipping function or a modified clipping function, neither of which allows the impact of linear frequency response distortions on the perceived speech quality to be quantified in a perceptually correct manner.
- the object of the invention can be achieved in a first aspect of the invention, by a method for frequency compensating an input pitch power density function of an audio transmission system having an input and an output, and to which input a time framed input signal is applied and the output of which yields a time framed output signal, wherein the method may comprise the steps of:
- the step of frequency compensating the input pitch power density function comprising a softscaling function using a power compression exponent in the range of 0.5 and an offset in the range of 4*10^5.
- Pitch power density functions and soft-scaling per se are known from the prior art.
- With this compression function the overall impact of linear frequency response distortions can be quantified, so as to obtain a global score for the overall quality that includes a correct quantification of the linear frequency response distortions.
- This single quality number may be calculated for example in the same manner as carried out in P.862 [3], i.e. for each time frame two different disturbances are calculated from a frequency integration of the loudness difference function. The final quality number is then derived from two different time integrations.
- the improvement provides a better correlation between objective speech quality measurements and subjective speech quality assessments, especially for speech transmission systems where linear frequency response distortions dominate the overall speech quality (e.g. systems that only carry out a bandwidth limitation).
- embodiments can provide for a method or system for determining the perceived quality of an audio transmission system which gives accurate results with respect to linear frequency distortion, like P.862, and for a method or system that allows a single output value to be obtained that is representative of the perceived distortion, including linear frequency distortions.
- the method as such obtains a single quality measure for the linear frequency distortion, based upon the difference in the loudness spectrum.
- This measure however still requires mapping to a perceptual quality measure, which is achieved in the following embodiment according to the first aspect of the invention, further comprising the steps of: - establishing a roughness measure of the difference averaged loudness spectrum based on the absolute difference of consecutive frequency bin values.
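- as an illustration, a minimal sketch of such a roughness measure is given below; summing the absolute differences of consecutive frequency bin values is an assumed form, since the exact normalisation is not specified here, and the function name is hypothetical.

```python
import numpy as np

def roughness_measure(dals):
    """Roughness measure RM of a difference averaged loudness spectrum DALS(f),
    taken here as the sum of absolute differences between consecutive
    frequency (Bark) bin values."""
    dals = np.asarray(dals, dtype=float)
    return float(np.sum(np.abs(np.diff(dals))))
```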
- Another embodiment according to the first aspect of the invention, wherein the step of processing the time framed input signal further comprises frequency compensating an input pitch power density function with respect to an ideal spectrum, has the advantage that it compensates errors in the recording technique, which often lead to unbalanced spectral power densities, in most cases an over-emphasis of the lower frequencies (below 500 Hz).
- This step is applied on the input pitch power densities as obtained by Hanning windowing, FFT and frequency warping of the input signal according to reference [1].
- the first frequency compensation function is expressed in terms of Bark bin values and is derived from averaging over at least two neighboring Bark bin values of the input and output pitch power density functions.
- the second frequency compensation function is also expressed in terms of Bark bin values and is derived from averaging over at least two neighboring Bark bin values of the input and output pitch power density functions.
- the averaging in the frequency compensation function calculation smoothes local peaks in the frequency compensation function which are less audible than would have been predicted from a direct calculation, without the smoothing.
- the object of the invention is further achieved in a second aspect of the invention by a processing system for measuring the transmission quality of an audio transmission system comprising: - a processor,
- wherein the processor is arranged for executing the steps of the method according to the first aspect of the invention.
- the object of the invention is further achieved in a third aspect according to the invention by a software program storage means comprising computer executable software code, which when loaded on a computer system, enables the computer system to execute the steps of the method according to the first aspect of the invention.
- Fig 1 shows a general diagram of a method for determining the perceived quality of an audio transmission system according to the state of the art.
- Fig 2 shows a diagram representing a method for determining the perceived quality of an audio transmission system according to ITU-T recommendation P.862 according to the state of the art.
- Fig 3 shows a diagram representing a method for determining the perceived quality of an audio transmission system according to a preferred embodiment of the invention.
- Fig 4 shows an improvement according to a first embodiment of the invention.
- Fig 5 shows a further improvement according to a second embodiment of the invention.
- Step 1 represents the conversion of an input signal X n to an output signal Y n by a system or a device under test 1, whereby the in- and output signals are represented by discrete time frames 1 .. n, wherein X n represents a reference signal and Y n represents the distorted response of the system under test 1 on X n .
- the frames may be 32 ms of duration, according to current PESQ embodiments. For the invention the frame duration may either be less than 32 ms or much longer. Durations covering a complete speech fragment, in the order of minutes, may also be feasible.
- the device or system under test may be a telecom network, a telecom terminal, e.g. a telephone, or any device or system for processing audio.
- the input signal may be a speech fragment, but application of the embodiments of the invention are not limited to speech.
- Step 2.1 and 2.4 represent the time windowing of the input signal X n frames and output signal Y n frames respectively, using a Hanning window.
- Steps 2.2 and 2.5 represent the discrete Fourier transforming frame by frame of the input and output signals respectively.
- Steps 2.3 and 2.6 represent the warping of the Fourier transformed in- and output signal into so-called Bark bands, thus obtaining the pitch power density functions in discrete frequency bands for the input signal and for the output signal, PPX(f) n and PPY(f) n respectively.
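- as a rough illustration of steps 2.1 to 2.6, the sketch below computes a pitch power density from one time frame; the Bark band edges are passed in as an assumption, since the exact Bark warping used by P.862 is not reproduced here.

```python
import numpy as np

def pitch_power_density(frame, sample_rate, bark_band_edges_hz):
    """Hann window a time frame, take the FFT power spectrum and warp the FFT
    bins into Bark bands, yielding PPX(f)_n or PPY(f)_n for that frame."""
    windowed = frame * np.hanning(len(frame))                 # steps 2.1 / 2.4
    power = np.abs(np.fft.rfft(windowed)) ** 2                # steps 2.2 / 2.5
    freqs = np.fft.rfftfreq(len(frame), d=1.0 / sample_rate)
    ppd = np.zeros(len(bark_band_edges_hz) - 1)               # steps 2.3 / 2.6
    for band in range(len(ppd)):
        lo, hi = bark_band_edges_hz[band], bark_band_edges_hz[band + 1]
        ppd[band] = power[(freqs >= lo) & (freqs < hi)].sum()
    return ppd
```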
- Step 2.7 represents calculating a linear frequency compensation, which is used to weigh in step 2.8 the input pitch power density function PPX(f) n to obtain a frequency compensated input pitch power density function PPX' (f) n .
- the input pitch power density function PPX(f)n is to be frequency compensated for the filtering that takes place in the audio transmission system under test 1.
- the amount of compensation determines the contribution of linear frequency distortion in the ultimate PESQ value.
- the frequency compensation as disclosed in the state of the art uses an estimation of the linear frequency response of the system under test based on all frames for which the input reference signal is larger than a silence criterion value (speech active frames, PPX(f)n > 10^7, frames louder than about 70 dB SPL for P.862 when used with playback levels that are correctly set).
- the frequency response compensation in P.862 is carried out on the input pitch power density function PPX(f)n per frame. All power density functions and offsets in this description are scaled towards the ITU-T P.862 standard for power functions.
- a frequency response compensation function H(f) is calculated by averaging PPX(f)n and PPY(f)n, the outputs of 2.3 and 2.6 respectively, over time index n (plain power averaging), resulting in averaged pitch power density functions APPX and APPY (used in 2.7), from which a first frequency compensated function PPX'(f)n at the output of 2.8 is calculated by multiplication.
- the aim is to fully compensate for small, inaudible frequency response distortions, i.e. all deviations less than a prefixed amount of decibels are fully compensated.
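- the sketch below illustrates this prior-art compensation under two assumptions: the silence criterion is applied to the summed frame power, and the compensation is taken as the plain ratio APPY(f)/APPX(f); the bounding of small, inaudible deviations that P.862 applies on top of this is omitted.

```python
import numpy as np

def prior_art_frequency_compensation(ppx, ppy, silence_threshold=1e7):
    """Steps 2.7 / 2.8 in outline: average PPX(f)_n and PPY(f)_n over the
    speech-active frames, estimate the frequency response of the system under
    test and weigh the input pitch power density with it.
    `ppx` and `ppy` have shape (n_frames, n_bark_bands)."""
    ppx, ppy = np.asarray(ppx, dtype=float), np.asarray(ppy, dtype=float)
    active = ppx.sum(axis=1) > silence_threshold   # speech-active frames (assumption)
    appx = ppx[active].mean(axis=0)                # APPX(f)
    appy = ppy[active].mean(axis=0)                # APPY(f)
    h = appy / appx                                # estimated response (no bounding here)
    return ppx * h                                 # PPX'(f)_n for every frame
```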
- Step 2.9 represents calculating a local scaling function Sn for compensating the output pitch power density function for short-term gain variations, whereby the last local scaling function Sn-1 is stored in 2.10 for use in the next frame.
- the compensation is effected by multiplying in 2.11 the local scaling function S n with the output pitch power density function PPY(f) n , resulting in a locally scaled output pitch power density function PPY(f)n.
- the input and output pitch power density functions PPX'(f) n and PPY(f) n are transformed to a loudness scale in steps 2.12 and 2.13 in accordance with the Sone loudness scale using Zwicker's algorithm, resulting in input and output loudness density functions LX(f) n and LY(f) n respectively.
- the input and output loudness density functions LX(f) n and LY(f) n are thus representations of the loudness of the input and output signals in a perceptual frequency domain.
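- a hedged sketch of this loudness transform is given below; the Zwicker-style compressive law and the parameters p0 (absolute hearing threshold per Bark band), sl and the exponent 0.23 are assumptions standing in for the exact P.862 formulation, which is not reproduced here.

```python
import numpy as np

def loudness_density(ppd, p0, sl=1.0, gamma=0.23):
    """Map a (compensated) pitch power density to a Sone-based loudness density
    LX(f)_n / LY(f)_n using a Zwicker-style power law (steps 2.12 / 2.13)."""
    ppd = np.asarray(ppd, dtype=float)
    p0 = np.asarray(p0, dtype=float)               # hearing threshold per Bark band
    loud = sl * (p0 / 0.5) ** gamma * ((0.5 + 0.5 * ppd / p0) ** gamma - 1.0)
    return np.maximum(loud, 0.0)                   # clip loudness below threshold to zero
```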
- in step 2.14 the input and output loudness density functions LX(f)n and LY(f)n are subtracted, resulting in a difference loudness density function D(f)n from which a perceived quality measure can be derived.
- the difference loudness density function D(f)n is transformed into an asymmetric disturbance measure DA, which can be used as a perceived quality measure.
- in steps 2.18 and 2.19 the difference loudness density function D(f)n is transformed into a disturbance measure Dn, by frequency integration and by emphasizing silent parts respectively, but without asymmetry.
- in step 2.20 the disturbance measure D and the asymmetrical disturbance measure DA are combined into a single PESQ score denoting a perceptive quality estimate for the audio transmission system 1.
- Fig 3 discloses a method for measuring the transmission quality of an audio transmission system according to the invention, wherein the known steps 2.1 .. 2.11 are used to establish a frequency compensated input pitch power density function PPX'(f)n and wherein step 2.13 is used to establish a loudness density function LY(f)n.
- a new first frequency compensating function H1(f) is calculated.
- H1(f) is a power based softscaling function with offset [6], using the time averaged input and output pitch power density functions APPX(f) and APPY(f):
- H1(f) = ((APPY(f) + OFFSET) / (APPX(f) + OFFSET))^q(f), with q(f) in the range of 0.0 - 1.0 and OFFSET in the range of 10^4 - 10^6.
- preferably q(f) is in the range of 0.5 and OFFSET is in the range of 4*10^5.
- a first frequency compensated input pitch power density function PPX'(f)n is calculated in 2.8 by multiplying the input pitch power density function PPX(f)n with the first frequency compensating function H1(f).
- in step 3.10 a second frequency compensation function H2(f) is calculated, similarly to step 2.7, over the same set of speech active frames, using a power based softscaling function with offset but now with a higher offset:
- H2(f) = ((APPY(f) + OFFSETLARGE) / (APPX(f) + OFFSETLARGE))^q(f), wherein q(f) is in the range of 0.4 and OFFSETLARGE is in the range of 5*10^6.
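- a minimal sketch of this soft-scaling compensation is given below; it assumes a scalar compression exponent q, whereas the text allows a frequency dependent q(f), and the default parameter values are the preferred values quoted above.

```python
import numpy as np

def softscale_compensation(appx, appy, q=0.5, offset=4e5):
    """Power-based soft-scaling function with offset: the ratio of the
    offset-augmented time-averaged output and input pitch power densities,
    raised to the compression exponent q. With q about 0.5 and OFFSET about
    4*10^5 this gives H1(f); with q about 0.4 and OFFSETLARGE about 5*10^6
    it gives H2(f)."""
    appx = np.asarray(appx, dtype=float)
    appy = np.asarray(appy, dtype=float)
    return ((appy + offset) / (appx + offset)) ** q

# Usage sketch (steps 2.8 and 3.11): PPX'(f)_n = H1(f) * PPX(f)_n and
# PPX''(f)_n = H2(f) * PPX(f)_n.
# h1 = softscale_compensation(appx, appy, q=0.5, offset=4e5)
# h2 = softscale_compensation(appx, appy, q=0.4, offset=5e6)
```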
- the secondary frequency compensation function H2(f) is used to multiply in step 3.11 the input pitch power density function PPX(f)n, resulting in a secondary compensated pitch power density function PPX"(f)n.
- the first and second frequency compensation functions H1(f) and H2(f) are not directly calculated from the APPX(f) and APPY(f) functions, but from a smoothed version of these functions.
- for the first and last bins (0 and fMAX) the averaging is carried out over bins 0, 1 and fMAX, fMAX-1 respectively.
- for the second and second last bins (1 and fMAX-1) the averaging is carried out over bins 0, 1, 2 and fMAX, fMAX-1, fMAX-2 respectively.
- this averaging is repeated up to a lower index of 10 and down to a higher index of fMAX-4. Between the indices 10 and fMAX-4 the averaging is carried out over five bins, from two to the left up to two to the right of the index value.
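- the sketch below gives one possible reading of this edge-aware smoothing: a symmetric window of up to five bins that shrinks near the spectrum edges, with the outermost bins averaged with their single neighbour; the exact index boundaries (10 and fMAX-4) quoted above are not reproduced.

```python
import numpy as np

def smooth_bark_bins(values):
    """Smooth a per-Bark-bin function (e.g. APPX(f) or APPY(f)) by averaging
    each bin with up to two neighbours on each side, using smaller windows
    near the edges of the spectrum."""
    values = np.asarray(values, dtype=float)
    n = len(values)
    out = np.empty_like(values)
    for i in range(n):
        half = min(2, i, n - 1 - i)              # symmetric reach, shrinking at the edges
        if half == 0:                            # first / last bin: average with one neighbour
            lo, hi = (0, 2) if i == 0 else (n - 2, n)
        else:
            lo, hi = i - half, i + half + 1
        out[i] = values[lo:hi].mean()
    return out
```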
- in step 3.14, similar to step 2.12, the secondary compensated pitch power density function PPX"(f)n is transformed to an input loudness density function LX'(f)n containing less linear frequency response distortion compensation than is used within the loudness calculation according to the invention.
- the parameters q(f) and OFFSETLARGE in this step 3.10, 3.11 are to be tuned for optimum results in a linear frequency distortion quality measure.
- the new input loudness density function LX'(f)n and the P.862-like output loudness density function LY(f)n are then used to calculate the averaged loudness density functions ALSX(f) and ALSY(f) by averaging, in steps 3.4 and 3.5, the spectral loudness density functions LX'(f)n and LY(f)n.
- this averaging is performed only over the time frames for which both the input and output power per frame are larger than a silence criterion value, preferably PPX(f)n and PPY(f)n > 10^7, determined in step 3.1 and effected in steps 3.2 and 3.3.
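- a short sketch of this silence-gated time averaging is given below; applying the criterion to the power summed over the Bark bands of each frame is an assumption of the sketch.

```python
import numpy as np

def averaged_loudness_spectra(lx, ly, ppx, ppy, silence_threshold=1e7):
    """Steps 3.1 to 3.5 in outline: average LX'(f)_n and LY(f)_n over the time
    frames in which both the input and the output power exceed the silence
    criterion, yielding ALSX(f) and ALSY(f).
    All arrays have shape (n_frames, n_bark_bands)."""
    lx, ly = np.asarray(lx, dtype=float), np.asarray(ly, dtype=float)
    active = (np.asarray(ppx).sum(axis=1) > silence_threshold) & \
             (np.asarray(ppy).sum(axis=1) > silence_threshold)
    return lx[active].mean(axis=0), ly[active].mean(axis=0)   # ALSX(f), ALSY(f)
```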
- a difference averaged loudness function DALS(f) is defined as the difference between the averaged loudness densities ALSX(f) and ALSY(f).
- this difference averaged loudness function is then integrated over the frequency axis, again using a Lebesgue (Lp) integration, but now over the individual frequency band differences, using p < 1.0 (p preferably in the range of 0.2 to 0.4) for the loudness in each Bark frequency band.
- f denotes a frequency band in the difference averaged loudness spectrum.
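- the sketch below shows this Lp aggregation over the Bark bands; equal band weights are an assumption, and the result is identified with the loudness frequency response distortion measure LSDM used in step 3.13 below.

```python
import numpy as np

def lsdm(dals, p=0.3):
    """Integrate the difference averaged loudness function DALS(f) over the
    frequency axis with an Lp norm over the individual Bark band differences,
    using p < 1.0 (preferably 0.2 to 0.4)."""
    dals = np.asarray(dals, dtype=float)
    return float(np.sum(np.abs(dals) ** p) ** (1.0 / p))
```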
- the roughness number RM can be combined in step 3.13 with the loudness frequency response distortion measure LSDM by means of multiplication, the result of which is mapped to a Mean Opinion Score table, resulting in a single frequency response impact quality measure FRIQM.
- Fig. 4 shows an embodiment according to the invention wherein, in step 4.1, the difference function DALS(f) is split into a positive part DALS+(f) (input > output) and a negative part DALS-(f).
- in steps 4.2 and 4.3 both parts, DALS+(f) and DALS-(f) respectively, are then integrated according to Lebesgue over the frequency axis, again using the Lp norm, but now over the individual frequency band differences, using p < 1.0 with 0.1 < p < 0.5 for the loudness in each Bark frequency band. This results in a positive and a negative frequency response distortion number, LSDM+ and LSDM-.
- the two linear frequency domain impact numbers FRIQM+ and FRIQM- are calculated from the positive and negative frequency response distortion numbers LSDM+ and LSDM- by multiplying them with the roughness number RM and then mapping them in step 4.6 to a MOS (Mean Opinion Score) like scale for quantifying the impact of the linear frequency response distortion.
- the LSDM+ and LSDM- can of course also be combined in a fashion similar to the frequency response impact quality measures FRIQM+ and FRIQM-, after which a mapping to an MOS can occur to yield a single frequency response impact quality measure FRIQM. Furthermore the multiplication with the roughness measure can also be performed on LSDM alone in this embodiment.
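- the sketch below ties steps 4.1 to 4.6 together; the clip-based split, the equal band weights and the mos_mapping placeholder for the (unspecified) mapping of step 4.6 are assumptions.

```python
import numpy as np

def frequency_response_impact(dals, roughness, p=0.3, mos_mapping=None):
    """Split DALS(f) into positive (input > output) and negative parts,
    Lp-integrate each over the Bark bands (0.1 < p < 0.5), multiply by the
    roughness number RM and optionally map to a MOS-like scale, yielding the
    impact numbers FRIQM+ and FRIQM-."""
    dals = np.asarray(dals, dtype=float)
    dals_pos = np.clip(dals, 0.0, None)            # DALS+(f)
    dals_neg = np.clip(-dals, 0.0, None)           # DALS-(f)
    lsdm_pos = float(np.sum(dals_pos ** p) ** (1.0 / p))
    lsdm_neg = float(np.sum(dals_neg ** p) ** (1.0 / p))
    friqm_pos, friqm_neg = roughness * lsdm_pos, roughness * lsdm_neg
    if mos_mapping is not None:
        friqm_pos, friqm_neg = mos_mapping(friqm_pos), mos_mapping(friqm_neg)
    return friqm_pos, friqm_neg
```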
- the input pitch power density function is frequency compensated, using a Lebesgue Lp scaling with 0.3 < p < 0.6, towards an ideal spectral power density Ideal(f) of a speech signal.
- the input pitch power density function is calculated from the input reference speech signal by calculating the average power in each frequency Bark band over the complete speech fragment for which the quality of the distorted signal has to be calculated.
- the ideal spectral power density function Ideal(f) is defined on the basis of averaging of the long-term average spectral power density of many male and female voices which are recorded with a flat frequency response microphone. In each bark band as used in PESQ a density number is constructed on the basis of this ideal density function.
- This partial scaling towards an ideal spectral power density function Ideal(f) compensates errors in the recording technique. Recording techniques often lead to unbalanced spectral power densities, in most cases an over-emphasis of the lower frequencies (below 500 Hz). From the ideal and input spectrum smoothed versions of the ideal spectral power density function Ideal(f) and input pitch power density function PPX(f) n are calculated in step 5.1 by averaging over a number of consecutive frequency bands. From these smoothed versions compensation factors S(f) can be calculated for each bark band defined as the ratio of the powers "ideal/reference".
- in step 5.2 the input pitch power density function PPX(f)n is multiplied with S(f)^p, with 0.3 < p < 0.8, to obtain an (idealized) input pitch power density function PPXI(f)n which can be used for further evaluation according to the invention instead of the input pitch power density function PPX(f)n.
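- a minimal sketch of this partial scaling towards the ideal spectrum is given below; the smoothing function is passed in because the number of consecutive bands to average over is not specified here, and averaging the input power over the whole fragment follows the description above.

```python
import numpy as np

def idealize_input_spectrum(ppx, ideal, p=0.5, smooth=None):
    """Steps 5.1 / 5.2 in outline: derive per-Bark-band compensation factors
    S(f) = Ideal(f) / APPX(f) from (optionally smoothed) spectra and partially
    scale the input pitch power density towards the ideal spectrum with S(f)^p,
    0.3 < p < 0.8, yielding PPXI(f)_n."""
    ppx = np.asarray(ppx, dtype=float)             # shape (n_frames, n_bark_bands)
    ideal = np.asarray(ideal, dtype=float)         # Ideal(f) per Bark band
    appx = ppx.mean(axis=0)                        # average power per band over the fragment
    if smooth is not None:
        appx, ideal = smooth(appx), smooth(ideal)
    s = ideal / appx                               # "ideal / reference" compensation factors
    return ppx * s ** p                            # PPXI(f)_n for every frame
```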
- the invention can be embodied in a computer system comprising a processor, memory and an input and an output.
- the input may be a reading device like an analog input capable of sampling a reference input signal and a degraded output signal coming from an audio transmission system under test.
- the sampled signals can be stored in a memory, for example a fixed disk, and put into frames, by selecting rows of samples.
- the processor can then proceed and perform the steps as described above.
- a result, for example the linear frequency impact quality measure, can be output to a display, or to a communication port, or stored in the memory for future reference.
Landscapes
- Engineering & Computer Science (AREA)
- Health & Medical Sciences (AREA)
- Audiology, Speech & Language Pathology (AREA)
- Signal Processing (AREA)
- Multimedia (AREA)
- Human Computer Interaction (AREA)
- Physics & Mathematics (AREA)
- Acoustics & Sound (AREA)
- Computational Linguistics (AREA)
- Measurement Of Mechanical Vibrations Or Ultrasonic Waves (AREA)
- Circuit For Audible Band Transducer (AREA)
- Compression, Expansion, Code Conversion, And Decoders (AREA)
- Measuring Pulse, Heart Rate, Blood Pressure Or Blood Flow (AREA)
- Measuring Instrument Details And Bridges, And Automatic Balancing Devices (AREA)
- Measurement And Recording Of Electrical Phenomena And Electrical Characteristics Of The Living Body (AREA)
- Machine Translation (AREA)
Abstract
Description
Claims
Priority Applications (8)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
EP05787331A EP1792304B1 (en) | 2004-09-20 | 2005-09-20 | Frequency compensation for perceptual speech analysis |
JP2007532270A JP4879180B2 (en) | 2004-09-20 | 2005-09-20 | Frequency compensation for perceptual speech analysis |
CA2580763A CA2580763C (en) | 2004-09-20 | 2005-09-20 | Frequency compensation for perceptual speech analysis |
CN2005800377134A CN101053016B (en) | 2004-09-20 | 2005-09-20 | Method and system for constructing a first frequency compensation input spacing power density function |
DE602005009221T DE602005009221D1 (en) | 2004-09-20 | 2005-09-20 | FREQUENCY COMPENSATION FOR PERCEPTIONAL LANGUAGE ANALYSIS |
US11/663,138 US8014999B2 (en) | 2004-09-20 | 2005-09-20 | Frequency compensation for perceptual speech analysis |
AU2005285694A AU2005285694B2 (en) | 2004-09-20 | 2005-09-20 | Frequency compensation for perceptual speech analysis |
DK05787331T DK1792304T3 (en) | 2004-09-20 | 2005-09-20 | Frequency compensation for perceptual speech analysis |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
EP04077601.5 | 2004-09-20 | ||
EP04077601 | 2004-09-20 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2006033570A1 true WO2006033570A1 (en) | 2006-03-30 |
Family
ID=35355107
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/NL2005/000683 WO2006033570A1 (en) | 2004-09-20 | 2005-09-20 | Frequency compensation for perceptual speech analysis |
Country Status (12)
Country | Link |
---|---|
US (1) | US8014999B2 (en) |
EP (1) | EP1792304B1 (en) |
JP (1) | JP4879180B2 (en) |
CN (1) | CN101053016B (en) |
AT (1) | ATE405922T1 (en) |
AU (1) | AU2005285694B2 (en) |
CA (1) | CA2580763C (en) |
DE (1) | DE602005009221D1 (en) |
DK (1) | DK1792304T3 (en) |
ES (1) | ES2313413T3 (en) |
PT (1) | PT1792304E (en) |
WO (1) | WO2006033570A1 (en) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP1975924A1 (en) * | 2007-03-29 | 2008-10-01 | Koninklijke KPN N.V. | Method and system for speech quality prediction of the impact of time localized distortions of an audio transmission system |
JP2010534030A (en) * | 2007-07-13 | 2010-10-28 | ドルビー・ラボラトリーズ・ライセンシング・コーポレーション | Acoustic processing using auditory scene analysis and spectral distortion |
Families Citing this family (14)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
ES2403509T3 (en) * | 2007-09-11 | 2013-05-20 | Deutsche Telekom Ag | Method and system for the integral and diagnostic evaluation of the quality of the listening voice |
EP2048657B1 (en) * | 2007-10-11 | 2010-06-09 | Koninklijke KPN N.V. | Method and system for speech intelligibility measurement of an audio transmission system |
JP5157852B2 (en) * | 2008-11-28 | 2013-03-06 | 富士通株式会社 | Audio signal processing evaluation program and audio signal processing evaluation apparatus |
WO2011010962A1 (en) * | 2009-07-24 | 2011-01-27 | Telefonaktiebolaget L M Ericsson (Publ) | Method, computer, computer program and computer program product for speech quality estimation |
DK2465112T3 (en) | 2009-08-14 | 2015-01-12 | Koninkl Kpn Nv | PROCEDURE, COMPUTER PROGRAM PRODUCT, AND SYSTEM FOR DETERMINING AN EVALUATED QUALITY OF AN AUDIO SYSTEM |
KR101430321B1 (en) * | 2009-08-14 | 2014-08-13 | 코닌클리즈케 케이피엔 엔.브이. | Method and system for determining a perceived quality of an audio system |
JP5606764B2 (en) * | 2010-03-31 | 2014-10-15 | クラリオン株式会社 | Sound quality evaluation device and program therefor |
CN102456348B (en) * | 2010-10-25 | 2015-07-08 | 松下电器产业株式会社 | Method and device for calculating sound compensation parameters as well as sound compensation system |
TWI759223B (en) * | 2010-12-03 | 2022-03-21 | 美商杜比實驗室特許公司 | Audio decoding device, audio decoding method, and audio encoding method |
EP2733700A1 (en) * | 2012-11-16 | 2014-05-21 | Nederlandse Organisatie voor toegepast -natuurwetenschappelijk onderzoek TNO | Method of and apparatus for evaluating intelligibility of a degraded speech signal |
EP2922058A1 (en) * | 2014-03-20 | 2015-09-23 | Nederlandse Organisatie voor toegepast- natuurwetenschappelijk onderzoek TNO | Method of and apparatus for evaluating quality of a degraded speech signal |
CN104243723B (en) * | 2014-09-28 | 2017-03-29 | 辽宁省建设科学研究院 | Caller interphone system audio signal non-linear distortion detection method |
JP6461064B2 (en) * | 2016-09-28 | 2019-01-30 | 本田技研工業株式会社 | Acoustic characteristics calibration method |
CN112083807B (en) * | 2020-09-20 | 2021-10-29 | 吉林大学 | Foot terrain touch reproduction method and device based on sound-touch conversion |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP1241663A1 (en) | 2001-03-13 | 2002-09-18 | Koninklijke KPN N.V. | Method and device for determining the quality of speech signal |
WO2003076889A1 (en) | 2002-03-08 | 2003-09-18 | Koninklijke Kpn N.V. | Method and system for measuring a system's transmission quality |
Family Cites Families (23)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
GB8701365D0 (en) * | 1987-01-22 | 1987-02-25 | Thomas L D | Signal level control |
US5321636A (en) * | 1989-03-03 | 1994-06-14 | U.S. Philips Corporation | Method and arrangement for determining signal pitch |
US5687281A (en) * | 1990-10-23 | 1997-11-11 | Koninklijke Ptt Nederland N.V. | Bark amplitude component coder for a sampled analog signal and decoder for the coded signal |
US5588089A (en) * | 1990-10-23 | 1996-12-24 | Koninklijke Ptt Nederland N.V. | Bark amplitude component coder for a sampled analog signal and decoder for the coded signal |
US5765127A (en) * | 1992-03-18 | 1998-06-09 | Sony Corp | High efficiency encoding method |
JP2953238B2 (en) * | 1993-02-09 | 1999-09-27 | 日本電気株式会社 | Sound quality subjective evaluation prediction method |
US5632003A (en) * | 1993-07-16 | 1997-05-20 | Dolby Laboratories Licensing Corporation | Computationally efficient adaptive bit allocation for coding method and apparatus |
NL9500512A (en) * | 1995-03-15 | 1996-10-01 | Nederland Ptt | Apparatus for determining the quality of an output signal to be generated by a signal processing circuit, and a method for determining the quality of an output signal to be generated by a signal processing circuit. |
WO1997027578A1 (en) * | 1996-01-26 | 1997-07-31 | Motorola Inc. | Very low bit rate time domain speech analyzer for voice messaging |
ATE205009T1 (en) * | 1996-05-21 | 2001-09-15 | Koninkl Kpn Nv | APPARATUS AND METHOD FOR DETERMINING THE QUALITY OF AN OUTPUT SIGNAL TO BE GENERATED BY A SIGNAL PROCESSING CIRCUIT |
JP2000507788A (en) * | 1996-12-13 | 2000-06-20 | コニンクリジケ ケーピーエヌ エヌブィー | Apparatus and method for signal characterization |
DE19840548C2 (en) * | 1998-08-27 | 2001-02-15 | Deutsche Telekom Ag | Procedures for instrumental language quality determination |
JP3756686B2 (en) * | 1999-01-19 | 2006-03-15 | 日本放送協会 | Method and apparatus for obtaining evaluation value for evaluating degree of desired signal extraction, and parameter control method and apparatus for signal extraction apparatus |
WO2001065543A1 (en) * | 2000-02-29 | 2001-09-07 | Telefonaktiebolaget Lm Ericsson (Publ) | Compensation for linear filtering using frequency weighting factors |
DE60029453T2 (en) * | 2000-11-09 | 2007-04-12 | Koninklijke Kpn N.V. | Measuring the transmission quality of a telephone connection in a telecommunications network |
DE10134471C2 (en) * | 2001-02-28 | 2003-05-22 | Fraunhofer Ges Forschung | Method and device for characterizing a signal and method and device for generating an indexed signal |
EP1298646B1 (en) * | 2001-10-01 | 2006-01-11 | Koninklijke KPN N.V. | Improved method for determining the quality of a speech signal |
US7146313B2 (en) * | 2001-12-14 | 2006-12-05 | Microsoft Corporation | Techniques for measurement of perceptual audio quality |
US7457757B1 (en) * | 2002-05-30 | 2008-11-25 | Plantronics, Inc. | Intelligibility control for speech communications systems |
US7308403B2 (en) * | 2002-07-01 | 2007-12-11 | Lucent Technologies Inc. | Compensation for utterance dependent articulation for speech quality assessment |
ATE333694T1 (en) * | 2003-01-18 | 2006-08-15 | Psytechnics Ltd | TOOL FOR NON-INVASIVELY DETERMINING THE QUALITY OF A VOICE SIGNAL |
EP1465156A1 (en) * | 2003-03-31 | 2004-10-06 | Koninklijke KPN N.V. | Method and system for determining the quality of a speech signal |
US7526093B2 (en) * | 2003-08-04 | 2009-04-28 | Harman International Industries, Incorporated | System for configuring audio system |
-
2005
- 2005-09-20 CA CA2580763A patent/CA2580763C/en active Active
- 2005-09-20 DE DE602005009221T patent/DE602005009221D1/en active Active
- 2005-09-20 AT AT05787331T patent/ATE405922T1/en active
- 2005-09-20 PT PT05787331T patent/PT1792304E/en unknown
- 2005-09-20 WO PCT/NL2005/000683 patent/WO2006033570A1/en active Application Filing
- 2005-09-20 US US11/663,138 patent/US8014999B2/en active Active
- 2005-09-20 JP JP2007532270A patent/JP4879180B2/en active Active
- 2005-09-20 ES ES05787331T patent/ES2313413T3/en active Active
- 2005-09-20 AU AU2005285694A patent/AU2005285694B2/en active Active
- 2005-09-20 CN CN2005800377134A patent/CN101053016B/en active Active
- 2005-09-20 DK DK05787331T patent/DK1792304T3/en active
- 2005-09-20 EP EP05787331A patent/EP1792304B1/en active Active
Patent Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP1241663A1 (en) | 2001-03-13 | 2002-09-18 | Koninklijke KPN N.V. | Method and device for determining the quality of speech signal |
WO2003076889A1 (en) | 2002-03-08 | 2003-09-18 | Koninklijke Kpn N.V. | Method and system for measuring a system's transmission quality |
Non-Patent Citations (1)
Title |
---|
ITU-T RECOMMENDATION P 862: "Perceptual evaluation of speech quality (PESQ): An objective assessment of narrow-band telephone networks and speech codecs", ITU-T RECOMMENDATION P.862, XX, XX, 23 February 2001 (2001-02-23), pages 1 - 21, XP002327961 * |
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP1975924A1 (en) * | 2007-03-29 | 2008-10-01 | Koninklijke KPN N.V. | Method and system for speech quality prediction of the impact of time localized distortions of an audio transmission system |
WO2008119510A2 (en) * | 2007-03-29 | 2008-10-09 | Koninklijke Kpn N.V. | Method and system for speech quality prediction of the impact of time localized distortions of an audio trasmission system |
WO2008119510A3 (en) * | 2007-03-29 | 2008-12-31 | Koninkl Kpn Nv | Method and system for speech quality prediction of the impact of time localized distortions of an audio trasmission system |
JP2010534030A (en) * | 2007-07-13 | 2010-10-28 | ドルビー・ラボラトリーズ・ライセンシング・コーポレーション | Acoustic processing using auditory scene analysis and spectral distortion |
Also Published As
Publication number | Publication date |
---|---|
CN101053016B (en) | 2011-05-18 |
US8014999B2 (en) | 2011-09-06 |
CN101053016A (en) | 2007-10-10 |
JP2008513834A (en) | 2008-05-01 |
JP4879180B2 (en) | 2012-02-22 |
ATE405922T1 (en) | 2008-09-15 |
US20080040102A1 (en) | 2008-02-14 |
DE602005009221D1 (en) | 2008-10-02 |
ES2313413T3 (en) | 2009-03-01 |
AU2005285694B2 (en) | 2010-09-16 |
PT1792304E (en) | 2008-12-04 |
CA2580763C (en) | 2014-07-29 |
CA2580763A1 (en) | 2006-03-30 |
AU2005285694A1 (en) | 2006-03-30 |
DK1792304T3 (en) | 2009-01-05 |
EP1792304A1 (en) | 2007-06-06 |
EP1792304B1 (en) | 2008-08-20 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US8014999B2 (en) | Frequency compensation for perceptual speech analysis | |
US9025780B2 (en) | Method and system for determining a perceived quality of an audio system | |
KR101430321B1 (en) | Method and system for determining a perceived quality of an audio system | |
US9953663B2 (en) | Method of and apparatus for evaluating quality of a degraded speech signal | |
US9659579B2 (en) | Method of and apparatus for evaluating intelligibility of a degraded speech signal, through selecting a difference function for compensating for a disturbance type, and providing an output signal indicative of a derived quality parameter | |
CA2891453C (en) | Method of and apparatus for evaluating intelligibility of a degraded speech signal | |
US20100211395A1 (en) | Method and System for Speech Intelligibility Measurement of an Audio Transmission System | |
US20100106489A1 (en) | Method and System for Speech Quality Prediction of the Impact of Time Localized Distortions of an Audio Transmission System | |
Côté et al. | An intrusive super-wideband speech quality model: DIAL | |
Olatubosun et al. | An Improved Logistic Function for Mapping Raw Scores of Perceptual Evaluation of Speech Quality (PESQ) | |
KR100275478B1 (en) | Objective speech quality measure method highly correlated to subjective speech quality |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
AK | Designated states |
Kind code of ref document: A1 Designated state(s): AE AG AL AM AT AU AZ BA BB BG BR BW BY BZ CA CH CN CO CR CU CZ DE DK DM DZ EC EE EG ES FI GB GD GE GH GM HR HU ID IL IN IS JP KE KG KM KP KR KZ LC LK LR LS LT LU LV LY MA MD MG MK MN MW MX MZ NA NG NI NO NZ OM PG PH PL PT RO RU SC SD SE SG SK SL SM SY TJ TM TN TR TT TZ UA UG US UZ VC VN YU ZA ZM ZW |
|
AL | Designated countries for regional patents |
Kind code of ref document: A1 Designated state(s): GM KE LS MW MZ NA SD SL SZ TZ UG ZM ZW AM AZ BY KG KZ MD RU TJ TM AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HU IE IS IT LT LU LV MC NL PL PT RO SE SI SK TR BF BJ CF CG CI CM GA GN GQ GW ML MR NE SN TD TG |
|
121 | Ep: the epo has been informed by wipo that ep was designated in this application | ||
WWE | Wipo information: entry into national phase |
Ref document number: 2580763 Country of ref document: CA |
|
WWE | Wipo information: entry into national phase |
Ref document number: 2007532270 Country of ref document: JP |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
WWE | Wipo information: entry into national phase |
Ref document number: 2005285694 Country of ref document: AU |
|
WWE | Wipo information: entry into national phase |
Ref document number: 2005787331 Country of ref document: EP |
|
ENP | Entry into the national phase |
Ref document number: 2005285694 Country of ref document: AU Date of ref document: 20050920 Kind code of ref document: A |
|
WWP | Wipo information: published in national office |
Ref document number: 2005285694 Country of ref document: AU |
|
WWE | Wipo information: entry into national phase |
Ref document number: 200580037713.4 Country of ref document: CN |
|
WWE | Wipo information: entry into national phase |
Ref document number: 11663138 Country of ref document: US |
|
WWP | Wipo information: published in national office |
Ref document number: 2005787331 Country of ref document: EP |