EP2096629B1 - Method and apparatus for classifying sound signals - Google Patents


Publication number
EP2096629B1
Authority
EP
European Patent Office
Prior art keywords
parameters
module
signal
parameter
signals
Prior art date
Legal status
Active
Application number
EP07855800A
Other languages
English (en)
French (fr)
Other versions
EP2096629A1 (de)
EP2096629A4 (de)
Inventor
Wei Li
Lijing Xu
Qing Zhang
Jianfeng Xu
Shenghu Sang
Zhengzhong Du
Qin Yan
Haojiang Deng
Jun WANG
Current Assignee
Huawei Technologies Co Ltd
Original Assignee
Huawei Technologies Co Ltd
Priority date
Filing date
Publication date
Application filed by Huawei Technologies Co Ltd
Publication of EP2096629A1
Publication of EP2096629A4
Application granted
Publication of EP2096629B1

Classifications

    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L25/00 Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00
    • G10L25/78 Detection of presence or absence of voice signals
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L19/00 Speech or audio signals analysis-synthesis techniques for redundancy reduction, e.g. in vocoders; Coding or decoding of speech or audio signals, using source filter models or psychoacoustic analysis
    • G10L19/04 Speech or audio signals analysis-synthesis techniques for redundancy reduction, e.g. in vocoders; Coding or decoding of speech or audio signals, using source filter models or psychoacoustic analysis using predictive techniques
    • G10L19/16 Vocoder architecture
    • G10L19/18 Vocoders using multiple modes
    • G10L19/20 Vocoders using multiple modes using sound class specific coding, hybrid encoders or object based coding

Definitions

  • the present invention relates to speech coding technologies, and in particular, to a method and apparatus for classifying sound signals.
  • the coder may encode the background noise and active speech at different rates. That is, the coder encodes the background noise at a lower rate and the active speech at a higher rate, thus reducing the average code rate and greatly improving variable-rate speech coding.
  • VAD Voice Activity Detection
  • the VAD in the related art is developed for speech signals only, and categorizes input audio signals into only two types: noise and non-noise.
  • later coders such as AMR-WB+ and SMV cover detection of music signals, serving as a correction and supplement to the VAD decision.
  • the AMR-WB+ coder is characterized in that, after VAD, the coding mode depends on whether the input audio signal is a speech signal or a music signal, thus minimizing the code rate while ensuring the coding quality.
  • the two different coding modes in the AMR-WB+ are: Algebraic Code Excited Linear Prediction (ACELP)-based coding algorithm, and Transform Coded eXcitation (TCX)-based coding algorithm.
  • ACELP Algebraic Code Excited Linear Prediction
  • TCX Transform Coded eXcitation
  • the ACELP sets up a speech phonation model, makes the most of the speech characteristics, and is highly efficient in encoding speech signals.
  • the ACELP technology is so mature that the ACELP may be extended on a universal audio coder to improve the speech coding quality massively.
  • the TCX may be extended on the low-bit-rate speech coder to improve the quality of encoding broadband music.
  • the ACELP mode selection algorithm and the TCX mode selection algorithm of the AMR-WB+ coding algorithm come in two types: an open-loop selection algorithm and a closed-loop selection algorithm. Closed-loop selection is the default option and has high complexity: it is a traversal search selection mode based on a perceptually weighted Signal-to-Noise Ratio (SNR). Evidently, such a selection method is rather accurate, but involves rather complicated operations and a large amount of code.
  • SNR Signal-to-Noise Ratio
  • the open-loop selection includes the following steps.
  • in step 101, the VAD module judges whether the signal is a non-usable or usable signal according to the Tone_flag and the sub-band energy parameter (Level[n]).
  • in step 102, primary mode selection (EC) is performed.
  • in step 103, the mode primarily determined in step 102 is corrected, and refined mode selection is performed to determine the coding mode to be selected. Specifically, this step is performed based on open-loop pitch parameters and Immittance Spectral Frequency (ISF) parameters.
  • ISF Immittance Spectral Frequency
  • in step 104, TCXS processing is performed. That is, when the speech signal coding mode has been selected fewer than three consecutive times, a small closed-loop traversal search is performed to determine the coding mode finally, where the speech signal coding mode is ACELP and the music signal coding mode is TCX.
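The open-loop flow of steps 101-104 can be sketched as follows. The helper predicates and every threshold in them are illustrative stand-ins, not the AMR-WB+ reference implementation:

```python
def vad(tone_flag, level):
    # Block 101: the frame is usable if it is tonal or energetic enough
    # (the 1000.0 energy threshold is a made-up placeholder).
    return tone_flag == 1 or sum(level) > 1000.0

def primary_mode_selection(level):
    # Block 102: coarse choice from the low/high sub-band energy split.
    low, high = sum(level[:8]), sum(level[8:])
    return 'ACELP' if low > high else 'TCX'

def refine_mode(mode, ol_gain, isf_sd):
    # Block 103: correct the primary decision with open-loop pitch and
    # ISF statistics; strong pitch plus an unstable spectrum suggests speech.
    return 'ACELP' if (ol_gain > 0.6 and isf_sd > 0.5) else mode

def open_loop_select(tone_flag, level, ol_gain, isf_sd, acelp_run_length):
    if not vad(tone_flag, level):
        return 'NON_USABLE'
    mode = refine_mode(primary_mode_selection(level), ol_gain, isf_sd)
    # Block 104: with fewer than 3 consecutive ACELP decisions, a small
    # closed-loop traversal search would run here (omitted in this sketch).
    return mode
```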
  • JELINEK M ET AL, "Robust signal/noise discrimination for wideband speech and audio coding", discloses a robust discrimination method to separate information-carrying signals from ambient noise in coding applications.
  • the method consists of a two-stage procedure. First, a local decision is made based on a set of extracted parameters to update the estimated noise level. The parameters have been chosen to reliably detect speech as well as music signals. Then, the final decision is made based only on a frequency-dependent Signal-to-Noise Ratio (SNR). The noise level update does not depend on the final decision, to prevent the discriminator from locking when the noise level changes suddenly. The performance is compared with that of the Voice Activity Detector (VAD) of G.729, Annex G.
  • VAD Voice Activity Detector
  • ETSI TS 126290 V6.3.0 discloses an extended adaptive multi-rate wideband (AMR-WB+) codec with ACELP/TCX open-loop mode selection.
  • WO 02/065457 A2 discloses a speech coding system with a music classifier.
  • An encoder is disposed to receive an input signal and provides a bitstream based upon a speech coding of a portion of the input signal.
  • the encoder provides a classification of the input signal as one of noise, speech and music.
  • the music classifier analyzes or determines signal properties of the input signal.
  • the music classifier compares the signal properties to thresholds to determine the classification of the input signal.
  • a method according to claim 1 and an apparatus according to claim 4 for classifying sound signals are provided by the present invention to improve accuracy of sound signal classification. Further improvements of the method and the apparatus are provided in the respective dependent claims.
  • the update rate of the background noise is determined, the noise parameters are updated according to the update rate, the signals are classified primarily according to the sub-band energy parameters and the updated noise parameters, and the non-useful signals and the useful signals in the received speech signals are determined, thus reducing the probability of mistaking useful signals for noise signals and improving accuracy of classifying sound signals.
  • Figure 1 shows open loop selection of AMR-WB+ coding algorithm in the related art
  • Figure 2 is a general flowchart of a method for classifying sound signals in an embodiment of the present invention
  • Figure 3 is a schematic diagram showing an apparatus for classifying sound signals in an embodiment of the present invention.
  • Figure 4 is a schematic diagram showing a system in an embodiment of the present invention.
  • Figure 5 is a flowchart of calculating various parameters on a coder parameter extracting module in an embodiment of the present invention
  • Figure 6 is a flowchart of calculating various parameters on another coder parameter extracting module in an embodiment of the present invention.
  • Figure 7 shows composition of a PSC module in an embodiment of the present invention
  • Figure 8 shows how a signal type judging module determines characteristic parameters in an embodiment of the present invention
  • Figure 9 shows how a signal type judging module performs speech judgment in an embodiment of the present invention.
  • Figure 10 shows how a signal type judging module performs music judgment in an embodiment of the present invention
  • Figure 11 shows how a signal type judging module corrects a primary judgment result in an embodiment of the present invention
  • Figure 12 shows how a signal type judging module performs primary type correction for uncertain signals in an embodiment of the present invention
  • Figure 13 shows how a signal type judging module performs final type correction for signals in an embodiment of the present invention.
  • Figure 14 shows how a signal type judging module performs parameter update in an embodiment of the present invention.
  • the update rate of the background noise is determined according to the spectral distribution parameters of the current sound signal and the background noise, and the noise parameters are updated according to the update rate. Therefore, the useful signals and the non-useful signals in the received speech signals are determined according to the updated noise parameters, thus improving the accuracy of the noise parameters in determining the useful signals and non-useful signals, and improving the accuracy of classifying sound signals.
  • Figure 2 shows a method for classifying sound signals in an embodiment of the present invention, including the following process:
  • Block 201 Sound signals are received, and the update rate of background noise is determined according to the spectral distribution parameters of the background noise and the sound signals.
  • Block 202 The noise parameters are updated according to the update rate, and the sound signals are classified according to sub-band energy parameters and updated noise parameters.
  • the sound signals are classified into two types: useful signals, and non-useful signals.
  • the useful signals may be subdivided into speech signals and music signals, depending on whether the noise converges.
  • the subdividing may be based on open loop pitch parameters, ISF parameters, and sub-band energy parameters, or based on ISF parameters and sub-band energy parameters.
  • a determined useful signal type is obtained in an embodiment of the present invention.
  • the signal hangover length is determined according to the useful signal type, and the useful signals and the non-useful signals in the received speech signals are further determined according to the signal hangover length.
  • the music signal hangover may be set to a relatively great value to improve the sound effect of the music signal.
  • an apparatus for classifying sound signals in an embodiment of the present invention includes: a background noise parameter updating module, configured to: determine the update rate of background noise according to the spectral distribution parameters of the background noise and the current sound signals, and send the determined update rate to a PSC module; and a PSC module, configured to: update the noise parameters according to the update rate received from the background noise parameter updating module, perform primary classification for the signals according to the sub-band energy parameters and updated noise parameters, and determine the received speech signal to be a useful signal or non-useful signal.
  • the apparatus for classifying sound signals may further include a signal type judging module.
  • the PSC module transfers the determined signal type to the signal type judging module.
  • the signal type judging module determines the type of a useful signal based on the open loop pitch parameters, ISF parameters, and sub-band energy parameters, or based on ISF parameters and sub-band energy parameters, where the type of the useful signal includes speech and music.
  • the apparatus for classifying sound signals may further include a classification parameter extracting module.
  • the PSC module transfers the determined signal type to the signal type judging module through the classification parameter extracting module.
  • the classification parameter extracting module is further configured to: obtain ISF parameters and sub-band energy parameters, or further obtain open loop pitch parameters, process the obtained parameters into signal type characteristic parameters, and send the parameters to the signal type judging module; and process the obtained parameters into spectral distribution parameters of sound signals and background noise, and transfer the spectral distribution parameters to the background noise parameter updating module. Therefore, the signal type judging module determines the type of useful signals according to the foregoing signal type characteristic parameter and the signal type determined by the PSC module, where the type of useful signals includes speech and music.
  • the PSC module may be further configured to transfer the sound signal SNR calculated in the process of determining the signal type to the signal type judging module.
  • the signal type judging module determines the useful signal to be a speech signal or music signal according to the SNR.
  • the apparatus for classifying sound signals may further include a coder mode and rate selecting module.
  • the signal type judging module transfers the determined signal type to the coder mode and rate selecting module, and the coder mode and rate selecting module determines the coding mode and rate of sound signals according to the received signal type.
  • the apparatus for classifying sound signals may further include a coder parameter extracting module, which is configured to extract ISF parameters and sub-band energy parameters or additionally open loop pitch parameters, transfer the extracted parameters to the classification parameter extracting module, and transfer the extracted sub-band energy parameters to the PSC module.
  • FIG. 4 is a schematic diagram showing a system in an embodiment of the present invention.
  • the system includes a Sound Activity Detector (SAD).
  • SAD Sound Activity Detector
  • the SAD sorts the audio digital signals into three types: non-useful signal, speech, and music, thus forming a basis for the coder to select the coding mode and rate.
  • the SAD module includes: a background noise estimation control module, a PSC module, a classification parameter extracting module, and a signal type judging module.
  • the SAD makes the most of the parameters of the coder in order to reduce resource occupation and calculation complexity. Therefore, the coder parameter extracting module in the coder is used to calculate the sub-band energy parameters and coder parameters, and provide the calculated parameters for the SAD module.
  • the SAD module finally outputs a determined signal type (namely, non-useful signal, speech, or music), and provides the determined signal type for the coder mode and rate selecting module to select the coder mode and rate.
  • the SAD-related modules in the coder, sub-modules in the SAD, and the interaction processes between the sub-modules are detailed below.
  • the coder parameter extracting module in the coder calculates the sub-band energy parameters and coder parameters, and provides the calculated parameters for the SAD module.
  • the sub-band energy parameters may be calculated through filtering of a filter group.
  • the specific quantity of sub-bands (for example, 12 sub-bands in this embodiment) is determined according to the calculation complexity requirement and classification accuracy requirement.
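As one way to picture the filtering step, the sketch below derives the 12 sub-band energies from a plain DFT magnitude spectrum. The band edges follow the embodiment; the DFT-based split (rather than the codec's actual filter group) and the sampling rate are assumptions:

```python
import math

BAND_EDGES_HZ = [0, 200, 400, 600, 800, 1200, 1600, 2000,
                 2400, 3200, 4000, 4800, 6400]  # 12 sub-bands

def subband_levels(frame, fs=12800):
    """Return level[0..11]: the energy of each sub-band for one frame."""
    n = len(frame)
    half = n // 2 + 1
    levels = [0.0] * 12
    for k in range(half):
        # Naive DFT bin (O(n^2) overall) to keep the sketch dependency-free.
        re = sum(frame[t] * math.cos(2 * math.pi * k * t / n) for t in range(n))
        im = -sum(frame[t] * math.sin(2 * math.pi * k * t / n) for t in range(n))
        f = k * fs / n  # bin frequency in Hz
        for b in range(12):
            if BAND_EDGES_HZ[b] <= f < BAND_EDGES_HZ[b + 1]:
                levels[b] += re * re + im * im
                break
    return levels
```

For example, a pure 1000 Hz tone at fs = 12800 Hz concentrates its energy in the 800-1200 Hz band (index 4).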
  • Figure 5 or Figure 6 shows how a coder parameter extracting module calculates various parameters required by the SAD module in this embodiment.
  • the process shown in Figure 5 includes the following process:
  • Block 501 The coder parameter extracting module calculates the sub-band energy parameters first.
  • Block 502 The coder parameter extracting module decides whether it is necessary to perform ISF calculation according to the primary signal judgment result (Vad_flag) received from the PSC module, and performs block 503 if necessary; or performs block 504 if not necessary.
  • Vad_flag the primary signal judgment result
  • the decision about whether to perform ISF calculation in this block includes: If the current frame is composed of non-useful signals, the mechanism of the coder applies.
  • the mechanism of the coder is: If ISF parameters are required when the coder encodes non-useful signals, the ISF calculation needs to be performed; otherwise, the operation of the coder parameter extracting module is finished. If the current frame is composed of useful signals, the ISF calculation needs to be performed. Most coding modes require calculation of ISF parameters for useful signals. Therefore, the calculation brings no redundant complexity to the coder.
  • the technical solution to calculation of ISF parameters is detailed in the instruction manuals of coders, and is not repeated here any further.
  • Block 503 The coder parameter extracting module calculates the ISF parameters and then performs block 504.
  • Block 504 The coder parameter extracting module calculates the open loop pitch parameters.
  • the sub-band energy parameters calculated through the process in Figure 5 are provided for the PSC module and the classification parameter extracting module in the SAD, and other parameters are provided for the classification parameter extracting module in the SAD.
  • Blocks 601-603 are basically identical to blocks 501-503 in Figure 5 .
  • open-loop pitch parameters are redundant for some coding modes such as TCX. To simplify calculation, after the noise estimation converges, the open-loop pitch parameters are no longer calculated, because it is then basically certain that the coding mode corresponding to the signal does not need them.
  • the open loop pitch parameters need to be calculated in order to ensure convergence of the noise estimation and the convergence speed. However, such calculation occurs at the startup stage, and the complexity of calculation is ignorable.
  • the technical solution to calculation of open loop pitch parameters is detailed in the instruction about ACELP-based coding, and is not repeated here any further.
  • the basis for judging whether the noise estimation converges may be: the number of frames continuously determined as noise exceeds the noise convergence threshold (THR1). In an example in this embodiment, the value of THR1 is 20.
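The convergence test can be sketched as a simple run-length counter; only THR1 = 20 comes from the text:

```python
THR1 = 20  # noise convergence threshold (from the embodiment)

class NoiseConvergence:
    def __init__(self):
        self.noise_run = 0  # consecutive frames judged as noise

    def update(self, is_noise_frame):
        # Extend the run on a noise frame, reset it otherwise.
        self.noise_run = self.noise_run + 1 if is_noise_frame else 0
        return self.noise_run > THR1  # True once the estimate has converged
```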
  • the foregoing extracted sub-band energy parameter is level[i], where i represents a member index of the vector, and its value falls within 1...12 in this embodiment, corresponding to 0-200 Hz, 200-400 Hz, 400-600 Hz, 600-800 Hz, 800-1200 Hz, 1200-1600 Hz, 1600-2000 Hz, 2000-2400 Hz, 2400-3200 Hz, 3200-4000 Hz, 4000-4800 Hz, and 4800-6400 Hz, respectively.
  • ISF parameter is Isf n [i] , where n represents a frame index, and the value of i falls within 1...16, representing a member index in the vector.
  • the foregoing extracted open-loop pitch parameters include: open-loop pitch gain (ol_gain), open-loop pitch lag (ol_lag), and tone_flag. If the value of ol_gain is greater than the value of the tone threshold (TONE_THR), the tone_flag is set to 1.
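A minimal sketch of the tone flag rule; the excerpt does not give the TONE_THR value, so 0.6 is a made-up placeholder:

```python
TONE_THR = 0.6  # placeholder value; not specified in this excerpt

def tone_flag(ol_gain):
    """Set the tone flag when the open-loop pitch gain exceeds TONE_THR."""
    return 1 if ol_gain > TONE_THR else 0
```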
  • the PSC module may be implemented through various VAD algorithms in the related art, and includes: background noise estimating sub-module, SNR calculating sub-module, useful signal estimating sub-module, judgment threshold adjusting sub-module, comparing sub-module, and hangover protective useful signal sub-module.
  • the implementation of the PSC module may differ from the VAD algorithm module in the related art in the following aspects:
  • the SNR calculating sub-module calculates the SNR according to sub-band energy estimation parameters of background noise and the sub-band energy parameters.
  • the calculated SNR parameter is not only applied inside the PSC module, but also transferred to the signal type judging module so that the signal type judging module identifies the speech and music more accurately in the case of low SNR.
  • the VAD in the related art underperforms in identifying noise and some types of music, and improvement is made for the VAD in this embodiment:
  • the calculation of the background noise parameter is controlled by the update rate (ACC) provided by the background noise parameter updating module.
  • the background noise estimating sub-module receives the update rate from the background noise parameter updating module, updates the noise parameter, and transfers the sub-band energy estimation parameters of background noise calculated out according to the updated noise parameter to the SNR calculating sub-module.
  • the calculation of the update rate is detailed in the instruction about the background noise parameter updating module hereinafter.
  • the update rate comes in 4 levels: acc1, acc2, acc3, and acc4.
  • different upward update parameters (update_up) and downward update parameters (update_down) are determined, where update_up corresponds to the upward update rate of background noise, and update_down corresponds to the downward update rate of background noise.
  • the solution to updating the noise parameter may be the solution in the AMR-WB+:
  • m: frame index; n: sub-band index; i: element index of the spectral distribution parameter vector, i = 1, 2, 3, 4.
  • bckr_est: sub-band energy of the background noise estimation; p̂: estimation of the spectral distribution parameter vector of the background noise; p: spectral distribution parameter vector of the current signal.
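A hedged sketch of such an update, assuming an AMR-WB+-style first-order recursive smoothing per sub-band with direction-dependent constants; the constants and the exact recursion are illustrative, since the update formula itself is not reproduced in this excerpt:

```python
def update_bckr_est(bckr_est, level, update_up=0.9, update_down=0.7):
    """Per-sub-band update: bckr_est[n] <- a*bckr_est[n] + (1-a)*level[n]."""
    out = []
    for n in range(len(bckr_est)):
        # Pick the smoothing constant by update direction: energy rising
        # above the estimate uses update_up, falling below uses update_down.
        alpha = update_up if level[n] > bckr_est[n] else update_down
        out.append(alpha * bckr_est[n] + (1.0 - alpha) * level[n])
    return out
```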
  • hangover is used to prevent useful signals from being mistaken for noise.
  • the hangover length should be a tradeoff between signal protection and transmission efficiency.
  • the hangover length may be a constant after learning.
  • a multi-rate coder is oriented to audio signals such as music. Such signals tend to have a long low-energy tail, which is difficult for a conventional VAD to detect. Therefore, a relatively long hangover is required for protection.
  • the hangover length in the hangover protective useful signal sub-module is designed to be adaptive according to the SAD signal judgment result.
  • HANG_LONG = 100
  • HANG_SHORT = 20
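The adaptive hangover can be sketched as follows, using the HANG_LONG and HANG_SHORT values above; the surrounding control logic is an assumption:

```python
HANG_LONG = 100   # hangover frames after music (from the embodiment)
HANG_SHORT = 20   # hangover frames after speech (from the embodiment)

class Hangover:
    def __init__(self):
        self.count = 0

    def decide(self, raw_active, last_type):
        """Apply the hold-over to the raw activity decision for one frame."""
        if raw_active:
            # Reload the counter: music gets the long protection period.
            self.count = HANG_LONG if last_type == 'music' else HANG_SHORT
            return True
        if self.count > 0:
            self.count -= 1
            return True   # still inside the hangover period
        return False
```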
  • the classification parameter extracting module is configured to: calculate the parameters required by the signal type judging module and the background noise parameter updating module according to the Vad_flag parameter determined by the PSC module and the sub-band energy parameters, ISF parameters, and open loop pitch parameters provided by the coder parameter extracting module; and provide the sub-band energy parameters, ISF parameters, open loop pitch parameters, and calculated parameters for the signal type judging module and the background noise parameter updating module.
  • the parameters calculated by the classification parameter extracting module include:
  • the differences between consecutive open-loop pitch lags are compared. If the increment of the open-loop pitch lag is less than a set threshold, the lag count accrues; if the sum of the lag counts of two consecutive frames is large enough, pitch is set to 1; otherwise, pitch is set to 0.
  • the formula for calculating the open loop pitch lag is specified in the AMR-WB+/AMR-WB standard document.
  • tone_flg = 1000 * tone_flg.
  • ra = sublevel_high_energy / sublevel_low_energy, where:
  • sublevel_high_energy = level[10] + level[11]
  • sublevel_low_energy = level[0] + level[1] + level[2] + level[3] + level[4] + level[5] + level[6] + level[7] + level[8] + level[9].
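The ratio parameter above translates directly into code:

```python
def energy_ratio(level):
    """ra: energy of sub-bands 10-11 over the energy of sub-bands 0-9."""
    sublevel_high_energy = level[10] + level[11]
    sublevel_low_energy = sum(level[0:10])
    return sublevel_high_energy / sublevel_low_energy
```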
  • sub-band energy standard deviation mean (level_meanSD) parameter: the average of the sub-band energy standard deviation (level_SD) of two adjacent frames, where the level_SD parameter is calculated similarly to the Isf_SD described above.
  • the parameters provided for the background noise parameter updating module include: zcr, ra, f_flux, and t_flux; the parameters provided for the signal type judging module include: pitch, meangain, isf_meanSD, and level_meanSD.
  • the signal type judging module is configured to sort the signals into non-useful (such as noise), speech, and music according to the SNR and Vad_flag parameters received from the PSC module and the sub-band energy parameter and the pitch, meangain, Isf_meanSD, and level_meanSD parameters received from the classification parameter extracting module.
  • the signal type judging module may include:
  • the process of determining a useful signal to be a speech signal or music signal includes:
  • this embodiment provides a parameter flag hangover mechanism.
  • the characteristic parameter values such as pitch_flag, level_meanSD_high_flag, ISF_meanSD_high_flag, ISF_meanSD_low_flag, level_meanSD_low_flag, and meangain_flag are determined according to the hangover mechanism, as shown in Figure 8 .
  • the length of the hangover period is determined according to the hangover parameter flag value.
  • This embodiment provides two types of hangover settings (namely, two solutions to determining the hangover parameter flag value).
  • if the parameter condition is fulfilled, the corresponding parameter hangover counter value increases by one; otherwise, the corresponding parameter hangover counter value is set to 0, and different parameter hangover flags are set according to the value of the parameter hangover counter. A higher parameter hangover counter value yields a greater parameter hangover flag value. The specific value is determined as required at the time of setting the parameter hangover flag value according to the parameter counter, and is not described here any further.
  • the hangover length is controlled according to the Error Rate (ER) of the internal nodes of the decision tree corresponding to the training parameter. If the ER is lower, the hangover is shorter; if the ER is higher, the hangover is longer.
  • ER Error Rate
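The first hangover setting (a run counter whose flag value grows in steps with the run length) can be sketched as below; the step thresholds are illustrative, since the text leaves them open:

```python
class ParamHangover:
    def __init__(self, steps=(2, 5, 10)):
        self.counter = 0      # consecutive frames the condition held
        self.steps = steps    # run lengths at which the flag steps up

    def update(self, condition_met):
        # Extend the run when the parameter condition holds, else reset.
        self.counter = self.counter + 1 if condition_met else 0
        # A longer run yields a greater flag value, as in the first setting.
        return sum(1 for s in self.steps if self.counter >= s)
```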
  • the signal is primarily sorted into either speech or music:
  • the first ISF speech threshold such as 1500
  • the speech flag bit is set to 1; otherwise, in block 904, a judgment is made about whether the number of consecutive frames whose pitch value is 1 exceeds the preset threshold of the number of hangover frames (such as 2 frames); if yes, the speech flag bit is set to 1; otherwise, in block 905, a judgment is made about whether meangain exceeds the preset long-time correlation speech threshold (such as 8000); if yes, the speech flag bit is set to 1; otherwise, in block 906, a judgment is made about whether either or both of the level_meanSD_high_flag value and the ISF_meanSD_high_flag value are 1; if yes, the speech flag bit is set to 1; otherwise, the value of the speech flag bit remains unchanged.
  • the sub-band energy threshold such as 5000
  • block 1109 is performed to judge whether pitch_flag is 1, the ISF_meanSD is less than the ISF music threshold (such as 900), and the number of continuous speech frames is less than 3. If yes, the signal is determined to be of the music type; otherwise, the signal is still determined to be of the uncertain type.
  • block 1110 is performed to judge whether the number of continuous music frames is greater than 3 and the ISF_meanSD is less than the ISF music threshold. If yes, the signal is determined to be a music signal; otherwise, the signal is determined to be a speech signal.
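A condensed sketch of the primary speech/music decision drawn from blocks 904-906 and 1109-1110; the thresholds (1500, 8000, 900) are the example values quoted in the text, while the control flow is a simplification of the figures:

```python
ISF_SPEECH_THR = 1500            # first ISF speech threshold (from the text)
LONGTIME_CORR_SPEECH_THR = 8000  # long-time correlation speech threshold
ISF_MUSIC_THR = 900              # ISF music threshold

def primary_class(isf_meanSD, pitch_run, meangain,
                  level_flag, isf_high_flag, music_run):
    # Speech evidence (blocks 904-906): unstable spectrum, a sustained
    # pitch run, strong long-time correlation, or raised SD flags.
    if (isf_meanSD > ISF_SPEECH_THR or pitch_run > 2
            or meangain > LONGTIME_CORR_SPEECH_THR
            or level_flag == 1 or isf_high_flag == 1):
        return 'speech'
    # Music evidence (blocks 1109-1110): a stable spectrum sustained
    # over a run of music frames.
    if music_run > 3 and isf_meanSD < ISF_MUSIC_THR:
        return 'music'
    return 'uncertain'
```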
  • the signals of the uncertain type undergo the primary corrective classification process shown in Figure 12 , including:
  • the speech and music hangover flags are cleared. If the signals before this frame are continuous speech signals and the continuity is strong, the speech is judged according to the characteristic parameters of the speech. If the speech conditions are fulfilled, the speech_hangover_flag is set to 1, as illustrated in blocks 1203 to 1206 in Figure 12 . If the signals before this frame are continuous music signals and the continuity is strong, the music is judged according to the characteristic parameters of the music. If the music conditions are fulfilled, the music_hangover_flag is set to 1, as illustrated in blocks 1207 to 1210 in Figure 12 .
  • if the speech hangover flag is 1 and the music hangover flag is 0, the current signal type is set to the speech class. If the music hangover flag is 1 and the speech hangover flag is 0, the current signal type is set to the music class. If both the music hangover flag and the speech hangover flag are 1, or both are 0, the signal type is set to the uncertain class. In this case, if more than 20 previous music frames are continuous, the signal is determined to be of the music class; if more than 20 previous speech frames are continuous, the signal is determined to be of the speech class.
  • the useful signal type is corrected finally in Figure 13 .
  • the type is further corrected according to the current context.
  • if the current context is music and the continuity is longer than 3 seconds, namely, the current continuous music frames are more than 150 frames,
  • mandatory correction may be performed according to the ISF_meanSD value to determine the music signal.
  • if the current context is speech and the continuity is longer than 3 seconds, namely, the current continuous speech frames are more than 150 frames,
  • mandatory correction may be performed according to the ISF_meanSD value to determine the speech signal class.
  • the signal type is still uncertain
  • the signal type is corrected according to the previous context in block 1303, namely, the current uncertain signal type is sorted into the previous signal type.
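The final correction of Figure 13 can be sketched as follows; the 150-frame runs come from the text, and the function shape is an assumption:

```python
def final_correction(current_type, prev_type, music_run, speech_run):
    """Force long-context classes, else inherit the previous type."""
    if current_type != 'uncertain':
        return current_type
    if music_run > 150:    # more than 3 s of continuous music context
        return 'music'
    if speech_run > 150:   # more than 3 s of continuous speech context
        return 'speech'
    return prev_type       # block 1303: fall back to the previous context
```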
  • Afterward, the three type counters and the threshold values in the signal type judging module need to be updated.
  • The music counter music_continue_counter is updated as shown in Figure 14; the other type counters are processed similarly and are not detailed here any further.
  • The threshold values are updated according to the SNR output by the PSC module.
  • The threshold examples given in the embodiments herein are the values learned in the case that the SNR is 20 dB.
  • The background noise parameter updating module uses some spectral distribution parameters calculated in the classification parameter extracting module in the SAD to control the update rate of the background noise.
  • In some scenarios, the energy level of the background noise may surge abruptly. In this case, it is probable that the background noise estimation remains non-updated because the signals are continuously determined to be useful signals. The background noise parameter updating module solves this problem.
  • The background noise parameter updating module calculates the vector of relevant spectral distribution parameters according to the parameters received from the classification parameter extracting module.
  • The vector includes four elements, which are enumerated in the full description.
  • This embodiment makes use of the stable spectral features of the background noise.
  • The elements of the spectral distribution parameter vector are not limited to the four elements listed above.
  • The update rate of the current background noise is controlled by a difference (d_cb) between the current spectral distribution parameter and the spectral distribution parameter estimate of the background noise.
  • The difference may be computed with algorithms such as the Euclidean distance or the Manhattan distance.
  • If d_cb < TH1, the module outputs an update rate acc1, which represents the fastest update rate; otherwise, if d_cb < TH2, the module outputs an update rate acc2; otherwise, if d_cb < TH3, the module outputs an update rate acc3; otherwise, the module outputs an update rate acc4.
  • TH1, TH2 and TH3 are update thresholds, and the specific threshold values depend on the actual environment conditions.
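The distance computation and the threshold-based rate selection can be sketched as follows. The threshold values and the four rates are placeholders, since the patent states that the concrete values depend on the deployment environment; the Manhattan-distance variant and the exponential-smoothing update are one possible realization, not the patent's mandated one.

```python
# Hypothetical sketch of the background-noise update-rate selection.
# Thresholds (0.1, 0.3, 0.6) and rates (0.9, 0.5, 0.2, 0.05) are invented
# placeholders standing in for TH1..TH3 and acc1..acc4.

def manhattan_distance(current_params, noise_params):
    """Difference d_cb between the current spectral-distribution vector and
    the background-noise estimate (Manhattan-distance variant)."""
    return sum(abs(c - n) for c, n in zip(current_params, noise_params))

def select_update_rate(d_cb, th=(0.1, 0.3, 0.6), rates=(0.9, 0.5, 0.2, 0.05)):
    """Smaller distance means the spectrum looks noise-like, so the noise
    estimate is updated faster."""
    th1, th2, th3 = th
    acc1, acc2, acc3, acc4 = rates
    if d_cb < th1:
        return acc1   # fastest update
    if d_cb < th2:
        return acc2
    if d_cb < th3:
        return acc3
    return acc4       # slowest update

def update_noise_estimate(noise_params, current_params, acc):
    """Exponentially smooth the noise estimate at the selected rate."""
    return [(1 - acc) * n + acc * c
            for c, n in zip(current_params, noise_params)]
```
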
  • Once the update rate of the background noise is determined, the noise parameters are updated according to that rate, the signals are primarily classified according to the sub-band energy parameters and the updated noise parameters, and the non-useful signals and the useful signals in the received sound signals are determined. This reduces the probability of mistaking useful signals for noise signals and improves the accuracy of classifying sound signals.
  • The embodiments of the present invention may be implemented through software running on a universal hardware platform, or through hardware only. In most cases, however, software on a universal hardware platform is preferred. Therefore, the technical solution under the present invention, or its contributions to the related art, may be embodied in a software product.
  • The software product is stored in a storage medium and incorporates several instructions that enable a computer device (for example, a PC, a server, or a network device) to execute the method described in each embodiment of the present invention.

Landscapes

  • Engineering & Computer Science (AREA)
  • Computational Linguistics (AREA)
  • Signal Processing (AREA)
  • Health & Medical Sciences (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • Acoustics & Sound (AREA)
  • Multimedia (AREA)
  • Compression, Expansion, Code Conversion, And Decoders (AREA)
  • Telephonic Communication Services (AREA)

Claims (7)

  1. A method for classifying sound signals, comprising:
    receiving the sound signals and determining an update rate of background noise according to spectral distribution parameters of the background noise and spectral distribution parameters of the sound signals;
    updating noise parameters according to the update rate and classifying the sound signals into non-useful signals and useful signals according to sub-band energy parameters and the updated noise parameters;
    obtaining an immittance spectral frequency (ISF) parameter and a sub-band energy parameter; and
    for the useful signals obtained by the classification: determining the type of these useful signals on the basis of the immittance spectral frequency parameter and the sub-band energy parameter, wherein the type of the useful signals comprises speech and music.
  2. The method according to the preceding claim, characterized by determining a signal hangover length according to the determined type of the useful signals and further classifying the sound signals according to the signal hangover length.
  3. The method according to any one of the preceding claims, characterized by calculating a difference between the spectral distribution parameters of the sound signals and the spectral distribution parameters of the background noise, and determining the update rate according to this difference.
  4. An apparatus for classifying sound signals, comprising:
    a background noise parameter updating module, adapted to determine an update rate of background noise according to spectral distribution parameters of the background noise and spectral distribution parameters of the sound signals;
    a primary signal classification (PSC) module, adapted to update noise parameters according to the update rate received from the background noise parameter updating module, to perform primary classification of the sound signals according to the sub-band energy parameters and the updated noise parameters, and to determine the received sound signals as useful signals or non-useful signals; and
    a signal type judging module, adapted to determine, for the useful signals classified by the primary signal classification module, the type of these useful signals on the basis of immittance spectral frequency (ISF) parameters and the sub-band energy parameters, wherein the type of the useful signals comprises speech and music.
  5. The apparatus according to the preceding claim, characterized by a classification parameter extracting module, adapted to obtain ISF parameters and sub-band energy parameters, or further to obtain open-loop pitch parameters; to process the obtained parameters into signal type characteristic parameters and send these parameters to the signal type judging module; and to process the obtained parameters into spectral distribution parameters of sound signals and background noise and transfer the spectral distribution parameters to the background noise parameter updating module; wherein the PSC module is further adapted to transfer the determined signal type through the classification parameter extracting module to the signal type judging module, and the signal type judging module is further adapted to determine the type of useful signals according to the above signal type characteristic parameters and the signal type determined by the PSC module.
  6. The apparatus according to the preceding claim, characterized by an encoder mode and rate selecting module, adapted to determine the coding mode and rate of sound signals according to the signal type transferred by and received from the signal type judging module.
  7. The apparatus according to any one of claims 4 to 6, characterized by an encoder parameter extracting module, adapted to extract ISF parameters and sub-band energy parameters, or additionally open-loop pitch parameters; to transfer the extracted parameters to the classification parameter extracting module; and to transfer the extracted sub-band energy parameters to the PSC module.
EP07855800A 2006-12-05 2007-12-26 Verfahren und gerät zur klassifizierung von tonsignalen Active EP2096629B1 (de)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN 200610164456 CN100483509C (zh) 2006-12-05 2006-12-05 声音信号分类方法和装置
PCT/CN2007/003798 WO2008067735A1 (fr) 2006-12-05 2007-12-26 Procédé et dispositif de classement pour un signal sonore

Publications (3)

Publication Number Publication Date
EP2096629A1 EP2096629A1 (de) 2009-09-02
EP2096629A4 EP2096629A4 (de) 2011-01-26
EP2096629B1 true EP2096629B1 (de) 2012-10-24

Family

ID=39491665

Family Applications (1)

Application Number Title Priority Date Filing Date
EP07855800A Active EP2096629B1 (de) 2006-12-05 2007-12-26 Verfahren und gerät zur klassifizierung von tonsignalen

Country Status (3)

Country Link
EP (1) EP2096629B1 (de)
CN (1) CN100483509C (de)
WO (1) WO2008067735A1 (de)

Families Citing this family (18)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP5168162B2 (ja) * 2009-01-16 2013-03-21 沖電気工業株式会社 音信号調整装置、プログラム及び方法、並びに、電話装置
WO2011044848A1 (zh) * 2009-10-15 2011-04-21 华为技术有限公司 信号处理的方法、装置和系统
CN102299693B (zh) * 2010-06-28 2017-05-03 瀚宇彩晶股份有限公司 音讯调整系统及方法
CA3160488C (en) 2010-07-02 2023-09-05 Dolby International Ab Audio decoding with selective post filtering
CN102446506B (zh) * 2010-10-11 2013-06-05 华为技术有限公司 音频信号的分类识别方法及装置
US9240191B2 (en) 2011-04-28 2016-01-19 Telefonaktiebolaget L M Ericsson (Publ) Frame based audio signal classification
US8990074B2 (en) * 2011-05-24 2015-03-24 Qualcomm Incorporated Noise-robust speech coding mode classification
US9099098B2 (en) * 2012-01-20 2015-08-04 Qualcomm Incorporated Voice activity detection in presence of background noise
EP3113184B1 (de) * 2012-08-31 2017-12-06 Telefonaktiebolaget LM Ericsson (publ) Verfahren und vorrichtung zur erkennung von sprachaktivitäten
CN102928713B (zh) * 2012-11-02 2017-09-19 北京美尔斯通科技发展股份有限公司 一种磁场天线的本底噪声测量方法
CN104347067B (zh) * 2013-08-06 2017-04-12 华为技术有限公司 一种音频信号分类方法和装置
CN106328169B (zh) 2015-06-26 2018-12-11 中兴通讯股份有限公司 一种激活音修正帧数的获取方法、激活音检测方法和装置
CN106328152B (zh) * 2015-06-30 2020-01-31 芋头科技(杭州)有限公司 一种室内噪声污染自动识别监测系统
CN105654944B (zh) * 2015-12-30 2019-11-01 中国科学院自动化研究所 一种融合了短时与长时特征建模的环境声识别方法及装置
CN107123419A (zh) * 2017-05-18 2017-09-01 北京大生在线科技有限公司 Sphinx语速识别中背景降噪的优化方法
CN108257617B (zh) * 2018-01-11 2021-01-19 会听声学科技(北京)有限公司 一种噪声场景识别系统及方法
CN110992989B (zh) * 2019-12-06 2022-05-27 广州国音智能科技有限公司 语音采集方法、装置及计算机可读存储介质
CN113257276B (zh) * 2021-05-07 2024-03-29 普联国际有限公司 一种音频场景检测方法、装置、设备及存储介质

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5742734A (en) * 1994-08-10 1998-04-21 Qualcomm Incorporated Encoding rate selection in a variable rate vocoder
US6456964B2 (en) * 1998-12-21 2002-09-24 Qualcomm, Incorporated Encoding of periodic speech using prototype waveforms
US6691084B2 (en) * 1998-12-21 2004-02-10 Qualcomm Incorporated Multiple mode variable rate speech coding
JP3454206B2 (ja) * 1999-11-10 2003-10-06 三菱電機株式会社 雑音抑圧装置及び雑音抑圧方法
US6983242B1 (en) * 2000-08-21 2006-01-03 Mindspeed Technologies, Inc. Method for robust classification in speech coding
CN1175398C (zh) * 2000-11-18 2004-11-10 中兴通讯股份有限公司 一种从噪声环境中识别出语音和音乐的声音活动检测方法
EP2239733B1 (de) * 2001-03-28 2019-08-21 Mitsubishi Denki Kabushiki Kaisha Rauschunterdrückungsverfahren

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2002065457A2 (en) * 2001-02-13 2002-08-22 Conexant Systems, Inc. Speech coding system with a music classifier

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8712076B2 (en) 2012-02-08 2014-04-29 Dolby Laboratories Licensing Corporation Post-processing including median filtering of noise suppression gains
US9173025B2 (en) 2012-02-08 2015-10-27 Dolby Laboratories Licensing Corporation Combined suppression of noise, echo, and out-of-location signals
WO2014077591A1 (ko) * 2012-11-13 2014-05-22 삼성전자 주식회사 부호화 모드 결정방법 및 장치, 오디오 부호화방법 및 장치와, 오디오 복호화방법 및 장치
AU2017206243B2 (en) * 2012-11-13 2018-10-04 Samsung Electronics Co., Ltd. Method and apparatus for determining encoding mode, method and apparatus for encoding audio signals, and method and apparatus for decoding audio signals

Also Published As

Publication number Publication date
CN101197135A (zh) 2008-06-11
CN100483509C (zh) 2009-04-29
WO2008067735A1 (fr) 2008-06-12
EP2096629A1 (de) 2009-09-02
EP2096629A4 (de) 2011-01-26

Similar Documents

Publication Publication Date Title
EP2096629B1 (de) Verfahren und gerät zur klassifizierung von tonsignalen
CN101197130B (zh) 声音活动检测方法和声音活动检测器
JP3197155B2 (ja) ディジタル音声コーダにおける音声信号ピッチ周期の推定および分類のための方法および装置
RU2441286C2 (ru) Способ и устройство для обнаружения звуковой активности и классификации звуковых сигналов
US6424938B1 (en) Complex signal activity detection for improved speech/noise classification of an audio signal
EP2159788B1 (de) Sprachaktivitätsdetektionseinrichtung und verfahren
US6202046B1 (en) Background noise/speech classification method
EP1340223B1 (de) Verfahren und vorrichtung zur robusten sprachklassifikation
US5930747A (en) Pitch extraction method and device utilizing autocorrelation of a plurality of frequency bands
JP4218134B2 (ja) 復号装置及び方法、並びにプログラム提供媒体
WO2001035395A1 (en) Wide band speech synthesis by means of a mapping matrix
EP2702585B1 (de) Rahmenbasierte audiosignalklassifizierung
WO2006019556A2 (en) Low-complexity music detection algorithm and system
WO2008082133A1 (en) Method, medium, and apparatus to classify for audio signal, and method, medium and apparatus to encode and/or decode for audio signal using the same
CN101149921A (zh) 一种静音检测方法和装置
JP2001005474A (ja) 音声符号化装置及び方法、入力信号判定方法、音声復号装置及び方法、並びにプログラム提供媒体
CN101393741A (zh) 一种宽带音频编解码器中的音频信号分类装置及分类方法
JPH10105194A (ja) ピッチ検出方法、音声信号符号化方法および装置
US6564182B1 (en) Look-ahead pitch determination
JP3331297B2 (ja) 背景音/音声分類方法及び装置並びに音声符号化方法及び装置
US6915257B2 (en) Method and apparatus for speech coding with voiced/unvoiced determination
CN106463140A (zh) 具有语音信息的改进型帧丢失矫正
CN101393744A (zh) 调整门限值的方法及检测模块
Zhang et al. A CELP variable rate speech codec with low average rate
JPH08305388A (ja) 音声区間検出装置

Legal Events

Date Code Title Description
PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase: ORIGINAL CODE: 0009012
17P Request for examination filed: effective 20090608
AK Designated contracting states: kind code of ref document: A1; designated state(s): AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HU IE IS IT LI LT LU LV MC MT NL PL PT RO SE SI SK TR
DAX Request for extension of the european patent (deleted)
A4 Supplementary search report drawn up and despatched: effective 20101223
RIC1 Information provided on ipc code assigned before grant: Ipc: G10L 11/02 20060101AFI20080710BHEP
17Q First examination report despatched: effective 20110524
GRAP Despatch of communication of intention to grant a patent: ORIGINAL CODE: EPIDOSNIGR1
RTI1 Title (correction): METHOD AND APPARATUS FOR CLASSIFYING SOUND SIGNALS
GRAS Grant fee paid: ORIGINAL CODE: EPIDOSNIGR3
GRAA (expected) grant: ORIGINAL CODE: 0009210
AK Designated contracting states: kind code of ref document: B1; designated state(s): AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HU IE IS IT LI LT LU LV MC MT NL PL PT RO SE SI SK TR
REG Reference to a national code: GB, legal event code FG4D
REG Reference to a national code: CH, legal event code EP
REG Reference to a national code: AT, legal event code REF, ref document number 581291, kind code T, effective 20121115
REG Reference to a national code: IE, legal event code FG4D
REG Reference to a national code: DE, legal event code R096, ref document number 602007026318, effective 20121220
REG Reference to a national code: AT, legal event code MK05, ref document number 581291, kind code T, effective 20121024
REG Reference to a national code: NL, legal event code VDEP, effective 20121024
PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]: SE, NL, FI (LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT, effective 20121024); IS (same ground, effective 20130224)
PG25 Lapsed in a contracting state: CY, BE, LV, SI, PL (translation/fee lapse, effective 20121024); GR (effective 20130125); PT (effective 20130225)
PG25 Lapsed in a contracting state: AT (translation/fee lapse, effective 20121024)
PG25 Lapsed in a contracting state: BG (translation/fee lapse, effective 20130124); SK, CZ, DK, EE (effective 20121024); MC (LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES, effective 20121231)
REG Reference to a national code: CH, legal event code PL
PG25 Lapsed in a contracting state: IT, RO (translation/fee lapse, effective 20121024)
PLBE No opposition filed within time limit: ORIGINAL CODE: 0009261
STAA Information on the status of an ep patent application or granted ep patent: STATUS: NO OPPOSITION FILED WITHIN TIME LIMIT
REG Reference to a national code: IE, legal event code MM4A
26N No opposition filed: effective 20130725
PG25 Lapsed in a contracting state: CH, LI (non-payment of due fees, effective 20121231); IE (non-payment of due fees, effective 20121226); ES (translation/fee lapse, effective 20130204)
REG Reference to a national code: DE, legal event code R097, ref document number 602007026318, effective 20130725
PG25 Lapsed in a contracting state: MT (translation/fee lapse, effective 20121024)
PG25 Lapsed in a contracting state: TR (translation/fee lapse, effective 20121024)
PG25 Lapsed in a contracting state: LU (non-payment of due fees, effective 20121226)
PG25 Lapsed in a contracting state: LT (translation/fee lapse, effective 20121024); HU (translation/fee lapse, effective 20071226)
REG Reference to a national code: FR, legal event code PLFP, year of fee payment 9
REG Reference to a national code: FR, legal event code PLFP, year of fee payment 10
REG Reference to a national code: FR, legal event code PLFP, year of fee payment 11
P01 Opt-out of the competence of the unified patent court (upc) registered: effective 20230524
PGFP Annual fee paid to national office [announced via postgrant information from national office to epo]: GB, payment date 20231102, year of fee payment 17
PGFP Annual fee paid to national office: FR, payment date 20231108, year of fee payment 17; DE, payment date 20231031, year of fee payment 17