EP2485029B1 - Audio signal switching method and device - Google Patents


Info

Publication number
EP2485029B1
Authority
EP
European Patent Office
Prior art keywords
frequency band
speech
weight
signal
band signal
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
EP11774406.0A
Other languages
German (de)
English (en)
French (fr)
Other versions
EP2485029A4 (en)
EP2485029A1 (en)
Inventor
Zexin Liu
Lei Miao
Chen Hu
Wenhai Wu
Yue Lang
Qing Zhang
Current Assignee
Huawei Technologies Co Ltd
Original Assignee
Huawei Technologies Co Ltd
Priority date
Filing date
Publication date
Application filed by Huawei Technologies Co Ltd
Priority to EP17151713.9A (EP3249648B1)
Publication of EP2485029A1
Publication of EP2485029A4
Application granted
Publication of EP2485029B1


Classifications

    • G — PHYSICS
    • G10 — MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L — SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L 19/00 — Speech or audio signals analysis-synthesis techniques for redundancy reduction, e.g. in vocoders; coding or decoding of speech or audio signals, using source filter models or psychoacoustic analysis
    • G10L 19/04 — G10L 19/00 using predictive techniques
    • G10L 19/16 — Vocoder architecture
    • G10L 19/18 — Vocoders using multiple modes
    • G10L 19/24 — Variable rate codecs, e.g. for generating different qualities using a scalable representation such as hierarchical encoding or layered encoding
    • G10L 19/08 — Determination or coding of the excitation function; determination or coding of the long-term prediction parameters
    • G10L 19/12 — G10L 19/08 where the excitation function is a code excitation, e.g. in code excited linear prediction [CELP] vocoders
    • G10L 21/00 — Speech or voice signal processing techniques to produce another audible or non-audible signal, e.g. visual or tactile, in order to modify its quality or its intelligibility
    • G10L 21/02 — Speech enhancement, e.g. noise reduction or echo cancellation
    • G10L 21/038 — Speech enhancement using band spreading techniques

Definitions

  • the present invention relates to communication technologies, and in particular, to a method and an apparatus for switching speech or audio signals.
  • the network may intercept the bit stream of the speech or audio signals transmitted from an encoder to the network with different bit rates, so that the decoder may decode the speech or audio signals with different bandwidths from the intercepted bit stream.
  • Bidirectional switching between a narrow frequency band speech or audio signal and a wide frequency band speech or audio signal may occur during the process of transmitting speech or audio signals.
  • the narrow frequency band signal is switched to a wide frequency band signal with only a low frequency band component through up-sampling and low-pass filtering; the wide frequency band speech or audio signal includes both a low frequency band signal component and a high frequency band signal component.
  • The inventor discovers at least the following problems in the prior art: because high frequency band signal information is available in wide frequency band speech or audio signals but is absent in narrow frequency band speech or audio signals, when speech or audio signals with different bandwidths are switched, an energy jump may occur in the speech or audio signals, resulting in an uncomfortable listening experience and thus reducing the quality of the audio signals received by a user.
  • N speech frames after the switch may be reconstructed with a TDBWE or TDAC decoding algorithm.
  • M may be any value less than N.
  • The higher-band signal components of the N speech frames are shaped in the time domain to form a processed higher-band signal component s_HB^ts(n), which is then combined with the decoded lower-band signal component s_LB^post(n) through synthesis filtering to reconstruct a time-varying fade-out signal.
  • ITU-T Recommendation G.729.1 (05/2006), "G.729-based embedded variable bit-rate coder: An 8-32 kbit/s scalable wideband coder bitstream interoperable with G.729", discloses a fade-in of the higher-band signal after a narrowband-to-wideband switch, while the transition from wideband to narrowband is instantaneous.
  • Embodiments of the present invention provide a method and an apparatus for switching speech or audio signals to smoothly switch speech or audio signals between different bandwidths, thereby improving the quality of audio signals received by a user.
  • a method for switching the bandwidth of speech or audio signals includes:
  • An apparatus for switching speech or audio signals includes:
  • the first high frequency band signal of the current frame of speech or audio signal is processed according to the second high frequency band signal of the previous M frame of speech or audio signals, so that the second high frequency band signal of the previous M frame of speech or audio signals can be smoothly switched to the processed first high frequency band signal; the processed first high frequency band signal and the first low frequency band signal are synthesized into a wide frequency band signal.
  • These speech or audio signals can be smoothly switched, thus reducing the adverse impact of the energy jump on the subjective audio quality of the speech or audio signals and improving the quality of the speech or audio signals received by the user.
  • FIG. 1 is a flowchart of the first embodiment of a method for switching speech or audio signals. As shown in FIG. 1, by using the method for switching speech or audio signals, when a switching of a speech or audio signal occurs, each frame after the switching frame is processed according to the following steps:
  • Step 102 Synthesize the processed first high frequency band signal and the first low frequency band signal of the current frame of speech or audio signal into a wide frequency band signal.
  • the previous M frame of speech or audio signals refer to M frame of speech or audio signals before the current frame.
  • The L frames of speech or audio signals before the switching refer to the L frames of speech or audio signals before the switching frame when a switching of a speech or audio signal occurs. If the current speech frame is a wide frequency band signal but the previous speech frame is a narrow frequency band signal, or if the current speech frame is a narrow frequency band signal but the previous speech frame is a wide frequency band signal, the speech or audio signal is switched and the current speech frame is the switching frame.
  • the first high frequency band signal of the current frame of speech or audio signal is processed according to the second high frequency band signal of the previous M frame of speech or audio signals, so that the second high frequency band signal of the previous M frame of speech or audio signals can be smoothly switched to the processed first high frequency band signal.
  • the high frequency band signal of these speech or audio signals can be smoothly switched.
  • the processed first high frequency band signal and the first low frequency band signal are synthesized into a wide frequency band signal; the wide frequency band signal is transmitted to a user terminal, so that the user enjoys a high quality speech or audio signal.
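As a rough illustration of the synthesis step (not the patent's actual codec, which would use a synthesis filter bank such as the QMF in G.729.1), the wideband frame can be viewed as the low-band component followed by the processed high-band component. The function name and the toy coefficients below are assumptions for the sketch.

```python
def synthesize_wideband(low_band, high_band):
    """Combine a low-band and a (possibly null) high-band component
    into one wide frequency band frame, here modeled as simple
    concatenation of band coefficients."""
    return list(low_band) + list(high_band)

low = [0.9, 0.7, 0.5, 0.3]   # toy low-band coefficients
high = [0.2, 0.1]            # toy processed high-band coefficients

wide = synthesize_wideband(low, high)                   # genuine wideband frame
narrow_as_wide = synthesize_wideband(low, [0.0, 0.0])   # null high band
```

The second call mirrors the narrowband case described in the text: the output is formally a wideband frame, but its high frequency band is null.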
  • FIG. 2 is a flowchart of the second embodiment of the method for switching speech or audio signals. As shown in FIG. 2 , the method includes the following steps:
  • the first frequency band speech or audio signal in this embodiment may be a wide frequency band speech or audio signal or a narrow frequency band speech or audio signal.
  • The operation may be executed according to the following two cases:
    1. If the first frequency band speech or audio signal is a wide frequency band speech or audio signal, the low frequency band signal and the high frequency band signal of the wide frequency band speech or audio signal are synthesized into a wide frequency band signal.
    2. If the first frequency band speech or audio signal is a narrow frequency band speech or audio signal, the low frequency band signal and the high frequency band signal of the narrow frequency band speech or audio signal are synthesized into a wide frequency band signal. In this case, although the signal is a wide frequency band signal, the high frequency band is null.
  • Step 201 When the speech or audio signal is switched, weight the first high frequency band signal of the current frame of speech or audio signal and the second high frequency band signal of the previous M frame of speech or audio signals to obtain a processed first high frequency band signal.
  • M is greater than or equal to 1.
  • the first high frequency band signal of the current frame of speech or audio signal is processed according to the second high frequency band signal of the previous M frame of speech or audio signals, so that the second high frequency band signal of the previous M frame of speech or audio signals can be smoothly switched to the processed first high frequency band signal.
  • the wide frequency band speech or audio signal is switched to the narrow frequency band speech or audio signal, because the high frequency band signal information corresponding to the narrow frequency band speech or audio signal is null, the component of the high frequency band signal corresponding to the narrow frequency band speech or audio signal needs to be restored to enable the wide frequency band speech or audio signal to be smoothly switched to the narrow frequency band speech or audio signal.
  • the narrow frequency band speech or audio signal is switched to the wide frequency band speech or audio signal
  • the high frequency band signal of the wide frequency band speech or audio signal is not null
  • the energy of the high frequency band signals of consecutive multiple-frame wide frequency band speech or audio signals after the switching must be weakened to enable the narrow frequency band speech or audio signal to be smoothly switched to the wide frequency band speech or audio signal, so that the high frequency band signal of the wide frequency band speech or audio signal is gradually switched to a real high frequency band signal.
  • the first high frequency band signal and the second high frequency band signal of the previous M frame of speech or audio signals may be directly weighted. The weighted result is the processed first high frequency band signal.
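The direct weighting described above can be sketched as a per-sample cross-fade. The function name, the averaging over the previous M high-band frames, and the concrete weight value are illustrative assumptions, not the patent's exact formula.

```python
def crossfade_high_band(current_hb, prev_hb_frames, w_prev):
    """Weight the first high-band signal of the current frame against
    the second high-band signal of the previous M frames (modeled here
    as their sample-wise average). w_prev is the weight of the old high
    band; as it decays toward 0 over successive frames, the old high
    band fades out and the new one fades in."""
    m = len(prev_hb_frames)
    avg_prev = [sum(frame[i] for frame in prev_hb_frames) / m
                for i in range(len(current_hb))]
    return [w_prev * p + (1.0 - w_prev) * c
            for p, c in zip(avg_prev, current_hb)]

# Narrow-to-wide switch: the new high band fades in from the null old one.
processed = crossfade_high_band([0.8, 0.4], [[0.0, 0.0]], w_prev=0.5)
```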
  • Step 202 Synthesize the processed first high frequency band signal and the first low frequency band signal of the current frame of speech or audio signal into a wide frequency band signal.
  • the second high frequency band signal of the previous M frame of speech or audio signals can be smoothly switched to the processed first high frequency band signal of the current frame; then, in step 202, the processed first high frequency band signal and the first low frequency band signal of the current frame of speech or audio signal are synthesized into a wide frequency band signal, so that the speech or audio signals received by the user are always wide frequency band speech or audio signals. In this way, speech or audio signals with different bandwidths are smoothly switched, which helps improve the quality of audio signals received by the user.
  • the first high frequency band signal of the current frame of speech or audio signal is processed according to the second high frequency band signal of the previous M frame of speech or audio signals, so that the second high frequency band signal of the previous M frame of speech or audio signals can be smoothly switched to the processed first high frequency band signal.
  • the high frequency band signal of these speech or audio signals can be smoothly switched.
  • the processed first high frequency band signal and the first low frequency band signal are synthesized into a wide frequency band signal; the wide frequency band signal is transmitted to a user terminal, so that the user enjoys a high quality speech or audio signal.
  • speech or audio signals with different bandwidths can be switched smoothly, thus reducing the impact of the sudden energy change on the subjective audio quality of the speech or audio signals and improving the quality of audio signals received by the user.
  • the first high frequency band signal and the first low frequency band signal of the current frame of speech or audio signal are synthesized into a wide frequency band signal, so that the user can obtain high quality audio signal.
  • When a switching from a wide frequency band speech or audio signal to a narrow frequency band speech or audio signal occurs, step 201 includes the following steps:
  • the speech or audio signal may be divided into fine structure information and envelope information, so that the speech or audio signal can be restored according to the fine structure information and envelope information.
  • the high frequency band signal needed by the current narrow frequency band speech or audio signal needs to be restored so as to implement smooth switching between speech or audio signals.
  • The fine structure information and envelope information corresponding to the first high frequency band signal of the narrow frequency band speech or audio signal are predicted.
  • The first low frequency band signal of the current frame of speech or audio signal may be classified in step 301, and then the fine structure information and envelope information corresponding to the first high frequency band signal are predicted according to the signal type of the first low frequency band signal.
  • the narrow frequency band speech or audio signal of the current frame may be a harmonic signal, or a non-harmonic signal or a transient signal.
  • the fine structure information and envelope information corresponding to the type of the narrow frequency band speech or audio signal can be obtained, so that the fine structure information and envelope information corresponding to the high frequency band signal can be predicted more accurately.
  • the method for switching speech or audio signals in this embodiment does not limit the signal type of the narrow frequency band speech or audio signal.
  • Step 302 Weight the predicted envelope information and the previous M frame envelope information corresponding to the second high frequency band signal of the previous M frame of speech or audio signals to obtain first envelope information corresponding to the first high frequency band signal.
  • the first envelope information corresponding to the first high frequency band signal may be generated according to the predicted envelope information and the previous M frame envelope information corresponding to the second high frequency band signal of the previous M frame of speech or audio signals.
  • the process of generating the first envelope information corresponding to the first high frequency band signal in step 302 may be implemented by using the following two modes:
  • the first low frequency band signal of the current frame of speech or audio signal is compared with the low frequency band signal of the previous N frame of speech or audio signals to obtain a correlation coefficient between the first low frequency band signal of the current frame of speech or audio signal and the low frequency band signal of the previous N frame of speech or audio signals.
  • the correlation between the first low frequency band signal of the current frame of speech or audio signal and the low frequency band signal of the previous N frame of speech or audio signals may be determined by judging the difference between a frequency band of the first low frequency band signal of the current frame of speech or audio signal and the same frequency band of the low frequency band signal of the previous N frame of speech or audio signals in terms of the energy size or the information type, so that the desired correlation coefficient can be calculated.
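One concrete way to realize such an energy-based correlation measure (the text does not fix a formula, so the ratio used below is only an assumption) is to compare the low-band energy of the current frame with the average low-band energy of the previous N frames:

```python
def band_energy(samples):
    """Sum of squares over one frame's low-band samples."""
    return sum(x * x for x in samples)

def correlation_coefficient(cur_low, prev_low_frames):
    """Energy-ratio correlation between the current frame's first low
    band and the previous N frames' low bands: similar energies give a
    value near 1, very different energies a value near 0."""
    e_cur = band_energy(cur_low)
    e_prev = sum(band_energy(f) for f in prev_low_frames) / len(prev_low_frames)
    hi = max(e_cur, e_prev)
    return min(e_cur, e_prev) / hi if hi > 0.0 else 1.0
```

A signal-type comparison, as mentioned in the text, could be folded in as an additional factor; it is omitted here for brevity.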
  • the previous N frame of speech or audio signals may be narrow frequency band speech or audio signals, wide frequency band speech or audio signals, or hybrid signals of narrow frequency band speech or audio signals and wide frequency band speech or audio signals.
  • Step 402 Judge whether the correlation coefficient is within a given first threshold range.
  • After the correlation coefficient is calculated in step 401, whether the correlation coefficient is within the given first threshold range is judged.
  • The purpose of calculating the correlation coefficient is to judge whether the current frame of speech or audio signal is gradually switched from the previous N frames of speech or audio signals or suddenly switched from them. That is, the purpose is to judge whether their characteristics are the same and then determine the weight of the high frequency band signal of the previous frame in the process of predicting the high frequency band signal of the current speech or audio signal.
  • For example, if the first low frequency band signal of the current frame of speech or audio signal has the same energy as the low frequency band signal of the previous frame of speech or audio signal and their signal types are the same, it indicates that the previous frame of speech or audio signal is highly correlated with the current frame of speech or audio signal.
  • the high frequency band envelope information or transitional envelope information corresponding to the previous frame of speech or audio signal occupies a larger weight; otherwise, if there is a huge difference between the first low frequency band signal of the current frame of speech or audio signal and the low frequency band signal of the previous frame of speech or audio signal in terms of energy and their signal types are different, it indicates that the previous speech or audio signal is lowly correlated with the current frame of speech or audio signal. Therefore, to accurately restore the first envelope information corresponding to the current frame of speech or audio signal, the high frequency band envelope information or transitional envelope information corresponding to the previous frame of speech or audio signal occupies a smaller weight.
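The qualitative rule above — higher low-band correlation, larger weight for the previous frame's high-band (or transitional) envelope — can be sketched with a simple mapping. The threshold and weight values are assumptions; the patent only fixes the direction of the relationship.

```python
def prev_frame_envelope_weight(corr, corr_threshold=0.75,
                               high_weight=0.8, low_weight=0.2):
    """Give the previous frame's high-band or transitional envelope a
    larger weight when the low-band correlation is high, and a smaller
    weight when it is low (illustrative values)."""
    return high_weight if corr >= corr_threshold else low_weight
```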
  • Step 403 If the correlation coefficient is not within the given first threshold range, weight according to a set first weight 1 and a set first weight 2 to calculate the first envelope information.
  • the first weight 1 refers to the weight value of the previous frame envelope information corresponding to the high frequency band signal of the previous frame of speech or audio signal
  • The first weight 2 refers to the weight value of the predicted envelope information.
  • the correlation coefficient is determined to be not within the given first threshold range in step 402, it indicates that the current frame of speech or audio signal is slightly correlated with the previous N frame of speech or audio signals. Therefore, the previous M frame envelope information or transitional envelope information corresponding to the first frequency band speech or audio signal of the previous M frames or the high frequency band envelope information corresponding to the previous frame of speech or audio signal has a slight impact on the first envelope information.
  • the previous M frame envelope information or transitional envelope information corresponding to the first frequency band speech or audio signal of the previous M frames or the high frequency band envelope information corresponding to the previous frame of speech or audio signal occupies a smaller weight.
  • the first envelope information of the current frame may be calculated according to the set first weight 1 and the first weight 2.
  • the first weight 1 refers to the weight value of the envelope information corresponding to the high frequency band signal of the previous frame of speech or audio signal.
  • the previous frame of speech or audio signal may be a wide frequency band speech or audio signal or a processed narrow frequency band speech or audio signal.
  • the previous frame of speech or audio signal is the wide frequency band speech or audio signal
  • the first weight 2 refers to the weight value of the predicted envelope information.
  • the product of the predicted envelope information and the first weight 2 is added to the product of the previous frame envelope information and the first weight 1, and the weighted sum is the first envelope information of the current frame.
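The weighted sum of step 403, written out directly. The per-band envelope values and the concrete weights are illustrative; the patent only requires that the two weights sum to 1.

```python
def first_envelope(prev_env, predicted_env, w1, w2):
    """first envelope = previous frame envelope * first weight 1
    + predicted envelope * first weight 2, computed per envelope band;
    the two weights must sum to 1."""
    assert abs((w1 + w2) - 1.0) < 1e-9
    return [w1 * p + w2 * q for p, q in zip(prev_env, predicted_env)]

env = first_envelope([4.0, 2.0], [2.0, 1.0], w1=0.25, w2=0.75)
```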
  • Subsequently transmitted speech or audio signals are processed according to this method and these weights.
  • the first envelope information corresponding to the speech or audio signal is restored until a speech or audio signal is switched again.
  • Step 404 If the correlation coefficient is within the given first threshold range, weight according to a set second weight 1 and a set second weight 2 to calculate the transitional envelope information.
  • the second weight 1 refers to the weight value of the envelope information before the switching, and the second weight 2 refers to the weight value of the previous M frame envelope information, where M is greater than or equal to 1.
  • the current frame of speech or audio signal has characteristics similar to those of the previous consecutive N frame of speech or audio signals, and the first envelope information corresponding to the current frame of speech or audio signal is greatly affected by the envelope information of the previous consecutive N frame of speech or audio signals.
  • the transitional envelope information corresponding to the current frame of speech or audio signal needs to be calculated according to the previous M frame envelope information and the envelope information before the switching.
  • the first envelope information of the current frame of speech or audio signal is restored, the previous M frame envelope information and the previous L frame envelope information before the switching should occupy a larger weight. Then, the first envelope information is calculated according to the transitional envelope information.
  • the second weight 1 refers to the weight value of the envelope information before the switching
  • the second weight 2 refers to the weight value of the previous M frame envelope information.
  • the product of the envelope information before the switching and the second weight 1 is added to the product of the previous M frame envelope information and the second weight 2, and the weighted value is the transitional envelope information.
  • Step 405 Decrease the second weight 1 as per the first weight step, and increase the second weight 2 as per the first weight step.
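Steps 404 and 405 can be sketched as follows; the weight step value is an assumption, and clamping to [0, 1] keeps the weights valid as they are adjusted frame by frame.

```python
def transitional_envelope(env_before_switch, prev_m_env, w2_1, w2_2):
    """Step 404: transitional envelope = envelope before the switching
    * second weight 1 + previous M frame envelope * second weight 2
    (the two weights sum to 1)."""
    return [w2_1 * b + w2_2 * p
            for b, p in zip(env_before_switch, prev_m_env)]

def step_second_weights(w2_1, w2_2, weight_step):
    """Step 405: decrease the second weight 1 and increase the second
    weight 2 by the first weight step, keeping their sum at 1."""
    return max(w2_1 - weight_step, 0.0), min(w2_2 + weight_step, 1.0)

trans = transitional_envelope([4.0], [2.0], 0.5, 0.5)
w2_1, w2_2 = step_second_weights(0.5, 0.5, 0.1)
```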
  • Step 406 Judge whether a set third weight 1 is greater than the first weight 1.
  • the third weight 1 refers to the weight value of the transitional envelope information.
  • The impact of the transitional envelope information on the first envelope information of the current frame may be determined by comparing the third weight 1 with the first weight 1.
  • The transitional envelope information is calculated according to the previous M frame envelope information and the envelope information before the switching. Therefore, the third weight 1 actually represents the degree to which the first envelope information is affected by the envelope information before the switching.
  • Step 407 If the third weight 1 is not greater than the first weight 1, weight according to the set first weight 1 and the first weight 2 to calculate the first envelope information.
  • When the third weight 1 is determined to be smaller than or equal to the first weight 1 in step 406, it indicates that the current frame of speech or audio signal is relatively far from the L frames of speech or audio signals before the switching and that the first envelope information is mainly affected by the previous M frame envelope information. Therefore, the first envelope information of the current frame may be calculated according to the set first weight 1 and the first weight 2.
  • Step 408 If the third weight 1 is greater than the first weight 1, weight according to the set third weight 1 and the third weight 2 to calculate the first envelope information.
  • the third weight 1 refers to the weight value of the transitional envelope information
  • the third weight 2 refers to the weight value of the predicted envelope information.
  • the third weight 1 is determined to be greater than the first weight 1 in step 406, it indicates that the current frame of speech or audio signal is closer to the L frame of speech or audio signals before the switching and that the first envelope information is greatly affected by the envelope information before the switching. Therefore, the first envelope information of the current frame needs to be calculated according to the transitional envelope information.
  • the third weight 1 refers to the weight value of the transitional envelope information
  • the third weight 2 refers to the weight value of the predicted envelope information.
  • the product of the transitional envelope information and the third weight 1 is added to the product of the predicted envelope information and the third weight 2, and the weighted value is the first envelope information.
  • Step 409 Decrease the third weight 1 as per the second weight step, and increase the third weight 2 as per the second weight step until the third weight 1 is equal to 0.
  • The purpose of modifying the third weight 1 and the third weight 2 in step 409 is the same as that of modifying the second weight 1 and the second weight 2 in step 405, that is, to perform adaptive adjustment on the third weight 1 and the third weight 2 so as to calculate the first envelope information more accurately as the impact of the L frames of speech or audio signals before the switching on the subsequently transmitted speech or audio signals decreases gradually. Because that impact decreases gradually, the value of the third weight 1 gradually becomes smaller while the value of the third weight 2 gradually becomes larger, thus weakening the impact of the envelope information before the switching on the first envelope information.
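Iterated over successive frames, steps 408 and 409 amount to a gradual fade-out of the pre-switch envelope's influence. The loop below is a sketch with assumed envelope values and an assumed weight step:

```python
def fade_out_pre_switch_envelope(transitional_env, predicted_env,
                                 w3_1, weight_step):
    """While the third weight 1 is above 0, compute the first envelope
    as transitional envelope * third weight 1 + predicted envelope *
    third weight 2 (step 408), then decay the third weight 1 by the
    second weight step (step 409). Returns one first-envelope vector
    per frame."""
    envelopes = []
    while w3_1 > 0.0:
        w3_2 = 1.0 - w3_1          # the two weights always sum to 1
        envelopes.append([w3_1 * t + w3_2 * p
                          for t, p in zip(transitional_env, predicted_env)])
        w3_1 = max(w3_1 - weight_step, 0.0)
    return envelopes

frames = fade_out_pre_switch_envelope([4.0], [0.0], w3_1=0.5, weight_step=0.25)
```

With these toy values the pre-switch envelope contribution halves on each frame and vanishes after two frames, which is the behavior the step sequence is designed to produce.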
  • the sum of the first weight 1 and the first weight 2 is equal to 1; the sum of the second weight 1 and the second weight 2 is equal to 1; the sum of the third weight 1 and the third weight 2 is equal to 1; the initial value of the third weight 1 is greater than the initial value of the first weight 1; and the first weight 1 and the first weight 2 are fixed constants.
  • The second weight 1 and the second weight 2 in this embodiment actually represent the percentages of the envelope information before the switching and the previous M frame envelope information in the first envelope information of the current frame. If the current frame of speech or audio signal is close to the L frames of speech or audio signals before the switching and their correlation is high, the percentage of the envelope information before the switching is high, while the percentage of the previous M frame envelope information is low.
  • If the current frame of speech or audio signal is relatively far from the L frames of speech or audio signals before the switching, it indicates that the speech or audio signal is stably transmitted on the network; or if the current frame of speech or audio signal is slightly correlated with the L frames of speech or audio signals before the switching, it indicates that the characteristics of the current frame of speech or audio signal have already changed. Therefore, if the current frame of speech or audio signal is slightly affected by the L frames of speech or audio signals before the switching, the percentage of the envelope information before the switching is low.
  • step 404 may be executed after step 405. That is, the second weight 1 and the second weight 2 may be modified firstly, and then the transitional envelope information is calculated according to the second weight 1 and the second weight 2.
  • step 408 may be executed after step 409. That is, the third weight 1 and the third weight 2 may be modified firstly, and then the first envelope information is calculated according to the third weight 1 and the third weight 2.
  • the relationship between a frequency band of the first low frequency band signal of the current frame of speech or audio signal and the same frequency band of the low frequency band signal of the previous frame of speech or audio signal is calculated.
  • "corr" may be used to indicate the correlation coefficient. This correlation coefficient is obtained according to the energy relationship between the first low frequency band signal of the current frame of speech or audio signal and the low frequency band signal of the previous frame of speech or audio signal. If the energy difference is small, the "corr" is large; otherwise, the "corr" is small. For the specific process, see the correlation calculation for the previous N frame of speech or audio signals in step 401.
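The energy-based correlation measure described above is not given in closed form in the text. A minimal Python sketch, assuming "corr" is the ratio of the smaller to the larger low-band energy (the function name and the exact formula are illustrative, not taken from the patent):

```python
def energy_correlation(cur_low, prev_low):
    """Illustrative 'corr' measure: compares the energy of the current
    frame's low band with that of the previous frame's low band.
    A small energy difference yields a corr close to 1; a large
    difference yields a corr close to 0."""
    e_cur = sum(x * x for x in cur_low)
    e_prev = sum(x * x for x in prev_low)
    if max(e_cur, e_prev) == 0:
        return 1.0  # both frames silent: treat as fully correlated
    return min(e_cur, e_prev) / max(e_cur, e_prev)
```

With this choice, identical frames give corr = 1, and a tenfold amplitude difference gives corr = 0.01, so the second threshold range c1 to c2 can be checked directly against the returned value.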
  • Step 502 Judge whether the correlation coefficient is within a given second threshold range.
  • the second threshold range may be represented by c1 to c2 in this embodiment.
  • Step 503 If the correlation coefficient is not within the given second threshold range, weight according to the set first weight 1 and the first weight 2 to calculate the first envelope information.
  • the first weight 1 refers to the weight value of the previous frame envelope information corresponding to the high frequency band signal of the previous frame of speech or audio signal
  • the first weight 2 refers to the weight value of the predicted envelope information.
  • the first weight 1 and the first weight 2 are fixed constants.
  • the first envelope information corresponding to the current frame of speech or audio signal is slightly affected by the envelope information of the previous frame of speech or audio signal before the switching. Therefore, the first envelope information of the current frame is calculated according to the set first weight 1 and the first weight 2. The product of the predicted envelope information and the first weight 2 is added to the product of the previous frame envelope information and the first weight 1, and the weighted sum is the first envelope information of the current frame.
  • subsequently transmitted narrowband speech or audio signals are processed with this method and these weights, and the first envelope information corresponding to the narrowband speech or audio signal is restored in this way until speech or audio signals with different bandwidths are switched again.
  • the first weight 1 in this embodiment may be represented by a1; the first weight 2 may be represented by b1; the previous frame envelope information may be represented by pre_fenv; the predicted envelope information may be represented by fenv; and the first envelope information may be represented by cur_fenv.
  • Step 504 If the correlation coefficient is within the second threshold range, judge whether the set second weight 1 is greater than the first weight 1.
  • the second weight 1 refers to the weight value of the envelope information before the switching that corresponds to the high frequency band signal of the previous frame of speech or audio signal before the switching.
  • the degree of the impact of the envelope information before the switching and the previous frame envelope information on the first envelope information of the current frame may be obtained by comparing the second weight 1 with the first weight 1.
  • Step 505 If the second weight 1 is not greater than the first weight 1, weight according to the set first weight 1 and the first weight 2 to calculate the first envelope information.
  • Step 506 If the second weight 1 is greater than the first weight 1, weight according to the second weight 1 and the set second weight 2 to calculate the first envelope information.
  • the second weight 2 refers to the weight value of the predicted envelope information.
  • the second weight 1 may be represented by a2, and the second weight 2 may be represented by b2.
  • the first envelope information of the current frame may be calculated according to the set second weight 1 and the second weight 2.
  • the product of the predicted envelope information and the second weight 2 is added to the product of the envelope information before the switching and the second weight 1, and the weighted sum is the first envelope information of the current frame.
  • the envelope information before the switching may be represented by con_fenv.
  • Step 507 Decrease the second weight 1 as per the second weight step, and increase the second weight 2 as per the second weight step.
  • the impact of a speech or audio signal before the switching on the subsequent frame of speech or audio signal is gradually decreased.
  • adaptive adjustment needs to be performed on the second weight 1 and the second weight 2.
  • the impact of the speech or audio signal before the switching on the subsequent frame of speech or audio signal gradually decreases, while the impact of the previous frame of speech or audio signal, which is close to the current frame of speech or audio signal, gradually increases. Therefore, the value of the second weight 1 gradually decreases, while the value of the second weight 2 gradually increases. In this way, the impact of the envelope information before the switching on the first envelope information is weakened, while the impact of the predicted envelope information on the first envelope information is enhanced.
  • the sum of the first weight 1 and the first weight 2 is equal to 1; the sum of the second weight 1 and the second weight 2 is equal to 1; the initial value of the second weight 1 is greater than the initial value of the first weight 1.
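Steps 502 to 507 above can be sketched as follows in Python. The symbol names (corr, pre_fenv, con_fenv, fenv, a1, a2, c1, c2) follow the text; the numeric defaults, the per-frame state dictionary, and the floor applied to the decaying second weight 1 are illustrative assumptions:

```python
def first_envelope(corr, pre_fenv, con_fenv, fenv, state,
                   c1=0.5, c2=1.0, a1=0.5, step=0.1):
    """Sketch of steps 502-507. state['a2'] holds the adaptive second
    weight 1 across frames; its initial value must exceed a1."""
    b1 = 1.0 - a1                    # first weight 2: a1 + b1 == 1
    a2 = state['a2']
    if not (c1 <= corr <= c2):       # step 503: outside the second threshold range
        return a1 * pre_fenv + b1 * fenv
    if a2 <= a1:                     # steps 504-505: switching influence has faded
        return a1 * pre_fenv + b1 * fenv
    # step 506: weight with the second weights (a2 + b2 == 1)
    cur_fenv = a2 * con_fenv + (1.0 - a2) * fenv
    # step 507: decrease a2 by the second weight step; b2 grows implicitly
    state['a2'] = max(a2 - step, a1)
    return cur_fenv
```

For example, with state = {'a2': 0.9}, a correlated frame (corr = 0.8) yields 0.9·con_fenv + 0.1·fenv, after which a2 decays to 0.8; a poorly correlated frame falls back to the fixed first weights.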
  • Step 303 Generate a processed first high frequency band signal according to the first envelope information and the predicted fine structure information.
  • the processed first high frequency band signal may be generated according to the first envelope information and predicted fine structure information, so that the second high frequency band signal can be smoothly switched to the processed first high frequency band signal.
  • the processed first high frequency band signal of the current frame is obtained according to the predicted fine structure information and the first envelope information.
  • the second high frequency band signal of the wide frequency band speech or audio signal before the switching can be smoothly switched to the processed first high frequency band signal corresponding to the narrow frequency band speech or audio signal, thus improving the quality of audio signals received by the user.
  • step 202 shown in FIG. 6 includes the following steps:
  • the first high frequency band signal of the narrowband speech or audio signal is null.
  • the energy of the processed first high frequency band signal is attenuated by frames until the attenuation coefficient reaches a given threshold after the number of frames of the wide frequency band signal extended from the narrow frequency band speech or audio signal reaches a given number of frames.
  • the interval between the current frame of speech or audio signal and the speech or audio signal of a frame before the switching may be obtained according to the current frame of speech or audio signal and the speech or audio signal of the frame before the switching.
  • the number of frames of the narrow frequency band speech or audio signal may be recorded by using a counter, where the number of frames may be a predetermined value greater than or equal to 0.
  • Step 602 If the processed first high frequency band signal does not need to be attenuated, synthesize the processed first high frequency band signal and the first low frequency band signal into a wide frequency band signal.
  • the processed first high frequency band signal and the first low frequency band signal are directly synthesized into a wide frequency band signal.
  • Step 603 If the processed first high frequency band signal needs to be attenuated, judge whether the attenuation factor corresponding to the processed first high frequency band signal is greater than the threshold.
  • the initial value of the attenuation factor is 1, and the threshold is greater than or equal to 0 and smaller than 1. If it is determined that the processed first high frequency band signal needs to be attenuated in step 601, whether the attenuation factor corresponding to the processed first high frequency band signal is greater than a given threshold is judged in step 603.
  • Step 604 If the attenuation factor is not greater than the given threshold, multiply the processed first high frequency band signal by the threshold, and synthesize the product and the first low frequency band signal into the wide frequency band signal.
  • the attenuation factor is determined to be not greater than the given threshold in step 603, it indicates that the energy of the processed first high frequency band signal is already attenuated to a certain degree and that the processed first high frequency band signal may not cause negative impacts. In this case, this attenuation ratio may be kept. Then, the processed first high frequency band signal is multiplied by the threshold, and then the product and the first low frequency band signal are synthesized into a wide frequency band signal.
  • Step 605 If the attenuation factor is greater than the given threshold, multiply the processed first high frequency band signal by the attenuation factor, and synthesize the product and the first low frequency band signal into the wide frequency band signal.
  • at the current attenuation factor, the processed first high frequency band signal may still degrade the listening quality and needs to be further attenuated until the attenuation factor reaches the given threshold. Then, the processed first high frequency band signal is multiplied by the attenuation factor, and the product and the first low frequency band signal are synthesized into a wide frequency band signal.
  • Step 606 Modify the attenuation factor to decrease the attenuation factor.
  • the impact of the speech or audio signals before the switching on subsequent narrowband speech or audio signals gradually decreases, and the attenuation factor therefore also decreases gradually.
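Steps 601 to 606 can be sketched as follows, with the band synthesis itself (e.g. a synthesis filter bank) abstracted away so the function returns the band pair. The attenuation-needed flag, the threshold value, and the multiplicative decay used to decrease the attenuation factor are illustrative assumptions:

```python
def synthesize_with_attenuation(high, low, state, threshold=0.2, decay=0.9):
    """Sketch of steps 601-606. The initial attenuation factor is 1 and
    the threshold lies in [0, 1), as the text states. Returns the
    (possibly attenuated) high band together with the low band."""
    if not state['needs_attenuation']:   # step 602: synthesize directly
        return high, low
    factor = state['factor']
    if factor <= threshold:              # step 604: keep this attenuation ratio
        return high * threshold, low
    out = high * factor                  # step 605: attenuate further
    state['factor'] = factor * decay     # step 606: decrease the factor
    return out, low
```

Repeated calls shrink the factor frame by frame until it drops to the threshold, after which the high band is held at the threshold ratio.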
  • an embodiment of obtaining the processed first high frequency band signal through step 201 includes the following steps, as shown in FIG. 7 :
  • the energy of the high frequency band signal of the wide frequency band speech or audio signal needs to be attenuated to ensure that the narrow frequency band speech or audio signal can be smoothly switched to the wide frequency band speech or audio signal.
  • the product of the second high frequency band signal and the fourth weight 1 is added to the product of the first high frequency band signal and the fourth weight 2; the weighted value is the processed first high frequency band signal.
  • Step 702 Decrease the fourth weight 1 as per the third weight step, and increase the fourth weight 2 as per the third weight step until the fourth weight 1 is equal to 0. The sum of the fourth weight 1 and the fourth weight 2 is equal to 1.
  • the fourth weight 1 gradually decreases, while the fourth weight 2 gradually increases until the fourth weight 1 is equal to 0 and the fourth weight 2 is equal to 1. That is, the transmitted speech or audio signals are then entirely wide frequency band speech or audio signals.
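Steps 701 and 702 amount to a per-frame crossfade from the second high frequency band signal to the first high frequency band signal. A sketch, in which the weight step value and the state dictionary holding the fourth weight 1 are illustrative assumptions:

```python
def crossfade_high_band(second_high, first_high, state, step=0.1):
    """Sketch of steps 701-702: mix the second high band (before the
    switching) with the first high band, decreasing the fourth weight 1
    by 'step' each frame until it reaches 0, so that only the wide band
    high frequency signal eventually remains."""
    w1 = state['w1']                                  # fourth weight 1
    out = w1 * second_high + (1.0 - w1) * first_high  # w1 + w2 == 1
    state['w1'] = max(w1 - step, 0.0)                 # decrease w1, increase w2
    return out
```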
  • step 201 may further include the following steps:
  • a fixed parameter may be set to replace the high frequency band signal of the narrow frequency band speech or audio signal, where the fixed parameter is a constant greater than or equal to 0 and smaller than the energy of the first high frequency band signal.
  • the product of the fixed parameter and the fifth weight 1 is added to the product of the first high frequency band signal and the fifth weight 2; the weighted value is the processed first high frequency band signal.
  • Step 802 Decrease the fifth weight 1 as per the fourth weight step, and increase the fifth weight 2 as per the fourth weight step until the fifth weight 1 is equal to 0. The sum of the fifth weight 1 and the fifth weight 2 is equal to 1.
  • the transmitted speech or audio signals are always real wide frequency band speech or audio signals.
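Steps 801 and 802 follow the same crossfade pattern as steps 701 and 702, with a fixed parameter standing in for the missing narrowband high band. Again, the step value and state handling are illustrative assumptions:

```python
def crossfade_from_fixed(fixed_param, first_high, state, step=0.1):
    """Sketch of steps 801-802: the fixed parameter (a constant in
    [0, energy of the first high band)) replaces the narrow band's
    absent high band; the fifth weight 1 decays to 0 by 'step' each
    frame until only the real wide band high frequency signal remains."""
    w1 = state['w1']                                   # fifth weight 1
    out = w1 * fixed_param + (1.0 - w1) * first_high   # w1 + w2 == 1
    state['w1'] = max(w1 - step, 0.0)                  # decrease w1, increase w2
    return out
```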
  • the high frequency band signal of the wide frequency band speech or audio signal is attenuated to obtain a processed high frequency band signal.
  • the high frequency band signal corresponding to the narrow frequency band speech or audio signal before the switching can be smoothly switched to the processed high frequency band signal corresponding to the wide frequency band speech or audio signal, thus helping to improve the quality of audio signals received by the user.
  • the envelope information may also be replaced by other parameters that can represent the high frequency band signal, for example, a linear predictive coding (LPC) parameter or an amplitude parameter.
  • LPC linear predictive coding
  • the program may be stored in a computer readable storage medium.
  • the storage medium may be a read only memory (ROM), a random access memory (RAM), a magnetic disk, or a compact disk-read only memory (CD-ROM).
  • FIG. 9 shows a structure of the first embodiment of an apparatus for switching speech or audio signals.
  • the apparatus for switching speech or audio signals includes a processing module 91 and a first synthesizing module 92.
  • the processing module 91 is adapted to weight the first high frequency band signal of the current frame of speech or audio signal and the second high frequency band signal of the previous M frame of speech or audio signals to obtain a processed first high frequency band signal when a switching of the speech or audio signal occurs.
  • M is greater than or equal to 1.
  • the first synthesizing module 92 is adapted to synthesize the processed first high frequency band signal and the first low frequency band signal of the current frame of speech or audio signal into a wide frequency band signal.
  • the processing module processes the first high frequency band signal of the current frame of speech or audio signal according to the second high frequency band signal of the previous M frame of speech or audio signals, so that the second high frequency band signal can be smoothly switched to the processed first high frequency band signal.
  • the first synthesizing module synthesizes the processed first high frequency band signal and the first low frequency band signal into a wide frequency band signal; the wide frequency band signal is transmitted to a user terminal, so that the user enjoys a high quality speech or audio signal.
  • FIG. 10 shows a structure of the second embodiment of the apparatus for switching speech or audio signals.
  • the apparatus for switching speech or audio signals in this embodiment is based on the first embodiment, and further includes a second synthesizing module 103.
  • the second synthesizing module 103 is adapted to synthesize the first high frequency band signal and the first low frequency band signal into the wide frequency band signal when a switching of the speech or audio signal does not occur.
  • the second synthesizing module is set to synthesize the first low frequency band signal and the first high frequency band signal of the current frame of speech or audio signal into a wide frequency band signal when no switching between speech or audio signals with different bandwidths occurs. In this way, the quality of speech or audio signals received by the user is improved.
  • the processing module 101 includes the following modules, as shown in FIG. 10 and FIG. 11 :
  • the apparatus for switching speech or audio signals in this embodiment may include a classifying module 1010 adapted to classify the first low frequency band signal of the current frame of speech or audio signal.
  • the predicting module 1011 is further adapted to predict the fine structure information and envelope information corresponding to the first low frequency band signal of the current frame of speech or audio signal.
  • the predicting module predicts the fine structure information and envelope information corresponding to the first high frequency band signal, so that the processed first high frequency band signal can be accurately generated by the first generating module and the second generating module. In this way, the first high frequency band signal can be smoothly switched to the processed first high frequency band signal, thus improving the quality of speech or audio signals received by the user.
  • the classifying module classifies the first low frequency band signal of the current frame of speech or audio signal; the predicting module obtains the predicted fine structure information and predicted envelope information according to the signal type. In this way, the predicted fine structure information and predicted envelope information are more accurate, thus improving the quality of speech or audio signals received by the user.
  • the first synthesizing module 102 includes the following modules, as shown in FIG. 10 and FIG. 12 :
  • the initial value of the attenuation factor is 1, and the threshold is greater than or equal to 0 and smaller than 1.
  • the processed first high frequency band signal is attenuated, so that the wide frequency band signal obtained by processing the current frame of speech or audio signal is more accurate, thus improving the quality of audio signals received by the user.
  • the processing module 101 in this embodiment includes the following modules, as shown in FIG. 10 and FIG. 13a :
  • the processing module 101 in this embodiment may further include the following modules, as shown in FIG. 10 and FIG. 13b :
  • with the apparatus for switching speech or audio signals in this embodiment, in the process of switching a speech or audio signal from a narrow frequency band speech or audio signal to a wide frequency band speech or audio signal, the high frequency band signal of the wide frequency band speech or audio signal is attenuated to obtain a processed high frequency band signal.
  • the high frequency band signal corresponding to the narrow frequency band speech or audio signal before the switching can be smoothly switched to the processed high frequency band signal corresponding to the wide frequency band speech or audio signal, thus helping to improve the quality of audio signals received by the user.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Signal Processing (AREA)
  • Health & Medical Sciences (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Human Computer Interaction (AREA)
  • Computational Linguistics (AREA)
  • Acoustics & Sound (AREA)
  • Multimedia (AREA)
  • Quality & Reliability (AREA)
  • Telephone Function (AREA)
  • Compression, Expansion, Code Conversion, And Decoders (AREA)
  • Circuits Of Receivers In General (AREA)
EP11774406.0A 2010-04-28 2011-04-28 Audio signal switching method and device Active EP2485029B1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
EP17151713.9A EP3249648B1 (en) 2010-04-28 2011-04-28 Method and apparatus for switching speech or audio signals

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN2010101634063A CN101964189B (zh) 2010-04-28 2010-04-28 语音频信号切换方法及装置
PCT/CN2011/073479 WO2011134415A1 (zh) 2010-04-28 2011-04-28 语音频信号切换方法及装置

Related Child Applications (2)

Application Number Title Priority Date Filing Date
EP17151713.9A Division EP3249648B1 (en) 2010-04-28 2011-04-28 Method and apparatus for switching speech or audio signals
EP17151713.9A Division-Into EP3249648B1 (en) 2010-04-28 2011-04-28 Method and apparatus for switching speech or audio signals

Publications (3)

Publication Number Publication Date
EP2485029A1 EP2485029A1 (en) 2012-08-08
EP2485029A4 EP2485029A4 (en) 2013-01-02
EP2485029B1 true EP2485029B1 (en) 2017-06-14

Family

ID=43517042

Family Applications (2)

Application Number Title Priority Date Filing Date
EP11774406.0A Active EP2485029B1 (en) 2010-04-28 2011-04-28 Audio signal switching method and device
EP17151713.9A Active EP3249648B1 (en) 2010-04-28 2011-04-28 Method and apparatus for switching speech or audio signals

Family Applications After (1)

Application Number Title Priority Date Filing Date
EP17151713.9A Active EP3249648B1 (en) 2010-04-28 2011-04-28 Method and apparatus for switching speech or audio signals

Country Status (8)

Country Link
EP (2) EP2485029B1 (zh)
JP (3) JP5667202B2 (zh)
KR (1) KR101377547B1 (zh)
CN (1) CN101964189B (zh)
AU (1) AU2011247719B2 (zh)
BR (1) BR112012013306B8 (zh)
ES (2) ES2718947T3 (zh)
WO (1) WO2011134415A1 (zh)

Families Citing this family (28)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR101110800B1 (ko) * 2003-05-28 2012-07-06 도꾸리쯔교세이호진 상교기쥬쯔 소고겡뀨죠 히드록실기 함유 화합물의 제조 방법
US11431312B2 (en) 2004-08-10 2022-08-30 Bongiovi Acoustics Llc System and method for digital signal processing
US8284955B2 (en) 2006-02-07 2012-10-09 Bongiovi Acoustics Llc System and method for digital signal processing
US10848118B2 (en) 2004-08-10 2020-11-24 Bongiovi Acoustics Llc System and method for digital signal processing
US10158337B2 (en) 2004-08-10 2018-12-18 Bongiovi Acoustics Llc System and method for digital signal processing
US10701505B2 (en) 2006-02-07 2020-06-30 Bongiovi Acoustics Llc. System, method, and apparatus for generating and digitally processing a head related audio transfer function
US10848867B2 (en) 2006-02-07 2020-11-24 Bongiovi Acoustics Llc System and method for digital signal processing
CN101964189B (zh) * 2010-04-28 2012-08-08 华为技术有限公司 语音频信号切换方法及装置
US8000968B1 (en) 2011-04-26 2011-08-16 Huawei Technologies Co., Ltd. Method and apparatus for switching speech or audio signals
CN105761724B (zh) * 2012-03-01 2021-02-09 华为技术有限公司 一种语音频信号处理方法和装置
CN103295578B (zh) * 2012-03-01 2016-05-18 华为技术有限公司 一种语音频信号处理方法和装置
CN103516440B (zh) * 2012-06-29 2015-07-08 华为技术有限公司 语音频信号处理方法和编码装置
CN106847297B (zh) 2013-01-29 2020-07-07 华为技术有限公司 高频带信号的预测方法、编/解码设备
US9883318B2 (en) 2013-06-12 2018-01-30 Bongiovi Acoustics Llc System and method for stereo field enhancement in two-channel audio systems
US9906858B2 (en) 2013-10-22 2018-02-27 Bongiovi Acoustics Llc System and method for digital signal processing
US9397629B2 (en) * 2013-10-22 2016-07-19 Bongiovi Acoustics Llc System and method for digital signal processing
US20150170655A1 (en) * 2013-12-15 2015-06-18 Qualcomm Incorporated Systems and methods of blind bandwidth extension
CN103714822B (zh) * 2013-12-27 2017-01-11 广州华多网络科技有限公司 基于silk编解码器的子带编解码方法及装置
KR101864122B1 (ko) * 2014-02-20 2018-06-05 삼성전자주식회사 전자 장치 및 전자 장치의 제어 방법
US10820883B2 (en) 2014-04-16 2020-11-03 Bongiovi Acoustics Llc Noise reduction assembly for auscultation of a body
US10639000B2 (en) 2014-04-16 2020-05-05 Bongiovi Acoustics Llc Device for wide-band auscultation
EP2980794A1 (en) 2014-07-28 2016-02-03 Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. Audio encoder and decoder using a frequency domain processor and a time domain processor
WO2017140600A1 (en) 2016-02-17 2017-08-24 Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. Post-processor, pre-processor, audio encoder, audio decoder and related methods for enhancing transient processing
JP2021521700A (ja) 2018-04-11 2021-08-26 ボンジョビ アコースティックス リミテッド ライアビリティー カンパニー オーディオ強化聴力保護システム
CN110556116B (zh) 2018-05-31 2021-10-22 华为技术有限公司 计算下混信号和残差信号的方法和装置
US10959035B2 (en) 2018-08-02 2021-03-23 Bongiovi Acoustics Llc System, method, and apparatus for generating and digitally processing a head related audio transfer function
CN112002333B (zh) * 2019-05-07 2023-07-18 海能达通信股份有限公司 一种语音同步方法、装置及通信终端
CN117373465B (zh) * 2023-12-08 2024-04-09 富迪科技(南京)有限公司 一种语音频信号切换系统

Family Cites Families (19)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4330689A (en) * 1980-01-28 1982-05-18 The United States Of America As Represented By The Secretary Of The Navy Multirate digital voice communication processor
US4769833A (en) * 1986-03-31 1988-09-06 American Telephone And Telegraph Company Wideband switching system
US5019910A (en) * 1987-01-29 1991-05-28 Norsat International Inc. Apparatus for adapting computer for satellite communications
FI115329B (fi) * 2000-05-08 2005-04-15 Nokia Corp Menetelmä ja järjestely lähdesignaalin kaistanleveyden vaihtamiseksi tietoliikenneyhteydessä, jossa on valmiudet useisiin kaistanleveyksiin
US7113522B2 (en) * 2001-01-24 2006-09-26 Qualcomm, Incorporated Enhanced conversion of wideband signals to narrowband signals
KR100940531B1 (ko) * 2003-07-16 2010-02-10 삼성전자주식회사 광대역 음성 신호 압축 및 복원 장치와 그 방법
JP2005080079A (ja) * 2003-09-02 2005-03-24 Sony Corp 音声再生装置及び音声再生方法
FI119533B (fi) * 2004-04-15 2008-12-15 Nokia Corp Audiosignaalien koodaus
EP1758099A1 (en) * 2004-04-30 2007-02-28 Matsushita Electric Industrial Co., Ltd. Scalable decoder and expanded layer disappearance hiding method
EP1780895B1 (en) * 2004-07-28 2020-07-01 III Holdings 12, LLC Signal decoding apparatus
US7895035B2 (en) * 2004-09-06 2011-02-22 Panasonic Corporation Scalable decoding apparatus and method for concealing lost spectral parameters
CN101107650B (zh) * 2005-01-14 2012-03-28 松下电器产业株式会社 语音切换装置及语音切换方法
US8249861B2 (en) * 2005-04-20 2012-08-21 Qnx Software Systems Limited High frequency compression integration
JP5100380B2 (ja) * 2005-06-29 2012-12-19 パナソニック株式会社 スケーラブル復号装置および消失データ補間方法
US8194865B2 (en) * 2007-02-22 2012-06-05 Personics Holdings Inc. Method and device for sound detection and audio control
KR101290622B1 (ko) * 2007-11-02 2013-07-29 후아웨이 테크놀러지 컴퍼니 리미티드 오디오 복호화 방법 및 장치
CN100585699C (zh) 2007-11-02 2010-01-27 华为技术有限公司 一种音频解码的方法和装置
CN101425292B (zh) * 2007-11-02 2013-01-02 华为技术有限公司 一种音频信号的解码方法及装置
CN101964189B (zh) * 2010-04-28 2012-08-08 华为技术有限公司 语音频信号切换方法及装置

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
"G.729-based embedded variable bit-rate coder: An 8-32 kbit/s scalable wideband coder bitstream interoperable with G.729; G.729.1 (05/06)", ITU-T STANDARD, INTERNATIONAL TELECOMMUNICATION UNION, GENEVA ; CH, no. G.729.1 (05/06), 29 May 2006 (2006-05-29), pages 1 - 100, XP017466254 *

Also Published As

Publication number Publication date
BR112012013306B1 (pt) 2020-11-10
BR112012013306B8 (pt) 2021-02-17
EP2485029A4 (en) 2013-01-02
KR101377547B1 (ko) 2014-03-25
CN101964189B (zh) 2012-08-08
KR20120074303A (ko) 2012-07-05
EP3249648A1 (en) 2017-11-29
BR112012013306A2 (pt) 2016-03-01
JP5667202B2 (ja) 2015-02-12
JP2015045888A (ja) 2015-03-12
CN101964189A (zh) 2011-02-02
AU2011247719A1 (en) 2012-06-07
ES2718947T3 (es) 2019-07-05
JP6027081B2 (ja) 2016-11-16
ES2635212T3 (es) 2017-10-02
JP2013512468A (ja) 2013-04-11
WO2011134415A1 (zh) 2011-11-03
EP3249648B1 (en) 2019-01-09
EP2485029A1 (en) 2012-08-08
AU2011247719B2 (en) 2013-07-11
JP6410777B2 (ja) 2018-10-24
JP2017033015A (ja) 2017-02-09

Similar Documents

Publication Publication Date Title
EP2485029B1 (en) Audio signal switching method and device
US10559313B2 (en) Speech/audio signal processing method and apparatus
US10546594B2 (en) Signal processing apparatus and signal processing method, encoder and encoding method, decoder and decoding method, and program
US8214218B2 (en) Method and apparatus for switching speech or audio signals
EP1775717B1 (en) Speech decoding apparatus and compensation frame generation method
EP2471063B1 (en) Signal processing apparatus and method, and program
EP1953736A1 (en) Stereo encoding device, and stereo signal predicting method
US20040138876A1 (en) Method and apparatus for artificial bandwidth expansion in speech processing
TW201140563A (en) Determining an upperband signal from a narrowband signal
US20140114670A1 (en) Adaptive Audio Signal Coding
WO2012169133A1 (ja) 音声符号化装置、音声復号装置、音声符号化方法及び音声復号方法
EP3007171B1 (en) Signal processing device and signal processing method

Legal Events

Date Code Title Description
PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

17P Request for examination filed

Effective date: 20120503

AK Designated contracting states

Kind code of ref document: A1

Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR

A4 Supplementary search report drawn up and despatched

Effective date: 20121204

RIC1 Information provided on ipc code assigned before grant

Ipc: G01L 19/00 20060101AFI20121128BHEP

Ipc: G01L 19/12 20060101ALI20121128BHEP

Ipc: G01L 19/04 20060101ALI20121128BHEP

DAX Request for extension of the european patent (deleted)
17Q First examination report despatched

Effective date: 20130805

REG Reference to a national code

Ref country code: DE

Ref legal event code: R079

Ref document number: 602011038742

Country of ref document: DE

Free format text: PREVIOUS MAIN CLASS: G01L0019000000

Ipc: G10L0019240000

GRAP Despatch of communication of intention to grant a patent

Free format text: ORIGINAL CODE: EPIDOSNIGR1

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: GRANT OF PATENT IS INTENDED

RIC1 Information provided on ipc code assigned before grant

Ipc: G10L 19/24 20130101AFI20161018BHEP

Ipc: G10L 21/038 20130101ALI20161018BHEP

INTG Intention to grant announced

Effective date: 20161108

GRAS Grant fee paid

Free format text: ORIGINAL CODE: EPIDOSNIGR3

GRAA (expected) grant

Free format text: ORIGINAL CODE: 0009210

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: THE PATENT HAS BEEN GRANTED

AK Designated contracting states

Kind code of ref document: B1

Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR

REG Reference to a national code

Ref country code: GB

Ref legal event code: FG4D

REG Reference to a national code

Ref country code: CH

Ref legal event code: EP

Ref country code: AT

Ref legal event code: REF

Ref document number: 901665

Country of ref document: AT

Kind code of ref document: T

Effective date: 20170615

REG Reference to a national code

Ref country code: IE

Ref legal event code: FG4D

REG Reference to a national code

Ref country code: DE

Ref legal event code: R096

Ref document number: 602011038742

Country of ref document: DE

REG Reference to a national code

Ref country code: NL

Ref legal event code: FP

REG Reference to a national code

Ref country code: SE

Ref legal event code: TRGR

REG Reference to a national code

Ref country code: ES

Ref legal event code: FG2A

Ref document number: 2635212

Country of ref document: ES

Kind code of ref document: T3

Effective date: 20171002

REG Reference to a national code

Ref country code: LT

Ref legal event code: MG4D

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: LT, HR, FI (effective date: 20170614); NO (effective date: 20170914); GR (effective date: 20170915)

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

REG Reference to a national code
Ref country code: AT; legal event code: MK05; ref document: 901665 (AT, kind code T); effective date: 20170614

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]
Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT
Ref country codes and effective dates: LV (20170614); RS (20170614); BG (20170914)

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]
Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT
Ref country codes and effective dates: CZ, AT, EE, RO, SK (all 20170614)

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]
Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT
Ref country codes and effective dates: SM (20170614); IS (20171014); PL (20170614)

REG Reference to a national code
Ref country code: FR; legal event code: PLFP; year of fee payment: 8

REG Reference to a national code
Ref country code: DE; legal event code: R097; ref document: 602011038742 (DE)

PLBE No opposition filed within time limit
Free format text: ORIGINAL CODE: 0009261

STAA Information on the status of an ep patent application or granted ep patent
Free format text: STATUS: NO OPPOSITION FILED WITHIN TIME LIMIT

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]
Ref country code: DK; free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT; effective date: 20170614

26N No opposition filed
Effective date: 20180315

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]
Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT
Ref country codes and effective dates: SI (20170614); MC (20170614)

REG Reference to a national code
Ref country code: CH; legal event code: PL

REG Reference to a national code
Ref country code: BE; legal event code: MM; effective date: 20180430

REG Reference to a national code
Ref country code: IE; legal event code: MM4A

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]
Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES
Ref country codes and effective dates: LU (20180428); LI (20180430); CH (20180430); BE (20180430); IE (20180428); MT (20180428)

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]
Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT
Ref country codes and effective dates: TR (20170614); PT (20170614); CY (20170614); AL (20170614)

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]
Ref country code: HU; free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT; INVALID AB INITIO; effective date: 20110428

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]
Ref country code: MK; free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES; effective date: 20170614

P01 Opt-out of the competence of the unified patent court (UPC) registered
Effective date: 20230524

P01 Opt-out of the competence of the unified patent court (UPC) registered
Effective date: 20230529

P03 Opt-out of the competence of the unified patent court (UPC) deleted
PGFP Annual fee paid to national office [announced via postgrant information from national office to epo]
Ref country code: ES; payment date: 20230511; year of fee payment: 13
Ref country code: DE; payment date: 20230307; year of fee payment: 13
Ref country code: NL; payment date: 20240315; year of fee payment: 14
Ref country code: GB; payment date: 20240307; year of fee payment: 14
Ref country code: SE; payment date: 20240312; year of fee payment: 14
Ref country code: IT; payment date: 20240313; year of fee payment: 14
Ref country code: FR; payment date: 20240308; year of fee payment: 14